Fields per record: id, title, abstract, authors, published_date, link, markdown
2308.05535
Effects of Dynamic and Stochastic Travel Times on the Operation of Mobility-on-Demand Services
Mobility-on-Demand (MoD) services have been an active research topic in recent years. Many studies focused on developing control algorithms to supply efficient services. To cope with a large search space to solve the underlying vehicle routing problem, studies usually apply hard time-constraints on pick-up and drop-off while considering static network travel times to reduce computational time. As travel times in real street networks are dynamic and stochastic, assigned routes considered feasible by the control algorithm in one time step might become infeasible in the next. Since once assigned and confirmed, customers should still be part of the solution, damage control is necessary to counteract this effect. In this study, a detailed simulation framework for MoD services is coupled with a microscopic traffic simulation to create dynamic and stochastic travel times, and tested in a case study for Munich, Germany. Results showed that the combination of inaccurate travel time estimation and damage control strategies for infeasible routes deteriorates the performance of MoD services -- hailing and pooling -- significantly. Moreover, customers suffer from unreliable pick-up time and travel time estimations. Allowing re-assignments of initial vehicle schedules according to updated system states helps to restore system efficiency and reliability, but only to a minor extent.
Fynn Wolf, Roman Engelhardt, Yunfei Zhang, Florian Dandl, Klaus Bogenberger
2023-08-10T12:28:21Z
http://arxiv.org/abs/2308.05535v1
# Effects of Dynamic and Stochastic Travel Times on the Operation of Mobility-on-Demand Services ###### Abstract Mobility-on-Demand (MoD) services have been an active research topic in recent years. Many studies focused on developing control algorithms to supply efficient services. To cope with a large search space to solve the underlying vehicle routing problem, studies usually apply hard time-constraints on pick-up and drop-off while considering static network travel times to reduce computational time. As travel times in real street networks are dynamic and stochastic, assigned routes considered feasible by the control algorithm in one time step might become infeasible in the next. Since once assigned and confirmed, customers should still be part of the solution, damage control is necessary to counteract this effect. In this study, a detailed simulation framework for MoD services is coupled with a microscopic traffic simulation to create dynamic and stochastic travel times, and tested in a case study for Munich, Germany. Results showed that the combination of inaccurate travel time estimation and damage control strategies for infeasible routes deteriorates the performance of MoD services - hailing and pooling - significantly. Moreover, customers suffer from unreliable pick-up time and travel time estimations. Allowing re-assignments of initial vehicle schedules according to updated system states helps to restore system efficiency and reliability, but only to a minor extent. Ride Pooling, Ride Sharing, Mobility On-Demand, Microscopic Traffic Simulation, Agent-based Simulation ## 1 Introduction Mobility-on-Demand (MoD) services started to play a vital role in urban mobility in recent years. Especially considering upcoming autonomous vehicles (AVs), large-scale services with cheap fares can be offered. The introduction of such a convenient service might decrease parking space consumption by replacing private vehicle trips and, if trips are shared in pooled services, additionally reduce vehicle miles traveled. Travel time estimation and prediction are important aspects of MoD services. From an operator's point of view, vehicle routing and dispatching rely heavily on travel time estimations to evaluate costs of possible routing decisions and check the feasibility of schedules regarding customers' pick-up and drop-off time constraints. From a customer's point of view, a convenient service should provide accurate information about expected pick-up and drop-off times, in the best case even before they book the service. Reliability and convenience of a service will suffer heavily if those estimations are incorrect, especially for long-term acceptance of shared autonomous vehicle services [16]. Many recent studies focused on developing operational strategies for hailing (e.g. [14, 15]...) or pooling services (e.g. [1, 13, 12, 14, 15]...). Nevertheless, deterministic network travel times from historic or simulated data are usually applied. On the one hand, this assumption allows the preprocessing of routing tables to reduce the need to compute fastest routes, which otherwise becomes the computational bottleneck of solving the assignment problem. On the other hand, hard time constraints on customer pick-up and drop-off are usually applied to further reduce the search space of the underlying vehicle routing problem. If network travel times are deterministic and known, it can be guaranteed that a feasible vehicle schedule assigned once will also remain feasible in the future [Dandl et al., 2021a]. 
These models therefore fall short of evaluating the traffic effect the MoD service has on the overall system and, additionally, they fail to provide damage control strategies to deal with infeasible solutions when travel time estimations are flawed. To evaluate the traffic impact of MoD services, different methods have been developed to balance accuracy and computational efficiency. They range from retrieving dynamics from map services [Markov et al., 2021] and link transmission models [Levin, 2017] to direct coupling with a microscopic simulation such as Aimsun Next [Dandl et al., 2017]. To reduce the need for fastest path computations, macroscopic fundamental diagram (MFD) based methods have been proposed to estimate traffic and travel times in different network regions [Dandl et al., 2021b]. All these studies have in common that they utilize complex traffic models to investigate the impacts of MoD services on the traffic system itself. Nevertheless, the operational implications of arrival reliability are not evaluated. This reliability issue can be decomposed into two sub-problems, namely 1) uncertainty regarding waiting and traveling times and 2) route changes due to stochastic requests. To the authors' knowledge, only Liu et al. [2019] deal with the uncertainty regarding waiting and traveling times of MoD services. They compared different path finding algorithms (reliable or shortest) and different information (historical or real-time). The results showed that using the most reliable path model (with the largest probability of arrival within a specified time budget) and historical travel time can significantly improve the on-time arrival reliability. Fielbaum and Alonso-Mora [2020] systematically analyze the general uncertainty for MoD services, which can be caused by other users or by the operational rules. With respect to the provided literature, we seek to answer the following two research questions: 1. How do dynamic and stochastic travel times affect the fleet performance indicators of MoD services? 2. How do dynamic and stochastic travel times affect the difference between expected and realized waiting time? To answer these two questions, we evaluate the effects of dynamic travel times on MoD services by coupling the MoD simulation framework "FleetPy"1[Engelhardt et al., 2022] with the microscopic traffic simulator Aimsun Next and compare the effects of dynamic and stochastic travel times to the often-made assumption of static and deterministic travel times. Footnote 1: [https://github.com/TUM-VT/FleetPy](https://github.com/TUM-VT/FleetPy) Our contributions can be summarized as follows: * Coupling of a MoD simulation framework with microscopic traffic simulation * Developing a damage control strategy to cope with stochastic travel times * Evaluation of the impact of dynamic and stochastic travel times on users' and operators' indicators * Evaluation of the impact of re-assignments with dynamic and stochastic travel time ## 2 Methodology First, the fleet simulation tool FleetPy is described. Inherently, FleetPy focuses on the control aspects and uses a simplified traffic representation with deterministic travel times. To evaluate the impacts of this simplification, in the second part of this section the coupling of FleetPy with the microscopic traffic simulator "Aimsun Next"2 is described. With this coupling, FleetPy controls the fleet, i.e. 
assigns vehicle schedules and provides feedback to customers, while vehicle movement is performed in the microscopic traffic simulator. Footnote 2: [https://www.aimsun.com/aimsun-next/](https://www.aimsun.com/aimsun-next/)
Figure 1: High level flowchart of the coupled simulation of Aimsun Next and FleetPy.
### FleetPy FleetPy is an agent-based simulation model to analyze MoD fleets. For this study, three main types of agents are of relevance: 1) Customers requesting trips from the MoD service, 2) an operator that offers the service by (centrally) controlling a set of vehicles, and 3) the fleet vehicles picking up and dropping off customers according to the schedules assigned by the operator. To estimate vehicle travel times, the operator represents the street network as a directed graph \(G_{op}=(N_{op},E_{op})\) with nodes \(N_{op}\) and edges \(E_{op}\). Each edge \(e\in E_{op}\) is associated with a distance \(d_{e}\) and a travel time \(\tau_{e}(t)\). Depending on the simulation time \(t\) and the existing traffic state, a goal of the operator is to estimate edge travel times \(\tau_{e}(t)\) to accurately predict times for vehicles to arrive at scheduled stops. Four main steps are conducted during a FleetPy time step: 1. Boarding and alighting processes of vehicles are registered by the fleet operator. 2. New MoD customers enter the simulation and request a trip \(i\) at time \(t_{i}\) by providing origin \(o_{i}\in N_{op}\) and destination \(d_{i}\in N_{op}\). 3. The operator evaluates whether it can serve the request within the given time constraints. If so, an expected pick-up time \(t_{i}^{pu}\) and drop-off time \(t_{i}^{do}\) is provided. 4. The operator assigns new/updated schedules to its vehicles. Assigning new schedules is one core feature of FleetPy and will be described on a high level in the following. A schedule is defined as a series of stops at network nodes \(N_{op}\) where boarding and alighting processes of vehicles are conducted. In between these stops, vehicles are traveling on the fastest route in the network \(G_{op}\). As soon as more than one passenger is assigned to a vehicle \(v\in V\), there are multiple possible permutations of stops, which are enumerated. The \(k\)-th possible permutation of stops for the schedule \(\psi_{k}(v,R_{\psi})\) serving all requests in the set \(R_{\psi}\) is considered feasible if 1. the drop-off stop succeeds the pick-up stop for each customer. 2. the number of on-board customers never exceeds the vehicle capacity (\(c_{v}\)). 3. each customer is (supposed to be) picked up before a maximum waiting time \(w_{max}\) elapsed. 4. if the operator offers a pooling service, the maximum additional travel time must not exceed a detour factor \(\delta_{max}\) compared to a direct trip. Each feasible schedule is rated by a cost function to be minimized, defined as \[\rho(\psi_{k}(v,R_{\psi}))=t_{end}(\psi_{k})-t_{sim}-P_{r}|R_{\psi}|+\] \[+P_{delay}\sum_{r\in R_{\psi}}(\text{max}[t_{r,pu}(\psi_{k})-t_{r,pu}^{latest},0])+\] \[+P_{delay}\sum_{r\in R_{\psi}}(\text{max}[t_{r,do}(\psi_{k})-t_{r,do}^{latest},0])\;. \tag{1}\] \(t_{end}(\psi_{k})\) refers to the expected time this schedule is finished. Together with the current simulation time \(t_{sim}\), the first term measures the time of execution. With the number of served requests by the schedule \(|R_{\psi}|\) and a large assignment reward \(P_{r}=1{,}000{,}000\,\)s, the objective prioritizes serving as many customers as possible. The last two terms penalize delays in arrival times of the vehicle. \(t_{r,pu}(\psi_{k})\) and \(t_{r,do}(\psi_{k})\) refer to pick-up and drop-off times of the requests according to the plan, respectively. \(t_{r,pu}^{latest}\) and \(t_{r,do}^{latest}\) reflect their latest pick-up and drop-off times. \(P_{delay}=10\) weighs the delays if present. 
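To make the rating of a candidate schedule according to Eq. (1) concrete, the following minimal sketch evaluates the cost of one stop permutation. The data structure (`PlannedRequest`) and its field names are illustrative assumptions for this sketch, not FleetPy's actual API.

```python
from dataclasses import dataclass
from typing import List

P_R = 1_000_000.0   # assignment reward P_r [s] from Eq. (1)
P_DELAY = 10.0      # delay penalty weight P_delay from Eq. (1)

@dataclass
class PlannedRequest:          # hypothetical container for one request r in R_psi
    planned_pickup: float      # t_{r,pu}(psi_k)
    planned_dropoff: float     # t_{r,do}(psi_k)
    latest_pickup: float       # t_{r,pu}^{latest}
    latest_dropoff: float      # t_{r,do}^{latest}

def schedule_cost(t_end: float, t_sim: float, requests: List[PlannedRequest]) -> float:
    """Rate one stop permutation psi_k(v, R_psi) according to Eq. (1)."""
    cost = (t_end - t_sim) - P_R * len(requests)   # execution time minus assignment reward
    for r in requests:
        cost += P_DELAY * max(r.planned_pickup - r.latest_pickup, 0.0)
        cost += P_DELAY * max(r.planned_dropoff - r.latest_dropoff, 0.0)
    return cost
```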
Conflating all possible permutations, the cost \(\rho_{\psi}\) of serving the set of requests \(R_{\psi}\) with vehicle \(v\) is defined by the permutation \(k\) of \(\psi_{k}(v,R_{\psi})\) that minimizes \(\rho(\psi_{k}(v,R_{\psi}))\). At each optimization interval, the operator assigns new feasible schedules to accommodate new customers by solving the following integer linear problem: minimize \[\sum_{v\in V,j\in\Psi}\rho_{vj}x_{vj}\] (2) s.t. \[\sum_{j\in\Psi}x_{vj}\leq 1 \forall v\in V\] (3) \[\sum_{v\in V,j\in\Psi_{r}}x_{vj}=1 \forall r\in R_{a}\] (4) \[\sum_{v\in V,j\in\Psi_{r}}x_{vj}\leq 1 \forall r\in R_{u}\] (5) \(x_{vj}\in\{0,1\}\) is a binary decision variable to assign a schedule \(j\) of the set of all available schedules \(\Psi\) to vehicle \(v\). Eq. (3) ensures that not more than one schedule is assigned to each vehicle. Eq. (4) ensures that each request from the set \(R_{a}\) of requests that have been assigned in a previous time step is assigned again. Contrarily, yet unassigned requests from the set \(R_{u}\) can remain unassigned as stated in constraint (5). Thereby, \(\Psi_{r}\) refers to the set of schedules that serve request \(r\). If an unassigned request \(r\in R_{u}\) is not assigned to a vehicle, it is assumed that this request leaves the system unserved. On the other hand, if it is assigned, the corresponding assigned schedule is used to determine the expected pick-up time \(t_{i,pu}^{exp}\) and travel time, which are provided to the customer. Additionally, the request is moved to the set \(R_{a}\) in the next optimization step. The core of the algorithm is to create the set of feasible schedules \(\Psi\). The algorithm applied in this study is a variant of the constraint programming approach developed by Alonso-Mora et al. (2017). By explicitly exploiting the time constraints for feasible schedules, a systematic exhaustive search can be performed to create all feasible routes. Due to length constraints of this paper, we refer the reader to the original publication. Details of the implementation can be found in Engelhardt et al. (2020). This study distinguishes two options for the set of schedules created for the assignment problem: 1. No Re-assignment: Once a request \(r\) is assigned to vehicle \(v\), no schedule of another vehicle to serve request \(r\) is included in the assignment problem to guarantee that a request remains assigned to the same vehicle. This might be convenient for the customer because the assigned vehicle position can directly be provided and its approach can be tracked. However, optimization potential might be lost. 2. With Re-assignment: Here, every feasible schedule is included in the assignment problem and the assigned vehicle for each customer might change while still fulfilling pick-up and drop-off constraints. This re-assignment occurs when a better global assignment solution is found. 
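The assignment problem (2)-(5) can be written down with any MILP solver; the following sketch uses the open-source package PuLP. The input dictionaries (schedule costs, feasible schedules per vehicle, schedules serving each request) are assumed to be produced by the schedule-generation step described above and are not part of FleetPy's actual interface.

```python
import pulp

def assign_schedules(costs, schedules_of_vehicle, schedules_serving,
                     assigned_requests, unassigned_requests):
    """costs[(v, j)]: cost rho_vj of schedule j for vehicle v.
    schedules_of_vehicle[v]: feasible schedule ids j for vehicle v.
    schedules_serving[r]: (v, j) pairs whose schedule serves request r."""
    prob = pulp.LpProblem("fleet_assignment", pulp.LpMinimize)
    x = {(v, j): pulp.LpVariable(f"x_{v}_{j}", cat="Binary") for (v, j) in costs}

    # Objective (2): total cost of the selected schedules
    prob += pulp.lpSum(costs[vj] * x[vj] for vj in costs)
    # Constraint (3): at most one schedule per vehicle
    for v, js in schedules_of_vehicle.items():
        prob += pulp.lpSum(x[(v, j)] for j in js) <= 1
    # Constraint (4): previously assigned requests must be served again
    for r in assigned_requests:
        prob += pulp.lpSum(x[vj] for vj in schedules_serving[r]) == 1
    # Constraint (5): new requests may remain unassigned
    for r in unassigned_requests:
        prob += pulp.lpSum(x[vj] for vj in schedules_serving[r]) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [vj for vj in costs if x[vj].value() > 0.5]
```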
Due to the coupling with Aimsun Next, the actual vehicle travel times are not the same as the ones the algorithm for creating feasible schedules assumes. Hence, schedules that were considered feasible in the last time step might become infeasible due to delays (e.g. congestion or unfortunate signal timings). If only feasible schedules are considered in the assignment problem, the problem -- more specifically equation (4) -- cannot be feasible as soon as \(\Psi_{r}=\emptyset\) for any previously assigned request \(r\in R_{a}\). Therefore, each currently assigned vehicle schedule is included in the assignment problem, even though it might have become infeasible. In the worst case, the strategy is therefore to stick with the current schedule. It should be noted that if an assigned vehicle schedule becomes infeasible, no additional customers can be accommodated until the vehicle has finished this schedule. In case re-assignments are possible, the cost function penalizes infeasible schedules and closer vehicles could be re-assigned to serve customers in time. At the end of the assignment stage, idle vehicles are rebalanced to provide supply in the whole network. The algorithm described in Alonso-Mora et al. (2017) is used, which sends idle vehicles to all network locations where customers had to be rejected. ### Coupling with Aimsun Next A simplified overview of the simulation flow of the coupling with Aimsun Next is shown in Fig. 1. The simulation flow is separated into three main entities: the Aimsun Next microsimulation, its Aimsun Next API accessible via Python, and the fleet control tool FleetPy. On a high level, the simulation time is controlled in the Aimsun Next microsimulation and triggers updates of simulation time and vehicle states in FleetPy via its Python API. FleetPy computes updated vehicle schedules to be performed in Aimsun Next as feedback. **Network Preprocessing.** Aimsun Next and FleetPy do not access the same network object. Therefore, network representations have to be translated to allow seamless integration. On a high level, the Aimsun Next network consists of sections and turns that connect different sections within intersections. Each section and turn is converted to a FleetPy edge and added to the network \(G_{op}\). The corresponding Aimsun Next object identifier is assigned to each edge to allow the translation between the different networks. **Simulation Initialization.** To begin, the traffic microsimulation and the fleet control are started and initialized. This initialization first synchronizes the absolute simulation start time between the coupled frameworks. Additionally, interaction periods are defined. Traffic microsimulations usually have simulation time steps on the sub-second scale. However, the fleet control does not require updates this frequently. Instead, fleet control updates are only triggered every \(\Delta t_{fc}\) (in the range of 10 seconds to a minute). Additionally, the frequency of updating network travel times \(t_{statistics}\) for the fleet operator is defined. **Simulation Flow.** At the beginning of each simulation time step, the microsimulation performs vehicle movements. The API tracks fleet vehicles, i.e. their location and if they reached their destination. Boarding processes are controlled in FleetPy. When a vehicle reaches a destination, FleetPy checks if a boarding process has to be started. If so, the actual boarding and alighting times \(t_{i,pu}^{act}\) and \(t_{i,do}^{act}\) are set, respectively. Likewise, FleetPy checks if a boarding process has ended and new vehicle routes should start. If so, corresponding routes are sent to the API to create new vehicles in Aimsun Next. Additionally, every \(\Delta t_{fc}\), a new fleet optimization is triggered; the timing of these interaction periods is sketched below. 
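As a rough illustration of the interaction periods only, the coupled loop can be reduced to the following runnable skeleton; the actual Aimsun Next API and FleetPy calls are indicated as comments and not implemented, \(\Delta t_{fc}\) and \(t_{statistics}\) use the case-study values given later, and the microsimulation step is an assumed placeholder.

```python
DT_SIM = 0.5            # microsimulation time step [s] (sub-second scale, assumed)
DT_FC = 60.0            # fleet-control interval Delta t_fc [s]
T_STATISTICS = 1800.0   # travel-time statistics interval t_statistics [s]
SIM_END = 4 * 3600.0    # simulated period [s]

steps_per_fc = round(DT_FC / DT_SIM)
steps_per_stat = round(T_STATISTICS / DT_SIM)

for step in range(1, round(SIM_END / DT_SIM) + 1):
    # every step: Aimsun Next moves vehicles; the API reports arrivals;
    # FleetPy handles boarding/alighting and returns follow-up routes
    if step % steps_per_fc == 0:
        pass  # FleetPy gathers new requests and solves the assignment problem
    if step % steps_per_stat == 0:
        pass  # mean section traversal times are pushed to the operator network G_op
```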
Within each fleet optimization, the positions of vehicles that are currently en route are first set, and their assigned schedules are checked for feasibility. Next, new customer requests for the time step are gathered, and the fleet optimization described in the previous section is triggered. Unassigned requests will leave the system (and look for another mobility option). New vehicle schedules to accommodate accepted customers are assigned to the vehicles. Finally, new routes (i.e. the fastest paths in network \(G_{op}\)) for vehicles that received a new or updated routing task are collected and returned to Aimsun Next. To calculate new edge travel times for the fleet operator, the duration vehicles need to pass each Aimsun Next section is tracked within each statistics interval \(t_{statistics}\). In this study, not only fleet vehicles, but all vehicles within the microsimulation are tracked. At the end of an interval, the average duration on each edge is provided to the fleet operator as an estimate for the current travel times on each edge. ## 3 Case study To evaluate the impact of the coupling of the fleet control tool FleetPy with the microsimulation Aimsun Next, a case study for Munich, Germany is performed. ### Aimsun Next Model The Aimsun Next model described in F. Dandl et al. (2017) is used in this study. The network is shown in Fig. 2. The operating area of the MoD service is given by the blue boundary, resembling the urban area of the city of Munich. The model is calibrated using hourly time-dependent private vehicle OD-Matrices and loop detector data for given sections. The simulation is run for the morning peak from 6:00 to 10:00 a.m. In total, there are 620k vehicle trips in the microsimulation. The MoD demand is generated from the private vehicle trip OD matrices. 5% of those entries that start and end within the operating area shown in Fig. 2 are used to generate MoD customers with Poisson processes, resulting in 13k trips within the simulation period. The first and the last \(3600\)s of each simulation are used as warm-up and cool-down periods, respectively, and are not included in the evaluation. ### Scenarios In all scenarios, the MoD operator applies a maximum waiting time constraint of \(w_{max}=480\)s and a maximum detour constraint of \(\delta_{max}=40\%\). The boarding time is assumed to be \(30\)s. New customers are assigned and new vehicle routes are computed every \(\Delta t_{fc}=60\)s. Edge travel times in the operator network are updated every \(t_{statistics}=1800\)s. In total, 6 scenarios are created. They differ in at least one of the following attributes: 1. **Coupled/Not Coupled (C/nC):** Coupled scenarios correspond to simulations with vehicle movement performed in Aimsun Next. In not Coupled scenarios, vehicle movements are performed in a deterministic network. To have comparable scenarios, the travel times are extracted from the corresponding coupled scenarios and used as edge travel times in these scenarios. 2. **Hailing/Pooling (H/P):** In the hailing service, trips are not shared. Within the simulation scenarios, this is achieved by setting the vehicle capacity to \(c_{v}^{hail}=1\), while the capacity for pooling vehicles is \(c_{v}^{pool}=4\). To supply a similar service level, 1550 vehicles are applied for the hailing service and 1150 for the pooling service. 3. **With/Without Re-assignment (wR/nR)**: When re-assignment is not allowed, the initially assigned vehicle has to pick up the customer. If re-assignment is allowed, the optimization can change the vehicle assigned to a customer. A compact summary of these settings is sketched after this list. 
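A possible compact summary of the shared parameters and the six scenarios, purely for illustration (this dictionary layout is an assumption, not a FleetPy configuration format):

```python
COMMON = {
    "max_wait_s": 480,                 # w_max
    "max_detour_factor": 0.40,         # delta_max (pooling only)
    "boarding_time_s": 30,
    "fleet_control_interval_s": 60,    # Delta t_fc
    "travel_time_update_s": 1800,      # t_statistics
}

SCENARIOS = [  # coupled to Aimsun Next, service type, re-assignment, fleet size, capacity
    {"name": "H:nR (nC)", "coupled": False, "service": "hailing", "reassignment": False, "vehicles": 1550, "capacity": 1},
    {"name": "H:nR (C)",  "coupled": True,  "service": "hailing", "reassignment": False, "vehicles": 1550, "capacity": 1},
    {"name": "P:nR (nC)", "coupled": False, "service": "pooling", "reassignment": False, "vehicles": 1150, "capacity": 4},
    {"name": "P:nR (C)",  "coupled": True,  "service": "pooling", "reassignment": False, "vehicles": 1150, "capacity": 4},
    {"name": "P:wR (nC)", "coupled": False, "service": "pooling", "reassignment": True,  "vehicles": 1150, "capacity": 4},
    {"name": "P:wR (C)",  "coupled": True,  "service": "pooling", "reassignment": True,  "vehicles": 1150, "capacity": 4},
]
```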
## 4 Results Table 1 shows aggregated KPIs of all simulated scenarios. For all scenarios, the service level is similarly high with around 95% served customers. Compared to the hailing service, even with 400 fewer vehicles, more customers can be served in the pooling case, especially in not coupled scenarios. Additionally served customers in the pooling scenarios result in slightly increased Fleet KM compared to hailing. Nevertheless, with shared trips the fleet efficiency measured by average occupancy and saved distance (the relative decrease in Fleet KM compared to all customers driving in private vehicles from origin to destination) increases significantly in the pooling scenarios.
Figure 2: Street network of the Aimsun Next model of Munich.
However, once the coupling to Aimsun Next is enabled, all fleet KPIs (except for served customers) change significantly. In particular, the increase in fleet utilization of around 20% in all scenarios is notable, indicating that the efficiency of the fleet decreases significantly in the presence of dynamic and stochastic travel times. This directly translates to other fleet KPIs of the MoD operator: the fleet vehicle kilometers increase, and saved distance and average occupancy decrease. The decreased performance can be explained by two effects: 1) Generally, vehicle travel times tend to be longer, which can be observed in the increase in customer travel time in the hailing service from \(667\)s to \(875\)s. This effect likely results from the travel time estimation: the average link travel times from the last 30 min are used to predict travel times of the next 30 min. As traffic tends to increase during the simulation period, link travel times are usually underestimated using this method. 2) The stochasticity of travel times leads to infeasible assigned vehicle routes due to violations of time constraints. The strategy in this paper to deal with infeasibilities is to maintain the assigned schedule until finished. This results in non-optimal assignments, which in turn result in worse performance as indicated by the KPIs. Allowing re-assignment can improve the service, mainly by mitigating the latter effect. Infeasible schedules are penalized in the objective, and therefore the optimization tries to assign other vehicles that can fulfill the time constraints. From the customers' perspective, average waiting time and travel times change significantly for all services when the coupling is enabled. Waiting time increases by up to 36% (P:nR) and travel time by up to 23% (H:nR). Both effects are likely due to the previously discussed underestimation of vehicle travel times. To measure the reliability of initially communicated pick-up and travel times, the quantities \(\Delta_{i}^{wt}=t_{i,pu}^{act}-t_{i,pu}^{exp}\) and \(\Delta_{i}^{tt}=t_{i,do}^{act}-t_{i,do}^{exp}\) are defined for each served customer \(i\), respectively. \(t_{i,pu}^{exp}\) (\(t_{i,do}^{exp}\)) and \(t_{i,pu}^{act}\) (\(t_{i,do}^{act}\)) correspond to communicated pick-up (drop-off) times with assignment and actual pick-up (drop-off) times, respectively. A small helper to compute these reliability statistics is sketched below. 
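A minimal sketch of how these per-customer reliability statistics could be aggregated from logged times; the record field names are illustrative assumptions.

```python
import statistics

def reliability_stats(customers):
    """customers: iterable of dicts with communicated (exp) and actual (act)
    pick-up and drop-off times of served requests."""
    d_wt = [c["t_pu_act"] - c["t_pu_exp"] for c in customers]   # Delta_i^wt
    d_tt = [c["t_do_act"] - c["t_do_exp"] for c in customers]   # Delta_i^tt
    return {
        "wt_mean": statistics.mean(d_wt), "wt_std": statistics.pstdev(d_wt),
        "tt_mean": statistics.mean(d_tt), "tt_std": statistics.pstdev(d_tt),
    }
```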
Fig. 3 shows histograms of the \(\Delta^{wt}\)-distributions. All uncoupled scenarios show a very prominent peak at \(0\)s as deterministic travel times allow a precise forecast of the vehicle arrival. Small deviations can be seen for customers that are picked up when network travel times are updated. Higher deviations can be observed for uncoupled scenarios for pooling with re-assignment because the vehicle picking up a customer can differ from the one originally planned and stops for other customers might still be included. For coupled scenarios, the distributions broaden and shift towards positive values, indicating on average delayed pick-ups (between \(60\)s and \(124\)s) and less reliable pick-up times (standard deviations of the distribution between \(209\)s and \(339\)s). Nevertheless, it can be observed that re-assignment can reduce the mean as well as the standard deviation of the \(\Delta^{wt}\) distributions from \(124\)s and \(339\)s to \(60\)s and \(209\)s, respectively. \begin{table} \begin{tabular}{c|c||c|c|c|c|c|c|c|c|c} Scenario & Coupled & Served Customers & Fleet KM & Avg. Occupancy & Saved Distance & Fleet Utilization & Avg. Waiting Time & Avg. Travel Time & Avg. (Std.) \(\Delta^{wt}\) & Avg. (Std.) \(\Delta^{tt}\) \\ & & [\%] & [10\({}^{3}\) km] & [Pers/veh] & [\%] & [\%] & [s] & [s] & [s] & [s] \\ \hline \hline \multirow{2}{*}{H:nR} & nC & 94 & 34 & 0.8 & -25 & 54 & 264 & 667 & 2 (24) & 17 (11) \\ & C & 94 & 36 & 0.8 & -37 & 72 & 367 & 875 & 106 (270) & 241 (417) \\ \hline \multirow{2}{*}{P:nR} & nC & 97 & 35 & 1.2 & 7 & 59 & 253 & 809 & 3 (27) & 82 (151) \\ & C & 93 & 40 & 1.0 & -11 & 81 & 396 & 1097 & 124 (339) & 393 (613) \\ \hline \multirow{2}{*}{P:wR} & nC & 98 & 35 & 1.2 & 10 & 57 & 255 & 804 & 2 (72) & 71 (155) \\ & C & 96 & 40 & 1.1 & -8 & 73 & 342 & 1015 & 60 (209) & 306 (465) \\ \end{tabular} \end{table} Table 1: KPIs of simulated scenarios.
Figure 3: Histograms of differences in actual and communicated waiting time \(\Delta^{wt}\).
Similar observations can be made for the distributions of the reliability of travel times \(\Delta^{tt}\). Compared to \(\Delta^{wt}\), this quantity is generally higher for pooling scenarios because new customers can be picked up en route, resulting in additional travel time. Computational times exceeded 15h in some pooling scenarios, run on a single Intel Xeon Silver 2.10 GHz processor. Around 75% of the computational time is spent on routing queries, which cannot be preprocessed in this study. ## 5 Conclusion MoD services have been an active research topic in recent years. Many studies focused on developing control algorithms to supply efficient services. Most of the algorithms apply hard time constraints to limit the search space, but how to deal with dynamic and stochastic travel times, which inevitably lead to violations of these constraints, is hardly discussed. To evaluate these effects, this study presented a coupled framework of a detailed MoD service simulation with a microscopic traffic simulation. Results showed that the combination of inaccurate travel time estimation and damage control strategies for infeasible routes due to violated time constraints deteriorates the performance of MoD services - hailing and pooling - significantly. Also, customers suffer from unreliable pick-up time and travel time estimations. Allowing re-assignments of initial vehicle schedules according to updated system states helps to restore system efficiency and reliability, but only to a minor extent. Overall, this study shows the necessity of further research on 1) the incorporation of more accurate online travel time estimation algorithms that remain applicable under the extensive use of routing queries required by MoD matching algorithms, and 2) more sophisticated damage control algorithms that deal with infeasible routing and assignment solutions caused by flawed travel time estimations. 
## 6 Acknowledgements The authors thank the European Union's Horizon Europe research and innovation programme for providing funding via the project CONDUCTOR under Grant Agreement No 101077049. The authors remain responsible for all findings and opinions presented in the paper.
2308.13357
A topological model for partial equivariance in deep learning and data analysis
In this article, we propose a topological model to encode partial equivariance in neural networks. To this end, we introduce a class of operators, called P-GENEOs, that change data expressed by measurements, respecting the action of certain sets of transformations, in a non-expansive way. If the set of transformations acting is a group, then we obtain the so-called GENEOs. We then study the spaces of measurements, whose domains are subject to the action of certain self-maps, and the space of P-GENEOs between these spaces. We define pseudo-metrics on them and show some properties of the resulting spaces. In particular, we show how such spaces have convenient approximation and convexity properties.
Lucia Ferrari, Patrizio Frosini, Nicola Quercioli, Francesca Tombari
2023-08-25T13:06:13Z
http://arxiv.org/abs/2308.13357v1
# A topological model for partial equivariance in deep learning and data analysis ###### Abstract. In this article, we propose a topological model to encode partial equivariance in neural networks. To this end, we introduce a class of operators, called P-GENEOs, that change data expressed by measurements, respecting the action of certain sets of transformations, in a non-expansive way. If the set of transformations acting is a group, then we obtain the so-called GENEOs. We then study the spaces of measurements, whose domains are subject to the action of certain self-maps, and the space of P-GENEOs between these spaces. We define pseudo-metrics on them and show some properties of the resulting spaces. In particular, we show how such spaces have convenient approximation and convexity properties. 2010 Mathematics Subject Classification: Primary 55N35, Secondary 47H09, 54H15 ## 1. Introduction Over the past decade, several geometric techniques have been incorporated into Deep Learning (DL), giving rise to the new field of Geometric Deep Learning (GDL) ([8, 15, 4]). This geometric approach to deep learning is exploited with a dual purpose. On one hand, geometry provides a common mathematical framework to study neural network architectures. On the other hand, a geometric bias, based on prior knowledge of the data set, can be incorporated into DL models. In this second case, GDL models take advantage of the symmetries imposed by an observer, which encode and elaborate the data. The general blueprint of many deep learning architectures is modelled by group equivariance to encode such properties. If we consider measurements on a data set and a group encoding their symmetries, i.e., transformations taking admissible measurements to admissible measurements (for example, rotation or translation of an image), group equivariance is the property guaranteeing that such symmetries are preserved after applying an operator (e.g., a layer in a neural network) to the observed data. In particular, taking the input measurements \(\Phi\), the output measurements \(\Psi\) and, respectively, their symmetry groups \(G\) and \(H\), the agent \(F\colon\Phi\to\Psi\) is \(T\)-equivariant if \(F(\varphi g)=F(\varphi)T(g)\), for any \(\varphi\) in \(\Phi\) and any \(g\) in \(G\), where \(T\) is a group homomorphism from \(G\) to \(H\). In the theory of Group Equivariant Non-Expansive Operators (GENEOs) ([1, 3, 2, 5, 9, 6, 16, 11]), as in many other GDL models, the collection of all symmetries is represented by a group. In some applications, however, the group axioms do not necessarily hold, since real-world data rarely follow strict mathematical symmetries due to noise, incompleteness or symmetry-breaking features. As an example, we can consider a data set that contains images of digits and the group of rotations as the group acting on it. Rotating an image of the digit '6' by a straight angle returns an image that the user would most likely interpret as '9'. At the same time, we may want to be able to rotate the digit '6' by small angles while preserving its meaning. It is then desirable to extend the theory of GENEOs by relaxing the hypotheses on sets of transformations. The main aim of this article is to give a generalization of the results obtained for GENEOs to a new mathematical framework where the property of equivariance is maintained only for some transformations of the measurements, encoding a partial equivariance with respect to the action of the group of all transformations. 
To this end, we introduce the concept of Partial Group Equivariant Non-Expansive Operator (P-GENEO). In this new model there are some substantial differences with respect to the theory of GENEOs: 1. The user chooses two sets of measurements in input: the one containing the original measurements and another set that encloses the admissible variations of such measurements, defined on the same domain. For example, in the case where the function that represents the digit '6' is being observed, we define an initial space that contains this function and another space that contains certain small rotations of '6', but excludes all the others. 2. Instead of considering a group of transformations we consider a set containing only those that do not change the meaning of our data, i.e., only those associating to each original measurement another one inside the set of its admissible variations. Therefore, by choosing the initial spaces, the user also defines which transformations of the data set, given by right composition, are admissible and which ones are not. 3. We define partial GENEOs, or P-GENEOs, as a generalisation of GENEOs. P-GENEOs are operators that respect the two sets of measurements in input and the set of transformations relating them. The term partial refers to the fact that the set of transformations does not necessarily need to be a group.
Figure 1. Example of a symmetry breaking feature. Applying a rotation \(g\) of \(\pi/4\), the digit '6' preserves its meaning (left). The rotation \(g^{4}\) of \(\pi\) is, instead, not admissible, since it transforms the digit '6' into the digit '9' (right).
With these assumptions in mind we will extend the results proven in [1] and [17] for GENEOs. We will define suitable pseudo-metrics on the spaces of measurements, the set of transformations and the set of non-expansive operators. Building on their induced topological structures, we prove compactness and convexity of the space of P-GENEOs, under the assumption that the function spaces are compact and convex. These are useful properties from a computational point of view. For example, compactness guarantees that the space can be approximated by a finite set. Moreover, convexity allows us to take convex combinations of P-GENEOs in order to generate new ones. ## 2. Related work The main motivation for our work is that observed data rarely follow strict mathematical symmetries. This may be due, for example, to the presence of noise in data measurements. The idea of relaxing the hypothesis of equivariance in GDL and data analysis is not novel, as is shown by the recent increase in the number of publications in this area (see, for example, [19], [20], [21], [22], [10] and [7]). We identify two main ways to transform data via operators that are not strictly equivariant, due to the lack of strict symmetries of the measurements. On one hand, one could define _approximately equivariant_ operators. These are operators for which equivariance holds up to a small perturbation. In this case, given two groups, \(G\) and \(H\), acting on the spaces of measurements \(\Phi\) and \(\Psi\), respectively, and a homomorphism between them, \(T\colon G\to H\), we say that \(F\colon\Phi\to\Psi\) is \(\varepsilon\)-equivariant if, for any \(g\in G\) and for any \(\varphi\in\Phi\), \(\|F(\varphi g)-F(\varphi)T(g)\|_{\infty}\leq\varepsilon\); a numerical sketch of this condition on finite samples is given below. 
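As a hedged, minimal sketch of checking the ε-equivariance condition numerically: the operator `F`, the homomorphism `T`, the measurement set `Phi`, the transformation set `G` and the sample `Y_sample` of the output domain are all assumed to be supplied as plain Python callables and iterables; this is only an illustration of the inequality above, not an implementation from the cited works.

```python
def is_eps_equivariant(F, T, Phi, G, Y_sample, eps):
    """Check sup_y |F(phi g)(y) - (F(phi) T(g))(y)| <= eps for all phi in Phi and g in G,
    where composition is written as juxtaposition (phi g = phi after g), evaluated on
    a finite sample Y_sample of the domain of the output measurements."""
    for phi in Phi:
        for g in G:
            left = F(lambda x, phi=phi, g=g: phi(g(x)))   # F(phi g)
            right = F(phi)
            Tg = T(g)
            if max(abs(left(y) - right(Tg(y))) for y in Y_sample) > eps:
                return False
    return True
```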
Alternatively, when defining operators transforming the measurements of certain data sets, equivariance may be substituted by _partial equivariance_. In this case, equivariance is guaranteed only for a subset of the group acting on the space of measurements, with no guarantee that this subset is a subgroup. Among the previously cited articles about relaxing the property of equivariance in DL, the approach of [10] is closer to an approximate equivariance model. There the authors use a Bayesian approach to introduce an inductive bias in their network that is sensitive to approximate symmetry. The authors of [19], instead, utilize a partial equivariance approach, where a probability distribution is defined and associated with each group convolutional layer of the architecture, and the parameters defining it are either learnt, to achieve equivariance, or partially learnt, to achieve partial equivariance. The importance of choosing equivariance with respect to different acting groups on each layer of the CNN was actually first observed in [22] for the group of Euclidean isometries in \(\mathbb{R}^{2}\). The point of view of this article is closer to the latter. Our P-GENEOs are indeed operators that preserve the action of certain sets ruling the admissibility of the transformations of the measurements of our data sets. Moreover, non-expansiveness plays a crucial role in our model. This is, in fact, the feature allowing us to obtain compactness and approximability in the space of operators, and distinguishing our model from the existing literature on equivariant machine learning. ## 3. Mathematical setting ### Data sets and operations Consider a set \(X\) and the normed vector space \((\mathbb{R}^{X}_{b},\|\cdot\|_{\infty})\), where \(\mathbb{R}^{X}_{b}\) is the space of all bounded real-valued functions on \(X\) and \(\|\cdot\|_{\infty}\) is the usual uniform norm, i.e., for any \(f\in\mathbb{R}^{X}_{b}\), \(\|f\|_{\infty}:=\sup_{x\in X}\lvert f(x)\rvert\). On the set \(X\) the space of transformations is given by elements of \(\operatorname{Aut}(X)\), i.e., the group of bijections from \(X\) to itself. Then, we can consider the right group action \(\mathcal{R}\) defined as follows (we represent composition as a juxtaposition of functions): \[\mathcal{R}\colon\mathbb{R}^{X}_{b}\times\operatorname{Aut}(X)\to\mathbb{R}^{X}_{b},\ \ \ \ (\varphi,s)\mapsto\varphi s.\] _Remark 3.1_.: For every \(s\in\operatorname{Aut}(X)\), the map \(\mathcal{R}_{s}\colon\mathbb{R}^{X}_{b}\to\mathbb{R}^{X}_{b}\), with \(\mathcal{R}_{s}(\varphi):=\varphi s\), preserves the distances. In fact, for any \(\varphi_{1},\varphi_{2}\in\mathbb{R}^{X}_{b}\), by bijectivity of \(s\), we have that \[\|\mathcal{R}_{s}(\varphi_{1})-\mathcal{R}_{s}(\varphi_{2})\|_{\infty} =\sup_{x\in X}\left\lvert\varphi_{1}s(x)-\varphi_{2}s(x)\right\rvert\] \[=\sup_{y\in X}\left\lvert\varphi_{1}(y)-\varphi_{2}(y)\right\rvert\] \[=\|\varphi_{1}-\varphi_{2}\|_{\infty}.\] In our model, the data sets are represented as two sets \(\Phi\) and \(\Phi^{\prime}\) of bounded real-valued measurements on \(X\). In particular, \(X\) represents the space where the measurements can be made, \(\Phi\) is the space of permissible measurements, and \(\Phi^{\prime}\) is a space which \(\Phi\) can be transformed into, without changing the interpretation of its measurements after a transformation is applied. In other words, we want to be able to apply some admissible transformations on the space \(X\), so that the resulting changes in the measurements in \(\Phi\) are contained in the space \(\Phi^{\prime}\). 
Thus, in our model, we consider operations on \(X\) in the following way: **Definition 3.2**.: A \((\Phi,\Phi^{\prime})\)**-operation** is an element \(s\) of \(\operatorname{Aut}(X)\) such that, for any measurement \(\varphi\in\Phi\), the composition \(\varphi s\) belongs to \(\Phi^{\prime}\). The set of all \((\Phi,\Phi^{\prime})\)-operations is denoted by \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). _Remark 3.3_.: We can observe that the identity function \(\operatorname{id}_{X}\) is an element of \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) if and only if \(\Phi\subseteq\Phi^{\prime}\). For any \(s\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\), the restriction to \(\Phi\times\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) of the map \(\mathcal{R}_{s}\) takes values in \(\Phi^{\prime}\) since \(\mathcal{R}_{s}(\varphi):=\varphi s\in\Phi^{\prime}\) for any \(\varphi\in\Phi\). We can consider the restriction of the map \(\mathcal{R}\) (for simplicity, we will continue to use the same symbol to denote this restriction): \[\mathcal{R}\colon\Phi\times\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\to\Phi^ {\prime},\ \ \ \ (\varphi,s)\mapsto\varphi s\] where \(\mathcal{R}(\varphi,s)=\mathcal{R}_{s}(\varphi)\), for every \(s\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) and every \(\varphi\in\Phi\). **Definition 3.4**.: Let \(X\) be a set. A **perception triple** is a triple \((\Phi,\Phi^{\prime},S)\) with \(\Phi,\Phi^{\prime}\subseteq\mathbb{R}_{b}^{X}\) and \(S\subseteq\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). The set \(X\) is called the **domain** of the perception triple and is denoted by \(\operatorname{dom}(\Phi,\Phi^{\prime},S)\). _Example 3.5_.: Given \(X=\mathbb{R}^{2}\), consider two rectangles \(R\) and \(R^{\prime}\) in \(X\). Assume \(\Phi:=\{\varphi\colon X\to[0,1]:\ \operatorname{supp}(\varphi)\subseteq R\}\) and \(\Phi^{\prime}:=\{\varphi^{\prime}\colon X\to[0,1]:\ \operatorname{supp}(\varphi^{\prime}) \subseteq R^{\prime}\}\). We recall that, if we consider a function \(f\colon X\to\mathbb{R}\), the _support_ of \(f\) is the set of points in the domain where the function does not vanish, i.e., \(\operatorname{supp}(f)=\{x\in X\ |\ f(x)\neq 0\}\). Consider \(S\) as the set of translations that bring \(R\) into \(R^{\prime}\). The triple \((\Phi,\Phi^{\prime},S)\) is a perception triple. If \(\Phi\) represents a set of grey level images, \(S\) determines which translations can be applied to our pictures. ### Pseudo-metrics on data sets In our model, data are represented as function spaces, that is, considering a generic set \(X\), sets \(\Omega\subseteq\mathbb{R}_{b}^{X}\) of bounded real-valued functions. We endow the real line \(\mathbb{R}\) with the usual Euclidean metric and the space \(X\) with an extended pseudo-metric induced by \(\Omega\): \[D_{X}^{\Omega}(x_{1},x_{2})=\underset{\omega\in\Omega}{\sup}|\omega(x_{1})- \omega(x_{2})|\] for every \(x_{1},x_{2}\in X\). The choice of this pseudo-metric over X means that two points can only be distinguished if they assume different values for some measurements. For example, if \(\Phi\) contains only a constant function and \(X\) contains at least two points, the distance between any two points of \(X\) is always null. 
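A minimal numerical sketch of the pseudo-metric \(D_{X}^{\Omega}\) for a finite family of measurements given as plain Python callables (the names and sample values are illustrative, not part of the paper's framework):

```python
def d_X_Omega(x1, x2, Omega):
    """D_X^Omega(x1, x2) = sup over omega in Omega of |omega(x1) - omega(x2)|,
    evaluated for a finite family Omega of bounded measurements (callables)."""
    return max(abs(omega(x1) - omega(x2)) for omega in Omega)

# With a single constant measurement, any two points are indistinguishable:
Omega = [lambda x: 1.0]
print(d_X_Omega(0.0, 5.0, Omega))   # 0.0, as noted in the example above
```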
The pseudo-metric space \(X_{\Omega}:=(X,D_{X}^{\Omega})\) can be considered as a topological space with the basis \[\mathcal{B}_{\Omega}=\{B_{\Omega}(x_{0},r)\}_{x_{0}\in X,\ r\in\mathbb{R}^{+}}=\left\{\{x\in X:\ D_{X}^{\Omega}(x,x_{0})<r\}\right\}_{x_{0}\in X,\ r\in\mathbb{R}^{+}},\] and the induced topology is denoted by \(\tau_{\Omega}\). The reason for considering a topological space \(X\), rather than just a set, follows from the need of formalising the assumption that data are stable under small perturbations. _Remark 3.6_.: In our case, there are two collections of functions \(\Phi\) and \(\Phi^{\prime}\) in \(\mathbb{R}^{X}_{b}\) representing our data, both of which induce a topology on \(X\). Hence, in the model, we consider two pseudo-metric spaces \(X_{\Phi}\) and \(X_{\Phi^{\prime}}\) with the same underlying set \(X\). If \(\Phi\subseteq\Phi^{\prime}\subseteq\mathbb{R}^{X}_{b}\), then the topologies \(\tau_{\Phi}\) and \(\tau_{\Phi^{\prime}}\) are comparable and, in particular, \(\tau_{\Phi^{\prime}}\) is finer than \(\tau_{\Phi}\). Now, given a set \(\Omega\subseteq\mathbb{R}^{X}_{b}\), we will prove a result about the compactness of the pseudo-metric space \(X_{\Omega}\). Before proceeding, let us recall the following lemma (e.g., see [13]): **Lemma 3.7**.: _Let \((P,d)\) be a pseudo-metric space. The following conditions are equivalent:_ (1) _P is totally bounded;_ (2) _every sequence in P admits a Cauchy subsequence._ **Theorem 3.8**.: _If \(\Omega\) is totally bounded, then \(X_{\Omega}\) is totally bounded._ Proof.: By Lemma 3.7 it will suffice to prove that every sequence in \(X\) admits a Cauchy subsequence with respect to the pseudo-metric \(D^{\Omega}_{X}\). Consider a sequence \((x_{i})_{i\in\mathbb{N}}\) in \(X_{\Omega}\) and take a real number \(\varepsilon>0\). Since \(\Omega\) is totally bounded, we can find a finite subset \(\Omega_{\varepsilon}=\{\omega_{1},\dots,\omega_{n}\}\) such that for every \(\omega\in\Omega\) there exists \(\omega_{r}\in\Omega_{\varepsilon}\) for which \(\|\omega-\omega_{r}\|_{\infty}<\varepsilon\). We can now consider the real sequence \((\omega_{1}(x_{i}))_{i\in\mathbb{N}}\), which is bounded since \(\Omega\subseteq\mathbb{R}^{X}_{b}\). From the Bolzano-Weierstrass Theorem it follows that we can extract a convergent subsequence \((\omega_{1}(x_{i_{h}}))_{h\in\mathbb{N}}\). Again, we can extract from \((\omega_{2}(x_{i_{h}}))_{h\in\mathbb{N}}\) another convergent subsequence \((\omega_{2}(x_{i_{h_{t}}}))_{t\in\mathbb{N}}\). Repeating the process, we are able to extract a subsequence of \((x_{i})_{i\in\mathbb{N}}\), that for simplicity of notation we can indicate as \((x_{i_{j}})_{j\in\mathbb{N}}\), such that \((\omega_{k}(x_{i_{j}}))_{j\in\mathbb{N}}\) is a convergent subsequence in \(\mathbb{R}\), and hence a Cauchy sequence in \(\mathbb{R}\), for every \(k\in\{1,\dots,n\}\). 
By construction, \(\Omega_{\varepsilon}\) is finite; hence we can find an index \(\bar{\jmath}\) such that for any \(k\in\{1,\dots,n\}\) \[|\omega_{k}(x_{i_{\ell}})-\omega_{k}(x_{i_{m}})|\leq\varepsilon,\quad\text{for every}\quad\ell,m\geq\bar{\jmath}.\] Furthermore we have that, for any \(\omega\in\Omega\), any \(\omega_{k}\in\Omega_{\varepsilon}\) and any \(\ell,m\in\mathbb{N}\) \[|\omega(x_{i_{\ell}})-\omega(x_{i_{m}})| \leq|\omega(x_{i_{\ell}})-\omega_{k}(x_{i_{\ell}})|+|\omega_{k}(x_{i_{\ell}})-\omega_{k}(x_{i_{m}})|+|\omega_{k}(x_{i_{m}})-\omega(x_{i_{m}})|\] \[\leq\|\omega-\omega_{k}\|_{\infty}+|\omega_{k}(x_{i_{\ell}})-\omega_{k}(x_{i_{m}})|+\|\omega_{k}-\omega\|_{\infty}.\] We observe that the choice of \(\bar{\jmath}\) depends only on \(\varepsilon\) and \(\Omega_{\varepsilon}\), not on \(\omega\). Then, choosing an \(\omega_{k}\in\Omega_{\varepsilon}\) such that \(\|\omega_{k}-\omega\|_{\infty}<\varepsilon\), we get \(|\omega(x_{i_{\ell}})-\omega(x_{i_{m}})|<3\varepsilon\) for every \(\omega\in\Omega\) and every \(\ell,m\geq\bar{\jmath}\). Then, \[D^{\Omega}_{X}(x_{i_{\ell}},x_{i_{m}})=\sup_{\omega\in\Omega}|\omega(x_{i_{\ell}})-\omega(x_{i_{m}})|<3\varepsilon\quad\text{for every}\quad\ell,m\geq\bar{\jmath}.\] Then \((x_{i_{j}})_{j\in\mathbb{N}}\) is a Cauchy sequence in \(X_{\Omega}\). By Lemma 3.7 the statement holds. **Corollary 3.9**.: _If \(\Omega\) is totally bounded and \(X_{\Omega}\) is complete, then \(X_{\Omega}\) is compact._ Proof.: From Theorem 3.8 we have that \(X_{\Omega}\) is totally bounded and, since by hypothesis it is also complete, it is compact. Now, we will prove that the choice of the pseudo-metric \(D_{X}^{\Omega}\) on \(X\) makes the functions in \(\Omega\) non-expansive. **Definition 3.10**.: Consider two pseudo-metric spaces \((P,d_{P})\) and \((Q,d_{Q})\). A function \(f\colon P\to Q\) is **non-expansive** if \(d_{Q}(f(p_{1}),f(p_{2}))\leq d_{P}(p_{1},p_{2})\) for any \(p_{1},p_{2}\in P\). We denote by \(\textbf{NE}(P,Q)\) the space of all non-expansive functions from \((P,d_{P})\) to \((Q,d_{Q})\). **Proposition 3.11**.: \(\Omega\subseteq\textbf{NE}(X_{\Omega},\mathbb{R})\)_._ Proof.: For any \(x_{1},x_{2}\in X\) we have that \[|\omega(x_{1})-\omega(x_{2})|\leq\sup_{\omega\in\Omega}|\omega(x_{1})-\omega(x_{2})|=D_{X}^{\Omega}(x_{1},x_{2}).\] Then, the topology on \(X\) induced by \(D_{X}^{\Omega}\) naturally makes the measurements in \(\Omega\) continuous. In particular, since the previous results hold for a generic \(\Omega\subseteq\mathbb{R}_{b}^{X}\), they are also true for \(\Phi\) and \(\Phi^{\prime}\) in our model. _Remark 3.12_.: Assume that \((\Phi,\Phi^{\prime},S)\) is a perception triple. A function \(\varphi^{\prime}\in\Phi^{\prime}\) may not be continuous from \(X_{\Phi}\) to \(\mathbb{R}\) and a function \(\varphi\in\Phi\) may not be continuous from \(X_{\Phi^{\prime}}\) to \(\mathbb{R}\). In other words, the topology on \(X\) induced by the pseudo-metric of one of the function spaces does not make the functions in the other continuous. 
_Example 3.13_.: Assume \(X=\mathbb{R}\) and for every \(a,b\in\mathbb{R}\) consider the functions \(\varphi_{a}\colon X\to\mathbb{R}\) and \(\varphi_{b}^{\prime}\colon X\to\mathbb{R}\) defined by setting \[\varphi_{a}(x)=\begin{cases}0&\text{ if }x\geq a\\ 1&\text{ otherwise}\end{cases},\quad\quad\quad\varphi_{b}^{\prime}(x)=\begin{cases} 0&\text{ if }x\leq b\\ 1&\text{ otherwise}\end{cases}.\] Suppose \(\Phi:=\{\varphi_{a}:a\geq 0\}\) and \(\Phi^{\prime}:=\{\varphi_{b}^{\prime}:b\leq 0\}\), and consider the symmetry with respect to the y-axis, i.e., the map \(s(x)=-x\). Surely, \(s\in\text{Aut}_{\Phi,\Phi^{\prime}}(X)\). We can observe that the function \(\varphi_{1}\in\Phi\) is not continuous from \(X_{\Phi}^{\prime}\) to \(\mathbb{R}\); indeed \(D_{X}^{\Phi^{\prime}}(0,2)=0\), but \(|\varphi_{1}(0)-\varphi_{1}(2)|=1\). However, if \(\Phi\subseteq\Phi^{\prime}\), we have that the functions in \(\Phi\) are also continuous on \(X_{\Phi^{\prime}}\), indeed: **Corollary 3.14**.: _If \(\Phi\subseteq\Phi^{\prime}\), then \(\Phi\subseteq\textbf{NE}(X_{\Phi^{\prime}},\mathbb{R})\)._ Proof.: By Proposition 3.11 the statement trivially holds since \(\Phi\subseteq\Phi^{\prime}\subseteq\textbf{NE}(X_{\Phi^{\prime}},\mathbb{R})\). ### Pseudo-metrics on the space of operations **Proposition 3.15**.: _Every element of \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) is non-expansive from \(X_{\Phi^{\prime}}\) to \(X_{\Phi}\)._ Proof.: Considering a bijection \(s\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) we have that \[D_{X}^{\Phi}(s(x_{1}),s(x_{2})) =\sup_{\varphi\in\Phi}|\varphi s(x_{1})-\varphi s(x_{2})|\] \[=\sup_{\varphi\in\Phi s}|\varphi(x_{1})-\varphi(x_{2})|\] \[\leq\sup_{\varphi^{\prime}\in\Phi^{\prime}}|\varphi^{\prime}(x_{1 })-\varphi^{\prime}(x_{2})|=D_{X}^{\Phi^{\prime}}(x_{1},x_{2})\] for every \(x_{1},x_{2}\in X\), where \(\Phi s=\{\varphi s,\varphi\in\Phi\}\). Then, \(s\in\mathbf{NE}(X_{\Phi^{\prime}},X_{\Phi})\) and the statement is proved. Now we are ready to put more structure on \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). Considering a set \(\Omega\subseteq\mathbb{R}_{b}^{X}\) of bounded real-valued functions, we can endow the set \(\operatorname{Aut}(X)\) with a pseudo-metric inherited from \(\Omega\): \[D_{\operatorname{Aut}}^{\Omega}(s_{1},s_{2}):=\sup_{\omega\in\Omega}\lVert \omega s_{1}-\omega s_{2}\rVert_{\infty}\] for any \(s_{1},s_{2}\) in \(\operatorname{Aut}(X)\). _Remark 3.16_.: Analogously to what happens in Remark 3.6 for \(X\), the sets \(\Phi\) and \(\Phi^{\prime}\) can endow \(\operatorname{Aut}(X)\) with two possibly different pseudo-metrics \(D_{\operatorname{Aut}}^{\Phi}\) and \(D_{\operatorname{Aut}}^{\Phi^{\prime}}\). In particular, we can consider \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) as a pseudo-metric subspace of \(\operatorname{Aut}(X)\) with the induced pseudo-metrics. _Remark 3.17_.: We observe that, for any \(s_{1},s_{2}\) in \(\operatorname{Aut}(X)\), \[D_{\operatorname{Aut}}^{\Omega}(s_{1},s_{2}) :=\sup_{\omega\in\Omega}\lVert\omega s_{1}-\omega s_{2}\rVert_{\infty}\] \[=\sup_{x\in X}\sup_{\omega\in\Omega}|\omega(s_{1}(x))-\omega(s_{2 }(x))| \tag{3.3.1}\] \[=\sup_{x\in X}D_{X}^{\Omega}(s_{1}(x),s_{2}(x)).\] In other words, the pseudo-metric \(D_{\operatorname{Aut}}^{\Omega}\), which is based on the action of the elements of \(\operatorname{Aut}(X)\) on the set \(\Omega\), is exactly the usual uniform pseudo-metric on \(X_{\Omega}\). 
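Mirroring the sketch for \(D_{X}^{\Omega}\) given earlier, \(D_{\operatorname{Aut}}^{\Omega}\) can be approximated on a finite sample of \(X\); the callables and the sample are again illustrative assumptions.

```python
def d_Aut_Omega(s1, s2, Omega, X_sample):
    """D_Aut^Omega(s1, s2) = sup over omega of ||omega s1 - omega s2||_inf; by
    Remark 3.17 this equals sup over x of D_X^Omega(s1(x), s2(x)), here
    approximated on a finite sample of X."""
    return max(abs(omega(s1(x)) - omega(s2(x))) for omega in Omega for x in X_sample)
```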
### The space of operations Since we are only interested in transformations of functions in \(\Phi\), it would be natural to just endow \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) with the pseudo-metric \(D_{\operatorname{Aut}}^{\Phi}\). However, it is sometimes necessary to consider the pseudo-metric \(D_{\operatorname{Aut}}^{\Phi^{\prime}}\) in order to guarantee the continuity of the composition of elements in \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\), whenever it is admissible. Consider two elements \(s,t\) in \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) such that \(st\) is still an element of \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\), i.e., for every function \(\varphi\in\Phi\) we have that \(\varphi st\in\Phi^{\prime}\). Then, for any \(\varphi\in\Phi\) we have that \[\varphi^{\prime}:=\varphi s\in\Phi s\subseteq\Phi^{\prime},\quad\varphi^{\prime }t\in\Phi^{\prime}.\] Therefore, \(t\) is also an element of \(\operatorname{Aut}_{\Phi s,\Phi^{\prime}}(X)\). By definition \(\Phi s\) is contained in \(\Phi^{\prime}\) for every \(s\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) and this justifies the choice of considering in \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) also the pseudo-metric \(D^{\Phi^{\prime}}_{\operatorname{Aut}}\). We have shown in particular that if \(s,t\) are elements of \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) such that \(st\) is still an element of \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\), then \(t\) is an element of \(\operatorname{Aut}_{\Phi s,\Phi^{\prime}}(X)\), which is an implication of the following proposition: **Proposition 3.18**.: _Let \(s,t\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). Then \(st\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) if and only if \(t\in\operatorname{Aut}_{\Phi s,\Phi^{\prime}}(X)\)._ Proof.: If the composition \(st\) belongs to \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\), we have already proved that \(t\in\operatorname{Aut}_{\Phi s,\Phi^{\prime}}(X)\). On the other hand, if \(t\in\operatorname{Aut}_{\Phi s,\Phi^{\prime}}(X)\) we have that \(\bar{\varphi}t\in\Phi^{\prime}\) for every \(\bar{\varphi}\in\Phi s\). Since \(\varphi(st)=(\varphi s)t\), it follows that \(\varphi(st)\in\Phi^{\prime}\) for every \(\varphi\in\Phi\). Therefore, \(st\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) and the statement is proved. _Remark 3.19_.: Let \(t\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). We can observe that if \(s\in\operatorname{Aut}_{\Phi}(X)\), then \(\Phi s\subseteq\Phi\) and \(st\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). **Lemma 3.20**.: _Consider \(r,s,t\in\operatorname{Aut}(X)\). For any \(\Omega\subseteq\mathbb{R}_{b}^{X}\), it holds that_ \[D^{\Omega}_{\operatorname{Aut}}(rt,st)=D^{\Omega}_{\operatorname{Aut}}(r,s).\] Proof.: Since \(\mathcal{R}_{t}\) preserves the distances, we have that: \[D^{\Omega}_{\operatorname{Aut}}(rt,st) :=\sup_{\omega\in\Omega}\|\omega rt-\omega st\|_{\infty}\] \[=\sup_{\omega\in\Phi}\|\omega r-\omega s\|_{\infty}\] \[=D^{\Omega}_{\operatorname{Aut}}(r,s).\] **Lemma 3.21**.: _Consider \(r,s\in\operatorname{Aut}(X)\) and \(t\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). 
It holds that_ \[D^{\Phi}_{\operatorname{Aut}}(tr,ts)\leq D^{\Phi^{\prime}}_{\operatorname{Aut }}(r,s).\] Proof.: Since \(\Phi t\subseteq\Phi^{\prime}\), we have that: \[D^{\Phi}_{\mathrm{Aut}}(tr,ts) =\sup_{\varphi\in\Phi}\|\varphi tr-\varphi ts\|_{\infty}\] \[=\sup_{\varphi^{\prime}\in\Phi t}\|\varphi^{\prime}r-\varphi^{ \prime}s\|_{\infty}\] \[\leq\sup_{\varphi^{\prime}\in\Phi^{\prime}}\|\varphi^{\prime}r- \varphi^{\prime}s\|_{\infty}\] \[=D^{\Phi^{\prime}}_{\mathrm{Aut}}(r,s).\] Let \(\Pi\) be the set of all pairs \((s,t)\) such that \(s,t,st\in\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). We endow \(\Pi\) with the pseudo-metric \[D_{\Pi}((s_{1},t_{1}),(s_{2},t_{2})):=D^{\Phi}_{\mathrm{Aut}}(s_{1},s_{2})+D^{ \Phi^{\prime}}_{\mathrm{Aut}}(t_{1},t_{2})\] and the corresponding topology. **Proposition 3.22**.: _The function \(\circ\colon\Pi\to(\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X),D^{\Phi}_{\mathrm{Aut}})\) that maps \((s,t)\) to \(st\) is non-expansive, and hence continuous._ Proof.: Consider two elements \((s_{1},t_{1}),(s_{2},t_{2})\) of \(\Pi\). By Lemma 3.20 and Lemma 3.21, \[D^{\Phi}_{\mathrm{Aut}}(s_{1}t_{1},s_{2}t_{2}) \leq D^{\Phi}_{\mathrm{Aut}}(s_{1}t_{1},s_{2}t_{1})+D^{\Phi}_{ \mathrm{Aut}}(s_{2}t_{1},s_{2}t_{2})\] \[\leq D^{\Phi}_{\mathrm{Aut}}(s_{1},s_{2})+D^{\Phi^{\prime}}_{ \mathrm{Aut}}(t_{1},t_{2})\] \[=D_{\Pi}((s_{1},t_{1}),(s_{2},t_{2})).\] Therefore, the statement is proved. Let \(\Upsilon\) be the set of all \(s\) with \(s,s^{-1}\in\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). **Proposition 3.23**.: _The function \((\cdot)^{-1}\colon(\Upsilon,D^{\Phi^{\prime}}_{\mathrm{Aut}})\to(\mathrm{Aut}_ {\Phi,\Phi^{\prime}}(X),D^{\Phi}_{\mathrm{Aut}})\), that maps \(s\) to \(s^{-1}\), is non-expansive, and hence continuous._ Proof.: Consider two bijections \(s_{1},s_{2}\in\Upsilon\). Because of Lemma 3.20 and Lemma 3.21, we obtain that \[D^{\Phi}_{\mathrm{Aut}}(s_{1}^{-1},s_{2}^{-1}) =D^{\Phi}_{\mathrm{Aut}}(s_{1}^{-1}s_{2},s_{2}^{-1}s_{2})\] \[=D^{\Phi}_{\mathrm{Aut}}(s_{1}^{-1}s_{2},\mathrm{id}_{X})\] \[=D^{\Phi}_{\mathrm{Aut}}(s_{1}^{-1}s_{2},s_{1}^{-1}s_{1})\] \[\leq D^{\Phi^{\prime}}_{\mathrm{Aut}}(s_{2},s_{1})=D^{\Phi^{ \prime}}_{\mathrm{Aut}}(s_{1},s_{2}).\] We have previously defined the map \[\mathcal{R}\colon\Phi\times\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\to\Phi^{ \prime},\ \ \ \ (\varphi,s)\mapsto\varphi s\] where \(\mathcal{R}(\Phi,s)=\mathcal{R}_{s}(\Phi)\), for every \(s\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). **Proposition 3.24**.: _The function \(\mathcal{R}\) is continuous, by choosing the pseudo-metric \(D^{\Phi}_{\operatorname{Aut}}\) on \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\)._ Proof.: We have that \[\|\mathcal{R}(\varphi,t)-\mathcal{R}(\overline{\varphi},s)\|_{\infty} =\|\varphi t-\overline{\varphi}s\|_{\infty}\] \[\leq\|\varphi t-\varphi s\|_{\infty}+\|\varphi s-\overline{ \varphi}s\|_{\infty}\] \[=\|\varphi t-\varphi s\|_{\infty}+\|\varphi-\overline{\varphi}\|_ {\infty}\] \[\leq D^{\Phi}_{\operatorname{Aut}}(t,s)+\|\varphi-\overline{ \varphi}\|_{\infty}\] for any \(\varphi,\overline{\varphi}\in\Phi\) and any \(t,s\in\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\). This proves that \(\mathcal{R}\) is continuous. Now, we can give a result about the compactness of \((\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X),D^{\Phi}_{\operatorname{Aut}})\), under suitable assumptions. 
**Proposition 3.25**.: _If \(\Phi\) and \(\Phi^{\prime}\) are totally bounded, then \((\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X),D^{\Phi}_{\operatorname{Aut}})\) is totally bounded._ Proof.: Consider a sequence \((s_{i})_{i\in\mathbb{N}}\) in \(\operatorname{Aut}_{\Phi,\Phi^{\prime}}(X)\) and a real number \(\varepsilon>0\). Since \(\Phi\) is totally bounded, we can find a finite subset \(\Phi_{\varepsilon}=\{\varphi_{1},\ldots,\varphi_{n}\}\) such that for every \(\varphi\in\Phi\) there exists \(\varphi_{r}\in\Phi\) for which \(\|\varphi-\varphi_{r}\|_{\infty}<\varepsilon\). Now, consider the sequence \((\varphi_{1}s_{i})_{i\in\mathbb{N}}\) in \(\Phi^{\prime}\). Since also \(\Phi^{\prime}\) is totally bounded, from Lemma 3.7 it follows that we can extract a Cauchy subsequence \((\varphi_{1}s_{i_{h}})_{h\in\mathbb{N}}\). Again, we can extract another Cauchy subsequence \((\varphi_{2}s_{i_{h_{t}}})_{t\in\mathbb{N}}\). Repeating the process for every \(k\in\{1,\ldots,n\}\), we are able to extract a subsequence of \((s_{i})_{i\in\mathbb{N}}\), that for simplicity of notation we can indicate as \((s_{i_{j}})_{j\in\mathbb{N}}\), such that \((\varphi_{k}s_{i_{j}})_{j\in\mathbb{N}}\) is a Cauchy sequence in \(\Phi^{\prime}\) for every \(k\in\{1,\ldots,n\}\). By definition \(\Phi_{\varepsilon}\) is finite, then we can find an index \(\bar{\jmath}\) such that for any \(k\in\{1,\ldots,n\}\) \[\|\varphi_{k}s_{i_{\ell}}-\varphi_{k}s_{i_{m}}\|_{\infty}\leq\varepsilon,\ \ \ \text{for every}\ \ \ \ell,m\geq\bar{\jmath}. \tag{3.4.1}\] Furthermore we have that, for any \(\varphi\in\Phi\), any \(\varphi_{k}\in\Phi_{\varepsilon}\) and any \(\ell,m\in\mathbb{N}\) \[\|\varphi s_{i_{\ell}}-\varphi s_{i_{m}}\|_{\infty} \leq\|\varphi s_{i_{\ell}}-\varphi_{k}s_{i_{\ell}}\|_{\infty}+\| \varphi_{k}s_{i_{\ell}}-\varphi_{k}s_{i_{m}}\|_{\infty}+\|\varphi_{k}s_{i_{m}}- \varphi s_{i_{m}}\|_{\infty}\] \[=\|\varphi-\varphi_{k}\|_{\infty}+\|\varphi_{k}s_{i_{\ell}}- \varphi_{k}s_{i_{m}}\|_{\infty}+\|\varphi_{k}-\varphi\|_{\infty}.\] We observe that the choice of \(\bar{\jmath}\) in (3.4.1) depends only on \(\varepsilon\) and \(\Phi_{\varepsilon}\), not on \(\varphi\). Then, choosing a \(\varphi_{k}\in\Phi_{\varepsilon}\) such that \(\|\varphi_{k}-\varphi\|_{\infty}<\varepsilon\), we get \(\|\varphi s_{i_{\ell}}-\varphi s_{i_{m}}\|_{\infty}<3\varepsilon\) for every \(\varphi\in\Phi\) and every \(\ell,m\geq\bar{\jmath}\). Hence, for every \(\ell,m\in\mathbb{N}\) \[D^{\Phi}_{\mathrm{Aut}}(s_{i_{\ell}},s_{i_{m}})=\sup_{\varphi\in\Phi}\|\varphi s_ {i_{\ell}}-\varphi s_{i_{m}}\|_{\infty}<3\varepsilon\] Therefore \((s_{i_{j}})_{j\in\mathbb{N}}\) is a Cauchy sequence in \(\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). For Lemma 3.7 the statement holds. **Corollary 3.26**.: _Assume that \(S\subseteq\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). If \(\Phi\) and \(\Phi^{\prime}\) are totally bounded and \((S,D^{\Phi}_{\mathrm{Aut}})\) is complete, then it is also compact._ Proof.: From Proposition 3.25 we have that \(S\) is totally bounded and since by hypothesis it is also complete, the statement holds. ## 4. The space of P-GENEOs In this section we introduce the concept of Partial Group Equivariant Non-Expansive Operator (P-GENEO). P-GENEOs allow us to transform data sets, preserving symmetries and distances and maintaining the acceptability conditions of the transformations. 
We will also describe some topological results about the structure of the space of P-GENEOs and some techniques used for defining new P-GENEOs in order to populate the space of P-GENEOs. **Definition 4.1**.: Let \(X,Y\) be sets and \((\Phi,\Phi^{\prime},S)\), \((\Psi,\Psi^{\prime},Q)\) be perception triples with domains \(X\) and \(Y\), respectively. Consider a triple of functions \((F,F^{\prime},T)\) with the following properties: * \(F\colon\Phi\to\Psi\), \(F^{\prime}\colon\Phi^{\prime}\to\Psi^{\prime}\), \(T\colon S\to Q\); * for any \(s,t\in S\) such that \(st\in S\) it holds that \(T(st)=T(s)T(t)\); * for any \(s\in S\) such that \(s^{-1}\in S\) it holds that \(T(s^{-1})=T(s)^{-1}\); * \((F,F^{\prime},T)\) is _equivariant_, i.e., \(F^{\prime}(\varphi s)=F(\varphi)T(s)\) for every \(\varphi\in\Phi\), \(s\in S\). The triple \((F,F^{\prime},T)\) is called a **perception map** or a **Partial Group Equivariant Operator (P-GEO)** from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},Q)\). In Remark 3.3 we observed that \(\mathrm{id}_{X}\in\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\) if and only if \(\Phi\subseteq\Phi^{\prime}\). Then we can consider a perception triple \((\Phi,\Phi^{\prime},S)\) with \(\Phi\subseteq\Phi^{\prime}\) and \(\mathrm{id}_{X}\in S\subseteq\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). Now we will show how a P-GEO from this perception triple behaves. **Lemma 4.2**.: _Consider two perception triples \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},Q)\) with domains \(X\) and \(Y\), respectively, and with \(\mathrm{id}_{X}\in S\subseteq\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). Let \((F,F^{\prime},T)\) be a P-GEO from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},Q)\). Then \(\Psi\subseteq\Psi^{\prime}\) and \(\mathrm{id}_{Y}\in Q\subseteq\mathrm{Aut}_{\Psi,\Psi^{\prime}}(Y)\)._ Proof.: Since \((F,F^{\prime},T)\) is a P-GEO, by definition, we have that, for any \(s,t\in S\) such that \(st\in S\), \(T(st)=T(s)T(t)\). Since \(\mathrm{id}_{X}\in S\), then \[T(\mathrm{id}_{X})=T(\mathrm{id}_{X}\mathrm{id}_{X})=T(\mathrm{id}_{X})T( \mathrm{id}_{X})\] and hence \(T(\mathrm{id}_{X})=\mathrm{id}_{Y}\in Q\subseteq\mathrm{Aut}_{\Psi,\Psi^{ \prime}}(X)\). Moreover, for Remark 3.3, we have that \(\Psi\subseteq\Psi^{\prime}\). **Proposition 4.3**.: _Consider two perception triples \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},Q)\) with domains \(X\) and \(Y\), respectively, and with \(\mathrm{id}_{X}\in S\subseteq\mathrm{Aut}_{\Phi,\Phi^{\prime}}(X)\). Let \((F,F^{\prime},T)\) be a P-GEO from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},Q)\). Then \(F^{\prime}|_{\Phi}=F\)._ Proof.: Since \((F,F^{\prime},T)\) is a P-GEO, it is equivariant and by Lemma 4.2 we have that \[F^{\prime}(\varphi)=F^{\prime}(\varphi\mathrm{id}_{X})=F(\varphi)T(\mathrm{id} _{X})=F(\varphi)\mathrm{id}_{Y}=F(\varphi)\] for every \(\varphi\in\Phi\). **Definition 4.4**.: Assume that \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},Q)\) are perception triples. 
If \((F,F^{\prime},T)\) is a perception map from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},Q)\) and \(F\), \(F^{\prime}\) are non-expansive, i.e., \[\|F(\varphi_{1})-F(\varphi_{2})\|_{\infty} \leq\|\varphi_{1}-\varphi_{2}\|_{\infty},\] \[\|F^{\prime}(\varphi_{1}^{\prime})-F^{\prime}(\varphi_{2}^{\prime })\|_{\infty} \leq\|\varphi_{1}^{\prime}-\varphi_{2}^{\prime}\|_{\infty}\] for every \(\varphi_{1},\varphi_{2}\in\Phi\), \(\varphi_{1}^{\prime},\varphi_{2}^{\prime}\in\Phi^{\prime}\), then \((F,F^{\prime},T)\) is called a **Partial Group Equivariant Non-Expansive Operator (P-GENEO)**. In other words, a P-GENEO is a triple \((F,F^{\prime},T)\) such that \(F,F^{\prime}\) are non-expansive and the following diagram commutes for every \(s\in S\) _Remark 4.5_.: We can observe that a GENEO (see [1]) can be represented as a special case of P-GENEO, considering two perception triples \((\Phi,\Phi^{\prime},S)\), \((\Psi,\Psi^{\prime},Q)\) such that \(\Phi=\Phi^{\prime}\), \(\Psi=\Psi^{\prime}\), and the subsets containing the invariant transformations \(S\) and \(Q\) are groups (and then the map \(T\colon S\to Q\) is a homomorphism). In this setting, a P-GENEO \((F,F^{\prime},T)\) is a triple where the operators \(F\), \(F^{\prime}\) are equal to each other (because of Proposition 4.3) and the map \(T\) is a homomorphism. Hence, instead of the triple, we can simply write the pair \((F,T)\), that is a GENEO. Considering two perception triples, we typically want to study the space of all P-GENEOs between them with the map \(T\) fixed. Therefore, when the map \(T\) is fixed and specified, we will simply consider pairs of operators \((F,F^{\prime})\) instead of triples \((F,F^{\prime},T)\), and we say that \((F,F^{\prime})\) is a P-GENEO _associated with_ or _with respect to_ the map \(T\). Moreover, in this case we indicate the property of equivariance of the triple \((F,F^{\prime},T)\) writing that the pair \((F,F^{\prime})\) is \(T\)-_equivariant_. _Example 4.6_.: Let \(X=\mathbb{R}^{2}\). Take a real number \(\ell>0\). In \(X\) consider the square \(Q_{1}:=[0,\ell]\times[0,\ell]\), and its translation \(s_{a}\) of a vector \(a=(a_{1},a_{2})\in\mathbb{R}^{2}\)\(Q^{\prime}_{1}:=[a_{1},\ell+a_{1}]\times[a_{2},\ell+a_{2}]\). Analogously, let us consider a real number \(0<\varepsilon<\ell\) and two squares inside \(Q_{1}\) and \(Q^{\prime}_{1}\), \(Q_{2}:=[\varepsilon,\ell-\varepsilon]\times[\varepsilon,\ell-\varepsilon]\) and \(Q^{\prime}_{2}:=[a_{1}+\varepsilon,\ell+a_{1}-\varepsilon]\times[a_{2}+ \varepsilon,\ell+a_{2}-\varepsilon]\), as in Figure 2. Consider the following function spaces in \(\mathbb{R}^{X}_{b}\): \[\Phi :=\{\varphi\colon X\to\mathbb{R}\ |\ \mathrm{supp}(\varphi) \subseteq Q_{1}\}\] \[\Phi^{\prime} :=\{\varphi^{\prime}\colon X\to\mathbb{R}\ |\ \mathrm{supp}(\varphi^{\prime}) \subseteq Q^{\prime}_{1}\}\] \[\Psi :=\{\psi\colon X\to\mathbb{R}\ |\ \mathrm{supp}(\psi)\subseteq Q_{2}\}\] \[\Psi^{\prime} :=\{\psi^{\prime}\colon X\to\mathbb{R}\ |\ \mathrm{supp}(\psi^{\prime}) \subseteq Q^{\prime}_{2}\}.\] Let \(S:=\{s_{a}^{-1}\}\), where \(s\) is the translation by the vector \(a=(a_{1},a_{2})\). The triples \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},S)\) are perception triples. This example could model the translation of two nested grey-scale images. We want to build now an operator between these images in order to obtain a transformation that commutes with the selected translation. We can consider the triple of functions \((F,F^{\prime},T)\) defined as follows. 
\(F\colon\Phi\to\Psi\) is the operator that maintains the output of functions in \(\Phi\) at points of \(Q_{2}\) and sets them to zero outside it; analogously \(F^{\prime}\colon\Phi^{\prime}\to\Psi^{\prime}\) is the operator that maintains the output of functions in \(\Phi^{\prime}\) at points of \(Q^{\prime}_{2}\) and sets them to zero outside it; and \(T=\mathrm{id}_{S}\). Therefore, the triple \((F,F^{\prime},T)\) is a P-GENEO from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},S)\). It turns out that the maps are non-expansive and the equivariance holds: \[F^{\prime}(\varphi s_{a}^{-1})=F(\varphi)T(s_{a}^{-1})=F(\varphi)s_{a}^{-1}\] for any \(\varphi\in\Phi\).

Figure 2. Squares used in Example 4.6.

From the point of view of application, we are considering two square images and their translations and we apply an operator that 'cuts' the images, taking into account only the part of the image that interests the observer. This example justifies the definition of P-GENEO as a triple of operators \((F,F^{\prime},T)\), without requiring \(F\) and \(F^{\prime}\) to be equal in the possibly non-empty intersection of their domains. In fact, if \(\varphi\) is a function contained in \(\Phi\cap\Phi^{\prime}\), its image via \(F\) and \(F^{\prime}\) may be different.

### Methods to construct P-GENEOs

Starting from a finite number of P-GENEOs, we will illustrate some methods to construct new P-GENEOs. First of all, the composition of two P-GENEOs is still a P-GENEO.

**Proposition 4.7**.: _Given two composable P-GENEOs, \((F_{1},F_{1}^{\prime},T_{1})\colon(\Phi,\Phi^{\prime},S)\to(\Psi,\Psi^{\prime},Q)\) and \((F_{2},F_{2}^{\prime},T_{2})\colon(\Psi,\Psi^{\prime},Q)\to(\Omega,\Omega^{\prime},K)\), their composition defined as_ \[(F,F^{\prime},T):=(F_{2}\circ F_{1},F_{2}^{\prime}\circ F_{1}^{\prime},T_{2}\circ T_{1})\colon(\Phi,\Phi^{\prime},S)\to(\Omega,\Omega^{\prime},K)\] _is a P-GENEO._

Proof.: First, one could easily check that the map \(T=T_{2}\circ T_{1}\) respects the second and the third property of Definition 4.1. Therefore, it remains to verify that \(F(\Phi)\subseteq\Omega\), \(F^{\prime}(\Phi^{\prime})\subseteq\Omega^{\prime}\) and that the properties of equivariance and non-expansiveness are maintained.

1. Since \(F_{1}(\Phi)\subseteq\Psi\) and \(F_{2}(\Psi)\subseteq\Omega\), then we have that \(F(\Phi)=(F_{2}\circ F_{1})(\Phi)=F_{2}(F_{1}(\Phi))\subseteq F_{2}(\Psi)\subseteq\Omega\). Analogously, \(F^{\prime}(\Phi^{\prime})\subseteq\Omega^{\prime}\).
2. Since \((F_{1},F_{1}^{\prime},T_{1})\) and \((F_{2},F_{2}^{\prime},T_{2})\) are equivariant, then \((F,F^{\prime},T)\) is equivariant. Indeed, for every \(\varphi\in\Phi\) we have that \[F^{\prime}(\varphi s)=(F_{2}^{\prime}\circ F_{1}^{\prime})(\varphi s)=F_{2}^{\prime}(F_{1}^{\prime}(\varphi s))=F_{2}^{\prime}(F_{1}(\varphi)T_{1}(s))=F_{2}(F_{1}(\varphi))T_{2}(T_{1}(s))=(F_{2}\circ F_{1})(\varphi)(T_{2}\circ T_{1})(s)=F(\varphi)T(s).\]
3. Since \(F_{1}\) and \(F_{2}\) are non-expansive, then \(F\) is non-expansive; indeed for every \(\varphi_{1},\varphi_{2}\in\Phi\) we have that \[\|F(\varphi_{1})-F(\varphi_{2})\|_{\infty}=\|(F_{2}\circ F_{1})(\varphi_{1})-(F_{2}\circ F_{1})(\varphi_{2})\|_{\infty}=\|F_{2}(F_{1}(\varphi_{1}))-F_{2}(F_{1}(\varphi_{2}))\|_{\infty}\leq\|F_{1}(\varphi_{1})-F_{1}(\varphi_{2})\|_{\infty}\leq\|\varphi_{1}-\varphi_{2}\|_{\infty}.\] Analogously, \(F^{\prime}\) is non-expansive.
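Returning to Example 4.6, the construction can also be checked numerically. The following Python sketch is only an illustration under simplifying assumptions: the functions are represented as grey-scale images sampled on a finite integer grid, the translation vector has integer components, and the grid size, square sizes and helper names are arbitrary choices. It verifies the \(T\)-equivariance \(F^{\prime}(\varphi s_{a}^{-1})=F(\varphi)s_{a}^{-1}\) and the non-expansiveness of \(F\) on the sampled data.

```python
import numpy as np

L, eps = 8, 2                  # side of Q1 and margin of Q2 (illustrative values)
a = (5, 3)                     # integer translation vector (illustrative)
N = 20                         # the plane is sampled on an N x N integer grid

def indicator(lo_x, hi_x, lo_y, hi_y):
    """Indicator function of the rectangle [lo_x, hi_x] x [lo_y, hi_y] on the grid."""
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return ((x >= lo_x) & (x <= hi_x) & (y >= lo_y) & (y <= hi_y)).astype(float)

Q2       = indicator(eps, L - eps, eps, L - eps)                        # inner square
Q2_prime = indicator(a[0] + eps, a[0] + L - eps, a[1] + eps, a[1] + L - eps)

def translate(img, v):
    """Precomposition with s_v^{-1}: the image translated by the vector v (zero padded)."""
    out = np.zeros_like(img)
    out[v[0]:, v[1]:] = img[:N - v[0], :N - v[1]]
    return out

def F(img):        # keep the values on Q2 and set them to zero outside it
    return img * Q2

def F_prime(img):  # keep the values on Q2' and set them to zero outside it
    return img * Q2_prime

rng = np.random.default_rng(0)
phi = rng.random((N, N)) * indicator(0, L, 0, L)   # a function with support in Q1
psi = rng.random((N, N)) * indicator(0, L, 0, L)   # a second function in Phi

# T-equivariance of the triple (F, F', T) from Example 4.6
assert np.allclose(F_prime(translate(phi, a)), translate(F(phi), a))

# Non-expansiveness of F with respect to the sup norm
assert np.abs(F(phi) - F(psi)).max() <= np.abs(phi - psi).max() + 1e-12
```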
Given a finite number of P-GENEOs with respect to the same map \(T\), we illustrate a general method to construct a new operator as a combination of them. Given two sets \(X\) and \(Y\), consider a finite set \(\{H_{1},\ldots,H_{n}\}\) of functions from \(\Omega\subseteq\mathbb{R}_{b}^{X}\) to \(\mathbb{R}_{b}^{Y}\) and a map \(\mathcal{L}\colon\mathbb{R}^{n}\to\mathbb{R}\), where \(\mathbb{R}^{n}\) is endowed with the norm \(\|(x_{1},\ldots,x_{n})\|_{\infty}:=\max_{1\leq i\leq n}|x_{i}|\). We define \(\mathcal{L}^{*}(H_{1},\ldots,H_{n})\colon\Omega\to\mathbb{R}_{b}^{Y}\) as \[\mathcal{L}^{*}(H_{1},\ldots,H_{n})(\omega):=[\mathcal{L}(H_{1}(\omega),\ldots,H_{n}(\omega))],\] for any \(\omega\in\Omega\), where \([\mathcal{L}(H_{1}(\omega),\ldots,H_{n}(\omega))]\colon Y\to\mathbb{R}\) is defined by setting \[[\mathcal{L}(H_{1}(\omega),\ldots,H_{n}(\omega))](y):=\mathcal{L}(H_{1}( \omega)(y),\ldots,H_{n}(\omega)(y))\] for any \(y\in Y\). Now, consider two perception triples \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},Q)\) with domains \(X\) and \(Y\), respectively, and a finite set of P-GENEOs \((F_{1},F_{1}^{\prime}),\ldots(F_{n},F_{n}^{\prime})\) between them associated with the map \(T\colon S\to Q\). We can consider the functions \(\mathcal{L}^{*}(F_{1},\ldots,F_{n})\colon\Phi\to\mathbb{R}_{b}^{Y}\) and \(\mathcal{L}^{*}(F_{1}^{\prime},\ldots,F_{n}^{\prime})\colon\Phi^{\prime}\to \mathbb{R}_{b}^{Y}\), defined as before, and state the following result. **Proposition 4.8**.: _Assume that \(\mathcal{L}\colon\mathbb{R}^{n}\to\mathbb{R}\) is non-expansive. If \(\mathcal{L}^{*}(F_{1},\ldots,F_{n})(\Phi)\subseteq\Psi\) and \(\mathcal{L}^{*}(F_{1}^{\prime},\ldots,F_{n}^{\prime})(\Phi^{\prime})\subseteq\Psi ^{\prime}\), then \((\mathcal{L}^{*}(F_{1},\ldots,F_{n}),\mathcal{L}^{*}(F_{1}^{\prime},\ldots,F_{ n}^{\prime}))\) is a P-GENEO from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},Q)\) with respect to \(T\)._ Proof.: By hypothesis, \(\mathcal{L}^{*}(F_{1},\ldots,F_{n})(\Phi)\subseteq\Psi\) and \(\mathcal{L}^{*}(F_{1}^{\prime},\ldots,F_{n}^{\prime})(\Phi^{\prime})\subseteq\Psi ^{\prime}\), so we just need to verify the properties of equivariance and non-expansiveness. 1. Since \((F_{1},F^{\prime}_{1}),\ldots,(F_{n},F^{\prime}_{n})\) are \(T\)-equivariant, then for any \(\varphi\in\Phi\) and any \(s\in S\) we have that: \[\mathcal{L}^{*}(F^{\prime}_{1},\ldots,F^{\prime}_{n})(\varphi s) =[\mathcal{L}(F^{\prime}_{1}(\varphi s),\ldots,F^{\prime}_{n}( \varphi s))]\] \[=[\mathcal{L}(F_{1}(\varphi)T(s),\ldots,F_{n}(\varphi)T(s))]\] \[=[\mathcal{L}(F_{1}(\varphi),\ldots,F_{n}(\varphi))]T(s)\] \[=\mathcal{L}^{*}(F_{1},\ldots,F_{n})(\varphi)T(s).\] Therefore \((\mathcal{L}^{*}(F_{1},\ldots,F_{n}),\mathcal{L}^{*}(F^{\prime}_{1},\ldots,F^ {\prime}_{n}))\) is \(T\)-equivariant. 2. 
Since \(F_{1},\ldots,F_{n}\) and \(\mathcal{L}\) are non-expansive, then for any \(\varphi_{1},\varphi_{2}\in\Phi\) we have that: \[\|\mathcal{L}^{*}(F_{1},\ldots,F_{n})(\varphi_{1})-\mathcal{L}^{* }(F_{1},\ldots,F_{n})(\varphi_{2})\|_{\infty}\] \[=\max_{y\in Y}[|\mathcal{L}(F_{1}(\varphi_{1}),\ldots,F_{n}( \varphi_{1}))](y)-[\mathcal{L}(F_{1}(\varphi_{2}),\ldots,F_{n}(\varphi_{2}))]( y)|\] \[=\max_{y\in Y}|\mathcal{L}(F_{1}(\varphi_{1})(y),\ldots,F_{n}( \varphi_{1})(y))-\mathcal{L}(F_{1}(\varphi_{2})(y),\ldots,F_{n}(\varphi_{2})( y))|\] \[\leq\max_{y\in Y}\|(F_{1}(\varphi_{1})(y)-F_{1}(\varphi_{2})(y), \ldots,F_{n}(\varphi_{1})(y)-F_{n}(\varphi_{2})(y))\|_{\infty}\] \[=\max_{y\in Y}\max_{1\leq i\leq n}|F_{i}(\varphi_{1})(y)-F_{i}( \varphi_{2})(y)|\] \[=\max_{1\leq i\leq n}\|F_{i}(\varphi_{1})-F_{i}(\varphi_{2})\|_{\infty}\] \[\leq\|\varphi_{1}-\varphi_{2}\|_{\infty}.\] Hence, \(\mathcal{L}^{*}(F_{1},\ldots,F_{n})\) is non-expansive. Analogously, since \(F^{\prime}_{1},\ldots,F^{\prime}_{n}\) and \(\mathcal{L}\) are non-expansive, then \(\mathcal{L}^{*}(F^{\prime}_{1},\ldots,F^{\prime}_{n})\) is non-expansive. Therefore \((\mathcal{L}^{*}(F_{1},\ldots,F_{n}),\mathcal{L}^{*}(F^{\prime}_{1},\ldots,F^ {\prime}_{n}))\) is a P-GENEO from \((\Phi,\Phi^{\prime},S)\) to \((\Psi,\Psi^{\prime},Q)\) with respect to \(T\). _Remark 4.9_.: The above result describes a general method to build new P-GENEOs, starting from a finite number of known P-GENEOs via non-expansive maps. Some examples of such non-expansive maps are the maximum function, the power mean and the convex combination (for further details, see [12, 17, 18]). ### Compactness and convexity of the space of P-GENEOs Given two perception triples, under some assumptions on the data sets, it is possible to show two useful features in applications: compactness and convexity. These two properties guarantee, on the one hand, that the space of P-GENEOs can be approximated by a finite subset of them, and, on the other, that a convex combination of P-GENEOs is again a P-GENEO. First, we define a metric on the space of P-GENEOs. Let \(X,Y\) be sets and consider two sets \(\Omega\subseteq\mathbb{R}_{b}^{X},\Delta\subseteq\mathbb{R}_{b}^{Y}\), we can define the distance \[D_{\mathbf{NE}}^{\Omega}(F_{1},F_{2}):=\sup_{\omega\in\Omega}\|F_{1}(\omega)-F_{ 2}(\omega)\|_{\infty}\] for every \(F_{1},F_{2}\in\mathbf{NE}(\Omega,\Delta)\). The metric \(D_{\text{\rm P-GENEO}}\) on the space \(\mathcal{F}_{T}^{all}\) of all the P-GENEOs between the perception triples \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},Q)\) associated with the map \(T\) is defined as \[D_{\text{\rm P-GENEO}}((F_{1},F_{1}^{\prime}),(F_{2},F_{2}^{\prime})):=\max\{D _{\mathbf{NE}}^{\Phi}(F_{1},F_{2}),D_{\mathbf{NE}}^{\Phi^{\prime}}(F_{1}^{ \prime},F_{2}^{\prime})\}\] \[=\max\{\sup_{\varphi\in\Phi}\|F_{1}(\varphi)-F_{2}(\varphi)\|_{\infty},\sup_{ \varphi^{\prime}\in\Phi^{\prime}}\|F_{1}^{\prime}(\varphi^{\prime})-F_{2}^{ \prime}(\varphi^{\prime})\|_{\infty}\}\] for every \((F_{1},F_{1}^{\prime}),(F_{2},F_{2}^{\prime})\in\mathcal{F}_{T}^{all}\). #### 4.2.1. Compactness Before proceeding, we need to prove that the following result holds: **Lemma 4.10**.: _If \((P,d_{P}),(Q,d_{Q})\) are compact metric spaces, then \(\textbf{NE}(P,Q)\) is compact._ Proof.: Theorem 5 in [14] implies that \(\mathbf{NE}(P,Q)\) is relatively compact, since it is a equicontinuous space of maps. Hence, it will suffice to show that \(\mathbf{NE}(P,Q)\) is closed. 
Considering a sequence \((F_{i})_{i\in\mathbb{N}}\) in \(\mathbf{NE}(P,Q)\) such that \(\lim_{i\to\infty}F_{i}=F\), we have that \[d_{Q}(F(p_{1}),F(p_{2}))=\lim_{i\to\infty}d_{Q}(F_{i}(p_{1}),F_{i}(p_{2}))\leq d _{P}(p_{1},p_{2})\] for every \(p_{1},p_{2}\in P\). Therefore, \(F\in\mathbf{NE}(P,Q)\). It follows that \(\mathbf{NE}(P,Q)\) is closed. Consider two perception triples \((\Phi,\Phi^{\prime},S)\) and \((\Psi,\Psi^{\prime},Q)\), with domains \(X\) and \(Y\), respectively, and the space \(\mathcal{F}_{T}^{all}\) of P-GENEOs between them associated with the map \(T\colon S\to Q\). The following result holds: **Theorem 4.11**.: _If \(\Phi,\Phi^{\prime},\Psi\) and \(\Psi^{\prime}\) are compact, then \(\mathcal{F}_{T}^{all}\) is compact with respect to the metric \(D_{\mathrm{P-GENEO}}\)._ Proof.: By definition, \(\mathcal{F}_{T}^{all}\subseteq\mathbf{NE}(\Phi,\Psi)\times\mathbf{NE}(\Phi^{ \prime},\Psi^{\prime})\). Since \(\Phi,\Phi^{\prime},\Psi\) and \(\Psi^{\prime}\) are compact, for Lemma 4.10 the spaces \(\mathbf{NE}(\Phi,\Psi)\) and \(\mathbf{NE}(\Phi^{\prime},\Psi^{\prime})\) are also compact, and then, by Tychonoff's Theorem, the product \(\mathbf{NE}(\Phi,\Psi)\times\mathbf{NE}(\Phi^{\prime},\Psi^{\prime})\) is also compact, with respect to the product topology. Hence, to prove our statement it suffices to show that \(\mathcal{F}_{T}^{all}\) is closed. Let us consider a sequence \(((F_{i},F_{i}^{\prime}))_{i\in\mathbb{N}}\) of P-GENEOs, converging to a pair \((F,F^{\prime})\in\mathbf{NE}(\Phi,\Psi)\times\mathbf{NE}(\Phi^{\prime},\Psi^ {\prime})\). Since \((F_{i},F_{i}^{\prime})\) is \(T\)-equivariant for every \(i\in\mathbb{N}\) and the action of \(Q\) on \(\Psi\) is continuous (see Proposition 3.24), \((F,F^{\prime})\) belongs to \(\mathcal{F}_{T}^{all}\). Indeed, we have that \[F^{\prime}(\varphi s)=\lim_{i\to\infty}F_{i}^{\prime}(\varphi s)=\lim_{i\to \infty}F_{i}(\varphi)T(s)=F(\varphi)T(s)\] for every \(s\in S\) and every \(\varphi\in\Phi\). Hence, \(\mathcal{F}_{T}^{all}\) is a closed subset of a compact set and then it is also compact. #### 4.2.2. Convexity Assume that \(\Psi,\Psi^{\prime}\) are convex. Let \((F_{1},F_{1}^{\prime}),\ldots,(F_{n},F_{n}^{\prime})\in\mathcal{F}_{T}^{all}\) and consider an \(n\)-tuple \((a_{1},\ldots,a_{n})\in\mathbb{R}^{n}\) with \(a_{i}\geq 0\) for every \(i\in\{1,\ldots,n\}\) and \(\sum_{i=1}^{n}a_{i}=1\). We can define two operators \(F_{\Sigma}\colon\Phi\to\Psi\) and \(F_{\Sigma}^{\prime}\colon\Phi^{\prime}\to\Psi^{\prime}\) as \[F_{\Sigma}(\varphi):=\sum_{i=1}^{n}a_{i}F_{i}(\varphi),\text{ and }F_{\Sigma}^{ \prime}(\varphi^{\prime}):=\sum_{i=1}^{n}a_{i}F_{i}^{\prime}(\varphi^{\prime})\] for every \(\varphi\in\Phi,\varphi^{\prime}\in\Phi^{\prime}\). We notice that the convexity of \(\Psi\) and \(\Psi^{\prime}\) guarantees that \(F_{\Sigma}\) and \(F_{\Sigma}^{\prime}\) are well defined. **Proposition 4.12**.: \((F_{\Sigma},F_{\Sigma}^{\prime})\) _belongs to \(\mathcal{F}_{T}^{all}\)._ Proof.: By hypothesis, for every \(i\in\{1,\ldots,n\}\)\((F_{i},F_{i}^{\prime})\) is a perception map, and then: \[F_{\Sigma}^{\prime}(\varphi s)=\sum_{i=1}^{n}a_{i}F_{i}^{\prime} (\varphi s) =\sum_{i=1}^{n}a_{i}(F_{i}(\varphi)T(s))\] \[=\Bigl{(}\sum_{i=1}^{n}a_{i}F_{i}(\varphi)\Bigr{)}T(s)\] \[=F_{\Sigma}(\varphi)T(s)\] for every \(\varphi\in\Phi\) and every \(s\in S\). Furthermore, since for every \(i\in\{1,\ldots,n\}\)\(F_{i}(\Phi)\subseteq\Psi\) and \(\Psi\) is convex, also \(F_{\Sigma}(\Phi)\subseteq\Psi\). 
Analogously, the convexity of \(\Psi^{\prime}\) implies that \(F_{\Sigma}^{\prime}(\Phi^{\prime})\subseteq\Psi^{\prime}\). Therefore \((F_{\Sigma},F_{\Sigma}^{\prime})\) is a P-GEO. It remains to show the non-expansiveness of \(F_{\Sigma}\) and \(F_{\Sigma}^{\prime}\). Since \(F_{i}\) is non-expansive for any \(i\), then for every \(\varphi_{1},\varphi_{2}\in\Phi\) we have that \[\|F_{\Sigma}(\varphi_{1})-F_{\Sigma}(\varphi_{2})\|_{\infty} =\left\|\sum_{i=1}^{n}a_{i}F_{i}(\varphi_{1})-\sum_{i=1}^{n}a_{i} F_{i}(\varphi_{2})\right\|_{\infty}\] \[=\left\|\sum_{i=1}^{n}a_{i}(F_{i}(\varphi_{1})-F_{i}(\varphi_{2})) \right\|_{\infty}\] \[\leq\sum_{i=1}^{n}|a_{i}|\left\|F_{i}(\varphi_{1})-F_{i}(\varphi_ {2})\right\|_{\infty}\] \[\leq\sum_{i=1}^{n}|a_{i}|\|\varphi_{1}-\varphi_{2}\|_{\infty}=\| \varphi_{1}-\varphi_{2}\|_{\infty}.\] Analogously, since every \(F_{i}^{\prime}\) is non-expansive, for every \(\varphi_{1}^{\prime},\varphi_{2}^{\prime}\in\Phi^{\prime}\) we have that \[\|F_{\Sigma}^{\prime}(\varphi_{1}^{\prime})-F_{\Sigma}^{\prime}(\varphi_{2}^{ \prime})\|_{\infty} \leq\sum_{i=1}^{n}|a_{i}|\|\varphi_{1}^{\prime}-\varphi_{2}^{ \prime}\|_{\infty}=\|\varphi_{1}^{\prime}-\varphi_{2}^{\prime}\|_{\infty}.\] Therefore, we have proven that \((F_{\Sigma},F^{\prime}_{\Sigma})\) is a P-GEO with \(F_{\Sigma}\) and \(F^{\prime}_{\Sigma}\) non-expansive. Hence it is a P-GENEO. Then, the following result holds: **Corollary 4.13**.: _If \(\Psi,\Psi^{\prime}\) are convex, then the set \(\mathcal{F}^{\text{all}}_{T}\) is convex._ Proof.: It is sufficient to apply Proposition 4.12 for \(n=2\), by setting \(a_{1}=t\), \(a_{2}=1-t\) for \(0\leq t\leq 1\). ## 5. Conclusions In this article we proposed a generalization of some known results in the theory of GENEOs to a new mathematical framework, where the collection of all symmetries is represented by a subset of a group of transformations. We introduced P-GENEOs and showed that they are a generalisation of GENEOs. We defined pseudo-metrics on the space of measurements and on the space of P-GENEOs and studied their induced topological structures. Under the assumption that the function spaces are compact and convex, we showed compactness and convexity of the space of P-GENEOs. In particular, compactness guarantees that any operator can be approximated by a finite number of operators belonging to the same space, while convexity allows us to build new P-GENEOs by taking convex combinations of P-GENEOs. Compactness and convexity together ensure that every strictly convex loss function on the space of P-GENEOs admits a unique global minimum. Given a collection of P-GENEOs, we presented a general method to construct new P-GENEOs as combinations of the initial ones.
2310.09145
Lincoln AI Computing Survey (LAICS) Update
This paper is an update of the survey of AI accelerators and processors from the past four years, which is now called the Lincoln AI Computing Survey - LAICS (pronounced "lace"). As in past years, this paper collects and summarizes the current commercial accelerators that have been publicly announced with peak performance and peak power consumption numbers. The performance and power values are plotted on a scatter graph, and a number of dimensions and observations from the trends on this plot are again discussed and analyzed. Market segments are highlighted on the scatter plot, and zoomed plots of each segment are also included. Finally, a brief description of each of the new accelerators added to the survey this year is included.
Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi, Jeremy Kepner
2023-10-13T14:36:26Z
http://arxiv.org/abs/2310.09145v1
# Lincoln AI Computing Survey (LAICS) Update

###### Abstract

This paper is an update of the survey of AI accelerators and processors from the past four years, which is now called the Lincoln AI Computing Survey - LAICS (pronounced "lace"). As in past years, this paper collects and summarizes the current commercial accelerators that have been publicly announced with peak performance and peak power consumption numbers. The performance and power values are plotted on a scatter graph, and a number of dimensions and observations from the trends on this plot are again discussed and analyzed. Market segments are highlighted on the scatter plot, and zoomed plots of each segment are also included. Finally, a brief description of each of the new accelerators added to the survey this year is included.

Index Terms: Machine learning, GPU, TPU, tensor, dataflow, CGRA, accelerator, embedded inference, computational performance

## I Introduction

A number of announcements, releases, and deployments of artificial intelligence (AI) accelerators from startups and established technology companies have occurred in the past year. Perhaps most notable is the emergence of very large foundation models that are able to generate prose, poetry, images, etc. based on training using vast amounts of data usually collected via internet data crawls. Much technical press has been focused on how effective the resulting tools will be for various tasks, but there is also much discussion about the training of these models. From an accelerator perspective, however, it is the very same accelerators aimed at training more modestly sized models that are used to train these very large models. The very large models simply use many more accelerators simultaneously in a synchronous parallel manner, and they are interconnected with very high bandwidth networks. Beyond that news, not much has changed in the overall trends and landscape. Hence, this paper just updates what was discussed in last year's survey. For much of the background of this study, please refer to one of the previous IEEE-HPEC papers that our team has published [1, 2, 3, 4]. This background includes an explanation of the AI ecosystem architecture, the history of the emergence of AI accelerators and accelerators in general, a more detailed explanation of the survey scatter plots, and a discussion of broader observations and trends.

## II Survey of Processors

This paper is an update to IEEE-HPEC papers from the past four years [1, 2, 3, 4]. This survey continues to cast a wide net to include accelerators and processors for a variety of applications including defense and national security AI/ML edge applications. The survey collects information on all of the numerical precision types that an accelerator supports, but for most of them, their best inference performance is in int8 or fp16/bf16, so that is what is usually plotted. This survey gathers performance and power information from publicly available materials including research papers, technical trade press, company benchmarks, etc. The key metrics of this public data are plotted in Figure 1, which graphs recent processor capabilities (as of Summer 2023) mapping peak performance vs. power consumption, and Table I summarizes some of the important metadata of the accelerators, cards, and systems, including the labels used in Figure 1. The x-axis indicates peak power, and the y-axis indicates peak giga-operations per second (GOps/s), both on a logarithmic scale.
The computational precision of the processing capability is depicted by the geometric marker used. The form factor is depicted by color, which shows the package for which peak power is reported. Blue corresponds to a single chip; orange corresponds to a card; and green corresponds to entire systems (single node desktop and server systems). Finally, the hollow geometric objects are peak performance for inference-only accelerators, while the solid geometric figures are performance for accelerators that are designed to perform both training and inference. A reasonable categorization of accelerators follows their intended application, and the five categories are shown as ellipses on the graph, which roughly correspond to performance and power consumption: Very Low Power for wake word detection, speech processing, very small sensors, etc.; Embedded for cameras, small UAVs and robots, etc.; Autonomous for driver assist services, autonomous driving, and autonomous robots; Data Center Chips and Cards; and Data Center Systems. A zoomed-in scatter plot for each of these categories is shown in the subfigures of Figure 2. For most of the accelerators, their descriptions and commentaries have not changed since last year, so please refer to the papers of the last four years. Several new releases are included in this update.

* Based on technology similar to that of its Cloud AI 100 accelerator, Qualcomm has released two versions of its robotics AI system platform, the RB5 and RB6, in the past few years. Both are competing in the same low power system-on-a-chip market as the NVIDIA Jetson product line, and are aimed at integration in applications including robotics, driver assist, modest UAVs, etc. [99, 100].
* The Memryx MX3 AI accelerator chip comes from a startup that was spun out of the University of Michigan. It is designed to be deployed with a host CPU to greatly speed up AI inference, consuming about 1 W of power. It computes activations with bf16 numerical precision, and stores model parameter weights at 4-bit, 8-bit, and 16-bit integer precisions, which can be set on a layer-by-layer basis [68, 69].
* On the heels of its Hailo-8 AI accelerator, Hailo has released a lower power variant, the Hailo-15. The Hailo-15 targets the Internet Protocol (IP) camera market, and it is an SoC that includes a CPU, a digital signal processor (DSP) accelerator, and a neural accelerator, which all draw less than 2 W [52].
* Startup Esperanto Technologies has released their first processor accelerator called the ET-SoC-1. Each chip comprises 1,088 64-bit ET-Minion RISC-V cores, each of which has scalar, vector, and tensor units along with L1 cache/scratchpad memory. Their key application is training and inference for recommender systems, which have a balanced mix of scalar, vector, and tensor operations [30, 31].
* Baidu has started deploying its second-generation Kunlun accelerator, Kunlun II. Baidu touted that the Kunlun II is 2-3 times faster than the original Kunlun [20].
* The Chinese GPU startup Biren emerged from stealth mode to announce and release two high performance GPUs: the BR100 and BR104. The BR104 is a single die GPU, while the BR100 combines two dies/chiplets in the same package [21, 22].
* AMD has announced the follow-up to their Instinct MI250 GPU called the Instinct MI300A, which will be a multi-chiplet CPU-GPU Accelerated Processing Unit (APU) integrated package. The announcement showed package photos of two CPU dies integrated with six GPU dies [13, 14].
* While Intel announced their high-end AI GPU a few years ago, details continued to be scarce until this past year. Enough performance numbers were announced for the Intel Xe-HPC (codename Ponte Vecchio) to include it in this year's survey [58, 59, 60]. Along with the Xe-HPC, Intel also announced and started shipping two inference-oriented GPU cards, the Flex 140 and Flex 170 [61].
* After announcing and shipping their Hopper H100 GPUs in systems at the end of 2022, NVIDIA has started shipping DGX servers, which integrate eight H100 GPUs [81]. NVIDIA has also released a high-performance Ampere GPU, the A800, that is aimed at the Chinese market and reportedly performs at approximately 70% of the peak performance of the A100 [77]. Finally, NVIDIA has released a new Ada Lovelace GPU family which is aimed at the data center inference and graphics rendering (gaming) farm markets. The first specifications were released for the L40 GPU, which are included in this survey [88].

Fig. 1: Peak performance vs. power scatter plot of publicly announced AI accelerators and processors.

Fig. 2: Zoomed regions of peak performance vs. peak power scatter plot: **(a)** very low power, **(b)** embedded, **(c)** autonomous, **(d)** data center chips and cards, **(e)** data center systems.

## III Summary

This paper updates the Lincoln AI Computing Survey (LAICS) of deep neural network accelerators that span from extremely low power through embedded and autonomous applications to data center class accelerators for inference and training. We presented the new full scatter plot along with zoomed-in scatter plots for each of the major deployment/market segments, and we discussed some new additions for the year. The rate of announcements and releases has continued to be consistent as companies compete for various embedded, data center, cloud, and on-premises HPC deployments.

## IV Data Availability

The data spreadsheets and references that have been collected for this study and its papers will be posted at [https://github.com/areuther/ai-accelerators](https://github.com/areuther/ai-accelerators) after they have cleared the release review process.

## Acknowledgement

We express our gratitude to Masahiro Arakawa, Bill Arcand, Bill Bergeron, David Bestor, Bob Bond, Chansup Byun, Nathan Frey, Vitaliy Gleyzer, Jeff Gottschalk, Michael Houle, Matthew Hubbell, Hayden Jananthan, Anna Klein, David Martinez, Joseph McDonald, Lauren Milechin, Sanjeev Mohindra, Paul Monticciolo, Julie Mullen, Andrew Prout, Stephan Rejto, Antonio Rosa, Charles Yee, and Marc Zissman for their support of this work.
2307.12814
Stimulated Brillouin scattering in tellurite-covered silicon nitride waveguides
Stimulated Brillouin scattering (SBS), a coherent nonlinear effect coupling acoustics and optics, can be used in a wide range of applications such as Brillouin lasers and tunable narrowband RF filtering. Wide adoption of such technologies however, would need a balance of strong Brillouin interaction and low optical loss in a structure compatible with large scale fabrication. Achieving these characteristics in scalable platforms such as silicon and silicon nitride remains a challenge. Here, we investigate a scalable Brillouin platform combining low loss Si$_3$N$_4$ and tellurium oxide (TeO$_2$) exhibiting strong Brillouin response and enhanced acoustic confinement. In this platform we measure a Brillouin gain coefficient of 8.5~m$^{-1}$W$^{-1}$, exhibiting a twenty fold improvement over the largest previously reported Brillouin gain in a Si$_3$N$_4$ platform. Further, we demonstrate cladding engineering to control the strength of the Brillouin interaction. We utilized the Brillouin gain and loss resonances in this waveguide for an RF photonic filter with more than 15 dB rejection and 250 MHz linewidth. Finally, we present a pathway by geometric optimization and cladding engineering to a further enhancement of the gain coefficient to 155~m$^{-1}$W$^{-1}$, a potential 400 times increase in the Brillouin gain coefficient.
Roel A. Botter, Yvan Klaver, Randy te Morsche, Bruno L. Segat Frare, Batoul Hashemi, Kaixuan Ye, Akhileshwar Mishra, Redlef B. G. Braamhaar, Jonathan D. B. Bradley, David Marpaung
2023-07-24T14:10:03Z
http://arxiv.org/abs/2307.12814v1
# Stimulated Brillouin scattering in tellurite-covered silicon nitride waveguides ###### Abstract Stimulated Brillouin scattering (SBS), a coherent nonlinear effect coupling acoustics and optics, can be used in a wide range of applications such as Brillouin lasers and tunable narrowband RF filtering. Wide adoption of such technologies however, would need a balance of strong Brillouin interaction and low optical loss in a structure compatible with large scale fabrication. Achieving these characteristics in scalable platforms such as silicon and silicon nitride remains a challenge. Here, we investigate a scalable Brillouin platform combining low loss Si\({}_{3}\)N\({}_{4}\) and tellurium oxide (TeO\({}_{2}\)) exhibiting strong Brillouin response and enhanced acoustic confinement. In this platform we measure a Brillouin gain coefficient of 8.5 m\({}^{-1}\)W\({}^{-1}\), exhibiting a twenty fold improvement over the largest previously reported Brillouin gain in a Si\({}_{3}\)N\({}_{4}\) platform. Further, we demonstrate cladding engineering to control the strength of the Brillouin interaction. We utilized the Brillouin gain and loss resonances in this waveguide for an RF photonic filter with more than 15 dB rejection and 250 MHz linewidth. Finally, we present a pathway by geometric optimization and cladding engineering to a further enhancement of the gain coefficient to 155 m\({}^{-1}\)W\({}^{-1}\), a potential 400 times increase in the Brillouin gain coefficient. + Footnote †: These authors contributed equally to this work + Footnote †: These authors contributed equally to this work + Footnote †: These authors contributed equally to this work ## I Introduction Stimulated Brillouin scattering (SBS), a nonlinear optical effect mediated through acoustic waves is a burgeoning field, which can be used in a wide range of fields, from telecommunications to sensing [1; 2; 3]. SBS results in a narrowband (tens of MHz) gain resonance, shifted from the pump frequency by about 10 GHz. This makes it a unique filter and amplifier with applications in next-generation optical and radio communications [4; 5], low-threshold narrow-linewidth lasers [6; 7], nonreciprocal light propagation [8; 9] and high-precision sensors [10; 11]. To achieve wide adoption of SBS in applications, the Brillouin nonlinearity needs to be integrated in scalable photonic platforms. Traditionally, high on-chip Brillouin gain in standard photonic platforms has been achieved by enhancing the effective length through low-loss waveguides [7], or by increasing the Brillouin gain coefficient. The latter can be achieved by unlocking acoustic guiding via the waveguide geometry [12; 13] or by adding waveguides made of high-gain materials through hybrid [14; 15; 16] or heterogeneous integration [17]. However, these platforms come with several challenges, such as low integration density, less robust free-floating structures, multi-photon absorption, or a lack of tuneable components. To circumvent these challenges, currently there are efforts to achieve improved acoustic waveguiding in nitride based platforms such as silicon nitride [18] and silicon oxyni-ride [19; 20]. However, the acoustic guidance is fundamentally limited by the lack of confinement of the acoustic wave within the nitride itself due to the hardness of the core. Recently, a hybrid integration platform that combines standard wafer-scale thin silicon nitride and tellurium oxide (TeO\({}_{2}\), tellurite) has been investigated, see Fig. 1 (a) and (d). 
These tellurite-covered silicon nitride waveguides create a promising platform for low loss circuits, with losses down to 0.25 dB/cm [21; 22], by adding a single CMOS compatible back-end step [23]. Tellurite itself is known to be a good platform for acousto-optic devices[24], where SBS has been previously demonstrated in tellurite fibers [25] and observed in integrated waveguides [26]. The tellurite-covered silicon nitride waveguides platform has shown a variety of applications in on-chip amplifiers [27], four-wave mixing [28], octave-spanning supercontinuum and third-harmonic generation [29; 23], as depicted in Fig. 1 (e) through (g). Although highly promising, the Brillouin response of these waveguides, illustrated in Fig. 1 (b) and (c), had not yet been investigated. In this work, we present a comprehensive study of SBS in tellurite-covered silicon nitride waveguides. We experimentally demonstrate, for the first time, SBS in such hybrid waveguides. We study the impact of cladding engineering to tailor the strength of Brillouin interactions in these waveguides. We further demonstrate RF photonic filtering through harnessing the Brillouin gain and loss resonances in our sample. Finally, we perform a geometric optimization via simulations, showing the possibility to improve the Brillouin gain factor by two orders of magnitude. These results represent the highest gain coefficient achieved in a silicon nitride-based circuit and open the possibility towards integrating a Brillouin engine in a tunable complex circuit for applications such as narrow linewidth lasers, frequency combs, and RF photonic signal processors. ## Results ### SBS in tellurite-covered waveguides Our samples consist of single-stripe optical waveguides in LPCVD silicon nitride with a height of 100 nm, and width of 1600 nm. The entire chip is then covered with a layer of tellurite with a height of 354 nm (see Methods for details of fabrication). Fig. 2 (a) shows the geometry of the waveguide. The waveguides are cladded with a layer of CYTOP, a fluoropolymer, to protect the waveguiding layer. We investigate the strength of Brillouin scattering in these waveguides through simulations of the optical and acoustic modes, implemented in COMSOL Multiphysics. We start with an optical simulation as depicted in Fig. 2 (b). This optical simulation can be used to calculate the optical forces, which we use as a basis for the calculation of the acoustic response. The acoustic field at the highest Brillouin peak is shown in Fig. 2 (c). The overlap between the optical and acoustic fields is used to calculate the Brillouin interaction strength, and create a response spectrum, which is depicted as the solid line in Fig. 2 (j). The details of the simulation methods can be found in the Supplementary Materials. The simulations revealed the signature of SBS with a gain coefficient of 6.5 m\({}^{-1}\)W\({}^{-1}\) at a frequency shift of 7.47 GHz. We proceeded with the measurements of the Brillouin gain response of a 1 cm long waveguide. We measured a propagation loss of 1 dB/cm in the sample. We then implemented a double intensity modulation pump-probe scheme [18] for enhanced-sensitivity SBS characterisations. The details of the measurement apparatus are described in the Methods section and Supplementary materials. The measured Brillouin response can be seen in Fig. 2 (k) with the highest Brillouin peak gain of 4.5 m\({}^{-1}\)W\({}^{-1}\) appearing at 8.2 GHz Stokes-shifted frequency from the pump. 
This gain coefficient is an order of magnitude higher than that of silicon nitride waveguides reported recently [7, 18, 30]. The measurement and simulation results show good agreement in both the magnitude and frequency shift of the SBS gain peak.

Figure 1: **Combining nonlinear tellurite covered waveguides with a scalable silicon nitride photonic integrated circuit.** (a) A false colour SEM image of an (uncladded) tellurite covered silicon nitride waveguide. (b) Stimulated Brillouin scattering in tellurite covered waveguides, as described in this work. (d) Our vision of a device combining standard silicon nitride with tellurite covered silicon nitride. The tellurite covered waveguides also support other nonlinear processes, including (e) supercontinuum generation, (f) on-chip amplification through erbium doping and (g) four wave mixing.

### SBS enhancement through cladding engineering

To further improve the Brillouin gain of these waveguides we analyze their acoustic behaviour. As seen in Fig. 2 (c), significant acoustic wave leakage occurs from the tellurite core to the CYTOP polymer upper cladding due to the lack of contrast in acoustic impedance. This reduces the strength of the acoustic waves in the waveguide, thereby reducing the Brillouin gain. Furthermore, the polymer is acoustically lossy, attenuating the acoustic fields and thus further decreasing the Brillouin gain. Here we circumvent these adverse effects through cladding engineering that provides improved acoustic waveguiding without degrading the optical propagation losses. A good candidate cladding material to prevent acoustic leakage is silicon oxide. Ideally, a thick (over a micron) cladding will allow for optimized acoustic waveguiding (see Fig. 2 (i), for example). However, current limitations in our fabrication process only allow low-temperature deposition (required to avoid increased losses in the tellurite layer [31]) of low-loss silica of the order of a few tens of nanometers. We therefore investigated the effectiveness of using a thin layer of silicon oxide to reduce acoustic leakage into the polymer and to increase the Brillouin gain. The simulated SBS response of a thin (46 nm) silica cladded waveguide is summarized in Fig. 2 (d)-(f) and (j). Appreciable reduction of the acoustic wave amplitude leaking into the upper polymer cladding was observed. The calculated Brillouin gain coefficient of this waveguide is 16.4 m\({}^{-1}\)W\({}^{-1}\), which is more than double that of the gain without the thin silica cladding. This result shows the effective impact of cladding engineering even using a very thin cladding layer. We then fabricated the cladding-engineered waveguide, and measured the propagation loss and the SBS gain coefficient.

Figure 2: **Cladding engineering for enhanced Brillouin scattering.** We investigated three different waveguide geometries: with a CYTOP cladding, with a 46 nm silicon oxide barrier added, and with a full silicon oxide cladding. (a, d, g) The geometry of the waveguides, with and without silicon oxide barrier. (b, e, h) The simulated electric field of the optical modes, and (c, f, i) the simulated displacement fields of the acoustic response at the acoustic frequency with the highest Brillouin gain. For the CYTOP covered waveguides with and without silicon oxide barrier, shown are (j) the simulated Brillouin responses compared to (k) the measured Brillouin responses. (l) Simulated Brillouin response of the full silicon oxide cladded waveguide.
The waveguide consists of a 200 nm layer of silicon nitride, covered with a 299 nm layer of tellurite. The cladding consists of a 46 nm layer of silicon oxide, and again a layer of polymer as protection. We performed SBS characterisations in a 10 cm long waveguide with a loss of 0.59 dB/cm. The measured gain response is shown in Fig. 2 (k) and shows two peaks of equal magnitude, each with a Brillouin gain coefficient of 8.5 m\({}^{-1}\)W\({}^{-1}\). The gain coefficient is double that of the waveguides with only polymer cladding, demonstrating the effectiveness of our cladding engineering strategy. As mentioned earlier, optimum performance can be unlocked using a thick silica cladding. We simulated a waveguide geometry with a similar cross-section to the polymer-cladded waveguide considered earlier, only that the cladding is replaced by microns-thick silicon oxide to provide full acoustic waveguiding. The detailed waveguide geometry and acousto-optic characteristics are shown in Fig. 2 (g)-(i). As a result of improved acousto-optic overlap, enhancement of the Brillouin gain coefficient beyond 40 m\({}^{-1}\)W\({}^{-1}\) can be achieved, as shown in Fig. 2 (l). The simulated and measured SBS gain of the cladding engineered waveguides are summarized in Table 1.

### RF photonic notch filter

We demonstrate the use of the on-chip Brillouin gain in our cladding-engineered tellurite-covered waveguide in a cancellation-based RF photonic notch filter experiment [32]. In such a filter, relatively low SBS gain or loss can be transformed into a high-extinction notch filter through RF destructive interference. In this case, a modulator is used to prepare an RF modulated spectrum consisting of an optical carrier and two RF sidebands with opposite phase and unbalanced amplitude, before one of these sidebands is processed by the SBS gain or loss resonances [32]. We devise a measurement setup based on a vector network analyzer (VNA) to accurately measure the on-off SBS gain and loss from the sample, as illustrated in Fig. 3 (a). The synthesized single-sideband-with-carrier optical signal from the IQ modulator was injected into a 15 cm long spiral waveguide that is pumped with 26.6 dBm of on-chip power at 1561 nm. We then detect the amplification and loss of the sideband at the SBS shift frequencies using a photodetector. The signal flow of such a characterization step is illustrated in Fig. 3 (b). In Fig. 3 (d) we show the resulting \(S_{21}\) of the VNA-based measurement relative to a pump-off measurement, which indicates a total amplification for the Stokes signal of 0.2 dB for both peaks, with a full width at half maximum (FWHM) of 500 MHz and 200 MHz at 8.6 GHz and 6.8 GHz respectively; an identical absorption is observed at the anti-Stokes frequencies. To achieve an RF photonic notch filter from the gain resonance, the IQ modulator is set to generate a dual sideband (DSB) modulated signal with opposite phases, so there is a \(\pi\) phase difference between the lower sideband (LSB) and upper sideband (USB). The power imbalance between the LSB and USB is set equal to the Brillouin gain by tuning the DPMZM bias voltages. By now adding a pump laser, appropriately detuned from the signal carrier (probe) laser, signal cancellation is achieved within the Brillouin gain window. This is shown schematically in Fig. 3 (c). The normalized response of the measured RF photonic notch filter is shown in Fig. 3 (e).
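The cancellation principle behind this filter can be illustrated with a simple two-sideband model. The short Python sketch below is only a qualitative illustration, not the processing applied to the measured data: it assumes a single Lorentzian SBS gain resonance, ideal frequency-flat modulation and detection, and a small residual phase imbalance whose value is chosen merely so that the modelled rejection is of the same order as the measured one.

```python
import numpy as np

# Simplified two-sideband model of the cancellation-based SBS notch filter.
# All parameter values below are illustrative assumptions.
f       = np.linspace(7.5e9, 9.5e9, 4001)  # RF frequency axis (Hz)
f_B     = 8.6e9                            # Brillouin shift of one gain peak (Hz)
gamma   = 250e6                            # full width of the gain resonance (Hz)
G0_dB   = 0.2                              # on-off SBS power gain at resonance (dB)
d_phase = 4e-3                             # residual LSB phase error (rad, assumed)

lorentz = (gamma / 2) ** 2 / ((f - f_B) ** 2 + (gamma / 2) ** 2)
t_usb   = 10 ** (G0_dB * lorentz / 20)     # field transfer of the SBS-processed sideband

# Anti-phase sidebands: the LSB amplitude is trimmed to equal the amplified USB at
# resonance, so the two beat notes cancel inside the gain window; the residual phase
# error limits how deep the notch becomes.
a_lsb = t_usb.max() * np.exp(1j * d_phase)
h_rf  = np.abs(t_usb - a_lsb)              # detected RF response (arbitrary units)

response_dB = 20 * np.log10(h_rf)
rejection   = response_dB[0] - response_dB.min()
print(f"~{rejection:.1f} dB rejection from only {G0_dB} dB of on-off SBS gain")
```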
Due to the limited (\(\sim\) 0.2 dB) gain in the waveguide, only a limited rejection of about 15 dB could be realised. Further improvements in the technology, including SBS gain enhancement through cladding engineering, interfacing with an on-chip pre-processing tunable circuit like ring resonators [18], and the possibility to achieve on-chip amplification through erbium doping [22], promise an encouraging route to the realization of a high performance SBS RF photonic filter in this platform.

| Geometry | Simulated max \(g_{\mathrm{B}}\) (m\({}^{-1}\)W\({}^{-1}\)) | Simulated SBS shift (GHz) | Measured max \(g_{\mathrm{B}}\) (m\({}^{-1}\)W\({}^{-1}\)) | Measured SBS shift (GHz) |
| --- | --- | --- | --- | --- |
| Polymer cladding | 6.5 | 7.47 | 4.5 | 8.2 |
| Thin silica cladding | 16.4 | 8.58 | 8.5 | 8.9 |
| Full silica cladding | 42.5 | 7.96 | - | - |
| Optimized full silica cladding | 154.8 | 6.335 | - | - |

Table 1: **SBS characteristics of cladding-engineered waveguides.**

### Enhancing the Brillouin response by geometric optimization

Apart from the cladding material, the Brillouin response is also sensitive to the geometry of the waveguide cross-section. In order to obtain the maximal improvement, we performed a genetic optimization on the cross-sectional dimensions of the structure with full silicon oxide cladding. The geometry with the highest gain is shown in Fig. 4 (a), with a silicon nitride width and thickness of 4020 nm and 110 nm, respectively, and a tellurite thickness of 270 nm. The cladding thickness was found to have little influence beyond 800 nm, thus it is given a set value of 1 \(\mu\)m. The optical mode and acoustic response are shown in Fig. 4 (b) and (c), respectively. The acoustic response shows an acoustic mode with a single vertical lobe which corresponds to the lowest frequency peak in the gain spectrum. Compared to the acoustic responses of the waveguides in Fig. 2, the optimized waveguide achieves significant gain enhancement through improved vertical acoustic confinement and acousto-optic overlap. The peak Brillouin gain of the optimized waveguide is estimated to be 155 m\({}^{-1}\)W\({}^{-1}\) with a linewidth of \(\Gamma=5.71\) MHz, as shown in Fig. 4 (d). The details of the optimization procedure and more extensive results can be found in Supplementary Note E. With the optimized structures the gain is significantly improved; to illustrate the impact, we show the calculated gain for different waveguide lengths in Fig. 4 (e). Pumped with a power of 27 dBm, these show significant gain of 10s of dB even for realistic losses close to those observed. Such numbers come close to what has been achieved in chalcogenide based waveguides [2].
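The length dependence discussed above can be reproduced with the standard small-signal, undepleted-pump expression \(G=\exp(g_{\mathrm{B}}P_{0}L_{\mathrm{eff}})\), where \(L_{\mathrm{eff}}=(1-e^{-\alpha L})/\alpha\) is the effective interaction length. The Python sketch below applies this expression to the optimized geometry with 27 dBm (about 500 mW) of on-chip pump power; the loss values are examples, the probe's own propagation loss is not subtracted, and this is not the model used to generate Fig. 4 (e).

```python
import numpy as np

# Small-signal, undepleted-pump estimate of the on-off SBS gain for the optimized
# waveguide; parameter choices are illustrative and pump depletion is neglected.
g_B = 155.0                                # Brillouin gain coefficient (1/(W*m))
P0  = 10 ** (27 / 10) * 1e-3               # 27 dBm on-chip pump power ~ 0.5 W
lengths = np.linspace(0.01, 0.5, 200)      # waveguide length (m)

for loss_dB_per_cm in (0.25, 0.5, 1.0):
    alpha   = loss_dB_per_cm * 100 * np.log(10) / 10   # dB/cm -> 1/m
    L_eff   = (1 - np.exp(-alpha * lengths)) / alpha   # effective length (m)
    gain_dB = 10 * np.log10(np.exp(g_B * P0 * L_eff))  # on-off gain (dB)
    print(f"{loss_dB_per_cm} dB/cm: up to ~{gain_dB.max():.0f} dB on-off gain "
          f"(L_eff saturates near {100 / alpha:.0f} cm)")
```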
4 (f), even at an expected loss of 0.5 dB/cm the large gain still means a threshold of a few tens of mW. This would lead to practical pump power levels even for these large-diameter ring resonators. Fig. 3: **RF photonic filter in tellurite-covered silicon nitride waveguide.** (a) Schematic of the measurement setup of the Vector Network Analyzer (VNA) based measurements. IQ mod: In-phase and Quadrature modulator, PD: photodiode, PM: Power meter. (b) RF and optical spectra at different stages of the SBS gain measurement. (c) RF and optical spectra at different stages of the RF filter. (d) \(S_{21}\) gain and phase of VNA based SBS gain measurement results. (e) The notch filter realised using the on-chip SBS gain, showing enhanced 15 dB rejection from only 0.2 dB of SBS gain. ## Discussion We have shown, for the first time, the Brillouin response of tellurite-covered silicon nitride waveguides. We enhanced the Brillouin gain by adding a thin layer of silicon oxide to reduce the acoustic losses. This resulted in a Brillouin gain coefficient of 8.5 m\({}^{-1}\)W\({}^{-1}\), a record for silicon nitride based waveguides. By genetic optimization, we determined an optimal geometry for a tellurite-covered silicon nitride waveguide with a full silicon oxide cladding. The improved gain in this waveguide with optimization of the cross-sectional geometry is 155 m\({}^{-1}\)W\({}^{-1}\). This represents a 400-fold increase in the Brillouin gain coefficient compared to previous results in silicon nitride waveguides. The gain is significantly higher, such that an ultra-narrowband amplifier with tens of dB of gain or a simple-design Brillouin laser with a threshold of a few tens of mW should be achievable. Various improvements in the fabrication and design can lead to realizing the vision of such high-Brillouin-gain waveguides in silicon nitride PICs, including adding a full hard cladding and reducing optical propagation losses. We also aim toward developing tapers from waveguides without tellurite to tellurite-covered waveguides. This will open up the possibility of combining standard and versatile silicon nitride waveguide circuits with high Brillouin gain sections enabled by the tellurium oxide layer, as well as low-loss edge-couplers for efficient fiber-chip pumping and signal coupling. This will represent a novel concept of spatial selection of Brillouin enhancement in a large-scale circuit. The high Brillouin gain of these waveguides, combined with the other non-linearities, opens an avenue towards the creation of Brillouin-Kerr soliton frequency combs [34]. These high non-linearities can create such solitons without requiring extremely low-loss waveguides. ## Methods ### Waveguide fabrication The silicon nitride strips were fabricated on a 100 mm diameter silicon wafer using a standard foundry process [35]. This involves growing a silicon oxide layer through wet thermal oxidation of the silicon substrate at 1000 °C. The silicon nitride layer was then grown using low-pressure chemical vapor deposition (LPCVD). The waveguides were patterned into the silicon nitride layer by stepper lithography and reactive ion etching. The wafer was then diced into individual chips. Next, the tellurite layer, and where applicable the silicon oxide layer, was grown using radio frequency (RF) reactive sputtering [22]. Finally, the polymer layer was applied via spincoating.
### Brillouin gain measurement The measurements were performed using an updated version of the double intensity modulation setup previously used in our work on multilayer silicon nitride waveguides [18], which was based on earlier work in Brillouin spectroscopy [36] and gain measurements in thick silicon nitride waveguides [30]. In this setup, schematically illustrated in Fig. 5, both the pump and probe laser are modulated at slightly different frequencies. This way, any interaction between them when traveling through the sample (i.e. SBS) will result in a signal at their difference frequency, such that the interaction strength can be recorded using a lock-in amplifier. A detailed description can be found in Supplementary Note A. Figure 4: **Gain optimization of tellurite-covered silicon nitride waveguide.** (a) Optimized geometry of the waveguides with 1 \(\mu\)m silicon oxide cladding. (b) Simulated electric field of the optical modes, and (c) the simulated displacement fields of the acoustic response at the acoustic frequency with the highest Brillouin gain. (d) Simulated Brillouin gain spectra of the full silicon oxide cladded waveguide exhibiting a peak gain coefficient of 155 m\({}^{-1}\)W\({}^{-1}\). (e) SBS gain as functions of waveguide length and propagation loss with 500 mW of on-chip optical power. (f) Stimulated Brillouin lasing threshold in a single ring resonator, with FSR matched to the Brillouin shift. ## IV Author Contributions D.M. and R.A.B. developed the concept and proposed the physical system. R.A.B., R.M. and K.Y. developed and performed numerical simulations. R.M. performed the numerical optimization with input from K.Y. and Y.K.. R.A.B. performed the gain characterization experiments, with input from B.L.S.F., B.H., K.Y. and Y.K.. Y.K. and R.M. performed the VNA based experiments with input from D.M., A.M. and R.B.G.B.. B.L.S.F., B.H. and J.D.B. developed and fabricated the samples. D.M., R.A.B., Y.K. and R.M. wrote the manuscript. D.M. supervised the project. ## V Funding Information The authors acknowledge funding from the European Research Council Consolidator Grant (101043229 TRIFFIC), Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) Vidi (15702) and Start Up (740.018.021), the Natural Sciences and Engineering Research Council of Canada (NSERC) (I2IPJ 555793-20 and RGPIN-2017-06423) and the Canadian Foundation for Innovation (CFI) (35548).
2303.01421
Semiparametric Language Models Are Scalable Continual Learners
Semiparametric language models (LMs) have shown promise in continuously learning from new text data by combining a parameterized neural LM with a growable non-parametric memory for memorizing new content. However, conventional semiparametric LMs will finally become prohibitive for computing and storing if they are applied to continual learning over streaming data, because the non-parametric memory grows linearly with the amount of data they learn from over time. To address the issue of scalability, we present a simple and intuitive approach called Selective Memorization (SeMem), which only memorizes difficult samples that the model is likely to struggle with. We demonstrate that SeMem improves the scalability of semiparametric LMs for continual learning over streaming data in two ways: (1) data-wise scalability: as the model becomes stronger through continual learning, it will encounter fewer difficult cases that need to be memorized, causing the growth of the non-parametric memory to slow down over time rather than growing at a linear rate with the size of training data; (2) model-wise scalability: SeMem allows a larger model to memorize fewer samples than its smaller counterpart because it is rarer for a larger model to encounter incomprehensible cases, resulting in a non-parametric memory that does not scale linearly with model size. We conduct extensive experiments in language modeling and downstream tasks to test SeMem's results, showing SeMem enables a semiparametric LM to be a scalable continual learner with little forgetting.
Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, Houfeng Wang
2023-03-02T17:15:02Z
http://arxiv.org/abs/2303.01421v1
# Semiparametric Language Models Are Scalable Continual Learners ###### Abstract Semiparametric language models (LMs) have shown promise in continuously learning from new text data by combining a parameterized neural LM with a growable non-parametric memory for memorizing new content. However, conventional semiparametric LMs will finally become prohibitive for computing and storing if they are applied to continual learning over streaming data, because the non-parametric memory grows linearly with the amount of data they learn from over time. To address the issue of scalability, we present a simple and intuitive approach called Selective Memorization (SeMem), which only memorizes difficult samples that the model is likely to struggle with. We demonstrate that SeMem improves the scalability of semiparametric LMs for continual learning over streaming data in two ways: **(1) data-wise scalability**: as the model becomes stronger through continual learning, it will encounter fewer difficult cases that need to be memorized, causing the growth of the non-parametric memory to slow down over time rather than growing at a linear rate with the size of training data; **(2) model-wise scalability**: SeMem allows a larger model to memorize fewer samples than its smaller counterpart because it is rarer for a larger model to encounter incomprehensible cases, resulting in a non-parametric memory that does not scale linearly with model size. We conduct extensive experiments in language modeling and downstream tasks to test SeMem's results, showing SeMem enables a semiparametric LM to be a scalable continual learner with little forgetting. ## 1 Introduction Semiparametric language models (LMs) have been drawing attention in recent years (Sukhbaatar et al., 2019; Guu et al., 2020; Yogatama et al., 2021; Alon et al., 2022; Wu et al., 2022) for their photographic memorization capabilities, which combine a parameterized neural LM with an extensible non-parametric memory allowing them to incrementally learn from new data. However, a conventional semiparametric LM has an obvious scalability issue for continual learning (CL) over massive streaming data, as shown in Figure 1: **(1)** with the increasing amount of streaming data it learns over time, its non-parametric memory will proportionally grow and end up prohibitive for computing and storing; **(2)** the size of memory also grows proportionally to the parameterized LM's size (more accurately, model dimension), which is unfriendly and uneconomical for large LMs. To address this scalability issue for CL over streaming data, we present a simple and intuitive approach called Selective Memorization (SeMem). Unlike conventional semiparametric LMs that fully memorize training data into non-parametric memory, SeMem selectively memorizes only difficult samples that the model is likely to struggle with, which is similar to the way humans learn through memorization.
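Before turning to how SeMem addresses this, it is worth making the storage pressure of full memorization concrete. The back-of-the-envelope sketch below assumes fp16 keys, one memory entry per training token, and illustrative token counts; these are assumptions for illustration, not numbers reported in this paper.

```python
# Rough estimate of a full-memorization kNN-LM datastore: one (key, value)
# pair per training token, with a key_dim-dimensional key vector per entry.
def datastore_gb(num_tokens: int, key_dim: int,
                 bytes_per_float: int = 2,   # fp16 keys (assumption)
                 bytes_per_value: int = 4) -> float:
    total_bytes = num_tokens * (key_dim * bytes_per_float + bytes_per_value)
    return total_bytes / 1e9

# Example: a GPT-2-small-sized model (768-dim hidden states) memorizing every
# token of a news stream; storage grows linearly with both data and dimension.
for tokens in (1e8, 4.4e8, 1e9):
    print(f"{tokens:.0e} tokens -> {datastore_gb(int(tokens), 768):.0f} GB")
```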
SeMem can improve the scalability of semiparametric LMs for CL over streaming data in two ways, as Figure 1 shows: **(a) data-wise scalability**: as the model becomes more knowledgeable through continual learning with more data, it will encounter fewer difficult cases that need to be memorized; as a result, the growth of the non-parametric memory will slow over time rather than growing at a linear rate with the size of training data; **(b) model-wise scalability**: SeMem allows a larger model to memorize fewer samples than a smaller model because it is rarer for a larger model to encounter difficult cases than the smaller one, resulting in a non-parametric memory that does not scale linearly with model size. We conduct extensive experiments in language modeling and downstream tasks, showing that SeMem allows a semiparametric LM to be a scalable continual learner with little forgetting. Our contributions can be summarized as follows: * We propose Selective Memorization (SeMem) that allows the model to skip memorizing easy cases and only focus on difficult cases. SeMem can largely increase the non-parametric memory's efficiency, allowing the model to substantially reduce its external memory size without hurting performance. * Our extensive experiments in language modeling and downstream tasks show that SeMem improves both data- and model-wise scalability of a semiparametric LM for continual learning, which is one of the earliest explorations, to the best of our knowledge, in scalable continual learning over streaming text data with semiparametric LMs. ## 2 Preliminaries: Semiparametric LMs A semiparametric LM, as its name implies, is a language model that consists of a non-parametric memory besides a parametric LM. Formally, we use \(\Theta\) to denote a semiparametric LM: \[\Theta=(\theta,\mathcal{M}) \tag{1}\] where \(\theta\) denotes the parameterized LM and \(\mathcal{M}\) denotes the non-parametric memory. In this paper, we use the \(k\)NN-LM (Khandelwal et al., 2020) as a representative semiparametric LM for a case study. The \(k\)NN-LM enhances its parametric LM's prediction by leveraging information of the \(k\)-nearest neighbors in its non-parametric memory during inference. Given a leftward context Figure 1: **(a)** The growth trend of memorized tokens of the conventional full memorization (FullMem) and our proposed Selective Memorization (SeMem) policy. Compared with FullMem, not only does SeMem have a significantly reduced memory but also its memory growth will become slower as it continuously learns with more training tokens than its initial growth (i.e., the green dashed line); **(b)** The growth trend of memorized size (GB) with the increase of the model size. The numbers in parentheses indicate the memory reduced by SeMem compared to the FullMem, showing that SeMem’s effect is more significant in larger LMs. \(\mathbf{x}=(x_{1},\ldots,x_{n})\), its next word \(y\)'s probability is computed as follows: \[P(y|\mathbf{x};\Theta)=\underbrace{(1-\lambda)P(y|\mathbf{x};\theta)}_{\text{ Parametric LM}}+\underbrace{\lambda P(y|\widetilde{\mathbf{x}};\mathcal{M})}_{\text{Non-parametric memory}} \tag{2}\] where \(\lambda\) is a hyperparameter to weigh the parametric LM and non-parametric memory's contribution to the final result. 
\(\widetilde{\mathbf{x}}\) denotes the contextualized representation2 of \(\mathbf{x}\) and \(\mathcal{M}\) is the non-parametric memory for \(k\)NN retrieval, which is a set of key-value pairs for looking up a target word (value) of a leftward context (key): Footnote 2: As Khandelwal et al. (2020) suggests, we use the parametric LM’s last-layer hidden representation before the feedforward network as the contextualized representation. \[\mathcal{M}=\{(\widetilde{\mathbf{x}_{<\mathbf{t}}}\to x_{t})|x_{t}\in \mathcal{D}\} \tag{3}\] where \(\mathcal{D}\) denotes a training set. As Eq 3 shows, conventional \(k\)NN-LMs construct \(\mathcal{M}\) with an entry for each training set token \(x_{t}\) (as value), and the contextualized representation of its leftward context \(\widetilde{\mathbf{x}_{<\mathbf{t}}}\) (as key). We refer to this method as **full memorization** (FullMem) since it memorizes all tokens in a training set. During inference, we first use \(\widetilde{\mathbf{x}}\) as a query to retrieve \(k\) nearest neighbors from the memory \(\mathcal{M}\): \[\mathcal{N}(\tilde{\mathbf{x}})=\{(\tilde{\mathbf{x}}^{(i)}\to y^{(i)})|i=1,2, \ldots,k\}\subseteq\mathcal{M}\] Then, we obtain the non-parametric memory's prediction by computing the weighted sum of retrieved targets: \[P(y|\widetilde{\mathbf{x}};\mathcal{M})\propto\sum_{(\tilde{\mathbf{x}}^{(i)}\to y^{( i)})\in\mathcal{N}(\tilde{\mathbf{x}})}\mathbb{1}_{y^{(i)}=y}\exp(-d(\tilde{\mathbf{x}}^{(i )},\tilde{\mathbf{x}}))\] Where \(\mathbb{1}\) is the indicator function, and \(d(.,.)\) denotes the semantic similarity3 between two vectors. Figure 2: Intuition of SeMem where we consider cases with \(\log P<-1.5\) as hard: **(a) Small-1 and Small-2** illustrate what training cases need to be memorized/skipped; **(b) Small-2 and Small-3** show that some hard cases will become easy and thus be skipped if their similar cases are memorized before; **(c) Small-2 and Large-1** demonstrate that a hard case for a small model is considered easy and thus skipped by a large model. ## 3 Selective Memorization ### Memorization Policy In contrast to the conventional full memorization policy that memorizes every training case in semiparametric LMs, our proposed Selective Memorization (SeMem) allows the model to selectively memorize training cases based on their value to improve the model, as is inspired by the way in which humans learn through memorization. As Figure 2 shows, SeMem assumes that an easy case (e.g., _Small-1_) which is already well handled by the LM is less valuable to improve the LM than a hard case (e.g., _Small-2_). This intuition allows the LM to skip memorizing easy cases and only focus on memorizing hard cases that the LM is likely to struggle with. SeMem can also be interpreted by analogy to gradient-based learning where a simple case is less valuable than a difficult case because the simple case only yields very weak learning signals (i.e., gradient) to update a model. 
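Before the formal definition, the memorize-or-skip behaviour can be sketched in a few lines of Python on top of a \(k\)NN-LM-style memory. The class and variable names, the fixed interpolation weight and the brute-force neighbour search below are simplifying assumptions for illustration, not the implementation used in this work.

```python
import numpy as np

class SelectiveKNNMemory:
    """Illustrative kNN-LM memory that only stores hard cases (SeMem idea)."""

    def __init__(self, delta=-1.5, k=10, lam=0.25):
        self.delta = delta               # memorization threshold on log-prob
        self.k, self.lam = k, lam
        self.keys, self.values = [], []  # context representations -> next tokens

    def _knn_prob(self, query, vocab_size):
        keys = np.stack(self.keys)
        dist = np.linalg.norm(keys - query, axis=-1)   # L2 distances
        nn = np.argsort(dist)[: self.k]                # k nearest neighbours
        p = np.zeros(vocab_size)
        for i in nn:                                   # exp(-distance) weighting
            p[self.values[i]] += np.exp(-dist[i])
        return p / p.sum()

    def observe(self, query, p_lm, target):
        """query: contextualized rep of the prefix; p_lm: parametric LM distribution."""
        if self.keys:
            p = (1 - self.lam) * p_lm + self.lam * self._knn_prob(query, len(p_lm))
        else:
            p = p_lm
        if np.log(p[target] + 1e-12) < self.delta:  # hard case -> memorize it
            self.keys.append(query)
            self.values.append(target)
        # easy cases are skipped, so the memory grows sublinearly over time
        return p[target]
```

Here the interpolation weight is a constant; Section 3.2 instead learns a calibrator that decides how much to trust the memory for each test case.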
To formalize SeMem, we first define a memorization threshold \(\delta\) for determining if a training case is hard enough for the LM so that the LM needs to memorize it: \[\begin{cases}\text{Memorize,}&\text{ if }\log P(x_{t}|\mathbf{x_{<t}};\Theta)< \delta\\ \text{Skip,}&\text{ else}\end{cases} \tag{4}\] Then, we only memorize the cases whose (probability) confidence is below the threshold \(\delta\): \[\mathcal{M}\leftarrow\mathcal{M}\;\cup\;\{(\widetilde{\mathbf{x_{<t}}}\to x_{t}) |\log P(x_{t}|\mathbf{x_{<t}};\Theta)<\delta\} \tag{5}\] By looking into Eq 4-5 above, we can easily understand that the memorization decision of SeMem dynamically changes over time even for the same case because \(\mathcal{M}\) keeps growing throughout continual learning. As shown in Figure 2(b), if a case (_Small-2_) was memorized previously, its similar cases (e.g., _Small-3_) as well as the case itself will be less likely to be memorized again because the LM may not consider them hard as before. With the LM memorizing more and more cases over time, it will become more knowledgeable and thus rarer to confront incomprehensible cases that need to be memorized. As a result, the non-parametric memory \(\mathcal{M}\)'s growth will not be proportional to the size of training data but become slower and slower, demonstrating the desirable data-wise scalability. SeMem not only benefits scaling over data, but also it improves model-wise scalability. As illustrated in Figure 2(c), a smaller and a larger (parameterized) LM may have different memorization decisions even for the same case: in general, a smaller LM tends to memorize more than a larger LM because it is likely to confront more hard cases that need to be memorized than a larger LM. ### Inference Calibration Given that SeMem allows the LM to skip memorizing easy cases during learning, some easy cases that are well handled by the parameterized LM may never be memorized in the non-parametric memory \(\mathcal{M}\). Therefore, for these easy cases, we should not consult the non-parametric memory but rely more on the parameterized LM; while for hard cases that the parameterized LM cannot address well, we should weigh the non-parametric memory more. This intuition motivates us to calibrate the final output probability distributions (over the output vocabulary) during inference instead of using a constant hyperparameter (\(\lambda\) in Eq 2). Inspired by previous studies (Zheng et al., 2021; Drozdov et al., 2022; He et al., 2021) which propose to adaptively weigh the parametric LM and non-parametric memory for either improving the prediction result or skipping retrieval to reduce inference time, we train a neural network (NN) calibrator to inform whether the case is too simple to consult the non-parametric memory during inference with following features: * Distribution information of the parametric LM * \(\widetilde{\mathbf{x}}\): contextualized representation of \(\mathbf{x}\) by the parameterized LM * \(conf(\mathbf{x})\): \(\max_{y}P(y|\mathbf{x};\theta)\) * \(ent(\mathbf{x})\): entropy of \(P(y|\mathbf{x};\theta)\) * Lexical information of the training data * \(\log freq(\mathbf{x_{-1}})\): log of frequency of the last token in the context * \(\log distinct(\mathbf{x_{-1}})\): log of the number of distinct values of the last token in the context * Density information of the external memory * \(d(\mathbf{\tilde{x}}^{(i)},\mathbf{\tilde{x}})\): \(L^{2}\) distance (semantic similarity) between the query and the top-\(i\) retrieved neighbor, \(i=1,2,\ldots,10\). 
* \(\log distinct(\mathbf{\tilde{y}}^{(i)})\): log of the number of distinct values of the top-\(i\) retrieved values, \(i=1,2,\ldots,10\). Following He et al. (2021), we use a 4-layer MLP network4 as our NN calibrator (parameterized by \(\theta_{c}\)), which can be learned to optimize perplexity of a small validation set: Footnote 4: Details are included in Appendix A.2. \[\theta_{c}^{*}=\arg\max_{\theta_{c}}(1-\lambda(\mathbf{x};\theta_{c}))P(y|\mathbf{x}; \theta)+\lambda(\mathbf{x};\theta_{c})P(y|\widetilde{\mathbf{x}};\mathcal{M}) \tag{6}\] ## 4 Experiments ### Experimental Setting We use the news articles whose timestamps are during the first half (i.e., 6 months) of 2020 in the News Crawl corpus5, referred to as **Newscrawl-20H1** as our streaming data for continual learning (CL). We leave 1% of the streaming data per day as the validation set and another 1% as the test set. Table 1 shows the statistics of Newscrawl-20H1. We continually learn the streaming data in chronological order and update the search index6 every day. In addition, we use the contemporaneous Wikipedia event articles7 which describe real-world events that happened during 20H1 mainly for testing our approach in domains other than news articles. We refer to this dataset as **WikiEvent-20H1**, which contains on average 10 Wikipedia articles per month with around 100k tokens in total. Footnote 6: [https://data.statmt.org/news-crawl](https://data.statmt.org/news-crawl) Footnote 7: An example event article is 2020 Caribbean earthquake. Footnote 8: The release date of the GPT-2 is Feb 2019. Therefore, it must be unaware of data after Feb 2019. We use the GPT-28(Radford et al., 2019) as our parameterized LM to study CL over the streaming data (during 20H1). We experiment with the GPT-2 small9, medium and large size models whose configurations are presented in Table 2. Footnote 8: The GPT-2 small is our default parametric LM if we do not mention its size explicitly. We evaluate SeMem's result and scalability after CL over all the training data of Newscrawl-20H1. We conduct extensive experiments in both language modeling and downstream tasks, and compare SeMem with the following baselines: \begin{table} \begin{tabular}{c|c c c|c c c|c c c} \hline \hline & \multicolumn{6}{c|}{**Newscrawl-20H1**} \\ \hline & \multicolumn{3}{c|}{**Daily**} & \multicolumn{3}{c|}{**Monthly**} & \multicolumn{3}{c}{**Total**} \\ & Train & Valid & Test & Train & Valid & Test & Train & Valid & Test \\ \hline Articles & 4,421 & 45 & 46 & 132,642 & 1,353 & 1,369 & 795,850 & 8,118 & 8,209 \\ Tokens & 2.44M & 24.64K & 24.70K & 73.23M & 739.20K & 741.01K & 439.38M & 4.44M & 4.45M \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of Newscrawl-20H1. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Model & Layer & Dim & \#Param \\ \hline GPT-2 small & 12 & 768 & 123M \\ GPT-2 medium & 24 & 1024 & 355M \\ GPT-2 large & 36 & 1280 & 774M \\ \hline \hline \end{tabular} \end{table} Table 2: Model configurations of the GPT-2 models (as the parametric LMs) in our experiments. * Full memorization (FullMem): Conventional memorization policy in semiparametric LMs that memorizes every token in the training set. * Random memorization (RandMem): Randomly memorizing \(60\%\) of training data. We conduct three runs with different random seeds and choose the best performing one as the baseline. We use the news articles during December 2019 in the News crawl as pilot streaming data to estimate an appropriate memorization threshold \(\delta\) for SeMem. 
We find when \(\delta=-1.5\), SeMem can achieve comparable performance to FullMem with around 60% memory in the pilot data. Therefore, we set \(\delta=-1.5\) in our following experiments. The NN calibrator is trained on a small subset of our Newscrawl-20H1 validation set, which contains only 1,800 articles. ### Results of language modeling Table 3 presents a result comparison between SeMem and FullMem as well as RandMem for CL with Newscrawl-20H1, showing semiparametric LMs substantially improve the bare GPT-2 small (24.1 PPL). Among them, SeMem achieves comparable (even slightly better) language modeling performance to FullMem but with only 53% overall memory size, and largely outperforms RandMem (60% memory size), demonstrating SeMem is a cost-effective memorization policy for CL. We look into the results by tracking SeMem's memorization rate of each month throughout CL. According to Figure 3, SeMem's memorization rate each month gradually decreases as it learns over time because the model becomes increasingly knowledgeable and thus skips memorizing more training cases that it already knows, which also accounts for the sublinear growth trend of SeMem's memory observed in Figure 1(a). We further confirm this by comparing the memorization rate of continually learning the news data during July 2020 in the News crawl (referred to as Newscrawl-July) and WikiEvent-20H1 before and after CL with Newscrawl-20H1 to demonstrate SeMem's data-wise scalability, as shown in Table 4. Table 5 shows the model size's effect on SeMem. In the premise of comparable results to FullMem, the SeMem's effect becomes more significant as the parametric model becomes larger: its overall \begin{table} \begin{tabular}{l|c|c} \hline \hline Methods & PPL (\(\downarrow\)) & MemRate (\(\downarrow\)) \\ \hline Bare GPT-2 small & 24.1 & 0\% \\ +FullMem & 9.0 & 100\% \\ +RandMem & 15.0 & 60\% \\ \hline +SeMem & **8.6** & 53\% \\ \hline \hline \end{tabular} \end{table} Table 3: Perplexity (PPL) on the test set of Newscrawl-20H1 by different memorization methods for CL over Newscrawl-20H1. Figure 3: SeMem’s memorization rate gradually decreases throughout CL over months, demonstrating the data-wise scalability. memorization rate drops from 53% (GPT-2 small) to 42% (GPT-2 large). This is because a larger LM tends to skip more training cases than a smaller LM, as we assumed. The reduced memorization rate well demonstrates the model-wise scalability of SeMem. We finally evaluate SeMem's learning and forgetting performance by tracking its result on 7 test sets throughout CL over the Newscrawl-20H1: 6 are each month's held-out data in Newscrawl-20H1, and the other is the test set of Wiki-103 language modeling benchmark which has nothing to do with the Newscrawl-20H1 training data. According to Figure 4, SeMem learns from the streaming data well, reflected by a sharp decrease of PPL on a test set (except Wiki-103) after learning the corresponding month's training data. More importantly, it does not suffer from the catastrophic forgetting issue (French, 1999). PPL scores of all the 7 test sets do not significantly degrade throughout CL, since SeMem will never erase previous memory or update the parametric LM's weights, showing desirable results for CL. The advantage can be better understood by comparing with other CL baselines such as RecAdam (Chen et al., 2020) and MixiReview (He et al., 2019)10, as shown in Table 6. Footnote 10: Details of CL baselines are included in Appendix B. 
\begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Newcrawl-July (same domain)**} & \multicolumn{2}{c}{**WikiEvent-20H1 (different domain)**} \\ & MemRate (\(\downarrow\)) & PPL (\(\downarrow\)) & MemRate (\(\downarrow\)) & PPL (\(\downarrow\)) \\ \hline \multicolumn{5}{c|}{Bare GPT-2 small} & 60\% & 8.8 & 62\% & 29.0 \\ + CL with Newscrawl-2020H1 & **49\%** & **8.2** & **57\%** & **27.5** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of memorization rate and perplexity for continually learning (CL) new data – Newscrawl-July and WikiEvent-20H1 before and after CL with Newscrawl-2020H1: After CL with Newscrawl-2020H1, the model’s memorization rate for new data (both Newscrawl-July and WikiEvent-20H1) is significantly lower than the bare GPT-2 model but even with better results, demonstrating that previous CL (with Newscrawl-2020H1) help the model skip memorizing many cases that it learned before without hurting performance. Figure 4: Language modeling performance (i.e., perplexity) on 7 test sets (Wiki-103 and 6 subsets of Newcrawl-20H1’s test set by month) throughout CL via SeMem over Newscrawl-20H1. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Model & Strategy & PPL (\(\downarrow\)) & MemRate (\(\downarrow\)) \\ \hline \multirow{2}{*}{GPT-2 small} & FullMem & 9.0 & 100\% \\ \cline{2-4} & SeMem & 8.6 & 53\% \\ \hline \multirow{2}{*}{GPT-2 medium} & FullMem & 7.0 & 100\% \\ \cline{2-4} & SeMem & 7.2 & 48\% \\ \hline \multirow{2}{*}{GPT-2 large} & FullMem & 6.2 & 100\% \\ \cline{2-4} & SeMem & 6.3 & 42\% \\ \hline \hline \end{tabular} \end{table} Table 5: SeMem’s model-wise scalability After CL over the news streaming data which is not a significantly different domain from the LM's pretraining dataset (i.e., openwebtext), we test SeMem's CL performance over the ACL paper dataset (Lo et al., 2020), which contains 42K ACL papers. We hold off 80 papers (\(\sim\)200K tokens) to construct the validation and test set, using the rest for training.11 The result is shown in Table 7. Footnote 11: We split training data into 4 batches for CL and update the index after finishing each batch. ### Results of downstream tasks #### 4.3.1 Next-word prediction For a language model, next-word prediction is its basic usage and most straightforward end task which is especially important for the application (e.g., input methods, Google's smart compose and Microsoft's text predictions) of predicting texts in AI writing assistants. To align this task with our CL setting (over Newscrawl-20H1), we test next-word prediction on the WikiEvent-20H1 to verify if CL over Newscrawl-20H1 can help write the contemporary event articles in Wikipedia. As in language modeling, SeMem consistently shows comparable performance with increasingly better scalability than FullMem as the model size gets larger (Table 8) and achieves desirable CL results with little forgetting (Figure 5). #### 4.3.2 Closed-book question answer We use the RealTime QA (Kasai et al., 2022) benchmark which is a multiple-choice question dataset about the current world, requiring a model to answer questions about novel events or information, as our second testbed of downstream tasks. To align with our streaming data, we use the subset of question-answer pairs about the news during 20H1 in RealTime QA as our downstream dataset and evaluate in the closed-book Multiple Choice setting. 
As the original paper of RealTime QA suggests, we evaluate our largest12 model (i.e., the GPT-2 large) in a zero-shot learning setting. \begin{table} \begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline **Methods** & **Wiki-103** & **1** & **2** & **3** & **4** & **5** & **6** \\ \hline \hline Bare GPT-2 & 29.1 & 24.3 & 24.0 & 24.1 & 24.0 & 24.7 & 23.8 \\ \hline \begin{tabular}{c} + Fine-tune \\ (best) \\ \end{tabular} & 33.4 (+2.9) & 20.4 (+2.4) & 18.7 (+2.3) & 17.3 (+1.3) & 17.3 (+0.2) & 16.2 (+0.8) & 15.4 (+0.0) \\ \begin{tabular}{c} (best) \\ \end{tabular} & 30.6 & 18.2 & 16.4 & 16.0 & 17.1 & 15.4 & 15.4 \\ \hline \begin{tabular}{c} + RecAdam \\ (best) \\ \end{tabular} & 34.5 (+2.7) & 19.6 (+0.7) & 18.3 (+0.7) & 17.3 (+0.2) & 17.1 (+0.3) & 16.8 (+0.3) & 16.9 (+0.0) \\ \begin{tabular}{c} (best) \\ \end{tabular} & 31.8 & 18.9 & 17.6 & 17.1 & 16.8 & 16.5 & 16.9 \\ \hline \begin{tabular}{c} + MixReview \\ (best) \\ \end{tabular} & 33.6 (+3.0) & 19.9 (+1.8) & 18.5 (+2.1) & 17.3 (+1.4) & 17.2 (+0.1) & 15.8 (+0.4) & 15.6 (+0.0) \\ \begin{tabular}{c} (best) \\ \end{tabular} & 30.6 & 18.1 & 16.4 & 15.9 & 17.1 & 15.4 & 15.6 \\ \hline \begin{tabular}{c} + SeMem \\ (best) \\ \end{tabular} & 29.9 (+0.5) & 9.4 (+0.1) & 7.6 (+0.2) & 7.8 (+0.5) & 6.9 (+0.1) & 9.5 (+0.0) & 8.8 (+0.0) \\ \begin{tabular}{c} (best) \\ \end{tabular} & 29.4 & 9.3 & 7.4 & 7.3 & 6.8 & 9.5 & 8.8 \\ \hline \end{tabular} \end{table} Table 6: Perplexity (the lower, the better) comparison evaluated on the 7 test sets (Wiki-103 and Newscrawl-20H1) after CL over the streaming (training) data of Newscrawl-20H1 between SeMem and representative CL approaches: RecAdam (Chen et al., 2020) as the regularization-based CL baseline and Mix-Review (He et al., 2019) as the replay-based CL baseline. The numbers in the second row of each cell of CL methods denote the best result ever achieved during the process of CL. SeMem not only achieves better results in learning from the new data but also suffers less from catastrophic forgetting compared with other CL approaches despite introducing additional memory. \begin{table} \begin{tabular}{l|c|c|c} \hline \hline **Model (Memrate)** & **Wiki-103** & **Newscrawl-20H1** & **ACL** \\ \hline Bare GPT-2 (0\%) & 29.1 & 24.1 & 40.5 \\ \hline + SeMem (64\%) & 29.9 \(\rightarrow\) 30.4 (+0.5) & 8.6 \(\rightarrow\) 9.2 (+0.6) & 22.7 \\ \hline \hline \end{tabular} \end{table} Table 7: CL performance over ACL papers after learning the Newscrawl-20H1 dataset. SeMem consistently does well in learning new data (40.5 \(\rightarrow\) 22.7 in PPL) without significant forgetting (minor performance degradation compared with the counterpart before CL over the ACL papers in the Wiki-103 and Newscrawl-20H1 test set). Compared with FullMem, it only uses 64% memory. Table 9 shows the results in the RealTime QA benchmark. Compared with the bare GPT-2 large, CL through SeMem over Newscrawl-20H1 substantially improves question-answering performance in the Realtime QA dataset because it learns the world knowledge during 20H1 from the news stream that is necessary to answer the questions. As in language modeling evaluation, SeMem performs as well as FullMem with less memorization cost and outperforms RandMem with a similar memory size. Moreover, little forgetting is consistently observed, as shown in Table 10. 
\begin{table} \begin{tabular}{c|c c c} \hline \hline CL over Test on & 1-2 & 3-4 & 5-6 \\ \hline 1-2 & 36.0 & 29.3 & 29.5 \\ 3-4 & 37.2 & 35.8 & 30.6 \\ 5-6 & 37.0 & 36.5 & 35.5 \\ \hline \hline \end{tabular} \end{table} Table 10: Accuracy of the GPT-2 large tested on every two months of RealTime QA throughout CL over Newscrawl-20H1. Figure 5: Next-word prediction accuracy on 6 subsets (by month) of WikiEvent-20H1 throughout CL over Newscrawl-20H1. \begin{table} \begin{tabular}{l|c c|c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Acc (\(\uparrow\))} & \multicolumn{3}{c}{MemRate (\(\downarrow\))} \\ \cline{2-7} & small & medium & large & small & medium & large \\ \hline Bare GPT-2 & 39.5 & 41.2 & 44.6 & 0\% & 0\% & 0\% \\ +FullMem & 43.0 & **45.5** & **48.9** & 100\% & 100\% & 100\% \\ +RandMem & 40.3 & 41.3 & 45.5 & 60\% & 60\% & 60\% \\ \hline +SeMem (Ours) & **43.8** & 45.4 & 48.7 & **53\%** & **48\%** & **42\%** \\ \hline \hline \end{tabular} \end{table} Table 8: Next-word prediction accuracy (Acc) on the test set (WikiEvent-20H1) and memorization rate (MemRate) by different memorization methods and models after CL over Newscrawl-20H1. Analysis ### Performance _VS_ Memorization rate We have confirmed that SeMem can achieve comparable performance to FullMem with a substantially reduced memorization rate when \(\delta=-1.5\). Intuitively, if \(\delta\) increases, more cases will be memorized and the performance will be likely to further increase; on the contrary, if \(\delta\) decreases, more cases will be skipped, resulting in less memory but weaker performance. Table 11 confirms this intuition, demonstrating that we can obtain a trade-off between scalability and performance through the manipulation of \(\delta\). ### NN calibrator As introduced in Section 3.2, the NN calibrator plays an important role in calibrating the semiparametric LM's prediction by adapting \(\lambda\) in Eq 2 at test time. Although such an adaptive method has been proven universally effective in semiparametric LMs by previous work, we reveal in Table 12 that it benefits SeMem most, significantly more than RandMem and conventional FullMem which only introduces marginal improvement as in previous work, demonstrating that calibration is more compatible with SeMem. We ablate the features of the NN calibrator to study their effects on results. According to Table 13, all our proposed features contribute positively to the calibrator, among which the density information, especially the \(L^{2}\) distance, is the most important one because it can directly reflect if the non-parametric memory has much relevant information given a test case, providing the most straightforward evidence to the calibrator. ### In-context learning We test if the in-context learning capability of a language model is affected by CL through SeMem. 
In Table 14, we present the in-context learning results on the RealTime QA benchmark, which can benefit from the memory; they show that SeMem is not in conflict with in-context learning and that the LM can still benefit from more examples shown in the context. \begin{table} \begin{tabular}{l|c} \hline \hline Features & PPL (\(\downarrow\)) \\ \hline All features & 8.6 \\ -density features & 12.0 \\ -distribution features & 9.9 \\ -lexical features & 8.9 \\ \hline \hline \end{tabular} \end{table} Table 13: The ablation study of features in the NN calibrator \begin{table} \begin{tabular}{l|c|c} \hline \hline Methods & PPL (\(\downarrow\)) & MemRate (\(\downarrow\)) \\ \hline Bare GPT-2 small & 24.1 & 0\% \\ \hline +FullMem & 9.0 & 100\% \\ \hline +SeMem (\(\delta=-1.0\)) & **8.2** & 63\% \\ +SeMem (\(\delta=-1.5\)) & 8.6 & 53\% \\ +SeMem (\(\delta=-2.0\)) & 9.9 & **46\%** \\ \hline \hline \end{tabular} \end{table} Table 11: SeMem with different memorization threshold \(\delta\) Also, we evaluate SeMem's effect on in-context learning for general tasks collected by Shi et al. (2022) that cannot benefit from its memory. According to Table 15, despite no improvement observed, SeMem does not affect the results in these tasks, demonstrating its robustness. ## 6 Related Work **Retrieval-augmented LMs** There are many recent works that use retrieval components for language tasks, in particular language modeling. REALM (Guu et al., 2020) puts a trainable document retriever onto the masked language model, which is fine-tuned on an open-domain QA task. RETRO (Borgeaud et al., 2022) enhances auto-regressive language models by conditioning on document chunks retrieved from a large corpus by a frozen BERT retriever, a differentiable encoder, and a chunked cross-attention mechanism. Unlike the above approaches that explicitly store text for retrieval, semiparametric LMs are proposed to store text in memory which consists of key-value pairs. As a representative semiparametric LM, the \(k\)NN-LM (Khandelwal et al., 2020) adopts non-parametric external memory to store text examples seen before and uses it at test time to enhance the parametric LM's prediction, without the need for training or retraining. Since \(k\)NN-LM came out as a powerful method for making use of external data, a number of methods have been proposed to study semiparametric LMs (Yogatama et al., 2021; Drozdov et al., 2022; Jin et al., 2022; Bhardwaj et al., 2022; Shi et al., 2022; Trotta et al., 2022; Khandelwal et al., 2021; Jiang et al., 2021; Zheng et al., 2021). Among them, He et al. (2021) focuses on improving the efficiency of semiparametric LMs, proposing multiple methods (e.g., quantization, dimensionality reduction) to optimize the inference speed. While it has a similar motivation to our work, we focus more on memory scalability in the setting of CL over streaming data with orthogonal technical contributions. **Continual learning LMs** Continual learning (CL) proposes to address the "new knowledge - catastrophic forgetting" dilemma (French, 1999), which means that newly incoming information can cause models to severely forget previously learned knowledge. According to Ke et al.
(2022), the main approaches to overcoming the difficulty of catastrophic forgetting can be categorized into 4 families: (1) regularization-based methods (Liu et al., 2019; Qin and Joty, 2022) add regularization to penalize excessive changes of parameters towards new tasks; (2) replay-based methods (Sun et al., 2020; Scialom et al., 2022; Qin et al., 2022) either memorize seen samples or generate pseudo samples to enhance learning in new tasks; (3) parameter-isolation based methods (Ke et al., 2022; Jin et al., 2021) use separate parameters for distinct tasks, preventing them from interfering with each other; and (4) instruction-based methods (Yin et al., 2022) rely on task-specific instructions (prompts) to condition the LM. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline Methods (MemRate) & 0-shot & 1-shot & 2-shot \\ \hline Bare GPT-2 large (0\%) & 29.8 & 31.5 & 32.5 \\ +FullMem (100\%) & 36.3 & **36.9** & 37.2 \\ +RandMem (60\%) & 30.7 & 32.1 & 33.0 \\ \hline +SeMem (42\%) & **36.3** & 36.7 & **37.7** \\ \hline \hline \end{tabular} \end{table} Table 14: In-context learning accuracy of the GPT-2 large with CL over Newscrawl-20H1 on the RealTime QA in 0-, 1- and 2-shot learning. \begin{table} \begin{tabular}{l|c c c c c c c} \hline \hline Methods (MemRate) & RTE & CB & RT & SST-2 & CR & MR & HYP \\ \hline Bare GPT-2 large (0\%) & 53.1 & 39.3 & 49.5 & 51.4 & 50.5 & 50.8 & 60.0 \\ FullMem (100\%) & 50.3 & 35.5 & 49.0 & 49.8 & 48.6 & 47.5 & 60.0 \\ FullMem (100\%) w/ NN calibrator & 52.8 & 41.1 & 49.5 & 51.8 & 50.8 & 50.9 & 60.0 \\ \hline SeMem (42\%) & 53.1 & 41.1 & 49.5 & 51.5 & 50.8 & 50.0 & 60.0 \\ \hline \hline \end{tabular} \end{table} Table 15: 0-shot learning accuracy of the GPT-2 large with CL over Newscrawl-20H1 on general NLP tasks. However, CL is still an under-explored field for LMs. LAMOL (Sun et al., 2020) simultaneously learns to solve the tasks and generate training samples, so that it can replay pseudo-samples of previous tasks without extra need for memory or model capacity. ELLE (Qin et al., 2022) performs continual pre-training by combining methods of types (2) and (3), while CPT (Ke et al., 2022) adopts continual post-training with the help of its carefully designed adapter. The closest works to what we do are (Jin et al., 2022) and (Jang et al., 2022), which continually adapt language models to emerging corpora across domains and timelines. Our work is a pioneering exploration of using semiparametric LMs as continual learners over streaming data by addressing their painful scalability issue with our proposed Selective Memorization method, which not only alleviates catastrophic forgetting in continual learning but also reaches sublinear data-wise and model-wise scalability in storage without the need for further training or fine-tuning. ## 7 Conclusion In this paper, we study improving the scalability of semiparametric LMs for continual learning and propose Selective Memorization (SeMem), a simple yet intuitive approach. Extensive experiments in both language modeling and downstream tasks show SeMem can largely reduce memory size without hurting performance and that its desirable data-wise and model-wise scalability enables a semiparametric LM to become a scalable continual learner with little forgetting, shedding light on a new path towards evolving an LM without changing its model parameters.
2306.04387
M$^3$IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning
Instruction tuning has significantly advanced large language models (LLMs) such as ChatGPT, enabling them to align with human instructions across diverse tasks. However, progress in open vision-language models (VLMs) has been limited due to the scarcity of high-quality instruction datasets. To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M$^3$IT) dataset, designed to optimize VLM alignment with human instructions. Our M$^3$IT dataset comprises 40 carefully curated datasets, including 2.4 million instances and 400 manually written task instructions, reformatted into a vision-to-text structure. Key tasks are translated into 80 languages with an advanced translation system, ensuring broader accessibility. M$^3$IT surpasses previous datasets regarding task coverage, instruction number and instance scale. Moreover, we develop Ying-VLM, a VLM model trained on our M$^3$IT dataset, showcasing its potential to answer complex questions requiring world knowledge, generalize to unseen video tasks, and comprehend unseen instructions in Chinese. We have open-sourced the dataset to encourage further research.
Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, Qi Liu
2023-06-07T12:35:37Z
http://arxiv.org/abs/2306.04387v2
# M\({}^{3}\)IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning ###### Abstract Instruction tuning has significantly advanced large language models (LLMs) such as ChatGPT, enabling them to align with human instructions across diverse tasks. However, progress in open vision-language models (VLMs) has been limited due to the scarcity of high-quality instruction datasets. To tackle this challenge and promote research in the vision-language field, we introduce the Multi-Modal, Multilingual Instruction Tuning (M\({}^{3}\)IT) dataset, designed to optimize VLM alignment with human instructions. Our M\({}^{3}\)IT dataset comprises 40 carefully curated datasets, including 2.4 million instances and 400 manually written task instructions, reformatted into a vision-to-text structure. Key tasks are translated into 80 languages with an advanced translation system, ensuring broader accessibility. M\({}^{3}\)IT surpasses previous datasets regarding task coverage, instruction number and instance scale. Moreover, we develop Ying-VLM, a VLM model trained on our M\({}^{3}\)IT dataset, showcasing its potential to answer complex questions requiring world knowledge, generalize to unseen video tasks, and comprehend unseen instructions in Chinese. We have open-sourced the dataset to encourage further research.1 Footnote 1: Our dataset is available at [https://huggingface.co/datasets/MMInstruction/M3IT](https://huggingface.co/datasets/MMInstruction/M3IT) ## 1 Introduction There has been a continuously increasing trend to develop intelligent assistants that can follow human instructions [3; 36; 37]. In the natural language processing (NLP) field, instruction tuning [35; 53] is a successful paradigm that leverages large-scale well-formatted instances to align large language models (LLMs) to human instructions. By finetuning on instances with specific task descriptions, LLMs learn to follow instructions to perform various tasks, and demonstrate strong generalization ability on unseen tasks [29]. Expanding beyond NLP, a general-purpose intelligent agent must encompass various modalities, such as vision, prompting recent efforts to investigate instruction tuning in vision-language domains [63; 28; 7]. To develop powerful vision-language models (VLMs), it is essential to have a well-constructed dataset that encompasses diverse vision-language tasks and aligns with human instructions. However, the instructional data supporting existing VLMs is either not publicly available (e.g., GPT-4) or offers limited task and language coverage (e.g., only tasks in English are considered). This scarcity of comprehensive datasets has impeded the progress of open vision-language models, highlighting the importance of multi-modal instruction tuning and the need for high-quality datasets. In this paper, we aim to advance instruction tuning research in the multi-modal domain by introducing an open dataset M\({}^{3}\)IT, a **M**ulti-**M**odal **M**ultilingual **I**nstruction **T**uning dataset, as an essential step towards building a versatile general-purpose assistant. We build this dataset by converting existing datasets into a unified vision-to-text schema with four stages: (1) manual instruction writing, (2) dataset pre-processing, (3) careful quality check and (4) dataset translation for key tasks. Our dataset encompasses a wide range of tasks, including classic image-text tasks such as image classification, visual question answering, and image captioning.
Video-related tasks, such as video question-answering, are also incorporated to ensure comprehensive coverage across multiple modalities. We further integrate Chinese vision-language datasets with corresponding Chinese instructions. The resulting dataset compiles 40 diverse tasks and 400 instructions. Finally, key vision-language tasks are translated into 80 languages with a strong translation system, to support multilingual studies. To evaluate the effectiveness of the proposed dataset, we develop a vision-language model, Ying-VLM, by integrating a strong vision encoder, BLIP-2 [23] with a large language model, Ziya-13B [61], derived from LLaMA [49]. Building on the successful approach of incorporating visual tokens as textual prompts in LLMs [7; 63; 28], we employ a two-stage training process: (1) the initial stage aligns vision features with text embeddings through image captioning on LAION400M [41], and (2) the second stage enhances the model by conducting instruction tuning on selected tasks of our dataset. Experimental results reveal that Ying-VLM surpasses strong baseline models in knowledgeable VQA tasks and exhibits improved generalization performance to unseen video and cross-lingual tasks. Further analysis indicates that the improved performance corresponds to increased tasks for instruction tuning, while the diversity of instructions also affects outcomes. This paper presents two key contributions: (1) We introduce the open-source, large-scale Multi-modal, multilingual Instruction Tuning (M\({}^{3}\)IT) dataset, designed to enable the development of general-purpose multi-modal agents. (2) We develop Ying-VLM, a visual assistant that excels in knowledgeable VQA tasks, demonstrates strong generalization to unseen video QA and Chinese multi-modal tasks, and offers valuable insights for future research. ## 2 Related Work Our work draws inspiration from recent language instruction tuning benchmarks [53; 35], which have been proven effective for improving language models to obtain cross-task generalization ability [29; 52]. In this paper, we focus on exploring the instruction tuning paradigm from LLMs to multi-modal agents. Unlike text-only tasks, vision-language tasks generally have more diverse formats, which poses new challenges toward vision-language instruction tuning benchmarks. To develop a general-purpose vision-language model, it is crucial to create high-quality multi-modal instruction tuning datasets encompassing diverse tasks, languages, and instructions. Several studies have investigated multi-modal instruction tuning for VLMs. LLaVA [28] and MiniGPT-4 [63] generate visual content-related dialog by incorporating image caption data into GPT-4/ChatGPT models. MultiInstruct [56] reformats a series of visual classification tasks into an instruction-tuning format, while InstructBLIP [7] adapts 28 existing image-to-text tasks. However, these datasets do not provide an ideal multi-modal instruction tuning dataset due to their limited (1) coverage of various task types in multi-modal fields, (2) diversity and quality of instances, and (3) inclusion of multiple languages for wide linguistic diversity. In this paper, we construct an improved multi-modal instruction tuning dataset by expanding task coverage to 40 datasets, supplementing instances with 10 manually written task instructions, and including tasks in different languages. Table 1 compares the characteristics of existing multi-modal instruction tuning datasets and M\({}^{3}\)IT. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Dataset & \# Tasks & Multi-Lingual & \# of Instances & Avg. \# of Manual Instructions / Task & Open-Sourced \\ \hline MiniGPT4 & N / A & ✗ & 5K & N / A & ✓ \\ LLaVA & 3 & ✗ & 1.15M & N / A & ✓ \\ MultiModalGPT & 3 & ✗ & 6K & 5 & ✗ \\ MultiInstruct & 26 & ✗ & \(\sim 235\)K & 5 & ✗ \\ InstructBLIP & 28 & ✗ & \(\sim 1.6\)M & 9.7 & ✗ \\ M\({}^{3}\)IT (Ours) & 40 & ✓ & 2.4M & 10 & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of multi-modal instruction tuning datasets. ## 3 M\({}^{3}\)IT: A Multi-Modal Multilingual Instruction Tuning Dataset In this section, we introduce our proposed M\({}^{3}\)IT dataset by first elaborating on the dataset coverage (§ 3.1), followed by the details of the annotation process (§ 3.2). Finally, we present the dataset format and provide the statistics of the crafted instructions (§ 3.3). ### Task Coverage Our dataset compiles a diverse set of classical vision-language tasks, including captioning, visual question answering (VQA), visual conditioned generation, reasoning and classification. **Captioning** This task aims to produce descriptions of the given images according to different needs. We include MS COCO [27] (the Karpathy split) for generic image descriptions. TextCaps [44] requires models to capture the text presented in the image and generate captions accordingly. Image-Paragraph-Captioning [21] focuses on generating detailed descriptions for images. **Reasoning** This task evaluates specific reasoning capabilities. We incorporate CLEVR [19] and NLVR [46] for spatial reasoning, Visual Commonsense Reasoning (VCR) [60] for commonsense reasoning, Visual MRC [47] for reading comprehension over images, and Winoground [48] for fine-grained semantic reasoning over text descriptions and image contents. **Visual Question Answering (VQA)** This is the most widely studied multi-modal task, which requires the model to correctly answer a given question based on the image. Tasks include VQA v2 [15], Shapes VQA [1], DocVQA [33], OCR-VQA [34], ST-VQA [2], Text-VQA [45], and GQA [18]. **Knowledgeable Visual Question Answering** Unlike traditional VQA tasks focusing on questions relevant to the image content, knowledgeable visual question answering (KVQA) requires the model to draw upon outside knowledge to answer questions. We incorporate two outside-knowledge VQA datasets, OK-VQA [32] and A-OK-VQA [42]; ScienceQA [31], which contains multi-modal science questions; and ViQuAE [22], focusing on knowledge facts of named entities in images. **Classification** This task involves classifying an image based on a given set of candidate labels. ImageNet [40], Grounded Object Identification (COCO-GOI) [27], COCO-Text [50], Image Text Matching (COCO-ITM) [27], e-SNLI-VE [20], Multi-modal Fact Checking (Mocheg) [58], and IQA [9] are included. Due to language model input length constraints, we reduce the number of options in some datasets with extensive candidate labels, such as ImageNet. **Generation** Visual conditioned generation requires models to understand the visual content and make a composition meeting the task demand. We have Visual Storytelling (VIST) [17], Visual Dialog (VisDial) [8], and multi-modal machine translation Multi30k [10] in this category.
**Chinese and multilingual Vision-Language Tasks** To examine the effect of instruction tuning on different languages, we incorporate several Chinese vision-language tasks including FM-IQA [11] for VQA, COCO-CN [25] and Flickr8k-CN [24] for captioning, Chinese Food Net [4] for classification, and MMchat [62] for generation. **Video-Language Tasks** Beyond the static images, we are interested in whether instruction tuning can also be applied to video-text tasks. We include the classic MSR-VTT datasets [55] for video captioning, MSRVTT-QA [54], ActivityNet-QA [59], iVQA [57] and MSVD-QA [54] for video question answering, Something-Something [14] for video action classification. As shown in Figure 1, our dataset makes a wide coverage of the current existing visual-language and video-language benchmarks, enabling different skill sets for the language models, from simple image captioning to complicated reasoning based on the image even beyond the visual content. ### Annotation Process To build high-quality multi-modal instruction datasets, we rewrite various datasets into a vision-to-text format. The annotation process includes four steps: (1) writing instructions for each task, (2) structuring images and texts into a unified schema, (3) checking the overall dataset quality, and (4) building multilingual sets. Eight authors of this work are employed as human annotators, each of whom is a graduate student familiar with relevant literature. **Stage I: Instruction Writing** To build high-quality instructions, we first ask annotators to carefully read the dataset paper and check the original dataset with some instances to get a clear understanding of the task. After that, they are required to write \(10\) diverse task instructions manually, covering the key characteristics of the task. Table 2 shows the statistics of the written instructions for each task. In total, we annotate \(400\) instructions for all tasks. The average length per instruction is \(24.4\). To evaluate the diversity of annotated instructions, we employ the average edit distance to measure the similarity between two strings. The average edit distance within the same task is \(76.6\), indicating a good range of instruction diversity. **Stage II: Data Format Unification** After the instruction has been written according to the task characteristics, we further process the images and corresponding text for a unified instance schema. For most datasets, we keep the original images and text, where images are converted into corresponding base64 encoded strings for easy data loading. We perform two modifications on potential examples: (1) **Adding Bounding Box to Images.** For tasks designed for specific regions in the image, a straightforward solution is to provide the bounding box information in natural language for informing the language models of the regions in interest. However, the image preprocessing techniques adopted by different vision encoders may resize the original image, and the original bounding box annotation thus needs further adjustments. Inspired by the recent observation that common vision encoders such as CLIP [39] are sensitive to the visual prompt [43], we directly tag the bounding box as a red rectangle to the image, serving as a hint for VLMs to focus on the target region. 
(2) **Short Answer Paraphrasing.** As recent studies have shown that the original short and brief answers in the common VQA datasets could negatively influence the model generation performance [7], we propose to utilize the ChatGPT [36] model for paraphrasing the original answers, by providing the original question and answer together with potential extra contextual information. Contextual information includes the caption of the original images and OCR tokens for scene-related questions. The prompt used for answer paraphrasing can be found in the Appendix. Figure 2 illustrates the data modifications we performed on our dataset. \begin{table} \begin{tabular}{l r} \hline \hline Number of different instructions & 400 \\ - Image Captioning & 52 \\ - Classification & 113 \\ - Visual Question Answering & 95 \\ - Knowledgeable Visual QA & 40 \\ - Reasoning & 60 \\ - Generation & 40 \\ \hline Tokens per instruction & \(24.4\pm 9.6\) \\ \hline Instruction edit distance among the same task & \(76.6\pm 37.2\) \\ \hline Instruction edit distance across tasks & \(106.6\pm 39.5\) \\ \hline \hline \end{tabular} \end{table} Table 2: The statistics of our instructions. Figure 1: Tasks in our proposed multi-modal multilingual instruction tuning dataset. The tasks in dashed white boxes are held-out evaluation sets that are not adopted during training. Tasks with bold names are translated into \(80\) languages. **Stage III: Quality Check** In this stage, we assign a different annotator to each task to review 10 examples from each split. During this stage, we identify minor format inconsistencies between tasks and address them by standardizing the task formats. We also observe that a few answers (less than 3% of examined instances) were not effectively paraphrased by ChatGPT due to insufficient image information. We employ simple heuristics to filter these paraphrased answers and use a basic template to convert the original answer into a sentence. We find that this small portion of unsuccessfully paraphrased answers has negligible impact. Finally, the task dataset is deemed complete once the annotator can successfully load it and re-examine the accuracy of the instructions, inputs, and outputs for each examined instance. **Stage IV: Key Datasets Translation** To boost the language diversity and support the evaluation across different languages, we select a subset of datasets (OK-VQA, ImageNet, Winoground, VQAv2, VIST, MSRVTT and MSRVTT-QA) that covers different tasks and translate their evaluation data into \(100\) languages following FLORES-101 [13]. We translate \(500\) samples for each split of each task in our first version. More multilingual samples will be supported in the future. We adopt the distilled NLLB-1.3B [6] model for translation, one of the state-of-the-art open multilingual translation models. As native speakers are not available for all languages, we adopt an automatic filtering mechanism to ensure the translation quality, where languages with translation BLEU scores from English larger than \(20\) based on FLORES-101 results are kept. After this step, only \(80\) languages are kept (see Appendix for detailed language names). ### Dataset Format Each instance in our dataset consists of five fields: (1) **Images**: we represent the images, with the potentially added bounding box, by a base64 string. (2) **Instruction**: we randomly select an instruction from the task instruction pool for each instance. 
(3) **Inputs**: we allocate this field for providing task-specific inputs to the model, e.g., the question in the VQA tasks. For tasks such as captioning, there is no extra input, so the corresponding field is left as an empty string. (4) **Outputs**: the required output for the specific task, such as the description of the image for captioning tasks and the answer to the image-related question. (5) **Meta Data**: we provide this field to preserve important information such as the image id for referencing the original dataset. Figure 3 illustrates an instance in the unified format. With the clear distinction of these fields, the user of our benchmark can flexibly construct the training instances needed and evaluate the models conveniently. Table 3 gives the statistics aggregated by tasks, and we refer readers to the Appendix for detailed statistics and the license of each dataset. Figure 2: (Left) On region-based tasks, bounding boxes are added to original images to inform the model of the area of interest. (Right) Short answer paraphrasing to improve the response quality. ## 4 Experiments In this section, we build a VLM to validate the effectiveness of the proposed M\({}^{3}\)IT dataset for multi-modal agents. We first introduce the experimental setups (§ 4.1), then report and discuss the results (§ 4.2). Lastly, we analyze the influence of task number and instruction diversity, and provide a qualitative result (§ 4.3). ### Experimental Settings **Implementation Details** Inspired by the recent success of BLIP [23], we adopt the vision encoder and the Q-former architecture in the BLIP2-OPT-2.7B [23] model to extract relevant visual features from images. For the large language model, we utilize Ziya-13B [61], derived from LLaMA [49], with bilingual (English and Chinese) ability. We employ a two-staged training. **Stage I Visual-Text Alignment:** To align the visual and textual feature spaces, we utilize the instructions in the COCO captioning task and perform an initial alignment training on LAION-400M [41]. We train the Q-former and the language projection, resulting in a total of \(130\)M parameters to optimize with AdamW [30]. The batch size is set to \(256\) to maximize the utilization of the GPUs and the model is trained for \(300\)k steps. The learning rate linearly increases to a peak value of \(5e\)-\(5\) in the first \(2000\) steps and then follows a cosine decay scheduler. The weight decay is set to \(0.05\). **Stage II Multi-modal Instruction Tuning:** We further perform multi-modal instruction tuning on our benchmark to activate the great potential of LLMs. We train the model after alignment training for \(3\) epochs with a lower learning rate of \(1e\)-\(5\) and a warmup stage of \(1000\) steps. Inspired by LoRA tuning [16], the weights for mapping query and value vectors in the attention layers of the LLM are learnable in this stage to better adapt to the instruction tuning dataset. Other training parameters are consistent with Stage I. All experiments are conducted with 8 NVIDIA 80GB A100 GPUs. Stage I took about 10 days, while Stage II can be finished in a day. \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Description} & \multicolumn{3}{c}{Total \#samples} \\ & & Train & Val & Test \\ \hline CAP & Given an image, write a description for the image. & 679,087 & 41,462 & 27,499 \\ CLS & Given an image, classify the image into pre-defined categories. & 238,303 & 100,069 & 21,206 \\ VQA & Given an image, answer a question relevant to the image. 
& 177,633 & 46,314 & 10,828 \\ KVQA & Given an image, answer the question requires outside knowledge. & 39,981 & 11,682 & 5,477 \\ REA & Given an image, conduct reasoning over the images. & 99,372 & 11,500 & 10,000 \\ GEN & Given an image, make compositions with certain requirements. & 145,000 & 11,315 & 17,350 \\ \hline Chinese & CAP, CLS, VQA, and GEN tasks in Chinese. & 192,076 & 77,306 & 4,100 \\ Video & CAP, CLS, and VQA tasks on video-language datasets. & 20,868 & 7,542 & 9,294 \\ \hline Multi-lingual & Translated tasks in \(80\) languages & \multicolumn{2}{c}{0} & 240,000 & 184,000 \\ \hline \hline \end{tabular} \end{table} Table 3: M\({}^{3}\)IT task descriptions and statistics, encompassing image captioning (CAP), classification (CLS), visual question answering (VQA), knowledgeable visual question answering (KVQA), reasoning (REA), generation (GEN), Chinese vision-language, and video-language tasks. We aggregate instance counts for training, validation, and test sets across all tasks, totaling 2,429,264 instances. Figure 3: A ViQuAE instance represented in the unified data instance schema used in our dataset. **Evaluation Setup** To examine the generalization of instruction tuning, some tasks are held-out for evaluation (see Figure 1 for held-in/out tasks). We are interested in the following research questions: (RQ1) Can multi-modal instruction tuning elicit world knowledge from LLMs? (RQ2) Can English-only instruction tuning generalize to other languages such as Chinese? and (RQ3) Can image-only multi-modal instruction tuning generalize to video-language tasks? For RQ1, we evaluate our models on three KVQA tasks in our datasets, i.e., OK-VQA [32], A-OKVQA [42] and ViQuAE. For RQ2 and RQ3, we perform zero-shot transfer evaluation on Chinese vision-language and video-language datasets, respectively. We use greedy decoding in inference if not otherwise specified. **Metrics** We adopt ROUGE-L [26] as an automatic metric to assess the consistency between predictions and ground-truth answers, focusing on evaluating the model's conversational abilities. As the automatic metric may not fully capture the nuances of conversational quality, we further introduce GPT-4 as a proxy of human evaluators (SS 4.2). **Baselines** We compare our models to recently proposed powerful multi-modal agents, including (1) BLIP-2-Flan-T5-XXL [23] where an instruction-tuned Flan-T5 [53] is connected with a powerful vision encoder to perform a series of multi-modal tasks; (2) MiniGPT-4 which aligns a CLIP visual encoder with a frozen Vicuna [5] with artificially collected dialog dataset; and (3) InstructBLIP, a recently proposed instruction tuning enhanced multi-modal agents with Vicuna-13B with converted multi-model datasets and the LLaVA [28] dataset generated by GPT-4. ### Main Results **RQ1: Knowledgeable Visual Question Answer Evaluation** The results on the KVQA benchmarks are shown in Table 4. In comparison to the strongest baseline, our model achieves an improvement of 3.2 and 2.7 ROUGE-L points for OK-VQA and A-OKVQA, respectively. Additionally, Ying-VLM delivers the best performance on the held-out ViQuAE dataset. These findings indicate that instruction tuning on M\({}^{3}\)IT effectively harnesses knowledge from LLMs and elevates response quality. **RQ2: Zero-Shot Transfer to Chinese Vision-Language Tasks** We assess models on three unseen Chinese vision-language tasks to investigate the cross-language generalization capabilities of instruction tuning. 
BLIP-2 is not considered, as Flan-T5 does not support Chinese.2 As illustrated in Table 5, our model outperforms MiniGPT4 and InstructBLIP on all evaluated tasks, demonstrating notable improvements. These findings indicate that instruction tuning with English datasets can effectively generalize to different languages, showcasing the promising potential that can be further explored. Footnote 2: For all models, we introduce a prompt to promote Chinese outputs. See Appendix D for details. **RQ3: Zero-Shot Transfer to Video-Language Tasks** To evaluate performance on video-language tasks, we uniformly sample \(8\) frames from each video. A comparison with MiniGPT4 is excluded, as it does not support video inputs. Following the approach of InstructBLIP [7], we concatenate the visual embedding extracted from the Q-former of each frame as a prefix embedding to the language model. As demonstrated in Table 6, our model excels in these challenging settings, significantly \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Video Captioning} & \multicolumn{4}{c}{Video Question Answer} \\ \cline{3-5} & & iVQA & ActivityNet-QA & MSRVTT-QA & MSVD-QA \\ \hline BLIP-2-Flan-T5-XXL & 8.8 & 11.1 & 8.9 & 10.3 & 13.2 \\ InstructBLIP & **14.3** & 6.3 & 9.3 & 4.0 & 7.0 \\ Ying-VLM (Ours) & 14.2 & **23.5** & **21.9** & **18.3** & **21.4** \\ \hline \hline \end{tabular} \end{table} Table 6: Zero-shot transfer to video-language tasks. We report ROUGE-L score for all tasks. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & OK-VQA & A-OKVQA & ViQuAE \\ \hline BLIP-2-Flan-T5-XXL & 9.1 & 15.6 & 9.7 \\ MiniGPT4 & 23.3 & 21.8 & 24.4 \\ InstructBLIP & 7.1 & 5.9 & 7.3 \\ Ying-VLM (Ours) & **27.5** & **24.5** & **29.6** \\ \hline \hline \end{tabular} \end{table} Table 4: ROUGE-L evaluation results of KVQA tasks. Our Ying-VLM outperforms all the baselines consistently. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Flickr-8h-CN & FM-QA & Chinese-FoodNet \\ \hline MiniGPT4 & 9.6 & 20.1 & 5.0 \\ InstructBLIP & 5.2 & 2.3 & 1.0 \\ Ying-VLM (Ours) & **20.5** & **33.3** & **49.8** \\ \hline \hline \end{tabular} \end{table} Table 5: Zero-shot transfer to Chinese vision-language tasks. Our model generalizes well on unseen Chinese captioning, VQA and classification tasks, with the highest ROUGE-L. surpassing the BLIP-series baselines. It is worth noting that the training dataset does not include any visual inputs such as videos, implying that our instruction tuning effectively aids the model in generalizing to video inputs with a temporal dimension. **GPT-4 Evaluation Results** To further validate the quality of the generated response, we propose to utilize the powerful GPT-4 model as a proxy of human evaluators [38, 12]. Specifically, following Vicuna [5], we use GPT-4 to rate the performance of different models against our Ying-VLM. Considering the API cost of GPT-4, \(300\) examples are randomly sampled from OK-VQA, A-OKVQA and ViQuAE datasets as a subset for evaluation. For each sample, we construct a prompt consisting of the original question, its corresponding reference answer, the response generated by our Ying-VLM, and a baseline system output. GPT-4 is queried with the prompt to rate both responses on a scale of ten based on the given question and its reference answer. 
The ratings are primarily based on the accuracy, relevance, and naturalness of the response to meet the requirements when humans are interacting with multi-modal agents (see Appendix for the detailed evaluation template). We employ the strategy proposed by Wang et al. [51] to mitigate potential evaluation biases regarding the response order.3 Figure 4 shows that our Ying-VLM outperforms all baseline models in most samples. Notably, Ying-VLM beat the strongest baseline MiniGPT4 on \(167\) over \(300\) tested samples. Consistent with the previous ROUGE-L evaluation, this result indicates that the model fine-tuned on our instruction dataset can produce more accurate and engaging responses on the challenging KVQA tasks. Footnote 3: [https://github.com/i-Eval/FairEval](https://github.com/i-Eval/FairEval) ### Analysis We investigate the effect of task number and instruction diversity on the performance of learned models, providing insights for future studies to utilize our benchmark better. Figure 4: Evaluation results using GPT-4 as an evaluator. Our model outperforms MiniGPT-4 and InstructBLIP with a winning rate at 55.6% and 65.5%, respectively. Figure 5: Performance increases with more instruction tuning datasets. Figure 6: Performance changes with the varied number of instructions used for training. **Effect of Task Number** We investigate the influence of task numbers by randomly shuffling our tasks and then selecting a subset to train the model during the instruction tuning stage. Due to the computational resource limitation, we set up a maximum of \(5\)k examples for each task and train all the models for \(5\)k steps with a batch size of \(64\). We select \(0\), \(4\), \(8\), \(16\) and all \(27\) tasks for training, and report the individual ROUGE-L score and the average score. As illustrated in Figure 5, increasing the number of tasks greatly improves the results of the generalization performance. Besides, the performance gain is not diminished as the task number increases. This is promising as it indicates that we can continually improve performance by introducing more tasks into the training. It would be interesting to investigate the influence of different task clusters, which we leave for future studies. **Effect Instruction Diversity** To investigate the influence of instruction diversity, we limit the number of instructions used in each dataset to \(1\), \(2\), \(4\), and \(8\), resulting in varying levels of diversity for each task. The other training parameters are consistent with those used in previous experiments on task number investigation. Figure 6 shows that the performance varies with the level of diversity. Specifically, our results suggest that using four instructions per task is sufficient for achieving decent performance. We leave a more in-depth analysis of the instruction diversity for future work. **Qualitative Results** We conduct a case study to provide a more straightforward understanding of instruction-tuned models. The cases are chosen from the held-out ViQuAE and ChineseFoodNet datasets. As shown in Figure 7, our model generates accurate responses to all questions. In contrast, MiniGPT4 produces an incorrect answer for the stadium question on the left and fails to follow instructions in the subsequent cases, providing generic image descriptions instead. 
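As a concrete reference for how the reported ROUGE-L scores can be computed, the following is a minimal sketch based on the open-source `rouge_score` package; the benchmark's own evaluation scripts may differ in tokenization details.

```python
from rouge_score import rouge_scorer


def average_rouge_l(predictions, references):
    """Average ROUGE-L F-measure between model predictions and references,
    the automatic metric used throughout this section."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    scores = [
        scorer.score(target=ref, prediction=pred)["rougeL"].fmeasure
        for pred, ref in zip(predictions, references)
    ]
    return 100 * sum(scores) / len(scores)


print(average_rouge_l(["a cat sits on the mat"], ["the cat is sitting on a mat"]))
```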
Additionally, compared to InstructBLIP, which provides concise but less engaging answers for the two questions requiring external knowledge, our model responds more naturally and engagingly, underlining the value of our dataset. Our model also successfully generalizes to Chinese inputs, accurately classifying the food image based on the instruction. These cases emphasize the importance of instruction tuning and demonstrate that our dataset can effectively enhance the capabilities of VLMs. ## 5 Conclusion In this paper, we present M\({}^{3}\)IT, a multi-modal multilingual instruction tuning dataset for aiding the development of multi-modal large language models. The dataset comprises 2.4 million carefully curated instances and 400 manually written task instructions across 40 tasks. We build Ying-VLM to validate the effectiveness of our dataset. Quantitative and qualitative results demonstrate that the Figure 7: Case study of the model outputs. Correct answers are bolded with green, wrong answers are in red and irrelevant answers are in grey. The model trained with our datasets can provide natural and informative responses to entity-centric questions, and generalize to the food classification task in Chinese (English translation for visualization only). models trained with our datasets successfully follow human instructions, provide more engaging responses, and achieve strong generalization performance on unseen video and Chinese tasks. Further analysis shows that the increased task number can continually boost performance, and instruction diversity can influence results. We hope our proposed benchmark, trained models, and experimental findings can facilitate future studies toward building powerful multi-modal intelligent agents. ## Appendix A Dataset Statistics Table 7 lists the detailed statistics in our benchmark. We collect the dataset license from PaperWithCode.4 For datasets under Unknown and Custom licenses, we suggest the users check the project page or contact the dataset owner before usage. 
Footnote 4: [https://paperswithcode.com/](https://paperswithcode.com/) \begin{table} \begin{tabular}{c|c|c|c c c|c} \hline \hline \multirow{2}{*}{Task} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{Used} & \multicolumn{4}{c|}{\#samples} & \multirow{2}{*}{License} \\ & & & \multicolumn{2}{c}{Train} & \multicolumn{1}{c|}{Val} & \multicolumn{1}{c|}{Test} & \\ \hline \multirow{3}{*}{Captioning} & MS COCO [27] & Yes & 566,747 & 25,010 & 25,010 & Custom \\ & TextCaps [44] & Yes & 97,765 & 13,965 & 0 & Unknown \\ & Image-Paragraph-Captioning [21] & Yes & 14,575 & 2,487 & 2,489 & Custom \\ \hline \multirow{5}{*}{Classification} & COCO-GOI [27] & Yes & 30,000 & 2,000 & 0 & Custom \\ & COCO-Text [50] & Yes & 118,312 & 27,550 & 0 & Custom \\ & ImageNet [40] & Yes & 30,000 & 50,000 & 0 & Non-commercial \\ & COCO-ITM [27] & Yes & 30,000 & 5,000 & 5,000 & Custom \\ & e-SNLI-VE [20] & Yes & 20,000 & 14,339 & 14,740 & Unknown \\ & Mocheg [58] & Yes & 4,991 & 180 & 466 & CC BY 4.0 \\ & IQA [9] & Yes & 5,000 & 1,000 & 1,000 & Custom \\ \hline \multirow{5}{*}{VQA} & VQA v2 [15] & Yes & 30,000 & 30,000 & 0 & CC-BY 4.0 \\ & Shapes VQA [1] & Yes & 13,568 & 1,024 & 1,024 & Unknown \\ & DocVQA [33] & Yes & 39,463 & 5,349 & 0 & Unknown \\ & OCR-VQA [34] & Yes & 11,414 & 4,940 & 0 & Unknown \\ & ST-VQA [2] & Yes & 26,074 & 0 & 4,070 & Unknown \\ & Text-VQA [45] & Yes & 27,113 & 0 & 5,734 & CC BY 4.0 \\ & GQA [18] & Yes & 30,001 & 5,001 & 0 & CC BY 4.0 \\ \hline \multirow{5}{*}{KVQA} & OK-VQA [32] & Yes & 9,009 & 5,046 & 0 & Unknown \\ & A-OK-VQA [42] & Yes & 17,056 & 1,145 & 0 & Unknown \\ & ScienceQA [31] & Yes & 12,726 & 4,241 & 4,241 & CC BY-NC-SA \\ & ViQuAE [22] & No & 1,190 & 1,250 & 1,236 & CC By 4.0 \\ \hline \multirow{5}{*}{Reasoning} & CLEVR [19] & Yes & 30,000 & 2,000 & 0 & CC BY 4.0 \\ & NLVR [46] & Yes & 29,372 & 2,000 & 0 & Unknown \\ & VCR [60] & Yes & 25,000 & 5,000 & 5,000 & Custom \\ & VisualMRC [47] & Yes & 15,000 & 2,500 & 5,000 & Unknown \\ & Winoground [48] & No & 0 & 0 & 800 & Unknown \\ \hline \multirow{5}{*}{Generation} & Visual Storytelling [17] & Yes & 5,000 & 4,315 & 4,350 & Unknown \\ & Visual Dialog [8] & Yes & 50,000 & 1,000 & 1,000 & CC By 4.0 \\ & Multi30k [10] & Yes & 90,000 & 6,000 & 12,000 & Non-commercial \\ \hline \multirow{5}{*}{Chinese} & FM-IQA [11] & No & 164,735 & 75,206 & 0 & Unknown \\ & COCO-Caption CN [25] & No & 18,341 & 1,000 & 1,000 & Non-commercial \\ & Flickr-Rk-Caption CN [24] & No & 6,000 & 1,000 & 1,000 & CC By 3.0 \\ & Chinese Food Classification [4] & No & 0 & 0 & 1,100 & Unknown \\ & Multimodal Chat [62] & No & 3,000 & 1,000 & 1,000 & Unknown \\ \hline \multirow{5}{*}{Video} & Action-Classification [14] & No & 2,000 & 2,000 & 2,000 & Custom \\ & iVQA [57] & No & 5,994 & 2,000 & 2,000 & Unknown \\ \cline{1-1} & MSVD QA [54] & No & 1,161 & 245 & 504 & Unknown \\ \cline{1-1} & ActivityNet QA [59] & No & 3,200 & 1,800 & 800 & Unknown \\ \cline{1-1} & MSRVT QT QA [54] & No & 6,513 & 497 & 2,990 & Unknown \\ \cline{1-1} & MSRVT Captioning [55] & No & 2,000 & 1,000 & 1,000 & Unknown \\ \hline \hline \end{tabular} \end{table} Table 7: Detailed task descriptions and statistics of our instruction tuning tasks, including all datasets in all types of tasks. The column “Used” indicates whether we use this dataset in the instruction tuning stage. 
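All datasets listed in Table 7 are stored in the unified five-field schema of § 3.3. The snippet below sketches one way to decode such an instance into a PIL image and a text prompt; the JSON field names and the prompt layout are illustrative assumptions rather than a fixed specification.

```python
import base64
import json
from io import BytesIO

from PIL import Image


def load_instance(json_line):
    """Decode one instance: recover the image from its base64 string and
    assemble the instruction and task-specific inputs into a single prompt."""
    instance = json.loads(json_line)
    image = Image.open(BytesIO(base64.b64decode(instance["image"])))
    prompt = instance["instruction"]
    if instance["inputs"]:  # empty string for tasks without extra input
        prompt = f"{prompt}\n{instance['inputs']}"
    return image, prompt, instance["outputs"], instance["meta"]
```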
## Appendix B Template for Answer Paraphrase We provide the paraphrase template in Table 8 for querying ChatGPT to re-write the original short answers, where {Q} and {A} are filled with the question and the answer that needs to be paraphrased, respectively. We incorporate an example to better inform the model of the paraphrasing task. For VQAv2 tasks, we add an extra {Caption} field in the template, filled with corresponding captions from the COCO dataset, to provide extra context information that helps the paraphrasing. \begin{table} \begin{tabular}{|p{284.5pt}|} \hline You are an AI visual assistant. Now you are given a question related to an image and a short ground-truth answer. Your task is to transform the ground-truth answer into a natural and convincing response. Make sure the response is accurate, highly relevant to the question, and consistent with the original answer. \\ Question: Which NASA space probe was launched to this planet in 1989? Answer: Magellan Transformed Answer: NASA sent the Magellan spacecraft to Venus in 1989, which was the first planetary spacecraft launched from a space shuttle. \\ Question: {Q} Answer: {A} Transformed Answer: \\ \hline \end{tabular} \end{table} Table 8: Template used to query ChatGPT for answer paraphrasing. ## Appendix C Dataset Translation We translate all the task instructions and evaluation sets of ImageNet, Winoground, VQAv2, OKVQA, VIST, MSRVTT and MSRVTT-QA into \(80\) languages, as shown in Table 9. Due to the computational resource constraint, we translate the whole test set of Winoground (\(800\) examples) and set a maximum instance number of \(500\) for each split in the other tasks. ## Appendix D Prompt for Zero-Shot Chinese Vision-Language Tasks In our experiments, all Vision-Language models are fine-tuned exclusively using English data. In our preliminary study, we observe that these models tend to generate English responses, even when the input and instructions are written in Chinese. We introduce a simple Chinese dialogue context during the zero-shot Chinese Vision-Language Task evaluation for all models, as illustrated in Table 10. Interestingly, this minor adjustment can encourage models to produce reasonable Chinese output. We leave the analysis of instruction-tuned VLM models' multilingual capabilities for future research. ## Appendix E Template for GPT-4 Evaluation We adopt the template in Table 11 to query GPT-4 and use FairEval to obtain more stable evaluation results. Specifically, each tested instance is a quaternion: (question, reference, response1, response2), where response1 and response2 are the two responses from our Ying-VLM and the baseline model, respectively. For each instance, we query GPT-4 to judge which response is of better quality regarding accuracy, relevance and naturalness. We populate the quaternion into the evaluation template to form two query prompts: T(Q=question, R=reference, R1=response1, R2=response2) and T(Q=question, R=reference, R1=response2, R2=response1). We set the temperature of GPT-4 to \(1\) and sample three completions for each query prompt.
Therefore, each response will receive \(6\) scores, and we use the average score as the final score for each response. The response with the higher final score is considered the better response. The GPT-4 evaluation incurred a cost of \(\$20.45\) for InstructBLIP and \(\$20.90\) for MiniGPT-4. Table 9: List of language codes, scripts, and language names for the translated datasets. The \(80\) retained languages are: Afrikaans, Amharic, Modern Standard Arabic, Assamese, Asturian, Belarusian, Bulgarian, Bengali, Bosnian, Catalan, Cebuano, Czech, Welsh, Danish, German, Greek, Spanish, Estonian, Finnish, French, Nigerian Fulfulde, Galician, Gujarati, Hausa, Hebrew, Hindi, Croatian, Hungarian, Armenian, Indonesian, Igbo, Icelandic, Italian, Japanese, Javanese, Georgian, Kazakh, Khmer, Kannada, Korean, Kyrgyz, Luxembourgish, Ganda, Ligurian, Limburgish, Lingala, Lao, Lithuanian, Standard Latvian, Maori, Macedonian, Malayalam, Marathi, Maltese, Burmese, Dutch, Nyanja, Occitan, Eastern Punjabi, Polish, Portuguese, Romanian, Russian, Sindhi, Slovak, Shona, Somali, Serbian, Swedish, Tamil, Telugu, Tajik, Thai, Tagalog, Turkish, Ukrainian, Urdu, Vietnamese, Wolof, and Chinese (Simplified). Table 10: Prompt for promoting Chinese outputs. \begin{table} \begin{tabular}{|p{284.5pt}|} \hline [Question] \\ [Q] \\ [The Start of Reference Answer] \\ [R] \\ [The End of Reference Answer] \\ [The Start of Assistant 1's Answer] \\ [R1] \\ [The End of Assistant 1's Answer] \\ [The Start of Assistant 2's Answer] \\ [R2] \\ [The End of Assistant 2's Answer] \\ [System] \\ We would like to request your feedback on the performance of two AI assistants in response to the user's multimodal question displayed above. We provided no multimodal inputs other than question text, but we provided a reference answer for this question. You need to evaluate the quality of the two responses based on the question and the reference answer. Please rate the following aspects of the two Assistants' responses: 1. Accuracy: whether the candidate's response is consistent with the original answer, this is important as we do not want a misleading result; 2. Relevance: whether the candidate's response is highly relevant to the question and image content; 3. Naturalness: whether the candidate's response is engaging, providing a great communication experience for the user when interacting with the AI visual assistant. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance. Please first provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment. Then, output two lines indicating the scores for Assistant 1 and 2, respectively. Output with the following format: Evaluation evidence: <<evaluation explanation here>> The score of Assistant 1: <<score>> The score of Assistant 2: <<score>> \\ \hline \end{tabular} \end{table} Table 11: The template used to query GPT-4 for the pairwise response evaluation.
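The order-swapping and score-averaging procedure of Appendix E can be summarized in a short sketch. The GPT-4 call is abstracted behind a hypothetical `query_gpt4` helper (returning the `n` sampled completions for a prompt), and the template is assumed to be available as a Python format string; this illustrates the protocol and is not the exact FairEval implementation.

```python
import re


def score_pair(template, question, reference, ours, baseline, query_gpt4):
    """Rate two responses with GPT-4: query both presentation orders,
    sample three completions each, and average the six scores per response."""
    scores = {"ours": [], "baseline": []}
    orders = [(ours, baseline, "ours", "baseline"),
              (baseline, ours, "baseline", "ours")]
    for first, second, first_key, second_key in orders:
        prompt = template.format(Q=question, R=reference, R1=first, R2=second)
        for completion in query_gpt4(prompt, temperature=1.0, n=3):
            s1 = float(re.search(r"Assistant 1:\s*([\d.]+)", completion).group(1))
            s2 = float(re.search(r"Assistant 2:\s*([\d.]+)", completion).group(1))
            scores[first_key].append(s1)
            scores[second_key].append(s2)
    # the response with the higher average of its six scores is the winner
    return {key: sum(vals) / len(vals) for key, vals in scores.items()}
```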
2310.16065
The Hyperdimensional Transform: a Holographic Representation of Functions
Integral transforms are invaluable mathematical tools to map functions into spaces where they are easier to characterize. We introduce the hyperdimensional transform as a new kind of integral transform. It converts square-integrable functions into noise-robust, holographic, high-dimensional representations called hyperdimensional vectors. The central idea is to approximate a function by a linear combination of random functions. We formally introduce a set of stochastic, orthogonal basis functions and define the hyperdimensional transform and its inverse. We discuss general transform-related properties such as its uniqueness, approximation properties of the inverse transform, and the representation of integrals and derivatives. The hyperdimensional transform offers a powerful, flexible framework that connects closely with other integral transforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it provides theoretical foundations and new insights for the field of hyperdimensional computing, a computing paradigm that is rapidly gaining attention for efficient and explainable machine learning algorithms, with potential applications in statistical modelling and machine learning. In addition, we provide straightforward and easily understandable code, which can function as a tutorial and allows for the reproduction of the demonstrated examples, from computing the transform to solving differential equations.
Pieter Dewulf, Michiel Stock, Bernard De Baets
2023-10-24T11:33:39Z
http://arxiv.org/abs/2310.16065v1
# The Hyperdimensional Transform: a Holographic Representation of Functions ###### Abstract Integral transforms are invaluable mathematical tools to map functions into spaces where they are easier to characterize. We introduce the hyperdimensional transform as a new kind of integral transform. It converts square-integrable functions into noise-robust, holographic, high-dimensional representations called hyperdimensional vectors. The central idea is to approximate a function by a linear combination of random functions. We formally introduce a set of stochastic, orthogonal basis functions and define the hyperdimensional transform and its inverse. We discuss general transform-related properties such as its uniqueness, approximation properties of the inverse transform, and the representation of integrals and derivatives. The hyperdimensional transform offers a powerful, flexible framework that connects closely with other integral transforms, such as the Fourier, Laplace, and fuzzy transforms. Moreover, it provides theoretical foundations and new insights for the field of hyperdimensional computing, a computing paradigm that is rapidly gaining attention for efficient and explainable machine learning algorithms, with potential applications in statistical modelling and machine learning. In addition, we provide straightforward and easily understandable code, which can function as a tutorial and allows for the reproduction of the demonstrated examples, from computing the transform to solving differential equations. Integral transforms, differential equations, hyperdimensional computing, vector symbolic architectures, machine learning, efficient computing ## 1 Introduction ### Integral transforms In mathematics, various kinds of integral transformations (often simply called integral transforms, emphasizing the result of the transformation) exist that map functions from their original space into a new space, e.g., the Laplace transform, the Fourier transform, the wavelet transform, the fuzzy transform and the Z-transform, to name but a few [1, 5, 17, 21]. The underlying idea is that some problems may be solved more easily in the new space and that the solution in this new space can be mapped back (approximately) to the original space. For example, the Laplace transform is a well-known tool for solving differential equations; the Fourier transform is a tool for analyzing functions in the frequency domain; and the fuzzy transform can be used to work with noisy data and for data compression purposes, in addition to solving differential equations. An integral transform can generally be expressed as a mathematical operator \(\mathcal{T}\), taking the following form: \[\left(\mathcal{T}f\right)(s)=\int_{x_{1}}^{x_{2}}f(x)K(x,s)\,\mathrm{d}x\,.\] Here, the function \(f\) is transformed into a function \(\mathcal{T}f\), and the type of transformation is specified by the domains of \(f\) and \(\mathcal{T}f\) and by the integral kernel \(K(x,s)\), which can be seen as a family of basis functions. For example, the Laplace transform converts a function of a real variable \(x\) into a function of a complex variable \(s\), and the exponential basis functions \(e^{-sx}\) determine the integral kernel. The Z-transform converts a discrete-time signal of a variable \(x\) into a function of a complex variable \(s\), with integral kernel \(s^{-x}\). 
As a last example, the fuzzy transform converts a function of a real variable \(x\) into a function \(\mathcal{T}f\) with as domain a finite set, and the integral kernel is determined by a finite fuzzy partition \(\{A_{s}(x)\mid s=1,2,\ldots,n\}\). Since the domain of \(\mathcal{T}f\) is a finite set in this case, one can stack all the evaluations \((\mathcal{T}f)\,(s)\) in a vector and interpret the transformation as a function-to-vector transformation. ### Hyperdimensional computing Our work bridges the above-mentioned integral transforms with the field of hyperdimensional computing (HDC) [8, 9, 10]. Hyperdimensional computing, also known as vector symbolic architectures (VSA), is a highly interdisciplinary field with connections to computer science, electrical engineering, artificial intelligence, mathematics, and cognitive science [9, 10]. Especially in the field of machine learning and data science, hyperdimensional computing has recently witnessed growing interest and an increase in applications as an energy-efficient method [9]. The basic idea of HDC is that objects of any type can be represented by high-dimensional distributed representations, called hyperdimensional vectors. HDC algorithms rely on a set of key vector operations with specific algebraic properties: binding, superposition (also called bundling or aggregation), permutation, and similarity measurement. These operations allow for fast and robust computations. The exact algebraic operations depend on the chosen type of hyperdimensional vector. As hyperdimensional computing largely started as an empirical field in various areas, different types of hyperdimensional vectors have been described and used (e.g., bipolar, binary, ternary, sparse, real-valued, or complex-valued). Nevertheless, the following four properties are assumed to be essential [8]: 1. _Hyperdimensionality_: the vectors should have a large number of dimensions, e.g. 10,000 or more. 2. _Robustness_: corruption of a small fraction of the vector should not result in a significant loss of information. The result of an HDC algorithm should be tolerant for such component failures. This robustness results from redundant representations. 3. _Holistic or holographic representation_: information should not be locally stored but distributed 'equally' over the entire vector. This is very different from the regular representation of data in computers, where specific bits have specific meanings. 4. _Randomness_: vectors should be drawn randomly, typically with its elements independent and identically distributed. These properties take inspiration from the functioning of the brain and allow for the implementation of various aspects of artificial intelligence such as memory, reasoning, and learning. For more details on the hyperdimensional space, we refer to [8, 9, 10]. ### Further outline This paper introduces a linear operator that transforms functions into hyperdimensional vectors, as defined by the four properties above. We are aware of one recent work where a similar idea of representing functions as hyperdimensional vectors was presented. In [7], the authors showed the analogy to kernel methods and used kernels compatible with the hyperdimensional "binding" operation to map kernel-decomposed functions into hyperdimensional space. In our work, the mapping into hyperdimensional space is more general. We present it as a formal integral transform. 
Concretely, in Section 2, we first provide a concrete, formal approach to the representation of objects as hyperdimensional vectors. To that end, we introduce the function \(\Delta^{\varphi}\,:\,X\rightarrow\mathbb{R}^{D}\), called a normalized hyperdimensional encoding, that maps elements \(x\) of a universe \(X\) into the hyperdimensional space \(\mathbb{R}^{D}\) with \(D\) a large number. The components \(\Delta^{\varphi}_{s}(x)\), \(s=1,2,\ldots,\,D\), of this vector-valued function may be seen as orthogonal basis functions, similar to the function \(e^{-sx}\) in the Laplace transform and \(A_{s}(x)\) in the fuzzy transform. Section 3 introduces the hyperdimensional transform for square-integrable functions as the linear operator \(\mathcal{H}^{\Delta^{\varphi}}\,:\,L^{2}(X)\rightarrow\mathbb{R}^{D}\). A noteworthy difference with many integral transforms is that this transform is not limited to functions with as domain a real interval but is defined for functions with as domain a more abstract universe \(X\). Also note that, although \(D\) is assumed large, a function from the infinite-dimensional space \(L^{2}(X)\) is transformed into a finite-dimensional vector space \(\mathbb{R}^{D}\). Hence, the transform can thus only represent an approximation of the original function. This behaviour is also allowed, for example, for the fuzzy transform. The remainder of this work discusses various transform-related properties of the hyperdimensional transform, such as uniqueness (Section 3), the inverse transform (Section 4) and its approximation qualities (Section 5), and the representation of derivatives, integrals and inner products (Section 6). In Section 7, we extend the theory to functions of multiple variables that live in different universes. In Section 8, as an application, we illustrate how linear differential equations and linear integral equations can be naturally represented in the hyperdimensional space. Finally, in Section 9, we discuss close connections with other integral transforms. We indicate how they differ from the new hyperdimensional transform and in which types of applications it might serve. ## 2 Hyperdimensional encoding In this section, we provide a concrete, formal approach to the representation of objects as hyperdimensional vectors. Given a universe \(X\) that is endowed with a measure, we define a hyperdimensional encoding as a function based upon a stochastic process, mapping elements of \(X\) to hyperdimensional vectors. We also introduce a notion of normalization. The corresponding normalized hyperdimensional encoding is the first step towards mapping functions belonging to \(L^{2}(X)\) into a hyperdimensional space. **Definition 1**.: _Let \((X,\mathcal{A},\mu)\) be a finite measure space and \(\{\Phi(x)\mid x\in X\}\) a stochastic process taking values in a bounded set \(S\subset\mathbb{R}\)._ 1. _A function_ \(n\,:\,X\to\mathbb{R}_{>0}\) _that satisfies_ \[\int_{x^{\prime}\in X}\frac{\mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]}{n (x)n(x^{\prime})}\mathrm{d}\mu(x^{\prime})=1\,,\quad\text{ for all }x\in X\,,\] (1) _is called a normalization function of the stochastic process. We define the normalized stochastic process as_ \(\left\{\Delta^{\Phi}(x)\,:\,=\,\frac{\Phi(x)}{n(x)}\mid x\in X\right\}\)_._ 2. 
_Consider the vector-valued functions_ \(\varphi:\,X\to\mathbb{R}^{D}\) _and_ \(\Delta^{\varphi}\,:\,X\to\mathbb{R}^{D}\) _defined by_ \[\varphi(x)\,:\,=\,\left[\varphi_{1}(x),\varphi_{2}(x),\ldots,\varphi_{D}(x) \right]^{\mathrm{T}}\] _and_ \[\Delta^{\varphi}(x)\,:\,=\,\left[\frac{\varphi_{1}(x)}{n(x)},\frac{\varphi_{2 }(x)}{n(x)},\ldots,\frac{\varphi_{D}(x)}{n(x)}\right]^{\mathrm{T}}\,.\] _The_ \(D\) _components of these vectors are independent sample functions from the stochastic processes_ \(\Phi\) _and_ \(\Delta^{\Phi}\)_, respectively. If_ \(\Delta^{\varphi}\) _is Bochner integrable w.r.t._ \(\mu\)_, then we say that_ \(\Delta^{\varphi}\) _is a normalized hyperdimensional encoding of_ \(X\) _w.r.t. the stochastic process_ \(\Phi\)_. The function_ \(\varphi\) _is called the unnormalized hyperdimensional encoding._ **Remark 1**.: _The Bochner integral can be seen as the Lebesgue integral for vector-valued mappings [2, 12]. With the integrand taking values in the vector space \(\mathbb{R}^{D}\), integration w.r.t. \(\mu\) should be interpreted componentwise. In Eq. (1), the Lebesgue and the Bochner integral interpretations coincide as the integrand takes values in \(\mathbb{R}\)._ **Remark 2**.: _A normalized hyperdimensional encoding \(\Delta^{\varphi}\) of \(X\) w.r.t. the stochastic process \(\Phi\) can only be defined if a normalization function \(n\) can be found. The existence of such a function for an arbitrary stochastic process on an arbitrary measure space \((X,\mathcal{A},\mu)\) is not known in general. Examples 1-3 show some (less) obvious solutions for the normalization function on different measure spaces and with different stochastic processes._ The adjective _hyperdimensional_ refers to the fact that the dimensionality \(D\) is huge1. Dimensionalities of 10,000 dimensions or more are fairly typical [8]. According to the law of large numbers [6], we have Footnote 1: \(D\) should be ‘large enough’ such that an inner product approximates its expected values ‘close enough’, which can be quantified using appropriate concentration bounds. \[\lim_{D\to\infty}\left\langle\varphi(x),\varphi(x^{\prime})\right\rangle= \mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]\] and \[\lim_{D\to\infty}\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime}) \right\rangle=\mathbf{E}\left[\Delta^{\Phi}(x)\Delta^{\Phi}(x^{\prime})\right]\,.\] In the left-hand sides, \(\left\langle\cdot,\cdot\right\rangle\) takes two vectors in \(\mathbb{R}^{D}\) as arguments and represents the Euclidean inner product scaled with the dimensionality \(D\). The expected values on the right-hand sides also represent inner products but between stochastic variables. We can write \[\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle\,:\,=\,\mathbf{E}\left[\Phi (x)\Phi(x^{\prime})\right]\] and \[\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x^{\prime})\right\rangle\,:\,=\, \mathbf{E}\left[\Delta^{\Phi}(x)\Delta^{\Phi}(x^{\prime})\right]\,.\] Depending on the context, either the expected value or the inner product notation can be used. By construction, a normalized encoding \(\Delta^{\varphi}\) exhibits the properties of being robust, holistic, and random: each vector component is an independent random sample, while information is statistically encoded via high-dimensional inner products that approximate expected values. 
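The role of the dimensionality \(D\) can be illustrated numerically: for i.i.d. Rademacher components, the scaled inner product \(\left\langle\varphi(x),\varphi(x^{\prime})\right\rangle\) concentrates around its expected value as \(D\) grows. A small sketch with NumPy, using an arbitrarily chosen target correlation of \(0.5\):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.5  # target expected value E[Phi(x) Phi(x')]

for D in (100, 10_000, 1_000_000):
    phi_x = rng.choice([-1, 1], size=D)
    # copy each component with probability (1 + rho) / 2 and flip it otherwise,
    # so that E[phi_x[i] * phi_xp[i]] = rho
    keep = rng.random(D) < (1 + rho) / 2
    phi_xp = np.where(keep, phi_x, -phi_x)
    print(D, phi_x @ phi_xp / D)  # Euclidean inner product scaled with D
```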
**Proposition 1**.: _Let \(\Delta^{\varphi}\) be a normalized hyperdimensional encoding of \(X\), then for all \(x\in X\) we have_ \[\int_{x^{\prime}\in X}\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})=\left\langle\Delta^{\varphi}(x),\int_{x^{\prime}\in X}\Delta^{\varphi}(x^{\prime})\mathrm{d}\mu(x^{\prime})\right\rangle\,.\] Proof.: This result follows from the fact that for any bounded linear operator \(\mathcal{T}\) between two Banach spaces and a function \(f\) that is Bochner integrable, \(\mathcal{T}f\) is Bochner integrable, and \(\mathcal{T}\int_{x\in X}f(x)\,\mathrm{d}\mu(x)=\int_{x\in X}\left(\mathcal{T}f\right)(x)\,\mathrm{d}\mu(x)\). This follows directly from the definition of the Bochner integral [12]. **Proposition 2**.: _Let \(\Delta^{\varphi}\) be a hyperdimensional encoding of \(X\), then we have_ \[\lim_{D\to\infty}\int_{x^{\prime}\in X}\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})=\int_{x^{\prime}\in X}\lim_{D\to\infty}\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})=\int_{x^{\prime}\in X}\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})=1\,.\] Proof.: Due to the law of large numbers [6], we have \[\lim_{D\to\infty}\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle=\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x^{\prime})\right\rangle\,.\] Since \(S\) is bounded, \(\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\) is also bounded on our finite measure space. Hence, the first equality follows from the dominated convergence theorem [12]. The last equality follows from the definition of the normalization function. We now give some examples of a hyperdimensional representation via a normalized hyperdimensional encoding. In practice, many tasks pertain to a universe \(X\) that consists of real values. Intuitively, the inner product \(\left\langle\varphi(x),\varphi(x^{\prime})\right\rangle\) can be related to the similarity between \(x\) and \(x^{\prime}\) and should be a decreasing function of \(|x-x^{\prime}|\). Different kinds of such representations have been proposed in the field of hyperdimensional computing [10]. Typically, some 'range size' \(\lambda\) is chosen such that \(\left\langle\varphi(x),\varphi(x^{\prime})\right\rangle\approx 0\) if \(|x-x^{\prime}|\geq\lambda\). If \(|x-x^{\prime}|<\lambda\), one uses some kind of interpolation: the number of shared entries in \(\varphi(x)\) and \(\varphi(x^{\prime})\) increases with decreasing \(|x-x^{\prime}|\). Below, we give a concrete example of such a hyperdimensional representation. **Example 1**.: _Let \(X=[a,b]\subset\mathbb{R}\), \(\mathcal{A}\) be the set of all subintervals of \(X\) and \(\mu\) the Lebesgue measure expressing the length of each subinterval. We define a stochastic process \(\Phi\) taking values in \(S=\{-1,1\}\) by the following properties:_ 1. \(\mathbf{E}\left[\Phi(x)\right]=0\)_, for all_ \(x\in X\)_;_ 2. 
\(\mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]=\max\left(0,1-\frac{|x-x^{ \prime}|}{\lambda}\right)\)_, for all_ \(x,x^{\prime}\in X\)_, for some_ \(\lambda>0\)_._ _Note that, because of \((i)\), the quantity \(\mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]=\left\langle\Phi(x),\Phi(x^{ \prime})\right\rangle\) in \((ii)\) represents the covariance between the stochastic variables \(\Phi(x)\) and \(\Phi(x^{\prime})\)._ _A D-dimensional sample \(\varphi\) of such a process can be constructed. For example, first, select points \(x_{k}\) with \(k=0,1,...,n+1\) and \(x_{k+1}-x_{k}=\lambda\), such that \(x_{1},...,x_{n}\in[a,b]\) and \(x_{0},x_{n+1}\notin[a,b]\). Then, map all points \(x_{k}\) into corresponding D-dimensional random vectors \(r(x_{k})\in\mathbb{R}^{D}\) via D independent Rademacher variables (taking values in \(\{-1,1\}\) with equal probability). If \(x=x_{k}\), then assign \(\varphi(x)=r(x_{k})\); if \(x_{k}<x<x_{k+1}\), then, for each component \(\varphi_{i}(x)\), sample a switching point \(t_{i}\) from the uniform distribution on \(|x_{k},x_{k+1}[\), and assign \(\varphi_{i}(x)=\varphi_{i}(x_{k})\) if \(x<t_{i}\) and \(\varphi_{i}(x)=\varphi_{i}(x_{k+1})\) if \(x\geq t_{i}\). Each component of \(\varphi(x)\) is thus a piecewise continuous function switching at most \(n+1\) times between \(-1\) and \(1\) at random locations, such that the expectation of the inner product between \(\varphi(x)\) and \(\varphi(x^{\prime})\) decreases linearly with \(|x-x^{\prime}|\) until they are expected to become uncorrelated at \(|x-x^{\prime}|\geq\lambda\). Indeed, \(\lim_{D\to\infty}\left\langle\varphi(x),\varphi(x^{\prime})\right\rangle=\max \left(0,1-\frac{|x-x^{\prime}|}{\lambda}\right)\), which is invariant w.r.t. translation of the chosen points \(x_{k}\)._ _Besides a construction for taking a sample of the stochastic process, a normalization function is also required for a concrete normalized encoding \(\Delta^{\varphi}\). Finding a solution for the normalization function \(n(x)\) corresponds to solving the nonlinear integral equation_ \[1=\int_{x\in[a,b]}\frac{\max\left(0,1-\frac{|x-x^{\prime}|}{\lambda}\right)}{n(x )n(x^{\prime})}\mathrm{d}\mu(x)\,,\] _which is a special case of the nonlinear integral equation of the Hammerstein type with a singular term at the origin. This type of equation has been shown to have a positive measurable solution. For a proof and conditions, we refer to [3]. In practice, an approximate solution to the Hammerstein equation is often constructed by the method of successive approximation [14]. As an illustration for approximating the function \(n(x)\), we use the interval \([0,1]\) and set \(\lambda=1/4\). We choose 100 equidistant points to compute and evaluate our approximation for \(n(x)\). 
As an initial guess, we set_ \[n_{0}(x)=\sqrt{\int_{x^{\prime}\in[a,b]}\max\left(0,1-\frac{|x-x^{\prime}|}{ \lambda}\right)\mathrm{d}\mu(x^{\prime})}\] _and in each \(i\)-th iteration, we compute the function_ \[\bar{1}_{i}(x)=\int_{x^{\prime}\in[a,b]}\frac{\max\left(0,1-\frac{|x-x^{\prime }|}{l}\right)}{n_{i}(x)n_{i}(x^{\prime})}\mathrm{d}\mu(x^{\prime})\] _and update_ \[n_{i+1}(x)=n_{i}(x)\sqrt{\bar{1}_{i}(x)}\,.\] _The left and right panels in Figure 1 show the functions \(\bar{1}_{i}(x)\) and \(n_{i}(x)\) through 10 iterations._ _Using the function \(n_{10}(x)\) obtained after 10 iterations, in Figure 2, we compare the normalized \(\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x^{\prime})\right\rangle\) (right panel) to the unnormalized \(\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle\) (left panel) in function of \(x\) for a range of fixed \(x^{\prime}\). In the right panel, for each fixed \(x^{\prime}\), the area below \(\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x^{\prime})\right\rangle\) is normalized to one._ In the above example, one may recognize the function \(\max\left(0,1-\frac{|x-x^{\prime}|}{\lambda}\right)\) used as a basis function in the fuzzy transform [16]. **Example 2**.: _Consider the same measure space \((X,\mathcal{A},\mu)\) as in Example 1 and a similar stochastic process. Now we add periodic boundary conditions, i.e.,_ 1. \(\mathbf{E}\left[\Phi(x)\right]=0\)_, for all_ \(x\in X\,;\)__ 2. \(\mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]=\max\left(0,1-\frac{d(x,x^{ \prime})}{\lambda}\right)\)_, for all_ \(x,x^{\prime}\in X\)_, with_ \(\lambda=(b-a)/n\) _and_ \(n\geq 2\) _an integer, and_ \[d(x,x^{\prime})=\left\{\begin{array}{ll}|x-x^{\prime}|&\mbox{, if }|x-x^{\prime}|\leq(b-a)/2\\ b-a-|x-x^{\prime}|&\mbox{, if }|x-x^{\prime}|>(b-a)/2\end{array}\right..\] Figure 1: The approximate solution of the Hammerstein equation converges, using an iterative approach. Left panel: \(\bar{1}_{i}(x)\), right panel: \(n_{i}(x)\). _Note that \(\mathbf{E}\left[\Phi(a)\Phi(b)\right]=\left\langle\Phi(a),\Phi(b)\right\rangle=1\)._ _Indeed, a D-dimensional sample \(\varphi\) of such a process can be constructed analogously to Example 1, with the difference that for the points \(x_{0},x_{n+1}\not\in[a,b]\), we set \(r(x_{0})=r(x_{n})\) and \(r(x_{n+1})=r(x_{1})\) and also, we set \(\varphi(x_{0}+x)=\varphi(x_{n}+x)\) for \(x\in\left]0,\lambda\right[\)._ _We obtain a constant as a solution for the normalization function \(n(x)\) since_ \[n^{2}(x)=\int_{x^{\prime}\in[a,b]}\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle \mathrm{d}\mu(x^{\prime})=\lambda\,,\] _for all \(x\in X\), and can define a concrete normalized hyperdimensional encoding as \(\Delta^{\varphi}=\varphi/\sqrt{\lambda}\)._ Note that, if there are no boundaries, the normalization function behaves as a constant. Similarly, in Example 1, away from the boundaries (w.r.t. the 'range size' \(\lambda=1/4\)), the normalization function behaves as a constant. To construct encodings for higher-dimensional spaces, we refer to Section 7. For completeness, we also illustrate the encoding of a finite set by means of the example below that assumes a simple structure. **Example 3**.: _Let \(X=U\times V\times W\) with \(U,V,W\) discrete sets, \(\mathcal{A}\) the power set of \(X\) and \(\mu\) the counting measure. We define a stochastic process \(\Phi\) by the following properties:_ 1. \(\mathbf{E}\left[\Phi(x)\right]=0\)_, for all_ \(x\in X\) _;_ 2. 
\(\mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]=\frac{1}{3}\left(\delta_{x_{1}, x^{\prime}_{1}}+\delta_{x_{2},x^{\prime}_{2}}+\delta_{x_{3},x^{\prime}_{3}} \right)\)_, for all_ \(x=(x_{1},x_{2},x_{3}),x^{\prime}=(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3 })\in X\,.\)__ _Here, \(\delta\) denotes the Kronecker delta function._ _A D-dimensional sample \(\varphi\) of such a process can indeed be constructed. For example, first, map all elements \(x_{1}\in U\), \(x_{2}\in V\), \(x_{3}\in W\) into corresponding D-dimensional random vectors \(r_{1}(x_{1}),r_{2}(x_{2}),r_{3}(x_{3})\in\mathbb{R}^{D}\) via D Rademacher variables, taking values in \(\{-1,1\}\) with equal probability. Then define \(\varphi(x)=\frac{1}{\sqrt{3}}\left(r_{1}(x_{1})+r_{2}(x_{2})+r_{3}(x_{3})\right)\), and, indeed \(\lim_{D\to\infty}\left\langle\varphi(x),\varphi(x^{\prime})\right\rangle=\frac {1}{3}\left(\delta_{x_{1},x^{\prime}_{1}}+\delta_{x_{2},x^{\prime}_{2}}+\delta _{x_{3},x^{\prime}_{3}}\right)\). As normalization function \(n(x)\), we obtain a constant solution by counting:_ \[n^{2}(x) = \int_{x^{\prime}\in X}\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle \mathrm{d}\mu(x^{\prime})\] \[= \int_{x^{\prime}\in X}\frac{1}{3}\left(\delta_{x_{1},x^{\prime} _{1}}+\delta_{x_{2},x^{\prime}_{2}}+\delta_{x_{3},x^{\prime}_{3}}\right) \mathrm{d}\mu(x^{\prime})\] \[= 1+\frac{2}{3}\Big{(}(|U|-1)+(|V|-1)+(|W|-1)\Big{)}\] \[+\frac{1}{3}\Big{(}(|U|-1)(|V|-1)+(|U|-1)(|W|-1)+(|V|-1)(|W|-1)\Big{)}\] \[=\,\frac{1}{3}\Big{(}|U||V|+|V||W|+|W||U|\Big{)}\] _for all \(x\in X\). With the concrete construction for the sample \(\varphi\) and the normalization constant \(n\), we can define a normalized hyperdimensional encoding as \(\Delta^{\varphi}=\varphi/n\). In case \(U,V,W=\{0,1\}\), one may recognize the simple matching coefficient in \(\mathbf{E}\left[\Phi(x)\Phi(x^{\prime})\right]=\frac{1}{3}\left(\delta_{x_{1}, x^{\prime}_{1}}+\delta_{x_{2},x^{\prime}_{2}}+\delta_{x_{3},x^{\prime}_{3}}\right)\), used for expressing the similarity between objects with binary attributes [4, 22]._ In the field of hyperdimensional computing, approaches for constructing hyperdimensional representations have been described for many more universes \(X\), representing different types of data structure such as graphs, images, sequences, symbols, sets, trees, and other structures [18, 10]. These approaches all have a random aspect in common. Our main contribution in this section is the formalization as a stochastic process with expected values and the notion of normalization, which is needed to formulate a proper transform in the next section. ## 3 The hyperdimensional transform In this section, we use the normalized hyperdimensional encoding \(\Delta^{\varphi}\,:\,X\to\mathbb{R}^{D}\) to construct the linear operator \(\mathcal{H}^{\Delta^{\varphi}}\) that transforms functions from \(L^{2}(X)\) into \(\mathbb{R}^{D}\). The components \(\Delta^{\varphi}_{i}\) that result from independent samples of a stochastic process will serve as orthogonal basis functions on which a function \(f\) is projected. We adhere to the following assumptions throughout this section: \((X,\mathcal{A},\mu)\) is a finite measure space; \(\{\Phi(x)\mid x\in X\}\) is a stochastic process taking values in a bounded set \(S\subset\mathbb{R}\); and \(\Delta^{\varphi}\) is a normalized hyperdimensional encoding of \(X\) w.r.t. the stochastic process \(\Phi\). These are also the standing assumptions for the remainder of this work unless indicated differently. 
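Before turning to the transform itself, it may help to see that encodings of this kind are easy to verify numerically. The sketch below is our own illustration of Example 3 (the toy sets and the codebook names `r1`, `r2`, `r3` are arbitrary); it checks that the empirical inner product, read as a componentwise average, approaches the expected similarity \(\frac{1}{3}\left(\delta_{x_{1},x^{\prime}_{1}}+\delta_{x_{2},x^{\prime}_{2}}+\delta_{x_{3},x^{\prime}_{3}}\right)\):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 20_000
U, V, W = list("abc"), list("defg"), list("hi")          # small toy attribute sets

# independent Rademacher codebooks for the three attributes
r1 = {u: rng.choice([-1.0, 1.0], size=D) for u in U}
r2 = {v: rng.choice([-1.0, 1.0], size=D) for v in V}
r3 = {w: rng.choice([-1.0, 1.0], size=D) for w in W}

def phi(x):
    """Unnormalised encoding of a record x = (x1, x2, x3), as in Example 3."""
    x1, x2, x3 = x
    return (r1[x1] + r2[x2] + r3[x3]) / np.sqrt(3.0)

x, xp = ("a", "d", "h"), ("a", "e", "h")                 # records sharing attributes 1 and 3
print(phi(x) @ phi(xp) / D)                              # approx (1 + 0 + 1) / 3 = 0.667
```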
**Definition 2**.: _The hyperdimensional transform w.r.t. \(\Delta^{\varphi}\) is defined by the linear operator \(\mathcal{H}^{\Delta^{\varphi}}\,:\,L^{2}(X)\to\mathbb{R}^{D}\) as:_ \[F=\mathcal{H}^{\Delta^{\varphi}}f\,:=\int_{x\in X}f(x)\Delta^{\varphi}(x)\mathrm{d}\mu(x)\,,\] _and maps any real-valued function \(f\) in \(L^{2}(X)\) to a \(D\)-dimensional real vector \(F\). The product in the integrand is the product of the vector \(\Delta^{\varphi}(x)\in\mathbb{R}^{D}\) and the scalar quantity \(f(x)\in\mathbb{R}\). The integral should again be interpreted as the Bochner integral w.r.t. the Lebesgue measure \(\mu\)._ Note that the operator \(\mathcal{H}^{\Delta^{\varphi}}\) is linear, _i.e._, \[\mathcal{H}^{\Delta^{\varphi}}(\alpha f+\beta g)=\alpha\mathcal{H}^{\Delta^{\varphi}}f+\beta\mathcal{H}^{\Delta^{\varphi}}g\,,\] with \(\alpha\) and \(\beta\) two scalars and \(f\) and \(g\) two real-valued functions in \(L^{2}(X)\). This linearity allows us to extend the hyperdimensional transform from real-valued functions to complex-valued ones, using \(f=f_{real}+\mathbf{i}f_{im}\). **Remark 3**.: 1. _The operator \(\mathcal{H}^{\Delta^{\varphi}}\,:\,L^{2}(X)\to\mathbb{R}^{D}\) maps from one Hilbert space to another. Note that the Hilbert space \(L^{2}(X)\) is infinite-dimensional, while \(\mathbb{R}^{D}\) is finite-dimensional and we assume \(D\) to be large._ 2. _One of the assumptions throughout this section is that the measure space \((X,\mathcal{A},\mu)\) is finite, in which case a function \(f\) in \(L^{2}(X)\) also belongs to \(L^{1}(X)\) and thus is Lebesgue integrable._ 3. _Since the \(i\)-th component \(\Delta^{\varphi}_{i}\) of the vector-valued function \(\Delta^{\varphi}\) is bounded and Lebesgue integrable (see Definition 1 and Remark 1), and \(f\) is Lebesgue integrable (see \((ii)\)), the product \(f\Delta^{\varphi}_{i}\) is also Lebesgue integrable. Hence, the vector-valued function \(f\Delta^{\varphi}\) is Bochner integrable and the transform is well-defined._ **Remark 4**.: _The definition of the hyperdimensional transform for functions can be extended with one for measures. Let \(\mathcal{M}(X,\mathcal{A})\) denote the space of all real-valued measures on the measurable space \((X,\mathcal{A})\). Then the hyperdimensional transform of a measure \(\mu^{\prime}\in\mathcal{M}(X,\mathcal{A})\) w.r.t. \(\Delta^{\varphi}\) can be defined by the linear operator \(\mathcal{H}^{\Delta^{\varphi}}_{*}\,:\,\mathcal{M}(X,\mathcal{A})\to\mathbb{R}^{D}\) as:_ \[M^{\prime}=\mathcal{H}^{\Delta^{\varphi}}_{*}\,\mu^{\prime}=\int_{x\in X}\Delta^{\varphi}(x)\mathrm{d}\mu^{\prime}(x)\,.\] _Instead of weighing the integration with a function \(f\), now a measure \(\mu^{\prime}\) is used. This extension allows for the interpretation \(\Delta^{\varphi}(x)=\mathcal{H}^{\Delta^{\varphi}}_{*}\,\delta_{x}\) with \(\delta_{x}\) the Dirac measure peaked at \(x\). If \(\mu^{\prime}=\mu\), then \(\mathcal{H}^{\Delta^{\varphi}}_{*}\,\mu=\mathcal{H}^{\Delta^{\varphi}}1_{X}\), with \(1_{X}\) the simple function mapping all elements of \(X\) to \(1\)._ The following theorem expresses that the hyperdimensional transform is unique, _i.e._, the transform is injective if the function \(\left\langle\Delta^{\varphi}(\cdot),\Delta^{\varphi}(\cdot)\right\rangle:X\times X\rightarrow\mathbb{R}\) is a strictly positive definite kernel function. For completeness, we first recall the definition of such a kernel function [13].
**Definition 3**.: _A function \(k:\ X\times X\rightarrow\mathbb{R}\) is a positive definite kernel function if it is symmetric, i.e., \(k(x^{1},x^{2})=k(x^{2},x^{1})\), and any Gram matrix is positive definite, i.e.,_ \[\sum_{i=1}^{n}\sum_{j=1}^{n}c_{i}c_{j}k(x^{i},x^{j})\geq 0\,,\] _for any \(n\in\mathbb{N}\), any \(x^{1},\ldots,x^{n}\in X\) and any \(c_{1},\ldots,c_{n}\in\mathbb{R}\)[13, 19]. The function is said to be strictly positive definite if the equality_ \[\sum_{i=1}^{n}\sum_{j=1}^{n}c_{i}c_{j}k(x^{i},x^{j})=0\] _implies \(c_{1}=c_{2}=\ldots=c_{n}=0\)._ This definition is equivalent to saying that the eigenvalues of any Gram matrix of a positive definite kernel function \(k\), _i.e._, any \(n\times n\) matrix \(K\) with \(K_{ij}=k(x^{i},x^{j})\) for any \(x^{1},\ldots,x^{n}\) and any \(n\in\mathbb{N}\), are non-negative. For a strictly positive definite kernel function, the eigenvalues of the Gram matrix must be strictly positive [13, 19]. **Theorem 1**.: _Let \(F=\mathcal{H}^{\Delta^{\varphi}}f\) and \(G=\mathcal{H}^{\Delta^{\varphi}}g\) be the hyperdimensional transforms of \(f,g\in L^{2}(X)\). If the function \(\left\langle\Delta^{\varphi}(\cdot),\Delta^{\varphi}(\cdot)\right\rangle:X\times X\rightarrow\mathbb{R}\) is a strictly positive definite kernel function, then \(F=G\) implies \(f=g\)._ Proof.: Note that \[F-G=\int_{x\in X}\left(f(x)-g(x)\right)\Delta^{\varphi}(x)\mathrm{d}\mu(x)\,.\] Using Proposition 1, we have \[\|F-G\|^{2}=\left\langle F-G,F-G\right\rangle=\int_{x\in X}\int_{x^{\prime}\in X}\left(f(x)-g(x)\right)\left(f(x^{\prime})-g(x^{\prime})\right)\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\mathrm{d}\mu(x)\mathrm{d}\mu(x^{\prime})\,.\] The strict positive definiteness of \(\left\langle\Delta^{\varphi}(\cdot),\Delta^{\varphi}(\cdot)\right\rangle:X\times X\rightarrow\mathbb{R}\) then implies that if \(\|F-G\|^{2}=0\), then also \(f=g\)[13, 19]. **Example 4**.: _The function \(\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle=\max\left(0,1-\frac{|x-x^{\prime}|}{\lambda}\right)\), for \(x,x^{\prime}\in[a,b]\), introduced in Example 1, is positive definite. Indeed, the Fourier transform of the non-negative function \(\frac{2-2\cos(\omega)}{\omega^{2}}\) in the frequency domain is proportional to the function \(\max\left(0,1-|x-x^{\prime}|\right)\) with \(\lambda=1\) in the spatial domain, such that positive definiteness follows from Bochner's theorem [13]. To obtain a unique transform, a strictly positive definite function can always be constructed by using a modified stochastic process \(\Phi_{\epsilon}\) with a small probability \(\epsilon\) for which the outcome of the stochastic process is a completely random function, such that_ \[\left\langle\Phi_{\epsilon}(x),\Phi_{\epsilon}(x^{\prime})\right\rangle=(1-\epsilon)\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle+\epsilon\,\delta_{x,x^{\prime}}\,.\] _Since \(\sum_{i=1}^{n}\sum_{j=1}^{n}c_{i}c_{j}\delta_{x^{i},x^{j}}=0\) implies \(c_{1}=\ldots=c_{n}=0\), even the slightest \(\epsilon\) turns a positive definite function \(\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle\) into a strictly positive definite function \(\left\langle\Phi_{\epsilon}(x),\Phi_{\epsilon}(x^{\prime})\right\rangle\)._ ## 4 The inverse hyperdimensional transform This section introduces the inverse hyperdimensional transform, a linear operator \(\tilde{\mathcal{H}}^{\Delta^{\varphi}}\) that transforms vectors in \(\mathbb{R}^{D}\) back into \(L^{2}(X)\). The back-transformed function \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) is to be understood as an approximation of the original function and not its exact recovery, though it can be an arbitrarily close approximation. **Definition 4**.: _The inverse hyperdimensional transform w.r.t.
\(\Delta^{\varphi}\) is defined by the linear operator \(\tilde{\mathcal{H}}^{\Delta^{\varphi}}:\mathbb{R}^{D}\to L^{2}(X)\) as:_ \[\tilde{\mathcal{H}}^{\Delta^{\varphi}}F:=\left\langle F,\Delta^{\varphi}(\cdot )\right\rangle\,,\] _with function evaluation_ \[\left(\tilde{\mathcal{H}}^{\Delta^{\varphi}}F\right)(x)=\left\langle F,\Delta ^{\varphi}(x)\right\rangle\,,\] _and maps any D-dimensional vector \(F\) to a real-valued function in \(L^{2}(X)\)._ **Remark 5**.: _The function \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}F\) is indeed an element of \(L^{2}(X)\) for all \(F\in\mathbb{R}^{D}\). Since \(\Delta^{\varphi}\) is Bochner integrable, we have that \(\tilde{f}=\langle F,\Delta^{\varphi}(\cdot)\rangle\) is Lebesgue integrable (see Remark 1 and the proof of Proposition 1) and since \(\tilde{f}\) is bounded because \(\Delta^{\varphi}\) is bounded, \(\tilde{f}\) is also square Lebesgue integrable._ **Remark 6**.: _The back-transformed function \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) does not yield the original function \(f\) but an approximation thereof. More specifically, based on Proposition 1, we have:_ \[\tilde{f}(x) = \left(\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta ^{\varphi}}f\right)(x)\] \[= \int_{x^{\prime}\in X}f(x^{\prime})\left\langle\Delta^{\varphi}( x),\Delta^{\varphi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})\,,\] _which can be interpreted as a smoothened version of the original function, according to the kernel function \(\left\langle\Delta^{\varphi}(\cdot),\Delta^{\varphi}(\cdot)\right\rangle\)._ **Remark 7**.: _With \(1_{X}\) the simple function mapping all elements of \(X\) to \(1\), we have:_ \[\tilde{1}_{X}(x)=\left(\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{ \Delta^{\varphi}}1_{X}\right)(x)=\int_{x^{\prime}\in X}\left\langle\Delta^{ \varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})\,,\] _which converges to \(1_{X}\) for \(D\to\infty\), expressing the normalization requirement of \(\Delta^{\varphi}\) (Proposition 2, Definition 1)._ **Example 5**.: _We perform a brief experiment in which \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) can be compared to \(f\). We use the normalized encoding of an interval introduced in Example 1, and set the interval \(X=[0,1]\), \(\lambda=1/20\), the normalization function \(n=n_{10}\), obtained via 10 iterations, and \(f\,:\,x\mapsto x\sin(10x)\). The results for \(D=5000\), \(10,000\), and \(50,000\) are shown in Figure 3._ ## 5 Approximation properties of the inverse hyperdimensional transform In this section, we additionally assume that \((X,d)\) is a metric space such that we can speak of continuous functions. We describe some approximation properties of \(\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) in the limit \(D\to\infty\). Following the law of large numbers [6], this limit is the expected value of \(\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\). Note that the expected absolute difference of \(\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\) and its limit \(\left\langle\Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\) for \(D\to\infty\) scale as \(1/\sqrt{D}\). First, we introduce the notion of a length scale of an encoding. 
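The experiment of Example 5 can be reproduced with a few lines of code. The sketch below is again our own illustration (the quadrature grid, the vectorised sampling, and the use of the kernel mass as an approximate normalisation are simplifications we assume); it computes the transform of \(f(x)=x\sin(10x)\) by a Riemann sum and evaluates the back-transformed function on the same grid, reading inner products as componentwise averages. Increasing \(D\), and decreasing the parameter \(\lambda\) discussed next, brings \(\tilde{f}\) closer to \(f\), as in Figure 3:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, lam, D, M = 0.0, 1.0, 1 / 20, 10_000, 400
xs = np.linspace(a, b, M)
w = (b - a) / (M - 1)                                    # quadrature weight

# sample the process of Example 1 at the quadrature points, vectorised over components
n_cells = int(round((b - a) / lam)) + 1
grid = a - lam + lam * np.arange(n_cells + 2)
R = rng.choice([-1.0, 1.0], size=(n_cells + 2, D))
T = grid[:-1, None] + lam * rng.random((n_cells + 1, D))
cell = np.minimum(((xs - grid[0]) // lam).astype(int), n_cells)
Phi = np.where(xs[:, None] >= T[cell], R[cell + 1], R[cell])     # shape (M, D)

# approximate normalisation by the kernel mass (nearly constant away from the borders)
K = np.maximum(0.0, 1.0 - np.abs(xs[:, None] - xs[None, :]) / lam)
Delta = Phi / np.sqrt(K.sum(axis=1) * w)[:, None]

f = xs * np.sin(10 * xs)
F = Delta.T @ f * w                                      # transform F = int f(x) Delta(x) dmu(x)
f_tilde = (Delta @ F) / D                                # inverse transform, inner product as average
print(np.max(np.abs(f - f_tilde)))                       # shrinks with growing D (cf. Figure 3)
```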
**Definition 5**.: _Let \((X,d)\) be a metric space and \(\{\Phi_{l}(x)\mid x\in X\}\) a stochastic process, parameterized by \(l\in\mathbb{R}_{>0}\), taking values in a bounded set \(S\subset\mathbb{R}\), and let \(\Delta^{\varphi_{l}}\) be a normalized hyperdimensional encoding of \(X\) w.r.t. the stochastic process \(\Phi_{l}\). We say that the parameter \(l\) is a length scale if for all \(x,x^{\prime}\in X\)_ 1. \(\mathbf{E}\left[\Delta^{\Phi_{l}}(x)\Delta^{\Phi_{l}}(x^{\prime})\right]>0\)_, if_ \(d(x,x^{\prime})<l\)_;_ 2. \(\mathbf{E}\left[\Delta^{\Phi_{l}}(x)\Delta^{\Phi_{l}}(x^{\prime})\right]=0\)_, if_ \(d(x,x^{\prime})\geq l\)_._ Figure 3: Comparison of the function \(f\,:\,x\mapsto x\sin(10x)\) and \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) for different dimensionalities \(D\), using the normalized hyperdimensional encoding \(\Delta^{\varphi}\) from Example 1 with \(\lambda=1/20\). _Here, note that \(\mathbf{E}\left[\Delta^{\Phi_{l}}(x)\Delta^{\Phi_{l}}(x^{\prime})\right]=\left\langle \Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{\prime})\right\rangle=\lim_{D\to \infty}\left\langle\Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{\prime})\right\rangle\)._ The length scale \(l\) thus is a parameter that expresses over which distance the random variables \(\Delta^{\Phi_{l}}(x)\) and \(\Delta^{\Phi_{l}}(x^{\prime})\) can be (positively) correlated. **Theorem 2**.: _Let \(\Delta^{\Phi_{l}}\) be a normalized hyperdimensional encoding of \(X\) parameterized by a length scale \(l\in\mathbb{R}_{>0}\). Let \(f:X\to\mathbb{R}\) be a function in \(L^{2}(X)\) that is continuous at \(x\in X\). Then, for any \(\varepsilon>0\), there exists a length scale \(l>0\) such that_ \[|f(x)-\tilde{f}(x)|\leq\varepsilon\,,\] _with \(\tilde{f}=\lim_{D\to\infty}\tilde{\mathcal{H}}^{\Delta^{\Phi_{l}}}\mathcal{H} ^{\Delta^{\Phi_{l}}}f\)._ Proof.: Using \(\int_{x^{\prime}\in X}\left\langle\Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{ \prime})\right\rangle\,\mathrm{d}\mu(x^{\prime})=1\) and Propositions 1 and 2, we have \[\left|f(x)-\tilde{f}(x)\right| =\left|\int_{x^{\prime}\in X}(f(x)-f(x^{\prime}))\left\langle \Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{\prime})\right\rangle\,\mathrm{d}\mu (x^{\prime})\right|\] \[\leq\int_{x^{\prime}\in X}\left|f(x)-f(x^{\prime})\right|\left\langle \Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{\prime})\right\rangle\,\mathrm{d}\mu (x^{\prime})\] \[\leq\max_{x^{\prime}\in X,d(x,x^{\prime})\leq l}\left|f(x)-f(x^{ \prime})\right|\,.\] Due to the continuity of \(f\) at \(x\), for every \(\varepsilon>0\), there exists an \(l>0\) such that \(|f(x)-f(x^{\prime})|<\varepsilon\) if \(d(x,x^{\prime})<l\). **Remark 8**.: _Note that any piecewise continuous function on a real interval \(X=[a,b]\) with a finite number of jump discontinuities can also be approximated arbitrarily well on the entire domain. Assume that \(f\) is continuous on subintervals \(X^{1},\ldots,X^{n}\) that form a partition of \(X\). Now, a normalized hyperdimensional encoding \(\Delta^{\varphi}\) of \(X\) can be constructed in terms of independent encodings \(\Delta^{\varphi\,i}\) that each map subinterval \(X^{i}\) to \(\mathbb{R}^{D}\), i.e., for \(D\to\infty\), \(\left\langle\Delta^{\varphi\,i}(x),\Delta^{\varphi\,j}(x^{\prime})\right\rangle =0\) for \(x,x^{\prime}\in X^{i},X^{j}\) and \(i\neq j\). 
Analogous to the proof of Theorem 2, one can easily show that \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) approximates the piecewise continuous function arbitrarily well by approximating the continuous functions on the different subintervals via independent encodings._ **Example 6**.: _Recall that in both Examples 1 and 2, it holds that \(\left\langle\Phi(x),\Phi(x^{\prime})\right\rangle=\max\left(0,1-\frac{d(x,x^{ \prime})}{\lambda}\right)\). One easily verifies that \(\lambda\) satisfies the requirements of a length scale \(l\) with the following choice of metric \(d\):_ 1. _Example 1:_ \(d(x,x^{\prime})=\left|x-x^{\prime}\right|\,;\)__ 2. _Example 2:_ \[d(x,x^{\prime})=\left\{\begin{array}{ll}\left|x-x^{\prime}\right|&,\,\text{ if }\left|x-x^{\prime}\right|\leq(b-a)/2\\ b-a-\left|x-x^{\prime}\right|&,\,\text{ if }\left|x-x^{\prime}\right|>(b-a)/2 \end{array}\right..\] _Following Theorem 2, the normalized hyperdimensional encodings in these examples thus allow for approximating any continuous function arbitrarily well in the limit of \(D\to\infty\)._ Also functions of discrete variables can be approximated arbitrarily close, as they are always continuous. One can always define an encoding parameterized by a length scale \(l\) such that each element is only correlated to itself for \(l\) approaching \(0\). As an example, we next extend Example 3 by including a length scale \(l\) such that the requirements of the definition of a length scale and of Theorem 2 are fulfilled. **Example 7**.: _We define the metric_ \[d(x,x^{\prime})=1-\frac{\delta_{x_{1},x^{\prime}_{1}}+\delta_{x_{2},x^{\prime }_{2}}+\delta_{x_{3},x^{\prime}_{3}}}{3}\,,\] _taking only values \(0,\,1/3,\,2/3\) and \(1\) on the discrete set \(X\) from Example 3. We define a modified stochastic process \(\Phi_{l}\) parameterized by \(l\) by_ 1. \(\mathbf{E}\left[\Phi(x)\right]=0\)_, for all_ \(x\in X\,;\)__ _._ 2. \[\mathbf{E}\left[\Phi_{l}(x)\Phi_{l}(x^{\prime})\right]=\left\{\begin{array}{ll} \delta_{x_{1},x^{\prime}_{1}}\delta_{x_{2},x^{\prime}_{2}}\delta_{x_{3},x^{ \prime}_{3}}&,\,\,ifl<1/3\\ \frac{1}{3}\left(\delta_{x_{1},x^{\prime}_{1}}\delta_{x_{2},x^{\prime}_{2}}+ \delta_{x_{2},x^{\prime}_{2}}\delta_{x_{3},x^{\prime}_{3}}+\delta_{x_{3},x^{ \prime}_{3}}\delta_{x_{1},x^{\prime}_{1}}\right)&,\,\,if1/3\leq l<2/3\\ \frac{1}{3}\left(\delta_{x_{1},x^{\prime}_{1}}+\delta_{x_{2},x^{\prime}_{2}}+ \delta_{x_{3},x^{\prime}_{3}}\right)&,\,\,if2/3\leq l\end{array}\right.\,,\] _for all \(x=(x_{1},x_{2},x_{3})\in X\) and \(x^{\prime}=(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3})\in X\)._ _A sample can be constructed for all \(x=(x_{1},x_{2},x_{3})\in X\) as_ \[\varphi_{l}(x)=\left\{\begin{array}{ll}r_{1}(x_{1})r_{2}(x_{2})r_{3}(x_{3}) &,\,\,ifl<1/3\\ \frac{1}{\sqrt{3}}\left(r_{1}(x_{1})r_{2}(x_{2})+r_{2}(x_{2})r_{3}(x_{3})+r_{3 }(x_{3})r_{2}(x_{1})\right)&,\,\,if1/3\leq l<2/3\\ \frac{1}{\sqrt{3}}\left(r(x_{1})+r(x_{2})+r(x_{3})\right)&,\,\,if2/3\leq l \end{array}\right.\,.\] _For \(l\geq 2/3\), the stochastic process is unchanged compared to Example 3. For \(l<1/3\), the encoding of each element of \(X\) is only correlated to itself. 
One can compute the normalization constants for every \(l\) and verify that \(l\) is a length scale according to Definition 5: random variables are positively correlated if the distance is smaller than \(l\) and uncorrelated else._ Now, we provide an indication of the speed of convergence with length scale \(l\) when \(X\subset\mathbb{R}\) is a real interval, again assuming the limit \(D\to\infty\). **Theorem 3**.: _Let \(X=[a,b]\subset\mathbb{R}\) and \(\Delta^{\varphi_{l}}\) a normalized hyperdimensional encoding parametrized by a length scale \(l\in\mathbb{R}_{>0}\). Let \(f\) be twice continuously differentiable and \(\tilde{f}=\lim_{D\to\infty}\tilde{\mathcal{H}}^{\Delta^{\varphi_{l}}} \mathcal{H}^{\Delta^{\varphi_{l}}}f\), then_ \[\tilde{f}(x)=f(x)+O(l^{2})\,.\] Proof.: The proof is given for \(l<b-a\) and \(x\in[a+l,b-l]\), and is analogous when closer to the boundaries. In the limit \(D\to\infty\), we have \[\tilde{f}(x)=\int_{x^{\prime}\in[a,b]}f(x^{\prime})\left\langle\Delta^{\Phi}(x ),\Delta^{\Phi}(x^{\prime})\right\rangle\mathrm{d}\mu(x^{\prime})\,.\] Using the trapezium rule for a twice continuously differentiable function with the three points \(x-l\), \(x\), \(x+l\), we have \[\tilde{f}(x)=l\left[\frac{f(x-l)\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x-l )\right\rangle}{2}+\frac{f(x+l)\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x+l )\right\rangle}{2}+f(x)\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x)\right\rangle \right]+O(l^{2})\,,\] and using \(\left\langle\Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{\prime})\right\rangle=0\) if \(|x-x^{\prime}|\geq l\), we have \[\tilde{f}(x)=lf(x)\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x)\right\rangle+O (l^{2})\,.\] Similarly, using \(\int_{x^{\prime}\in[a,b]}\left\langle\Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^ {\prime})\right\rangle\mathrm{d}\mu(x^{\prime})=1\), we have \[f(x)=\int_{[a,b]}f(x)\left\langle\Delta^{\Phi_{l}}(x),\Delta^{\Phi_{l}}(x^{ \prime})\right\rangle\mathrm{d}\mu(x)^{\prime}\,,\] such that, again using the trapezium rule, \[f(x)=lf(x)\left\langle\Delta^{\Phi}(x),\Delta^{\Phi}(x)\right\rangle+O(l^{2})\,,\] and thus \(\tilde{f}(x)=f(x)+O(l^{2})\). **Example 8**.: _Consider the settings of Example 5, but instead of varying \(D\), we set \(D\) large (i.e., at 50,000) and vary \(\lambda\) which acts as a length scale \(l\) according to Theorems 2 and 3. The approximated function for different length scales is shown in Figure 4._ ## 6 Integrals and derivatives In this section, we describe how integrals and derivatives of functions can be expressed in terms of their hyperdimensional transforms. First, we consider integrals, for which no additional assumptions are needed. 
**Theorem 4**.: _Let \(F=\mathcal{H}^{\Delta^{\varphi}}f\) and \(G=\mathcal{H}^{\Delta^{\varphi}}g\) be the hyperdimensional transforms of \(f,g\in L^{2}(X)\), then_ \[\langle F,G\rangle=\int_{x\in X}\tilde{f}(x)\mathrm{g}(x)\mathrm{d}\mu(x)=\int _{x\in X}f(x)\tilde{\mathrm{g}}(x)\mathrm{d}\mu(x)\,,\] _with \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) and \(\tilde{g}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}g\) the back-transformed functions._ Proof.: Using Proposition 1, we have \[\langle F,G\rangle =\left\langle F,\int_{x\in X}g(x)\Delta^{\varphi}(x)\mathrm{d} \mu(x)\right\rangle\] \[=\int_{x\in X}g(x)\left\langle F,\Delta^{\varphi}(x)\right\rangle \mathrm{d}\mu(x)\] \[=\int_{x\in X}g(x)\tilde{f}(x)\mathrm{d}\mu(x)\,.\] The inner product between functions in \(L^{2}(X)\) corresponds to the Euclidean inner product in \(\mathbb{R}^{D}\). **Corollary 1**.: _Let \(f\) be a function in \(L^{2}(X)\) and \(F=\mathcal{H}^{\Delta^{\varphi}}f\) its hyperdimensional transform, and let \(1_{X}\) be the simple function mapping all elements of \(X\) to \(1\) and \(1_{X}=\mathcal{H}^{\Delta^{\varphi}}1_{X}\) its hyperdimensional transform, then_ \[\langle F,\,1_{X}\rangle=\int_{x\in X}f(x)\tilde{1}_{X}(x)\mathrm{d}\mu(x)= \int_{x\in X}\tilde{f}(x)\mathrm{d}\mu(x)\] _and_ \[\lim_{D\to\infty}\langle F,\,1_{X}\rangle=\int_{x\in X}f(x)\mathrm{d}\mu(x)\,.\] Figure 4: Comparison of the function \(f:x\mapsto x\sin(10x)\) and \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) for different length scales, using the normalized hyperdimensional encoding \(\Delta^{\varphi}\) from Example 1 with \(D=50,000\). The larger the length scale, the smoother the approximating function \(\tilde{f}\). The smaller the length scale, the closer the approximation is to the original function. Proof.: The first claim follows directly from Theorem 4 by setting \(G=\mathbb{1}_{X}\). The second claim follows from the fact that for \(D\to\infty\), the function \(\mathbb{1}_{X}\) approximates \(1_{X}\) perfectly, expressing the normalization of \(\Delta^{\Phi}\) (see Remark 7). Note that, for \(D\to\infty\), we thus also have that \(\int_{x\in X}f(x)d\mu(x)=\int_{x\in X}\tilde{f}(x)d\mu(x)\), _i.e._, independently of the length scale, the smoothed function \(\tilde{f}\) yields the same integral. **Corollary 2**.: _Let \(f\) be a function in \(L^{2}(X)\) and \(F=\mathcal{H}^{\Delta^{\varphi}}f\) its hyperdimensional transform, and let \(1_{A}:X\to\{0,1\}\) be the simple function mapping all elements of a measurable subset \(A\subset X\) to \(1\) and all other elements of \(X\) to \(0\), and \(\mathbb{1}_{A}=\mathcal{H}^{\Delta^{\varphi}}1_{A}\) its hyperdimensional transform, then_ \[\left\langle F,\mathbb{1}_{A}\right\rangle=\int_{x\in A}\tilde{f}(x)\mathrm{d }\mu(x)\,.\] Next, we also introduce the representation of the derivative of a function in the hyperdimensional space. Therefore, we add the assumption that \(X\subset\mathbb{R}\) is a real interval, consider the metric \(d(x,x^{\prime})=|x-x^{\prime}|\) and use the standard definition of the derivative. **Definition 6**.: _Let \(\Delta^{\varphi}:X\to\mathbb{R}^{D}\) be a normalized hyperdimensional encoding of \(X\) of which the components \(\Delta^{\varphi}_{i}\) are functions that are \(n\) times differentiable at \(x\in X\). 
Then we say that \(\Delta^{\varphi}\) is \(n\) times differentiable at \(x\) and the \(n\)-th order derivative at \(x\) is elementwisely given by_ \[\Delta^{\varphi,(n)}(x):=\left[\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\Delta^ {\varphi}_{1}(x),\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\Delta^{\varphi}_{2}(x ),\ldots,\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\Delta^{\varphi}_{D}(x)\right]\,.\] **Theorem 5**.: _Let \(\Delta^{\varphi}:X\to\mathbb{R}^{D}\) be a normalized hyperdimensional encoding of \(X\) that is \(n\) times differentiable at \(x\) and \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}\mathcal{H}^{\Delta^{\varphi}}f\) the back-transformed function of \(f\in L^{2}(X)\), then \(\tilde{f}\) is also \(n\) times differentiable at \(x\) and_ \[\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\tilde{f}(x)=\left\langle F,\Delta^{ \varphi,(n)}(x)\right\rangle\,,\] _with \(F=\mathcal{H}^{\Delta^{\varphi}}f\)._ Proof.: Because of the linearity of the inner product, we have \[\frac{\mathrm{d}}{\mathrm{d}x}\tilde{f}(x) =\lim_{|h|\to 0}\frac{\tilde{f}(x+h)-\tilde{f}(x)}{h}\] \[=\lim_{|h|\to 0}\frac{\left\langle F,\Delta^{\varphi}(x+h) \right\rangle-\left\langle F,\Delta^{\varphi}(x)\right\rangle}{h}\] \[=\lim_{|h|\to 0}\left\langle F,\frac{\Delta^{\varphi}(x+h)- \Delta^{\varphi}(x)}{h}\right\rangle\] \[=\left\langle F,\Delta^{\varphi,(1)}(x)\right\rangle\,.\] The higher-order derivatives follow from recursion. **Remark 9**.: _Note that \(\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\tilde{f}(x)=\left\langle F,\Delta^{ \varphi,(n)}(x)\right\rangle\) expresses the linear functionals of function evaluation and derivative function evaluation of any order in a unified way as explicit inner products with \(F\in\mathbb{R}^{D}\). Similarly, the integral operator is a linear functional that is explicitly represented as a dot product with \(F\), i.e., \(\int_{x\in X}\tilde{f}(x)d\mu(x)=\left\langle F,\mathbb{1}_{X}\right\rangle\)._ Typically, in the context of hyperdimensional computing, the hyperdimensional representation is low-memory. Consider Example 1, where the stochastic process \(\Phi\) takes values in \(S=\{-1,1\}\) and the unnormalized hyperdimensional representation \(\varphi(x)\in\{-1,1\}^{D}\) can be represented as a bit vector. In that case, each component \(\varphi_{i}\) of \(\varphi\), being a random function switching between 1 and -1 at a certain frequency (see Example 1), is thus not differentiable. In practice, this is not necessarily a limitation for estimating derivatives. Note that the encoding assumes a finite length scale \(l>0\) within which point representations \(\varphi(x)\) and \(\varphi(x^{\prime})\) are correlated. One can argue that the location of \(x\) is thus fuzzy w.r.t. a precision \(l\). Consequently, one can argue that it is reasonable to approximate the derivative with a finite difference \(h\) close to the length scale \(l\). The finite-difference derivative of \(\tilde{f}\) as an approximation of the true derivative can be exactly computed via the finite-difference derivative of the encoding. The proof is analogous to that of Theorem 5. In Figure 5(a), a few lower-order derivatives of the step function computed with a finite difference are shown. The step function illustrates a component \(\varphi_{i}\) of the unnormalized encoding that switches between 1 and -1 at a certain frequency. As an alternative, one may replace the step function in \(\varphi_{i}\) with smooth alternatives based on, e.g., the sigmoid function (see Figure 5(b)). 
The latter approach results in a smoother function recovery \(\tilde{f}\) and an exact derivative expression, however, at the cost of a more complex encoding compared to the simple \(\{1,-1\}\)-encoding. ## 7 Extensions to functions with multiple variables In this section, we extend the hyperdimensional transform to multivariate functions. We consider bivariate functions \(f(x,y)\) with \(x\in X\) and \(y\in Y\); generalization to three or more variables is analogous and is not explicitly written down for the sake of brevity. Typical examples concern \(X=Y=\mathbb{R}\), although one can envisage more involved settings. The only requirement is that the variables live in measure spaces. The following assumptions stand throughout this section: \((X,\mathcal{A},\nu)\) and \((Y,B,\xi)\) are finite measure spaces; \(\{\Phi(x)\mid x\in X\}\) and \(\{\Psi(y)\mid y\in Y\}\) are stochastic processes taking values in bounded sets \(S\subset\mathbb{R}\) and \(T\subset\mathbb{R}\), resp.; and \(\Delta^{\varphi}:X\to\mathbb{R}^{D}\) and \(\Delta^{\varphi}:Y\to\mathbb{R}^{D}\) are normalized hyperdimensional encodings of \(X\) and \(Y\) w.r. t. the stochastic processes \(\Phi\) and \(\Psi\). In the case that \(X=Y\), the stochastic processes may be the same, but the sampled functions \(\varphi\) and \(\psi\) are always independent. Additionally, we assume that the normalized stochastic processes are zero-centered, _i.e._, \(\mathbb{E}\left[\Delta^{\Phi}(x)\right]=\mathbb{E}\left[\Delta^{\Psi}(y)\right]=0\), for all \(x\in X\) and \(y\in Y\). ### Hyperdimensional representation: product encoding First, we introduce the hyperdimensional representation of a product space. **Definition 7**.: _The function \(\Delta^{\varphi,\varphi}:X\times Y\to\mathbb{R}^{D}\), given by_ \[\Delta^{\varphi,\varphi}(x,y)=\Delta^{\varphi}(x)\otimes\Delta^{\psi}(y)\,,\] _is called the product encoding of \(\Delta^{\varphi}\) and \(\Delta^{\varphi}\). Here, \(\otimes\) denotes the elementwise product, i.e.,_ \[\Delta^{\varphi,\varphi}_{i}(x,y)=\Delta^{\varphi}_{i}(x)\Delta^{\psi}_{i}(y)\,.\] Note that the order of \(\varphi\) and \(\psi\) in the notation \(\Delta^{\varphi,\varphi}\) is important: \(\Delta^{\varphi,\varphi}(x,y)=\Delta^{\varphi}(x)\otimes\Delta^{\psi}(y)\), while \(\Delta^{\varphi,\varphi}(x,y)=\Delta^{\psi}(x)\otimes\Delta^{\varphi}(y)\). **Remark 10**.: _In the limit of \(D\to\infty\), we have_ \[\left\langle\Delta^{\varphi}(x)\otimes\Delta^{\varphi}(y),\Delta^{\varphi}(x ^{\prime})\otimes\Delta^{\varphi}(y^{\prime})\right\rangle=\left\langle \Delta^{\varphi}(x),\Delta^{\varphi}(x^{\prime})\right\rangle\left\langle \Delta^{\psi}(y),\Delta^{\psi}(y^{\prime})\right\rangle\,,\] _which is a basic result from statistics on the covariance of products of zero-centered random variables. This general outer product (or tensor product) property motivates the use of \(\otimes\) for denoting the elementwise product. The property holds for infinite dimensionality and holds only approximately for finite dimensionality. The advantage of this approximation is that the dimensionality \(D\) is a constant, whereas the dimensionality of a real outer product increases as \(D^{2}\)._ Consider the product measure space \((X\times Y,\mathcal{A}\times B,\mu)\). Here, \(\mathcal{A}\times B\) is the \(\sigma\)-algebra generated by the Cartesian products of elements of \(\mathcal{A}\) and \(B\). 
The product measure \(\mu\) is uniquely determined as \(\mu(A\times B)=\nu(A)\xi(B)\), for any \(A\in\mathcal{A}\) and \(B\in B\), if both measure spaces are \(\sigma\)-finite, which is a standard assumption (a finite measure space is also \(\sigma\)-finite). Figure 5: Lower-order derivatives of the step function as an illustration for the lower-order derivatives of a component \(\varphi_{i}\) of the unnormalized encoding \(\varphi\). As in Example 4, \(\varphi_{i}\) is a function switching between 1 and -1 at a certain frequency. In the left panel, a (centered) finite-difference derivative is computed. In the right panel, a smooth differentiable alternative is used for the step function based on the sigmoid function. The functions are rescaled on the y-axis for proper visualization. With this product measure on the product space \(X\times Y\), and the product encoding \(\Delta^{\varphi,\varphi}\,:\,X\times Y\to\mathbb{R}^{D}\), the hyperdimensional transform of \(f\in L^{2}(X\times Y)\) takes the form \[F=\mathcal{H}^{\Delta^{\varphi,\varphi}}\,f=\int_{X\times Y}f(x,y)\Delta^{ \varphi,\varphi}(x,y)\mathrm{d}\mu(x,y)\,.\] According to Fubini's theorem for product measures, this integral can be computed using iterated integrals and the order of integration can be changed, _i.e._, \[F =\int_{Y}\left(\int_{X}f(x,y)\Delta^{\varphi,\varphi}(x,y) \mathrm{d}\upsilon(x)\right)\mathrm{d}\xi(y)\] \[=\int_{X}\left(\int_{Y}f(x,y)\Delta^{\varphi,\varphi}(x,y) \mathrm{d}\xi(y)\right)\mathrm{d}\upsilon(x)\,.\] As a product measure space is again a measure space itself, the aforementioned theory on the hyperdimensional transform, inverse transform and approximation properties, is still applicable. In what follows, we add some additional results that apply in particular to product measure spaces. ### Marginalisation As a second extension for multiple variables, we describe how one can integrate a single variable while fixing the others. **Theorem 6**.: _Let \(1_{Y}\,:\,Y\to\{1\}\) be the simple function mapping all elements of \(Y\) to 1 and \(1_{Y}=\mathcal{H}^{\Delta^{\varphi}}1_{Y}\) its hyperdimensional transform. Let \(f\,:\,X\times Y\to\mathbb{R}\) be a bivariate function in \(L^{2}(X\times Y)\), \(F=\mathcal{H}^{\Delta^{\varphi,\varphi}}f\) its hyperdimensional transform, and \(\tilde{f}=\mathcal{\tilde{H}}^{\Delta^{\varphi,\varphi}}\mathcal{H}^{\Delta^ {\varphi,\varphi}}f\) the back-transformed function, then_ \[\int_{Y}\tilde{f}(x,y)\mathrm{d}\xi(y) =\,\left\langle F,\Delta^{\varphi}(x)\otimes 1_{Y}\right\rangle \tag{2}\] \[=\,\left\langle F\otimes\Delta^{\varphi}(x),1_{Y}\right\rangle\] (3) \[=\,\left\langle F\otimes 1_{Y},\Delta^{\varphi}(x)\right\rangle\,. \tag{4}\] Proof.: We prove the first equality, the other ones being analogous. 
Filling in the expressions, and using Proposition 1 and Fubini's theorem, we have \[\left\langle F,\Delta^{\varphi}(x)\otimes\mathbb{1}_{Y}\right\rangle =\,\left\langle\int_{X}\int_{Y}f(x^{\prime},y^{\prime})\Delta^{ \varphi}(x^{\prime})\otimes\Delta^{\varphi}(y^{\prime})\mathrm{d}\upsilon(x^ {\prime})\mathrm{d}\xi(y^{\prime})\right\rangle\] \[=\,\int_{Y}\left[\int_{X}\int_{Y}f(x^{\prime},y^{\prime})\left\langle \Delta^{\varphi,\varphi}(x^{\prime},y^{\prime}),\Delta^{\varphi,\varphi}(x,y) \right\rangle\;\mathrm{d}\upsilon(x^{\prime})\mathrm{d}\xi(y^{\prime})\right] \mathrm{d}\xi(y)\] \[=\,\int_{Y}\tilde{f}(x,y)\mathrm{d}\xi(y)\,.\] **Remark 11**.: _The three expressions in Theorem 6 have particular interpretations, which one might interpret as a basis for Bayesian inference with complex distributions:_ _Eq. (2): Using the extension of the hyperdimensional transform for measures, one can interpret the expression \(\Delta^{\varphi}(x)\otimes\mathbb{1}_{Y}\) as \(\mathcal{H}_{\Delta}^{\Delta^{\varphi}}\delta_{x}\otimes\mathcal{H}^{\Delta^{ \varphi}}1_{Y}\). The inner product thus represents the evaluation of a function that is Dirac-distributed in the variable \(x\) and has a constant density \(1\) in the variable \(y\)._ _Eq. (3): The expression \(F\otimes\Delta^{\varphi}(x)\) can be seen as the representation of a univariate function in the variable \(y\), conditioned on \(x\). This univariate function is integrated w.r.t. the variable \(y\) by the inner product with \(1_{Y}\)._ _Eq. (4): The expression \(F\otimes\mathbb{1}_{Y}\) can be seen as a marginal univariate function in the variable \(x\). The inner product with \(\Delta^{\varphi}(x)\) is then simply a function evaluation of this function at \(x\). Marginalizing a multivariate function in hyperdimensional space thus simply corresponds to an elementwise vector multiplication._ ### Partial derivatives and gradients For a last extension for multiple variables, we add the assumption that \(X\subset\mathbb{R}\) and \(Y\subset\mathbb{R}\) are real intervals and we use the standard definition of the (partial) derivative. **Theorem 7**.: _Let \(\Delta^{\varphi}\) and \(\Delta^{\varphi}\) be normalized hyperdimensional encodings of \(X\) and \(Y\) that are differentiable at \((x,y)\in X\times Y\), and let \(f\,:\,X\times Y\to\mathbb{R}\) be a bivariate function in \(L^{2}(X\times Y)\), \(F=\mathcal{H}^{\Delta^{\varphi}\varphi}\,f\) its hyperdimensional transform, and \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}\varphi}\mathcal{H}^{\Delta^{ \varphi}\varphi}\,f\) the back-transformed function, then the gradient of \(\tilde{F}\) at \((x,y)\) exists and_ \[\frac{\partial}{\partial x}\tilde{f}(x,y)=\left\langle F,\Delta^{\varphi,(1)}( x)\otimes\Delta^{\varphi}(y)\right\rangle\] _and_ \[\frac{\partial}{\partial y}\tilde{f}(x,y)=\left\langle F,\Delta^{\varphi}(x) \otimes\Delta^{\varphi,(1)}(y)\right\rangle\,.\] Proof.: The proof is analogous to that of Theorem 5. ## 8 Application: expressing linear differential and integral equations In this section, we illustrate how the functionals of function evaluation, derivative function evaluation, and integral evaluation as explicit inner products in hyperdimensional space naturally allow for expressing linear differential and integral equations. Contrary to solving differential equations via other integral transforms (e.g., Laplace or Fourier), no analytical expressions for the transform or the inverse transform are required. 
Instead, the hyperdimensional transform offers a more numerical approach, where the infinite-dimensional function is approximated by a vector of finite, large dimensionality \(D\). This approach unifies solving differential equations and performing linear regression, thus establishing a connection with the fields of statistical modelling and machine learning. We retain the standard assumptions from Section 3, and additionally assume that \(X\subset\mathbb{R}\) is a real interval. ### Linear differential equations Consider the general form of a linear differential equation for \(x\in[a,b]\): \[a_{0}(x)f(x)+a_{1}(x)\frac{\mathrm{d}}{\mathrm{d}x}f(x)+\cdots+a_{n}(x)\frac{ \mathrm{d}^{n}}{\mathrm{d}x^{n}}f(x)=b(x)\,.\] Approximating the solution as \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}F\) with \(F\in\mathbb{R}^{D}\) and using \(\frac{\mathrm{d}^{n}}{\mathrm{d}x^{n}}\tilde{f}(x)=\left\langle F,\Delta^{ \varphi,(n)}(x)\right\rangle\), the differential equation takes the following form: \[a_{0}(x)\left\langle F,\Delta^{\varphi}(x)\right\rangle+a_{1}(x)\left\langle F,\Delta^{\varphi,(1)}(x)\right\rangle+\cdots+a_{n}(x)\left\langle F,\Delta^{ \varphi,(n)}(x)\right\rangle=b(x)\,.\] Equivalently, we have: \[\left\langle F,a_{0}(x)\Delta^{\varphi}(x)+a_{1}(x)\Delta^{\varphi,(1)}(x)+ \cdots+a_{n}(x)\Delta^{\varphi,(n)}(x)\right\rangle=b(x)\,.\] Imposing that the differential equation must hold at points \(x^{i}\), \(i=1,\,\ldots,\,m\), leads to a system of \(m\) equations: \[X\,F=B\,. \tag{5}\] The \(m\times D\) matrix \(X\) stacks the representations of the right-hand sides of the inner products at each of the \(m\) points, _i.e._, \[X=\begin{bmatrix}\left[a_{0}(x^{1})\Delta^{\varphi}(x^{1})+a_{1}(x^{1})\Delta^ {\varphi,(1)}(x^{1})+\cdots+a_{n}(x^{1})\Delta^{\varphi,(n)}(x^{1})\right]^{ \mathrm{T}}\\ \left[a_{0}(x^{2})\Delta^{\varphi}(x^{2})+a_{1}(x^{2})\Delta^{\varphi,(1)}(x^ {2})+\cdots+a_{n}(x^{2})\Delta^{\varphi,(n)}(x^{2})\right]^{\mathrm{T}}\\ \vdots\\ \left[a_{0}(x^{m})\Delta^{\varphi}(x^{m})+a_{1}(x^{m})\Delta^{\varphi,(1)}(x^ {m})+\cdots+a_{n}(x^{m})\Delta^{\varphi,(n)}(x^{m})\right]^{\mathrm{T}}\end{bmatrix}.\] The \(D\)-dimensional vector \(B\) stacks the function evaluations of \(b\) at these points, _i.e._, \[B=\begin{bmatrix}b(x^{1})\\ b(x^{2})\\ \vdots\\ b(x^{m})\end{bmatrix}.\] System (5) imposes the differential equation at the points \(x^{i}\), \(i=1,\ldots,m\). If the maximal distance between the \(m\) points is not greater than the length scale of the encoding, then there are no points \(x\in[a,b]\) such that \(\left\langle\Delta^{\varphi}(x),\,\Delta^{\varphi}(x^{i})\right\rangle=0\). Due to the correlation within the length scale \(l\), the entire domain is thus taken into account in the system corresponding to the \(m\) points. Note that this system takes the exact same form of a regular linear regression problem with \(F\) the model parameters searched for and \(X\) the matrix with highly nonlinear features. With an ordinary least squares assumption and a small regularization term for numerical stability, such a problem is typically solved either via the exact solution of ridge regression or via iterative methods such as conjugate gradient descent. The main difference with linear regression is that the \(m\) data points are now not just \(m\) simple \(0\)-th order function observations in the form of \(f(x^{i})=b_{i}\). Instead, each data point can now express a more complex higher-order function observation, e.g., \(f(x^{i})+f^{\prime}(x^{i})=b_{i}\). 
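As a concrete illustration of how such a system is assembled and solved, the following sketch (our own; it reuses the Example 1 encoding with a kernel-mass normalisation and the parameter choices reported in the caption of Figure 6: \(D=5000\), \(l=0.05\), \(h=l/5\), 500 collocation points, ridge parameter 1) solves the decay equation \(f^{\prime}(x)+kf(x)=0\) with \(k=10\). One boundary-condition row of the kind discussed in the next paragraph, \(f(0)=1\), is already included, and the ridge solution is computed in its equivalent dual form for convenience:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, D, l, k = 0.0, 1.0, 5000, 0.05, 10.0
h, m = l / 5, 500                                        # finite difference and collocation points
xs = np.linspace(a, b, m)

# one sampled, piecewise-constant {-1, 1} realisation of the Example 1 process
n_cells = int(round((b - a) / l)) + 1
grid = a - l + l * np.arange(n_cells + 2)
R = rng.choice([-1.0, 1.0], size=(n_cells + 2, D))
T = grid[:-1, None] + l * rng.random((n_cells + 1, D))

def encode(pts):
    cell = np.clip(((pts - grid[0]) // l).astype(int), 0, n_cells)
    return np.where(pts[:, None] >= T[cell], R[cell + 1], R[cell])

# approximate normalisation by the kernel mass
Kmass = np.maximum(0.0, 1.0 - np.abs(xs[:, None] - xs[None, :]) / l).sum(axis=1) * (b - a) / (m - 1)
def delta(pts):
    return encode(pts) / np.sqrt(np.interp(pts, xs, Kmass))[:, None]

Dlt = delta(xs)
dDlt = (delta(xs + h) - delta(xs - h)) / (2 * h)         # finite-difference derivative of the encoding

# rows express <F, Delta'(x^i) + k Delta(x^i)> = 0 and the boundary condition <F, Delta(0)> = 1;
# dividing by D makes the ordinary dot product match the averaged inner product used in the text
A = np.vstack([(dDlt + k * Dlt) / D, delta(np.array([a])) / D])
Bvec = np.concatenate([np.zeros(m), [1.0]])

lam_ridge = 1.0                                          # small ridge term for numerical stability
F = A.T @ np.linalg.solve(A @ A.T + lam_ridge * np.eye(m + 1), Bvec)

f_tilde = (Dlt / D) @ F                                  # recovered solution
print(np.max(np.abs(f_tilde - np.exp(-k * xs))))         # compare with the analytical solution
```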
Also, boundary conditions of any order can be added to the equations as they can also be expressed as inner products and take the very same form. Adding the equations for the boundary conditions, the system of linear equations that expresses the differential equation and the boundary conditions can be written as \(X_{c}F=B_{c}\). We refer to Figure 6 for some examples. Note that, even at this finite dimensionality, the result shows hardly any visible noise. Here, \(\tilde{f}\) was optimized to match the differential equation as well as possible. The conditions on the derivatives of \(\tilde{f}\) and the ridge regularization may ensure a smoother \(\tilde{f}\). Figure 6: Hyperdimensional solutions of basic differential equations describing exponential decay, the harmonic oscillator, and the damped harmonic oscillator. Both the hyperdimensional solutions (dashed line), taking into account the boundary conditions, and the analytical solutions are plotted (solid line). We set \(k=10\). The encoding of Example 1, a dimensionality \(D=5000\), a length scale \(l=0.05\), a finite difference \(h=l/5\) for derivation, and \(500\) equidistant points \(x^{i}\in[0,1]\) were used. The solution was computed using the exact solution of ridge regression [11], with a small ridge regression parameter \(\lambda=1\) for numerical stability. **Remark 12**.: _The above approach unifies performing linear regression and solving a differential equation. When no data points for regression are given, the differential equation will dominate, and vice versa. One may also consider linear regression as the main task, and see the differential equation as a kind of regularization. For example, when little or no data points for regression are available, one may impose the function to behave as a constant, linearly, quadratically, etc., by imposing \(f^{\prime}(x)=0\), \(f^{\prime\prime}(x)=0\), \(f^{\prime\prime\prime}(x)=0\), etc._ Solving a differential equation in hyperdimensional space takes this simple form because the function is represented as a vector \(F\), and the functionals that query a function evaluation, a derivative function evaluation, etc., are all represented as inner products with \(F\). A system of \(m\) linear equations in the components of \(F\) can then simply be constructed by expressing at which \(m\) points the equations must hold. ### Linear integral equations The very same reasoning holds for integral equations. Next, we show how solving an integral equation can be turned into solving a linear regression problem. A prominent example of a linear integral equation is the Fredholm equation of the second type: \[f(x)=b(x)+\lambda\int_{c}^{d}k(y,x)f(y)\mathrm{d}y\,.\] The functions \(b:[c,d]\to\mathbb{R}\) and \(k:[c,d]\times[c,d]\to\mathbb{R}\) are given, \(\lambda\) is a constant, and \(f:[c,d]\to\mathbb{R}\) is the function searched for. We approximate the solution as \(\tilde{f}=\tilde{\mathcal{H}}^{\Delta^{\varphi}}F\) with \(F\in\mathbb{R}^{D}\) and use \(K=\mathcal{H}^{\Delta^{\varphi}\varphi}k\) as the hyperdimensional representation of \(k\). Recall that the multivariate transform \(K=\mathcal{H}^{\Delta^{\varphi}\varphi}k\) w.r.t. the hyperdimensional encoding \(\Delta^{\varphi,\varphi}\) assumes that the first variable (here, \(y\)) and the second variable (here, \(x\)) are independently encoded by \(\Delta^{\varphi}\) and \(\Delta^{\varphi}\), respectively. The integral equation can be written as \[\langle F,\Delta^{\varphi}(x)\rangle=b(x)+\lambda\left\langle F,K\otimes\Delta^{\varphi}(x)\right\rangle\,.
\tag{6}\] Note that the inner product on the right-hand side of Eq. (6) integrates the variable \(y\) encoded by \(\Delta^{\varphi}\), while the variable \(x\), encoded by \(\Delta^{\varphi}\), is used for conditioning \(k(y,x)\) (see marginalisation of multivariate functions in Section 7.2). The equation can be rewritten as a single inner-product equation \[\langle F,\Delta^{\varphi}(x)-\lambda K\otimes\Delta^{\varphi}(x)\rangle=b(x)\,,\] such that a linear regression matrix equation is again obtained by choosing points \(x^{l}\), \(i=1,\ldots,m\). ## 9 Connections with other integral transforms In this section, we first relate the hyperdimensional transform to other integral transforms in general, focusing on prominent examples such as the transforms of Laplace and Fourier. Second, we discuss the close connection with the fuzzy transform in greater detail. ### Integral transforms As introduced in Section 1, the hyperdimensional transform is an integral transform just like the Laplace transform, the Fourier transform and the fuzzy transform. While the Laplace and Fourier transforms yield functions of complex or real variables, the hyperdimensional transform and the fuzzy transform yield functions with as domain a finite set. Vectorizing the function values, the fuzzy and hyperdimensional transform can be interpreted as function-to-vector transformations. On the one hand, the finite dimensionality \(D\) of the hyperdimensional transform might imply less expressivity and entail some loss of information, while the random nature of the basis functions introduces stochastic noise. However, these effects diminish with increasing dimensionality \(D\) of the vector. Hence, the dimensionality is assumed large. On the other hand, the transformation to a finite-dimensional vector makes the computation of the integral tractable for a broader set of functions: each component of the transform can be computed directly without the need for an analytical expression. Note that the hyperdimensional transform is defined for any abstract universe \(X\) that is provided with a measure, allowing, e.g., for representing functions on sets, sequences, or graphs. The hyperdimensional transform opens a distinct approach to solving differential equations. Instead of an analytical solution, an approximate solution can be computed. Thanks to the natural expressions of the functionals that include differentiation and integration, the hyperdimensional transform converts linear differential equations and linear integral equations into linear matrix equations, unifying them with linear regression. While the Fourier transform decomposes a function in an infinite set of wave functions of all possible frequencies, the hyperdimensional transform decomposes a function in random wave-like functions. For instance, in Example 1, these wave-like functions randomly switch between 1 and -1 at some 'average frequency' that is related to the length scale \(l\). Due to the possibility of setting a finite length scale \(l\), the hyperdimensional transform allows for incorporating noisy data. Similarly, for the fuzzy transform (see Section 9.2), the lower expressivity due to the finite dimension and a notion of length scale allow for filtering noise. Also, approaches in hyperdimensional computing based on holographic representations allow for noise-robust classification in machine learning. ### The fuzzy transform Because of its close connection with the hyperdimensional transform, we discuss the fuzzy transform in more detail. 
For a comprehensive overview of the fuzzy transform, we refer to [16]. Let \([a,b]\subset\mathbb{R}\) be an interval and \(x_{1}<\cdots<x_{D}\) fixed nodes such that \(x_{1}=a\) and \(x_{D}=b\). A set of basis functions \(A_{s}:[a,b]\to[0,1]\), \(s=1,\ldots,D\), is called a _fuzzy partition_ of \([a,b]\) if the following conditions are satisfied, for \(s=1,\ldots,D\): 1. \(A_{s}(x_{s})=1\); 2. \(A_{s}(x)=0\) if \(x\not\in]x_{s-1},x_{s+1}[\) (with the convention \(x_{0}=a\) and \(x_{D+1}=b\)); 3. \(A_{s}(x)\) is continuous; 4. \(A_{s}(x)\) strictly increases on \([x_{s-1},x_{s}]\) and strictly decreases on \([x_{s},x_{s+1}]\). A prominent example is the set of uniform triangular basis functions given by \(A_{s}(x)=\max\left(0,1-\frac{|x-x_{s}|}{\lambda}\right)\) with \(\lambda=(b-a)/(D-1)\), illustrated in Figure 7 with \([a,b]=[0,5]\) and \(D=6\). Note the exact correspondence with the expression in Example 1, _i.e._, \(A_{s}(x)=\langle\Phi(x),\Phi(x_{s})\rangle\). The components of the fuzzy transform \(G\) of a function \(g\,:\,[a,b]\to\mathbb{R}\) are given by \[G_{s}\,:=\,\frac{\int_{a}^{b}g(x)A_{s}(x)\mathrm{d}x}{\int_{a}^{b}A_{s}(x) \mathrm{d}x}\,. \tag{7}\] Each \(s\)-th component can thus be interpreted as a local weighted mean of the function around the node \(x_{s}\). The back-transformed function is then given by \[\tilde{g}(x)\,:=\sum_{s=1}^{D}G_{s}A_{s}(x)\,. \tag{8}\] Following the definition of the functions \(A_{s}\), the function \(\tilde{g}\) evaluated at node \(x_{s}\) equals the component \(G_{s}\), _i.e._, \[\tilde{g}(x_{s}) = G_{s} \tag{9}\] \[= \frac{\int_{a}^{b}g(x)A_{s}(x)\mathrm{d}x}{\int_{a}^{b}A_{s}(x) \mathrm{d}x}\] (10) \[= \frac{\int_{a}^{b}g(x)\,\langle\Phi(x),\Phi(x_{s})\rangle\, \mathrm{d}x}{\int_{a}^{b}\langle\Phi(x),\Phi(x_{s})\rangle\,\mathrm{d}x}\,.\] For the hyperdimensional transform, recall that \[\tilde{f}(x_{s})=\int_{x\in[a,b]}f(x)\frac{\langle\Phi(x),\Phi(x_{s})\rangle}{ n(x)n(x^{\prime})}\mathrm{d}\mu(x)\,, \tag{11}\] which is equivalent to Eq. (10) if the normalization function \(n\) can be determined as \[n(x)=\sqrt{\int_{x\in[a,b]}\langle\Phi(x),\Phi(x_{s})\rangle\,\mathrm{d}\mu(x)}\] and is constant. This is the case in Example 2 without boundaries and in Example 1 if one may neglect the boundary effects (e.g., when \(\lambda\) is small). In general, the normalization function is not a constant. One main difference between the hyperdimensional transform and the fuzzy transform is thus the way of normalization. Figure 7: Example of a uniform triangular fuzzy partition of the interval [0,5] with six basis functions. A second main difference is that for the fuzzy transform, \(\tilde{f}(x)\) in Eq. (8) interpolates between nodes \(x_{s}\), linearly in the case of triangular basis functions, while for the hyperdimensional transform, no specific choice for the nodes is made and Eq. (11) holds at any point \(x\) and not only at nodes \(x_{s}\). The hyperdimensionally back-transformed function can be interpreted as a moving window average, instead of a (linear) interpolation between averages, however, possibly with some stochastic noise, depending on the dimensionality and the smoothness of the encoding. Similarly as for the hyperdimensional transform, the fuzzy transform can be used to solve (partial) differential equations and to handle noisy data [17, 15, 20]. 
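As a direct illustration of Eqs. (7) and (8) (our own sketch; the test function \(g(x)=\sin x\) and the quadrature grid are arbitrary choices), the uniform triangular partition of Figure 7 on \([0,5]\) with \(D=6\) basis functions can be implemented as follows:

```python
import numpy as np

a, b, D = 0.0, 5.0, 6
nodes = np.linspace(a, b, D)                             # nodes x_1 < ... < x_D, as in Figure 7
lam = (b - a) / (D - 1)

def A_basis(x):
    """Uniform triangular fuzzy partition evaluated at the points x (shape: len(x) by D)."""
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - nodes[None, :]) / lam)

xq = np.linspace(a, b, 2001)                             # quadrature grid
Aq = A_basis(xq)

g = np.sin(xq)                                           # test function on [a, b]
G = (g[:, None] * Aq).sum(axis=0) / Aq.sum(axis=0)       # Eq. (7): local weighted means around nodes
g_tilde = Aq @ G                                         # Eq. (8): interpolation between the G_s
```

Each component \(G_{s}\) is a local weighted mean of \(g\) around the node \(x_{s}\), and \(\tilde{g}\) interpolates linearly between these means, in line with the comparison made above.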
Derivatives are computed based on a finite difference between the components \(F_{s}\) of the fuzzy transform; for more details on solving differential equations with the fuzzy transform, we refer to [15]. The hyperdimensional transform may either use a finite difference or an infinitesimal difference, depending on whether the encoding is differentiable. Both methodologies can be seen as approximate approaches to solving the differential equation with some finite length scale/precision. ## 10 Conclusion We formally introduced the hyperdimensional transform, allowing for the approximation of functions by holographic, high-dimensional representations called hyperdimensional vectors. We discussed general transform-related properties such as the uniqueness of the transform, approximation properties of the inverse transform, and the representation of inner products, integrals, and derivatives. The hyperdimensional transform offers theoretical foundations and insights for research in the field of hyperdimensional computing. We also demonstrated how this transform can be used to solve linear differential and integral equations and discussed the connection with other integral transforms, such as the Laplace transform, the Fourier transform, and the fuzzy transform. Due to its capabilities of handling noisy data, we also anticipate applications in the fields of machine learning and statistical modelling. In our future work, we will elaborate further in this direction. Obvious aspects include an empirical estimation of the transform based on a sample of function evaluations, and a bipolar approximation of the transform that harnesses, even more, the fast and efficient capabilities of hyperdimensional computing. Additionally, the capability of the transform to represent entire signals, functions, or distributions as points in hyperdimensional space opens up new possibilities.
2308.14438
Freeze-in bino dark matter in high scale supersymmetry
We explore a scenario of high scale supersymmetry where all supersymmetric particles except gauginos stay at a high energy scale $M_{\rm SUSY}$ which is much larger than the reheating temperature $T_\text{RH}$. The dark matter is dominated by bino component with mass around the electroweak scale and the observed relic abundance is mainly generated by the freeze-in process during the early universe. Considering the various constraints, we identify two available scenarios in which the supersymmetric sector at an energy scale below $T_\text{RH}$ consists of: a) bino; b) bino and wino. Typically, for a bino mass around 0.1-1 TeV and a wino mass around 2 TeV, we find that $M_{\rm SUSY}$ should be around $10^{12-14}$ GeV with $T_\text{RH}$ around $10^{4-6}$ GeV.
Chengcheng Han, Peiwen Wu, Jin Min Yang, Mengchao Zhang
2023-08-28T09:24:11Z
http://arxiv.org/abs/2308.14438v2
# Freeze-in bino dark matter in high scale supersymmetry ###### Abstract We explore a scenario of high scale supersymmetry where all supersymmetric particles except gauginos stay at a high energy scale \(M_{\rm SUSY}\) which is much larger than the reheating temperature \(T_{\rm RH}\). The dark matter is dominated by bino component with mass around the electroweak scale and the observed relic abundance is mainly generated by the freeze-in process during the early universe. Considering the various constraints, we identify two available scenarios in which the supersymmetric sector at an energy scale below \(T_{\rm RH}\) consists of: a) bino; b) bino and wino. Typically, for a bino mass around 0.1-1 TeV and a wino mass around 2 TeV, we find that \(M_{\rm SUSY}\) should be around \(10^{12-14}\) GeV with \(T_{\rm RH}\) around \(10^{4-6}\) GeV. ###### Contents * 1 Introduction * 2 Model of heavy supersymemtry * 3 Freeze-in bino dark matter in MSSM * 3.1 Particle spectrum * 3.2 Bino production from freeze-in mechanism * 3.3 Case I: bino freeze-in from \(HH^{*}\to\tilde{B}\tilde{B}\) * 3.4 Case II: fermion scattering process \(f\bar{f}\to\tilde{B}\tilde{B}\) * 3.5 Case III: gluino/wino scattering or decay processes * 4 Numerical results and discussion * 4.1 Limits from BBN * 4.2 Limits from direct/indirect detection * 4.3 Limits from the LHC * 5 Conclusion * A Notation conventions and dim-5 operator in Case I * B Boltzmann equation and calculation details of freeze-in DM in Case I * C The calculation details in Case II * D The calculation details in Case III A * E The calculation details in Case III B * F The calculation details of 2-body decay after EWSB ## 1 Introduction Supersymmetry (SUSY) [1; 2; 3; 4; 5; 6] is a significant theoretical framework aiming at extending the Standard Model (SM), drawing inspiration from the pursuit of a quantum gravity theory, particularly within the context of superstring theory. In the field of phenomenology, SUSY not only provides a viable candidate for dark matter (DM) which plays a crucial role in the formation of large-scale structures in the universe, but also contributes to the renormalization group running of gauge couplings through the inclusion of additional particles near the electroweak scale. This property of SUSY facilitates the potential unification of the three fundamental forces at high energy scales. It has long been postulated that SUSY DM takes the form of Weakly Interacting Massive Particles (WIMPs) that can be probed through diverse experiments [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. However, the absence of confirmed DM signals poses significant challenges to the standing of SUSY DM. The current LHC search results indicate that SUSY particles seem to be heavier than the electroweak (EW) scale [18; 19], thus challenging the WIMP paradigm of SUSY (for recent reviews on SUSY in light of current experiments, see, e.g., [20; 21; 22]). Given the current situation, in this study we consider an alternative scenario of SUSY DM in which gauginos are located at a low energy scale while all other SUSY partners exist at a significantly higher scale \(M_{\rm SUSY}\). This scenario is a special case of the Split SUSY [23; 24; 25; 26] where higgsinos are also taken to be a similar scale as sfermions. One should note that the Higgs sector in this scenario is fine-tuned [27; 28; 29; 30; 31] and it might be a consequence of the anthropic principle. 
However, in this work we will assume that SUSY still provides a candidate for DM and we will specifically consider the Minimal Supersymmetric Standard Model (MSSM). Since the gamma-ray measurement from MAGIC [32] has strongly constrained the possibility of wino DM1, the only viable DM candidate in the MSSM is the bino. However, it is widely known that pure bino DM is typically overabundant from the freeze-out mechanism [33] due to its weak coupling with the visible sector [34; 35]. Alternatively, a bino particle with a rather weak coupling may serve as a suitable candidate for Feebly Interacting Massive Particle (FIMP) DM with the correct relic abundance generated via the freeze-in mechanism [36], under the assumptions that the reheating process solely occurs in the Standard Model (SM) sector and the reheating temperature \(T_{\rm RH}\) is lower than the SUSY scale \(M_{\rm SUSY}\). Footnote 1: There is still viable parameter space for wino dark matter assuming a cored profile of the DM. In this work we study the possibility that the bino DM in the MSSM is generated via the freeze-in process during the early universe. We assume that all MSSM particles except gauginos share a similar mass \(M_{\rm SUSY}\) which is much higher than the reheating temperature \(T_{\rm RH}\) of the universe. To generate enough relic abundance of bino dark matter, we always require the bino mass to be lower than the reheating temperature. The wino and gluino masses, on the other hand, can be either higher or lower than the reheating temperature \(T_{\rm RH}\), depending on the different scenarios we consider. The paper is organized as follows. In Section 2 we present the model setup. In Section 3 we first overview the physics related to dark matter and then study the dominant channels for bino freeze-in production. In Section 4 we give the numerical results and discuss the experimental limits on the model parameter space relevant for our scenarios. We draw the conclusions in Section 5 and leave the calculation details to the Appendices.

## 2 Model of Heavy Supersymmetry

Since we are considering a scenario of high scale supersymmetry in which only the gauginos are at a low energy scale, the relevant Lagrangian terms are \[\mathcal{L} \supset -\sum_{f=q,l}M_{f}^{2}\tilde{f}^{\ast}\tilde{f}+\left[\bigg{(}\sum_{A=1,2,3}-\frac{1}{2}M_{A}\tilde{V}^{A,a}\tilde{V}^{A,a}\bigg{)}-\mu\tilde{H}_{u}\cdot\tilde{H}_{d}+b\mu H_{u}\cdot H_{d}+h.c.\right] \tag{1}\] \[-\sum_{A=1,2}\sqrt{2}g_{A}\bigg{[}H_{u}^{\ast}\bigg{(}T^{A,a}\tilde{V}^{A,a}\bigg{)}\tilde{H}_{u}+H_{d}^{\ast}\bigg{(}T^{A,a}\tilde{V}^{A,a}\bigg{)}\tilde{H}_{d}+h.c.\bigg{]}\] \[-\sum_{A=1,2,3}\sqrt{2}g_{A}\bigg{[}\sum_{f=q,l}\tilde{f}^{\ast}\bigg{(}T^{A,a}\tilde{V}^{A,a}\bigg{)}f+h.c.\bigg{]}\] \[-(M_{H_{u}}^{2}+|\mu|^{2})H_{u}^{\ast}H_{u}-(M_{H_{d}}^{2}+|\mu|^{2})H_{d}^{\ast}H_{d}\,\] where \(A=1,2,3\) correspond to the SM gauge groups \(\mathrm{U}(1)_{\mathrm{Y}},\mathrm{SU}(2)_{\mathrm{L}},\mathrm{SU}(3)_{\mathrm{C}}\), respectively, and \(a\) denotes the corresponding indices in the adjoint representation of group \(A\). The fields \(\tilde{V}^{A,a},\tilde{H}_{u},\tilde{H}_{d},\tilde{f}\) are the superpartners of the SM vector gauge bosons \(V^{A,a}=B,W^{1\sim 3},G^{1\sim 8}\), the scalar doublets \(H_{u},H_{d}\) and the fermions \(f\).
The fields \(H_{u}\), \(H_{d}\), \(\tilde{H}_{u}\), \(\tilde{H}_{d}\) are defined as \[H_{u} = \left(\begin{array}{c}H_{u}^{+}\\ H_{u}^{0}\end{array}\right),\quad\tilde{H}_{u}=\left(\begin{array}{c}\tilde{H}_{u}^{+}\\ \tilde{H}_{u}^{0}\end{array}\right),\quad H_{d}=\left(\begin{array}{c}H_{d}^{0}\\ H_{d}^{-}\end{array}\right),\quad\tilde{H}_{d}=\left(\begin{array}{c}\tilde{H}_{d}^{0}\\ \tilde{H}_{d}^{-}\end{array}\right). \tag{2}\] For the Higgs sector, we need an SM-like Higgs boson \(H_{\mathrm{SM}}\) near the electroweak scale [37; 38]. This is obtained from the mixing between the two Higgs doublets \(H_{u}\) and \(H_{d}\) in the MSSM: \[H_{u}=\left(\begin{array}{c}H_{u}^{+}\\ H_{u}^{0}\end{array}\right) = \sin\beta\,H_{\mathrm{SM}}+\cos\beta\,H_{\mathrm{NP}}=\sin\beta\left(\begin{array}{c}G_{\mathrm{SM}}^{+}\\ H_{\mathrm{SM}}^{0}\end{array}\right)+\cos\beta\left(\begin{array}{c}H_{\mathrm{NP}}^{+}\\ H_{\mathrm{NP}}^{0}\end{array}\right)\, \tag{3}\] \[(-i\sigma^{2})H_{d}^{\ast}=\left(\begin{array}{c}-(H_{d}^{-})^{\ast}\\ (H_{d}^{0})^{\ast}\end{array}\right) = \cos\beta\,H_{\mathrm{SM}}-\sin\beta\,H_{\mathrm{NP}}=\cos\beta\left(\begin{array}{c}G_{\mathrm{SM}}^{+}\\ H_{\mathrm{SM}}^{0}\end{array}\right)-\sin\beta\left(\begin{array}{c}H_{\mathrm{NP}}^{+}\\ H_{\mathrm{NP}}^{0}\end{array}\right)\, \tag{4}\] where \(\sigma^{2}\) is the second Pauli matrix, and \(\tan\beta=\langle H_{u}^{0}\rangle/\langle H_{d}^{0}\rangle\) with \(\langle H_{u}^{0}\rangle\) and \(\langle H_{d}^{0}\rangle\) being the vacuum expectation values (VEVs). Such mixings can be realized by properly choosing the Higgs mass parameters \(\mu,M_{H_{u}},M_{H_{d}}\), and \(b\). The subscript "NP" in \(H_{\mathrm{NP}}\) denotes the new physics (NP) Higgs doublet in the MSSM accompanying the SM one2. Since the mass parameters \(M_{H_{u}}\), \(M_{H_{d}}\), \(b\), \(\mu\) are all much larger than the electroweak scale, a tuning of these parameters is needed to get a light Higgs at the electroweak scale [27; 28; 29; 30; 31]. We also need to match the Higgs self-coupling to its SUSY value at the scale \(M_{\mathrm{SUSY}}\), Footnote 2: Note that in order not to increase the complexity of notation, we don't further perform the expansion of the complex but electrically neutral scalars \(H_{\mathrm{SM}}^{0},H_{\mathrm{NP}}^{0}\) into real and imaginary parts. However, one should keep in mind that \(G_{\mathrm{SM}}^{\pm},H_{\mathrm{SM}}^{0}\) contain the Goldstone boson modes to be absorbed into the vector gauge bosons \(W^{\pm},Z^{0}\) after the electroweak symmetry breaking (EWSB). \[\lambda(M_{\mathrm{SUSY}})=\frac{{g_{1}^{\prime}}^{2}+g_{2}^{2}}{4}\cos^{2}2\beta. \tag{5}\] Note that the Higgs self-coupling \(\lambda\) becomes very small at high energy scales due to the RGE running, and thus the \(\beta\) value should get close to \(\pi/4\) and \(\tan\beta\approx 1\). We will fix \(\tan\beta=1\) as the benchmark parameter throughout this work for simplicity.
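As a small numerical aside (not part of the original discussion), the matching relation above can be inverted for \(\beta\): for fixed gauge couplings, a smaller quartic coupling \(\lambda(M_{\rm SUSY})\) forces \(\cos 2\beta\to 0\) and hence \(\tan\beta\to 1\). The coupling values in the sketch below are illustrative placeholders; in practice \(g_{1}^{\prime}\) and \(g_{2}\) should be RGE-evolved to \(M_{\rm SUSY}\).

```python
# Minimal sketch: invert lambda(M_SUSY) = (g1'^2 + g2^2)/4 * cos^2(2 beta)
# for beta and watch tan(beta) -> 1 as the quartic coupling becomes small.
# g1p and g2 are assumed illustrative values, not RGE-evolved results.
import math

g1p, g2 = 0.46, 0.55   # assumed U(1)_Y and SU(2)_L couplings near M_SUSY

for lam in (0.10, 0.01, 0.001, 0.0):
    cos2beta = 2.0 * math.sqrt(lam) / math.sqrt(g1p**2 + g2**2)
    beta = 0.5 * math.acos(cos2beta)
    print(f"lambda(M_SUSY) = {lam:6.3f}  ->  tan(beta) = {math.tan(beta):.3f}")
```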
Generally, when considering physical processes at temperature \(T\ll M_{\mathrm{SUSY}}\), we can integrate out the heavy mediators with mass \(\mu,M_{\tilde{f}}\sim M_{\mathrm{SUSY}}\gg T_{\mathrm{RH}}\) and obtain the following effective operators at dimension 5 and 6, respectively: dimension-5: \[\propto \frac{1}{\mu}\,|H_{\rm SM}|^{2}(\tilde{B}\tilde{B},\tilde{B}\tilde{W})\,\] (6) dimension-6: \[\propto \frac{1}{M_{\tilde{f}}^{2}}(f^{\dagger}\tilde{B}^{\dagger})(f\tilde{B},f\tilde{W},f\tilde{G})\.\] (7) Since we assume the mass parameters of the higgsinos \(\mu\) and sfermions \(M_{\tilde{f}}\) to be around \(M_{\rm SUSY}\), the dominant processes come from the dimension-5 (dim-5) operators. Nevertheless, we also present the processes related to the dim-6 operators for completeness. We note that most of the relevant processes are evaluated at energy scales far below \(M_{\rm SUSY}\). The proper procedure is to integrate out the heavy particles at the scale \(M_{\rm SUSY}\) to obtain the dimension-5 and dimension-6 operators together with their Wilson coefficients, and then evolve these Wilson coefficients down to the relevant scale using the renormalization group equations. This could correct the leading result by a factor of order \(O(1)\), but the fundamental framework remains robust. We leave the investigation of this effect for future study.

## 3 Freeze-in Bino Dark Matter in MSSM

### Particle spectrum

Despite the existence of new Higgs bosons and many supersymmetric partners of the SM particles, the MSSM particle spectrum we consider in this work consists of two sectors distinguished by their characteristic mass scales. Although the structure of the mass spectrum does not differ significantly before and after EWSB, we take the pre-EWSB case as an illustration.

* Sector **inactive** after cosmological reheating, with mass \(M\sim M_{\rm SUSY}\gg T_{RH}\):
  * Higgs bosons not in the SM: \(H_{\rm NP}^{0}\), \(A\), \(H_{\rm NP}^{\pm}\)
  * Sfermions \(\tilde{f}\)
  * Higgsinos \(\tilde{H}_{u},\tilde{H}_{d}\)
* Sector **active** after cosmological reheating, with mass \(M\sim\mathcal{O}(1)\,{\rm TeV}\ll M_{\rm SUSY}\):
  * SM particles
  * Bino \(\tilde{B}\), constituting the cosmological DM, with mass \(M_{1}<T_{RH}\)
  * Winos \(\tilde{W}\), with mass \(M_{2}\)
  * Gluinos \(\tilde{G}\), with mass \(M_{3}\)

In the above we have used gauge eigenstates for the description, since \(\tilde{B},\tilde{W}\) do not mix with the higgsinos \(\tilde{H}_{u},\tilde{H}_{d}\) before EWSB, when the SM Higgs \(H_{\rm SM}\) has not acquired its VEV.

### Bino production from freeze-in mechanism

In the early stage of the universe before EWSB, when the gaugino states \(\tilde{B},\tilde{W}\) do not mix with the higgsinos \(\tilde{H}_{u},\tilde{H}_{d}\), pure \(\tilde{B}\) acting as DM can only interact with the SM via mediators with heavy mass near the scale \(M_{\rm SUSY}\), as shown in Fig. 1. Due to the suppressed interaction strength, the cosmological production of bino DM in our scenario proceeds via the freeze-in mechanism. In the following we consider the contributions to bino DM production from several typical processes3.
Footnote 3: After the electroweak phase transition occurs and \(H_{\rm SM}\) acquires a VEV, the top and bottom vertices in the left panel of Fig. 1 imply the mixing between \(\tilde{B},\tilde{W}\) and \(\tilde{H}_{u},\tilde{H}_{d}\), resulting in the mass eigenstates of electrically neutral neutralinos \(\tilde{\chi}^{0}_{1,2,3,4}\) and charged \(\tilde{\chi}^{\pm}_{1,2}\) (see discussions in Section 4.1).

### Case I: bino freeze-in from \(HH^{*}\to\tilde{B}\tilde{B}\)

This case corresponds to the left panel of Fig. 1 but without winos \(\tilde{W}\). After integrating out the heavy higgsinos, the relevant dim-5 effective interaction is given by (the details are given in Appendix A) \[\mathcal{L}^{\rm eff}_{HH^{*}\to\tilde{B}\tilde{B}}=\frac{2g_{1}^{2}\,Y_{\tilde{H}}^{2}}{\mu}\sin\beta\cos\beta(|H_{\rm SM}|^{2})(\tilde{B}\tilde{B}+\tilde{B}^{\dagger}\tilde{B}^{\dagger})\, \tag{10}\] where \(|H_{\rm SM}|^{2}=G^{+}_{\rm SM}(G^{+}_{\rm SM})^{*}+(H^{0}_{\rm SM})(H^{0}_{\rm SM})^{*}\). In the subscript \(HH^{*}\to\tilde{B}\tilde{B}\) on the left side (and hereafter when not causing any confusion), we denote \(H_{\rm SM}\) as \(H\) to simplify the notation, and all fields in the initial and final states of the process should be understood in the sense of physical particles4. With more details given in Appendix B, Eq. (10) induces the following Boltzmann equation for the bino number density: Footnote 4: Discussion on the naming convention of particles, states and fields can be found in, e.g. [39]. \[\frac{d}{dt}n_{\tilde{B}}+3\mathcal{H}n_{\tilde{B}}=\mathbf{C}_{HH^{*}\to\tilde{B}\tilde{B}}\approx\frac{g_{1}^{4}}{4}\frac{1}{\pi^{5}}\,\frac{\sin^{2}\beta\cos^{2}\beta}{\mu^{2}}T^{6}. \tag{11}\]

Figure 1: Schematic plots for interactions of DM composed of pure \(\tilde{B}\) with the SM after cosmological reheating considered in this work, which would induce dimension-5 (left) and dimension-6 (right) effective operators. The SM Higgs \(H_{\rm SM}\) originates from the mixing between the MSSM Higgs doublets \(H_{u},H_{d}\). Colored lines indicate the direction of freeze-in production when applicable. Additional Hermitian conjugated processes also exist when the amplitudes are complex. See more discussions in the main texts.

The above equation can be rewritten as a differential equation for the bino yield \(Y_{\tilde{B}}=n_{\tilde{B}}/S\) (\(S\) is the entropy density) as a function of temperature \(T\): \[\frac{dY_{HH^{*}\to\tilde{B}\tilde{B}}(T)}{dT}=-\frac{{\bf C}_{HH^{*}\to\tilde{B}\tilde{B}}}{ST{\cal H}}\] \[\approx-(1.25\times 10^{-3})\times M_{\rm Pl}\frac{{\bf C}_{HH^{*}\to\tilde{B}\tilde{B}}}{T^{6}}\approx-(1\times 10^{-6})\times M_{\rm Pl}\;g_{1}^{4}\frac{\sin^{2}\beta\cos^{2}\beta}{\mu^{2}}\,\] where \(M_{\rm Pl}\approx 1.22\times 10^{19}\) GeV is the Planck mass, \(S=2\pi^{2}g_{*}T^{3}/45\), and the Hubble expansion rate is \({\cal H}\approx 1.66\sqrt{g_{*}}T^{2}/M_{\rm Pl}\) with \(g_{*}=106.75\) before EWSB.
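Before performing the integral analytically in the next step, it is useful to get a feel for the numbers. The following minimal Python sketch (an illustration added here, not taken from the paper) treats the slope above as approximately constant, integrates it from \(T_{\rm RH}\) down to low temperatures, converts the resulting yield into today's relic abundance with the factor \(2.72\times 10^{11}\,Y\,(M_{1}/{\rm TeV})\) quoted in the relic-abundance formula below, and solves for the \(\mu\) that reproduces \(\Omega h^{2}\approx 0.12\). The value of \(g_{1}\) is an assumed low-scale input and the running of couplings is ignored.

```python
# Rough numerical sketch (not from the paper): with a constant slope dY/dT,
# Y(inf) ~ 1e-6 * M_Pl * g1^4 * sin^2(beta) cos^2(beta) / mu^2 * T_RH, and
# Omega h^2 ~ 2.72e11 * Y * (M_1 / TeV).  Invert for the higgsino scale mu
# giving the observed abundance.  g1 and Omega_obs are illustrative inputs.
import math

M_PL = 1.22e19        # Planck mass [GeV]
G1 = 0.36             # assumed U(1)_Y gauge coupling (illustrative, not run up)
SINCOS = 0.5          # sin(beta) * cos(beta) for tan(beta) = 1
OMEGA_OBS = 0.12      # observed dark-matter relic abundance

def mu_required(T_RH, M1_TeV):
    """Higgsino mass scale mu [GeV] reproducing Omega h^2 = OMEGA_OBS (Case I)."""
    y_times_mu2 = 1e-6 * M_PL * G1**4 * SINCOS**2 * T_RH   # equals Y(inf) * mu^2
    return math.sqrt(y_times_mu2 * 2.72e11 * M1_TeV / OMEGA_OBS)

for T_RH in (1e4, 1e5, 1e6):
    print(f"T_RH = {T_RH:.0e} GeV  ->  mu ~ {mu_required(T_RH, 1.0):.1e} GeV")
# For M_1 = 1 TeV these come out around 10^13-10^14 GeV, broadly consistent
# with the scales discussed around Fig. 2 (the exact curves depend on the
# running couplings and on which channels are included).
```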
Performing a simple integral from reheating temperature, it can be found that the final yield of \(\tilde{B}\) depends on the reheating temperature \(T_{\rm RH}\) which corresponds to the Ultraviolet (UV) freeze-in scenario [36, 40]: \[Y_{HH^{*}\to\tilde{B}\tilde{B}}(\infty)\approx(1\times 10^{-6})\times M_{\rm Pl }\;g_{1}^{4}\frac{\sin^{2}\beta\cos^{2}\beta}{\mu^{2}}\;T_{\rm RH}\, \tag{11}\] and the corresponding current relic abundance is given by \[\big{(}\Omega_{\tilde{B}}h^{2}\big{)}_{HH^{*}\to\tilde{B}\tilde{B }}=M_{1}\;\frac{Y_{HH^{*}\to\tilde{B}\tilde{B}}(\infty)S_{0}}{\rho_{cr}}\approx Y _{HH^{*}\to\tilde{B}\tilde{B}}(\infty)\left(\frac{M_{1}}{\rm TeV}\right) \times(2.72\times 10^{11}). \tag{12}\] ### Case II: fermion scattering process \(f\bar{f}\to\tilde{B}\tilde{B}\) After integrating out sfermions with heavy mass \(M_{\tilde{q},\tilde{l}}\sim M_{\rm SUSY}\) in the right panel of Fig.1, the effective interactions between SM fermion pair and \(\tilde{B}\) pair have the following form at dimention 6 (for more details, see Appendix C): \[{\cal L}^{\rm eff}_{ff\to\tilde{B}\tilde{B}}=\sum_{f=q,l}\frac{(\sqrt{2}g_{1 }Y_{f})(\sqrt{2}g_{1}Y_{f})}{M_{\tilde{f}}^{2}}(f^{\dagger}\tilde{B}^{\dagger })(f\tilde{B})\, \tag{13}\] where for simplicity, we consider an universal mass for all the fermions, i.e. \(M_{\tilde{f}}\equiv M_{\tilde{q}}=M_{\tilde{l}}\). Thus the Boltzmann equation is \[\frac{dY_{f\bar{f}\to\tilde{B}\tilde{B}}(T)}{dT}=-\frac{{\bf C}_{ f\bar{f}\to\tilde{B}\tilde{B}}}{ST{\cal H}}\] \[\approx-(1.25\times 10^{-3})\times M_{\rm Pl}\frac{{\bf C}_{f\bar{f} \to\tilde{B}\tilde{B}}}{T^{6}}\approx-(8.6\times 10^{-5})\times M_{\rm Pl}\frac{g_{1}^ {4}}{M_{\tilde{f}}^{4}}T^{2}\, \tag{14}\] and correspondingly, \[Y_{f\bar{f}\to\tilde{B}\tilde{B}}(\infty) \approx (4.7\times 10^{-7})\times\frac{M_{\rm Pl}}{M_{f}^{4}}\;T_{\rm RH}^{3}\, \tag{15}\] \[\big{(}\Omega_{\tilde{B}}h^{2}\big{)}_{f\bar{f}\to\tilde{B}\tilde {B}} = M_{1}\;\frac{Y_{f\bar{f}\to\tilde{B}\tilde{B}}(\infty)S_{0}}{\rho_{cr}} \approx Y_{f\bar{f}\to\tilde{B}\tilde{B}}(\infty)\left(\frac{M_{1}}{\rm TeV} \right)\times(2.72\times 10^{11}). \tag{16}\] ### Case III: gluino/wino scattering or decay processes As indicated by blue colored arrows in Fig. 1, the \(2\to 2\) scattering processes consist of two ways of generating bino DM when combining U(1)\({}_{\rm Y}\) with SU(2)\({}_{\rm L}\) or SU(3)\({}_{\rm C}\) interactions, related by the cross symmetry. Moreover, we can also have the red colored arrow indicating \(1\to 3\) (\(1\to 2\)) decay processes generating binos before (after) EWSB when the cosmological temperature drops below the scale of \(M_{2}\) or \(M_{3}\) (equivalently, when the age of the universe reach the lifetime of \(\tilde{W}\) and \(\tilde{G}\)). Similar to the previous two cases, integrating out heavy higgsino and sfermions would generate the following dim-5 and dim-6 effective operators: \[\mathcal{L}^{\rm eff}_{\rm case\text{-}III}= \bigg{\{}-\sum_{b=1}^{3}\frac{(\sqrt{2}g_{1}Y_{H})(\sqrt{2}g_{2})} {\mu}\sin\beta\cos\beta(H^{\star}\frac{1}{2}\sigma^{b}H)(\tilde{B}\tilde{W}^{ b})\] \[+\sum_{f=u_{L},d_{L},e_{L},\nu}\quad\sum_{b=1}^{3}\frac{(\sqrt{2} g_{1}Y_{f})(\sqrt{2}g_{2})}{M_{\tilde{f}}^{2}}(f^{\dagger}\tilde{B}^{ \dagger})(\frac{1}{2}\sigma^{b}f\tilde{W}^{b})\] \[+\sum_{f=u_{L},d_{L},u_{R}^{\dagger},d_{R}^{\dagger}}\quad\sum_{a =1}^{8}\frac{(\sqrt{2}g_{1}Y_{f})(\sqrt{2}g_{3})}{M_{\tilde{f}}^{2}}(f^{ \dagger}\tilde{B}^{\dagger})(\frac{1}{2}\lambda^{a}f\tilde{G}^{a})\bigg{\}}+h.c.. 
\tag{11}\] Note that the index \(f\) in the second line includes only SU(2)\({}_{\rm L}\) doublets, while the index \(f\) in the third line includes only quarks. To highlight the difference, we use index \(a\) and \(b\) to denote generators of SU(3)\({}_{\rm C}\) and SU(2)\({}_{\rm L}\) interactions, respectively. Correspondingly, \(\lambda^{a}\) and \(\sigma^{b}\) are Gell-Mann and Pauli matries, respectively. In the following, we consider the contributions to the bino DM production from \(2\to 2\) scattering and \(1\to 3\) decay separately, while leaving the effects of \(1\to 2\) decay appearing after EWSB in Section 4.1. #### 3.5.1 Case III A: \(2\to 2\) scattering involving gluino/wino With more detailed given in Appendix D, the collision terms in the Boltzmann equation for dim-5 and dim-6 operators are approximated as (ignoring the masses of all external particles) \[\mathbf{C}_{\rm dim-5} = \frac{T}{2048\pi^{6}}\int_{4M_{1}^{2}}^{\infty}ds\;(s-4M_{1}^{2} )^{1/2}K_{1}(\sqrt{s}/T)\sum_{\rm internal\,d.o.f}\int d\Omega \tag{12}\] \[\times\bigg{(}|\mathcal{M}|^{2}_{HH^{\star}\to\tilde{B}\tilde{B}} +|\mathcal{M}|^{2}_{HH^{\star}\to\tilde{B}\tilde{W}}+N_{\rm conj}|\mathcal{M} |^{2}_{\tilde{W}H\to\tilde{B}H}\bigg{)}\] \[=\bigg{(}\frac{1}{4}g_{1}^{4}+\frac{3}{2}g_{1}^{2}g_{2}^{2}\bigg{)} \frac{1}{\pi^{5}}\frac{\sin^{2}\beta\cos^{2}\beta}{\mu^{2}}T^{6}\,\] \[\mathbf{C}_{\rm dim-6} = \frac{T}{2048\pi^{6}}\int_{4M_{1}^{2}}^{\infty}ds\;(s-4M_{1}^{2} )^{1/2}K_{1}(\sqrt{s}/T)\sum_{\rm internal\,d.o.f}\int d\Omega\] (13) \[\times\bigg{(}|\mathcal{M}|^{2}_{f\tilde{f}\to\tilde{B}\tilde{B}} +|\mathcal{M}|^{2}_{f\tilde{f}\to\tilde{B}\tilde{W}}+N_{\rm conj}|\mathcal{M} |^{2}_{\tilde{W}f\to\tilde{B}f}+|\mathcal{M}|^{2}_{f\tilde{f}\to\tilde{B} \tilde{G}}+N_{\rm conj}|\mathcal{M}|^{2}_{\tilde{G}f\to\tilde{B}f}\bigg{)}\] \[= (\frac{190}{9}g_{1}^{4}+30g_{1}^{2}g_{2}^{2}+\frac{440}{3}g_{1}^{ 2}g_{3}^{2})\frac{1}{\pi^{5}}\frac{1}{\Lambda_{f}^{4}}T^{8}\,\] where \(N_{\rm conj}=2\) denotes the effects of conjugated process. #### 3.5.2 Case III B: decay of gluino/wino Following the method in [36] with \(f_{\tilde{G}}\) and \(f_{\tilde{W}}\) approximated by \(e^{-E_{\tilde{G}}/T}\) and \(e^{-E_{\tilde{W}}/T}\), the Boltzmann equation of freeze-in production for the \(1\to 3\) decay processes is \[\frac{d}{dt}n_{\tilde{B}}+3\mathcal{H}n_{\tilde{B}}=\mathbf{C}\] \[\approx\frac{g_{\tilde{G}}M_{3}^{2}}{2\pi^{2}}TK_{1}(\frac{M_{3}} {T})\Gamma_{\tilde{G}\to f\tilde{I}\tilde{B}}+\frac{g_{\tilde{W}}M_{2}^{2}}{2 \pi^{2}}TK_{1}(\frac{M_{2}}{T})(\Gamma_{\tilde{W}\to f\tilde{I}\tilde{B}}+ \Gamma_{\tilde{W}\to HH^{\star}\tilde{B}})\, \tag{14}\] where \(g_{\tilde{G}}=16\) and \(g_{\tilde{W}}=6\) are the internal d.o.f. of \(\tilde{G}\) and \(\tilde{W}\), respectively. The expressions of decay width involved in the above results are listed in Appendix E. Changing variables to yield \(Y_{\tilde{B}}\) and temperature \(T\), we then integrate over temperature evolution to obtain the final yield. If reheating temperature \(T_{\rm RH}\) is much larger than \(M_{2}\) and \(M_{3}\), then the final yield from \(1\to 3\) decay can be approximated by \[Y_{\tilde{B}}^{1\to 3}(\infty)\approx\int_{T_{min}}^{T_{\rm RH}} \frac{{\bf C}}{ST{\cal H}}dT\] \[\approx(3\times 10^{-4})\times M_{\rm Pl}\left(\frac{1}{M_{3}^{2}}g_ {\tilde{G}}\Gamma_{\tilde{G}\to f\tilde{f}\tilde{B}}+\frac{1}{M_{2}^{2}}g_{ \tilde{W}}\Gamma_{\tilde{W}\to f\tilde{f}\tilde{B}}+\frac{1}{M_{2}^{2}}g_{ \tilde{W}}\Gamma_{\tilde{W}\to HH^{*}\tilde{B}}\right). 
\tag{13}\] It is worth pointing out that the above result is not sensitive to \(T_{\rm RH}\). Taking a low reheating temperature \(T_{\rm RH}=1.1\,M_{3}\) as an example, increasing the value of \(T_{\rm RH}\) does not modify the result significantly. In addition to the \(1\to 3\) decay, we should also note that the wino \(\tilde{W}\) with mass \(M_{2}<T_{\rm RH}\) stays in the thermal bath until its freeze-out, yielding a relic wino number density that later converts into an equal bino number density \(n_{\tilde{B}}\) via the \(1\to 2\) decay \(\tilde{W}\to\tilde{B}+h\) after EWSB occurs. Depending on the bino mass \(M_{1}\), this freeze-out component would also contribute to the total bino DM abundance in today's epoch. We checked that with a wino mass \(M_{2}=2\) TeV, the \(1\to 2\) decay contribution \(Y_{\tilde{B}}^{1\to 2}\) to the final bino yield is around 25% (1%) for \(M_{1}=1\,(0.1)\) TeV [41], thus not affecting the freeze-in domination scenario of this work. We properly include the wino freeze-out contribution in our results. There is also a contribution from late-time gluino decay. However, to avoid the constraints from BBN, we have to set the gluino mass higher than \(T_{\rm RH}\), thus we do not include its contribution here.

## 4 Numerical Results and Discussion

In Fig. 2 we show the required scales of \(\mu\) (\(M_{\tilde{f}}\)) for the dim-5 (6) operators with various \(T_{\rm RH}\) to produce the observed bino DM relic abundance. The upper (lower) two lines correspond to the dim-5 (6) operators. We can see that, due to the stronger suppression of the dim-6 operators, the needed \(M_{\tilde{f}}\) is generally a factor of \({\cal O}(10^{-4})\) smaller than \(\mu\) in the dim-5 case. If we assume \({\cal O}(\mu)\approx{\cal O}(M_{\tilde{f}})\), in order not to overclose the Universe, the dim-6 contributions would be completely negligible. From Fig. 2, we can see that for the case \(M_{\tilde{B}}<T_{RH}\ll M_{\tilde{W}}\), the dominant production of bino dark matter is from the process \(HH^{*}\to\tilde{B}\tilde{B}\) induced by the dim-5 operator. Generally, \(M_{\rm SUSY}\) should be around \(10^{13-14}\) GeV for \(T_{RH}<10^{6}\) GeV. Since the final relic abundance is proportional to \(T_{RH}/\mu^{2}\), \(M_{\rm SUSY}\) could continue increasing if the reheating temperature \(T_{\rm RH}\) becomes higher. Note that this is similar to the model of a Higgs portal to fermion dark matter studied in [42], with which we find our results are consistent. We emphasize that our model is motivated by a more complete framework and [42] falls into one of the cases we consider. Moreover, for the case \(M_{\tilde{B}},M_{\tilde{W}}<T_{RH}\), we find the wino-included processes can largely enhance the annihilation rate and a higher scale is needed to satisfy the relic abundance. In this case, \(M_{\rm SUSY}\) should be around \(10^{14-15}\) GeV for \(T_{RH}<10^{6}\) GeV. Notice that if the gluino is in thermal equilibrium with the SM in the early universe and the sfermions mediating the gluino decay are heavier than \(10^{9}\) GeV, the gluino's lifetime could be longer than the age of the Universe at the time of big bang nucleosynthesis (BBN), leading to energy injection into the cosmic plasma and altering the BBN profile. In all cases considered in this work we find \(M_{\rm SUSY}\) is much larger than \(10^{9}\) GeV, therefore we always need \(M_{\tilde{G}}\gg T_{RH}\) to avoid the limit from BBN [43]. More discussions on the BBN limits are given in Section 4.1. In Fig.
3 we show the comparison of the final contributions and the intermediate profiles of the UV and IR freeze-in processes to the bino DM relic abundance. It can be clearly seen that the IR freeze-in final yields from wino 3-body decays are negligible compared to those of the UV freeze-in processes generated by \(2\to 2\) annihilation. Moreover, the critical production moment determining the final yield of UV freeze-in is located at a much smaller \(x\) (and thus a much higher temperature) than in the IR freeze-in case.

### Limits from BBN

After EWSB, the SM-like Higgs doublet needs to be replaced by: \[H=\left(\begin{array}{c}G^{+}\\ \frac{1}{\sqrt{2}}(v+h+iG^{0})\end{array}\right)\, \tag{4.1}\] where \(v=246\) GeV is the VEV of the SM Higgs5 and \(h\) is the observed SM-like Higgs scalar. \(G^{\pm}\) (\(G^{-}=(G^{+})^{*}\)) and \(G^{0}\) are Goldstone bosons that form the longitudinal modes of the SM gauge bosons \(W^{\pm}\) and \(Z\). As mentioned earlier, the SM-like Higgs VEV will generate mixings among the gauge states \(\tilde{B},\tilde{W},\tilde{H}_{u},\tilde{H}_{d}\) and form the mass eigenstates of charge-neutral neutralinos \(\tilde{\chi}_{1,2,3,4}\) and charged charginos \(\tilde{\chi}_{1,2}^{\pm}\) (with ascending mass order inside the sectors of neutralinos and charginos, respectively).

Figure 2: Values of \(\mu\) and \(M_{\tilde{f}}\) to produce the observed DM abundance via the UV freeze-in processes. See more discussions in the main texts.

For the scenario considered in this work, the component of the neutralino \(\tilde{\chi}_{1}^{0}\) (\(\tilde{\chi}_{2}^{0}\)) is dominated by the bino \(\tilde{B}\) (wino \(\tilde{W}^{3}\)), and the component of the chargino \(\tilde{\chi}_{1}^{\pm}\) is dominated by the winos \(\frac{1}{\sqrt{2}}(\tilde{W}^{1}\mp i\tilde{W}^{2})\). More details of the approximate masses and couplings can be found in [44; 45; 46]. In the following, we will use the language of gauge states (bino \(\tilde{B}\), wino \(\tilde{W}\), higgsinos \(\tilde{H}_{u},\tilde{H}_{d}\)) and mass eigenstates (neutralino \(\tilde{\chi}^{0}\), chargino \(\tilde{\chi}^{\pm}\)) interchangeably before and after EWSB. Now we study the BBN limit on our scenario from the lifetimes of the neutralinos and charginos. In our scenario, only the neutralinos \(\tilde{\chi}_{1}^{0}\approx\tilde{B},\ \tilde{\chi}_{2}^{0}\approx\tilde{W}^{3}\) and the chargino \(\tilde{\chi}_{1}^{\pm}\approx\frac{1}{\sqrt{2}}(\tilde{W}^{1}\mp i\tilde{W}^{2})\) are present in the primordial thermal bath. Due to the loop-induced mass splitting between \(\tilde{\chi}_{1}^{\pm}\) and \(\tilde{\chi}_{2}^{0}\), the chargino \(\tilde{\chi}_{1}^{\pm}\) can have the 2-body decay \(\tilde{\chi}_{1}^{\pm}\rightarrow\tilde{\chi}_{2}^{0}\pi^{\pm}\) [47; 48; 49; 50]. This makes the lifetime of \(\tilde{\chi}_{1}^{\pm}\) much shorter than 1 sec, so it does not affect the BBN profile. However, we need to scrutinize the lifetime of \(\tilde{\chi}_{2}^{0}\) more carefully. If \(\tilde{\chi}_{2}^{0}\) decays after the onset of BBN, then the highly energetic decay products will cause photodissociation or hadrodissociation and thus change the final abundances of the light elements. So a bound from BBN can be put on the model parameters, especially on the SUSY scale \(M_{\rm SUSY}\) [51; 52]. It is easy to see that Fig.
1 implies the 2-body decay mode \(\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}h\) at the level of dim-5 after EWSB, in which case we will have: \[\mathcal{L}_{\rm eff} = -\sum_{b=1}^{3}\frac{(\sqrt{2}g_{1}Y_{H})(\sqrt{2}g_{2})}{\mu}\sin\beta\cos\beta(H^{*}\frac{1}{2}\sigma^{b}H)(\tilde{B}\tilde{W}^{b})+h.c. \tag{4.2}\] \[= -\frac{g_{1}g_{2}v}{2\mu}\sin\beta\cos\beta(G^{\mp}\tilde{W}^{\pm}\tilde{B}-h\tilde{W}^{0}\tilde{B}+h.c.)\] \[\approx -\frac{g_{1}g_{2}v}{2\mu}\sin\beta\cos\beta(G^{\mp}\tilde{\chi}_{1}^{\pm}\tilde{\chi}_{1}^{0}-h\tilde{\chi}_{2}^{0}\tilde{\chi}_{1}^{0}+h.c.)\,\] where the first term containing the Goldstone boson \(G^{\mp}\) can be understood in the context of the Goldstone equivalence theorem (GET) for \(\tilde{\chi}_{1}^{\pm}\rightarrow\tilde{\chi}_{1}^{0}W^{\pm}\). It should be noticed that Eq. (4.2) does not contain the three-particle coupling \(G^{0}\tilde{W}\tilde{B}\) and thus would not provide a way of inferring the 2-body decay mode \(\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}Z\) via the GET. In fact, \(\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}Z\) comes from the gauge covariant kinetic terms of the gauginos and higgsinos combined with the gaugino mixings after EWSB. However, the decay width of \(\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}Z\) suffers from an extra suppression of \(\frac{1}{\mu^{2}}\) embedded in the mass mixings compared to \(\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}h\) and thus can be ignored [53]. Therefore, we have the following dominant 2-body decay (see Appendix F for more details): \[\Gamma_{\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}h}\approx M_{2}\frac{1}{16\pi}\left(\frac{v}{\mu}g_{1}g_{2}\sin\beta\cos\beta\right)^{2}\left(1-\frac{M_{1}^{2}}{M_{2}^{2}}\right)\left(1+\frac{M_{1}}{M_{2}}\right)^{2}. \tag{10}\] Using the GET we would obtain the same results for \(\Gamma_{\tilde{\chi}_{1}^{\pm}\rightarrow\tilde{\chi}_{1}^{0}W^{\pm}}\) when neglecting the gauge boson masses. In this work, we apply the BBN limit as the requirement that the lifetime of \(\tilde{\chi}_{2}^{0}\) must be less than 0.3 second [36]. In Fig. 4, we show the interplay between the BBN constraints and freeze-in production, where the regions below the black lines are allowed while the regions above the blue lines are allowed. We can see that for a bino mass around 0.1-1 TeV, an upper bound of \(M_{\rm SUSY}\sim 10^{14}\) GeV is needed to satisfy both phenomenological requirements.

Figure 4: Interplay between BBN constraints and freeze-in production, where regions below the black lines are allowed while regions above the blue lines are allowed.

### Limits from direct/indirect detection

Our scenario can easily escape the current limits from direct and indirect detection. In the case of direct detection, Eq. (11) after EWSB would generate the \(t\)-channel scattering of \(\tilde{\chi}_{1}^{0}\) with quarks and gluons in SM nucleons mediated by the SM Higgs, of which the event rate is suppressed by \(1/\mu^{2}\) and thus negligibly small. Indirect detection, which is basically the inverse process of the freeze-in DM production, would generate cosmic rays via DM pair annihilations \(\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{1}\to h^{*}\to\text{SM}\) and \(\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{1}\to hh\to\text{SM}\), of which the flux is again suppressed by \(1/\mu^{2}\) and thus does not violate the current experimental bounds.

### Limits from the LHC

The collider signals of our scenario mainly come from \(pp\to\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{\mp}_{1},\tilde{\chi}^{\pm}_{1}\tilde{\chi}^{0}_{2}\) followed by \(\tilde{\chi}^{\pm}_{1}\to\tilde{\chi}^{0}_{2}\pi^{\pm}\) and \(\tilde{\chi}^{0}_{2}\to\tilde{\chi}^{0}_{1}h\), which both generate long-lived particle (LLP) signals. The LLP signatures manifest as a disappearing track for \(\tilde{\chi}^{\pm}_{1}\to\tilde{\chi}^{0}_{2}\pi^{\pm}\) and displaced vertices for \(\tilde{\chi}^{0}_{2}\to\tilde{\chi}^{0}_{1}h\), respectively. However, \(\tau_{\tilde{\chi}^{0}_{2}\to\tilde{\chi}^{0}_{1}h}>\mathcal{O}(10^{-2})\,\)s would make \(\tilde{\chi}^{0}_{2}\) traverse the whole detector before decaying, without leaving any energy deposit in the calorimeters, and thus it can easily evade the current ATLAS [54] and CMS [55] searches for displaced vertex signals at \(\sqrt{s}=13\,\text{TeV}\). As for the disappearing track signature of \(\tilde{\chi}^{\pm}_{1}\to\tilde{\chi}^{0}_{2}\pi^{\pm}\), ATLAS [56] and CMS [57] also performed dedicated searches using data at \(\sqrt{s}=13\,\text{TeV}\), implying that \(\tilde{\chi}^{\pm}_{1},\tilde{\chi}^{0}_{2}\) should be heavier than 500-600 GeV; therefore our benchmark points with \(M_{2}=2\,\text{TeV}\) are still viable.

## 5 Conclusion

We studied a scenario of dark matter generated from the UV freeze-in mechanism, realized in the framework of high scale MSSM. The bino is the dark matter candidate and its relic abundance is generated by the freeze-in processes via the dim-5 or dim-6 operators. We found that the SUSY scale \(M_{\text{SUSY}}\) should be around \(10^{13-15}\) GeV for a reheating temperature in the range of \(10^{4-6}\) GeV. We also illustrated the interplay between the BBN constraints from neutral wino decay and the experimentally observed dark matter relic abundance, implying an upper bound on \(M_{\text{SUSY}}\) around \(10^{14}\) GeV for a wino mass around 2 TeV and a bino mass of \(0.1\sim 1\) TeV.

###### Acknowledgements.

This work was supported by the Natural Science Foundation of China (NSFC) under grant numbers 12105118, 11947118, 12075300, 11821505 and 12335005, the Peng-Huan-Wu Theoretical Physics Innovation Center (12047503), the CAS Center for Excellence in Particle Physics (CCEPP), and the Key Research Program of the Chinese Academy of Sciences under grant No. XDPB15. CH acknowledges support from the Sun Yat-Sen University Science Foundation and the Fundamental Research Funds for the Central Universities, Sun Yat-sen University under Grant No. 23qnpy58. PW acknowledges support from the Natural Science Foundation of Jiangsu Province (Grant No. BK20210201), the Fundamental Research Funds for the Central Universities, the Excellent Scholar Project of Southeast University (Class A), and the Big Data Computing Center of Southeast University. PW also acknowledges his wife for being tolerant and supportive (freezing in or squeezing in time to give birth to a son the same day the work was finished).
## Appendix A Notation conventions and Dim-5 Operator in Case I In Eq.(2.1), the dot product means \(\tilde{H}_{u}\cdot\tilde{H}_{d}=\tilde{H}_{u,i}(i\sigma^{2})^{ij}\tilde{H}_{d, j}=\tilde{H}^{+}_{u}\tilde{H}^{-}_{d}-\tilde{H}^{0}_{u}\tilde{H}^{0}_{d}\) to realize the isospin symmetry \(\text{SU}(2)_{\text{L}}\) where \(\sigma^{2}\) is the second Pauli matrix. The Kronecker delta function \(\delta_{i}{}^{j}\) manifests the SU(2)\({}_{\rm L}\)-blindness of the U(1)\({}_{\rm Y}\) interactions under consideration for binos production and \(Y_{H_{u}}=+1/2,Y_{H_{d}}=-1/2\) are the hypercharges of doublets \(H_{u},H_{d}\), respectively. We follow the convention of [39] and impose the left-chiral two-component spinor formalism for higgisinos \(\tilde{H}_{u}^{+},\tilde{H}_{u}^{0},\tilde{H}_{d}^{0},\tilde{H}_{d}^{-}\) and bino \(\tilde{B}\) (as well as winos \(\tilde{W}\) and gluinos \(\tilde{g}\) in later discussion). For the Case I in Section 3.2, the relevant Lagrangian terms are \[\mathcal{L} \supset -\frac{1}{2}M_{1}\tilde{B}\tilde{B}-\mu\left(\tilde{H}_{u}^{+} \tilde{H}_{d}^{-}-\tilde{H}_{u}^{0}\tilde{H}_{d}^{0}\right)+h.c. \tag{10}\] \[-\frac{g_{1}}{\sqrt{2}}(H_{u}^{+})^{*}\tilde{H}_{u}^{+}\tilde{B}- \frac{g_{1}}{\sqrt{2}}(H_{u}^{0})^{*}\tilde{H}_{u}^{0}\tilde{B}+\frac{g_{1}}{ \sqrt{2}}(H_{d}^{-})^{*}\tilde{H}_{d}^{-}\tilde{B}+\frac{g_{1}}{\sqrt{2}}(H_{d }^{0})^{*}\tilde{H}_{d}^{0}\tilde{B}+h.c.\,\] After integrating out higgsinos with mass \(\mu\), we obtain dim-5 operator between SM Higgs \(H_{\rm SM}\) and \(\tilde{B}\) DM: \[\mathcal{L}^{\rm eff}_{HH^{*}\to\tilde{B}\tilde{B}} = -\frac{(\sqrt{2}g_{1}Y_{H})(\sqrt{2}g_{1}Y_{H})}{\mu}(H_{u}^{*} \cdot H_{d}^{*})\tilde{B}\tilde{B}+h.c. \tag{11}\] \[= -\frac{2g_{1}^{2}\,Y_{H}^{2}}{\mu}\sin\beta\cos\beta(H_{\rm SM}^ {*}\cdot i\sigma^{2}H_{\rm SM})(\tilde{B}\tilde{B}+\tilde{B}^{\dagger}\tilde{ B}^{\dagger})\] \[= \frac{2g_{1}^{2}\,Y_{H}^{2}}{\mu}\sin\beta\cos\beta(|H_{\rm SM}|^ {2})(\tilde{B}\tilde{B}+\tilde{B}^{\dagger}\tilde{B}^{\dagger})\,\] where \(Y_{H}=|Y_{H_{u}}|=|Y_{H_{d}}|=1/2\) and the dot products are \(H_{u}^{*}\cdot H_{d}^{*}=(H_{u}^{+})^{*}(H_{d}^{-})^{*}-(H_{u}^{0})^{*}(H_{d} ^{0})^{*}\). ## Appendix B Boltzmann equation and calculation details of freeze-in DM in Case I In the homogeneous and isotropic universe, the production of bino is described by following Boltzmann equation [33]: \[\frac{d}{dt}n_{\tilde{B}}+3\mathcal{H}n_{\tilde{B}}=\mathbf{C}\, \tag{12}\] with \(n_{\tilde{B}}\) denoting the number density of bino particle, and \(\mathcal{H}\) is the Hubble expansion rate. Taking \(HH^{*}\to\tilde{B}\tilde{B}\) (\(\tilde{B}\) means the physical bino particle) in Case I of Section 3.2 as an example, we have [58] \[\mathbf{C}_{ij\to kl} = N\times\frac{1}{S}\times\bigg{\{}\int\frac{d^{3}p_{i}}{(2\pi)^{3 }2E_{i}}\frac{d^{3}p_{j}}{(2\pi)^{3}2E_{j}}\frac{d^{3}p_{k}}{(2\pi)^{3}2E_{k} }\frac{d^{3}p_{l}}{(2\pi)^{3}2E_{l}} \tag{13}\] \[\times(2\pi)^{4}\delta^{4}(p_{i}+p_{j}-p_{k}-p_{l})\,\left[f_{i}f _{j}(1-f_{k})(1-f_{l})-f_{k}f_{l}(1+f_{i})(1+f_{j})\right]\] \[\times\sum_{\rm internal\,d.o.f}|\mathcal{M}|^{2}_{ij\to kl}\ \bigg{\}}\,\] where \(f_{i,j,k,l}\) are the phase space distribution functions. The number density, taking \(f_{i}\) as example, is defined as \[n_{i}\equiv g_{i}\int\frac{d^{3}p}{(2\pi)^{3}}f_{i}(p)\, \tag{14}\] in which \(g_{i}\) is the internal degree of freedom (d.o.f.) of particle \(i\). 
The factor \(N\) denotes the number of particles under consideration produced in the final state and the factor originates from the phase space suppression due to the _identical_ particles in the initial and final states. For \(HH^{*}\to\tilde{B}\tilde{B}\) we have \(N=2\) and \(1/S=1/(N!)=1/2\). After some manipulations and neglecting the negligible backward process, we have [58] \[{\bf C}_{ij\to kl} \approx \frac{T}{32\pi^{4}}\int_{(m_{k}+m_{l})^{2}}^{\infty}ds\,p_{ij}\,W _{ij\to kl}\,K_{1}(\sqrt{s}/T) \tag{100}\] \[W_{ij\to kl} = \frac{p_{kl}}{16\pi^{2}\sqrt{s}}\,\sum_{\rm internal\,d.o.f}\int \,d\Omega\,|{\cal M}|^{2}_{ij\to kl}\] (101) \[p_{ij} = \frac{\sqrt{s-(m_{i}+m_{j})^{2}}\sqrt{s-(m_{i}-m_{j})^{2}}}{2 \sqrt{s}}\, \tag{102}\] where \(p_{kl}\) is similar to \(p_{ij}\). After summing over all bino spin states \(s_{1},s_{2}\) and isospin states of the SM-like Higgs, we have the amplitude square (\(s\) is the square of the central energy): \[\sum_{\rm internal\,d.o.f}\int d\Omega\,\,|{\cal M}|^{2}_{HH^{*} \to\tilde{B}\tilde{B}}\,\] \[\approx(2\pi)\times\bigg{[}\sum_{i,j=1}^{2}({\delta_{i}}^{j})^{2 }\bigg{]}\bigg{[}Y^{4}_{H}\bigg{]}\bigg{(}\frac{g_{1}g_{2}\sin\beta\cos\beta}{ \mu}\bigg{)}^{2}\bigg{[}64\ s\bigg{(}1-\frac{4M_{1}^{2}}{s}\bigg{)}^{\frac{3}{ 2}}\bigg{]}\] \[\approx(16\pi)\times\frac{g_{1}^{4}}{\mu^{2}}\sin^{2}\beta\cos^{2 }\beta\ s. \tag{103}\] We modify the MSSM model file available in **FeynRules**[59, 60] to highlight the gauge state interactions and then export to **FeynArts**[61] augmented with **FeynCalc**[62] to perform the calculation. Since we are considering freeze-in production of \(\tilde{B}\), \(f_{1,2}\) in Eq. (100) can be ignored. We can further approximate \(f_{3,4}\) by Maxwell-Boltzmann distribution, i.e. \(f_{3,4}\approx e^{-E_{3,4}/T}\). Then the collision term can be rewritten as [40, 58, 63] \[{\bf C}_{HH^{*}\to\tilde{B}\tilde{B}} \approx \frac{T}{2048\pi^{6}}\int_{4M_{1}^{2}}^{\infty}ds\ (s-4M_{1}^{2})^{1/2}K_{1}(\sqrt{s}/T)\sum_{\rm internal\,d.o.f}\int\,d \Omega\,|{\cal M}|^{2}_{HH^{*}\to\tilde{B}\tilde{B}}\, \tag{104}\] \[\approx \frac{T}{128\pi^{5}}\frac{g_{1}^{4}\,\sin^{2}\beta\cos^{2}\beta }{\mu^{2}}\int_{4M_{1}^{2}}^{\infty}ds\ s^{3/2}K_{1}(\sqrt{s}/T)\.\] Here \(K_{1}\) is the Bessel function of the second kind, and we treat the SM-like Higgs in the initial state as being massless. In the case where \(M_{1}\ll T\), the collision term can be approximated as (using \(\int_{0}^{\infty}dxx^{4}K_{1}(x)=16\)) \[\int_{4M_{1}^{2}}^{\infty}ds\ s^{3/2}K_{1}(\sqrt{s}/T) \approx \int_{0}^{\infty}(dx\,T)\ (2xT)(xT)^{3}K_{1}(x) \tag{105}\] \[= 2\,T^{5}\int_{0}^{\infty}dx\,x^{4}K_{1}(x)=32\ T^{5}\.\] ## Appendix C The calculation details in case ii We use \(f=q,l\) with \(q=u_{L},d_{L},u_{R}^{\dagger},d_{R}^{\dagger}\) and \(l=\nu,e_{L},e_{R}^{\dagger}\) to denote the left-handed two-component Weyl spinor of SM quarks and leptons, where the bars are simply notations and do not mean the Dirac conjugation. Hypercharges are given by \(Y_{d_{L}},\ Y_{u^{\dagger}_{R}},\ Y_{d^{\dagger}_{R}},\ Y_{L_{L}}=Y_{e_{L}}=Y_{\nu}, \ Y_{e^{\dagger}_{R}}\}=\{1/6,\ -2/3,\ 1/3,\ -1/2,\ 1\}\). After integrating out sfermions with mass \(M_{f}\) in the right panel of Fig.1, we obtain dim-6 operators between SM fermion pair and \(\tilde{B}\) pair: \[{\cal L}_{\rm eff}=\sum_{f=q,l}\frac{(\sqrt{2}g_{1}Y_{f})(\sqrt{2}g_{1}Y_{f})} {M_{f}^{2}}(f^{\dagger}\tilde{B}^{\dagger})(f\tilde{B})\, \tag{108}\] where for simplicity we consider an universal mass for all the fermions, i.e. 
\(M_{f}\equiv M_{\tilde{q}}=M_{\tilde{l}}\). The amplitude squared terms in the collision term for \(f\bar{f}\rightarrow\tilde{B}\tilde{B}\) scattering process is given by6 Footnote 6: Again, fields in the initial and final states in the process should be understood in the sense of physical particles, where \(\bar{f}\) denotes the physical anti-particle. Discussion on the naming convention of particles, states and filed can be found in, e.g. [39]. \[\sum_{{\rm internal\,d.o.f}}\int d\Omega\ |{\cal M}|^{2}_{f\bar{f} \rightarrow\tilde{B}\tilde{B}} \approx 2\pi N_{\rm flavor}\bigg{[}N_{\rm color}\bigg{(}\sum_{i,j=1}^{2}( \delta_{i}\,^{j})^{2}Y_{Q_{L}}^{4}+Y_{u^{4}_{R}}^{4}+Y_{d^{4}_{R}}^{4}\bigg{)} \tag{109}\] \[+\bigg{(}\sum_{i,j=1}^{2}(\delta_{i}\,^{j})^{2}Y_{L_{L}}^{4}+Y_{e ^{\dagger}_{R}}^{4}\bigg{)}\bigg{]}\bigg{(}\frac{g_{1}^{2}}{M_{f}^{2}}\bigg{)} ^{2}\bigg{[}\frac{16}{3}\ s^{2}\bigg{]}\] \[= \frac{1520\pi}{27}\frac{g_{1}^{4}}{M_{\tilde{f}}^{4}}\ s^{2}\] where \(N_{\rm flavor}=N_{\rm color}=3\). As in Eq. (107), if we neglect bino mass, then the collision term can be approximately given by (using \(\int_{0}^{\infty}dxx^{6}K_{1}(x)=384\)) \[{\bf C}_{f\bar{f}\rightarrow\tilde{B}\tilde{B}} \approx \frac{T}{2048\pi^{6}}\int_{4M_{1}^{2}}^{\infty}ds\ (s-4M_{1}^{2})^{1/2}K_{1}(\sqrt{s}/T)\sum_{{\rm internal\,d.o.f}}\int d \Omega\,|{\cal M}|^{2}_{f\bar{f}^{\dagger}\rightarrow\tilde{B}\tilde{B}^{ \dagger}} \tag{110}\] \[\approx \frac{T}{2048\pi^{6}}\bigg{(}\frac{1520\pi}{27}\frac{g_{1}^{4}}{ M_{f}^{4}}\bigg{)}\int_{4M_{1}^{2}}^{\infty}ds\ s^{5/2}K_{1}(\sqrt{s}/T)\] \[\approx \frac{T}{2048\pi^{6}}\bigg{(}\frac{1520\pi}{27}\frac{g_{1}^{4}}{ M_{f}^{4}}\bigg{)}\int_{0}^{\infty}(Tdx)(2Tx)\ (xT)^{5}K_{1}(x)\] \[= \frac{190}{9}g_{1}^{4}\frac{1}{\pi^{5}}\frac{1}{M_{f}^{4}}T^{8}\.\] The calculation details in case III a When neglecting all particle masses in the final state, we have \[\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{HH^{ \ast}\rightarrow\bar{B}\bar{W}} \tag{103}\] \[\approx(2\pi)\times\bigg{[}\sum_{b=1}^{3}{\rm tr}\bigg{(}\frac{1} {2}\sigma^{b}\frac{1}{2}\sigma^{b}\bigg{)}\bigg{]}\bigg{[}Y_{H}^{2}\bigg{]} \bigg{(}\frac{g_{1}g_{2}\sin\beta\cos\beta}{\mu}\bigg{)}^{2}\bigg{[}64\ s\bigg{]}\] \[=(48\pi)\times\frac{g_{1}^{2}g_{2}^{2}}{\mu^{2}}\sin^{2}\beta\cos ^{2}\beta\ s\,\] \[\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{WH \rightarrow\bar{B}H}=\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{WH^{ \ast}\rightarrow\bar{B}H^{\ast}}\] \[\approx(2\pi)\times\bigg{[}\sum_{b=1}^{3}{\rm tr}\bigg{(}\frac{1} {2}\sigma^{b}\frac{1}{2}\sigma^{b}\bigg{)}\bigg{]}\bigg{[}Y_{H}^{2}\bigg{]} \bigg{(}\frac{g_{1}g_{2}\sin\beta\cos\beta}{\mu}\bigg{)}^{2}\bigg{[}32\ s\bigg{]}\] \[=(24\pi)\times\frac{g_{1}^{2}g_{2}^{2}}{\mu^{2}}\sin^{2}\beta \cos^{2}\beta\ s\,\] \[\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{f\bar{f} \rightarrow\bar{B}\bar{W}}\] \[\approx 2\pi\bigg{[}\sum_{b=1}^{3}{\rm tr}\bigg{(}\frac{1}{2} \sigma^{b}\frac{1}{2}\sigma^{b}\bigg{)}\bigg{]}\bigg{[}N_{\rm flavor}\bigg{(}Y _{L_{L}}^{2}+N_{\rm color}Y_{Q_{L}}^{2}\bigg{)}\bigg{]}\bigg{(}\frac{g_{1}g_{2} }{M_{\bar{f}}^{2}}\bigg{)}^{2}\bigg{[}\frac{16}{3}\ s^{2}\bigg{]}\] \[=(16\pi)\times\frac{g_{1}^{2}g_{2}^{2}}{M_{\bar{f}}^{4}}\ s^{2}\,\] \[\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{Wf \rightarrow\bar{B}\bar{f}}=\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{W\bar{f} \rightarrow\bar{B}\bar{f}}\] \[\approx 2\pi\bigg{[}\sum_{b=1}^{3}{\rm tr}\bigg{(}\frac{1}{2} 
\sigma^{b}\frac{1}{2}\sigma^{b}\bigg{)}\bigg{]}\bigg{[}N_{\rm flavor}\bigg{(}Y _{L_{L}}^{2}+N_{\rm color}Y_{Q_{L}}^{2}\bigg{)}\bigg{]}\bigg{(}\frac{g_{1}g_{2} }{M_{\bar{f}}^{2}}\bigg{)}^{2}\bigg{[}\frac{32}{3}\ s^{2}\bigg{]}\] \[=(32\pi)\times\frac{g_{1}^{2}g_{2}^{2}}{M_{\bar{f}}^{4}}\ s^{2}\,\] \[\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{f\bar{f} \rightarrow\bar{B}\bar{G}}\] \[\approx 2\pi\bigg{[}\sum_{a=1}^{8}{\rm tr}\bigg{(}\frac{1}{2} \lambda^{a}\frac{1}{2}\lambda^{a}\bigg{)}\bigg{]}\bigg{[}N_{\rm flavor}\bigg{(} \sum_{i,j=1}^{2}({\delta_{i}}^{j})^{2}Y_{Q_{L}}^{2}+Y_{u_{R}}^{2}+Y_{d_{R}^{ \dagger}}^{2}\bigg{)}\bigg{]}\bigg{(}\frac{g_{1}g_{3}}{M_{\bar{f}}^{2}}\bigg{)} ^{2}\bigg{[}\frac{16}{3}\ s^{2}\bigg{]}\] \[=(\frac{704\pi}{9})\times\frac{g_{1}^{2}g_{3}^{2}}{M_{\bar{f}}^{4} }\ s^{2}\,\] (104) \[\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{\bar{G}f \rightarrow\bar{B}f}=\sum_{\rm internal\,d.o.f}\int d\Omega\ |{\cal M}|^{2}_{\bar{G}f \rightarrow\bar{B}\bar{f}}\] \[\approx 2\pi\bigg{[}\sum_{a=1}^{8}{\rm tr}\bigg{(}\frac{1}{2} \lambda^{a}\frac{1}{2}\lambda^{a}\bigg{)}\bigg{]}\bigg{[}N_{\rm flavor}\bigg{(} \sum_{i,j=1}^{2}({\delta_{i}}^{j})^{2}Y_{Q_{L}}^{2}+Y_{u_{R}}^{2}+Y_{d_{R}^{ \dagger}}^{2}\bigg{)}\bigg{]}\bigg{(}\frac{g_{1}g_{3}}{M_{\bar{f}}^{2}}\bigg{)} ^{2}\bigg{[}\frac{32}{3}\ s^{2}\bigg{]}\] \[=(\frac{1408\pi}{9})\times\frac{g_{1}^{2}g_{3}^{2}}{M_{\bar{f}}^{4 }}\ s^{2}. \tag{105}\] The calculation details in case III B The \(1\to 3\) decay processes are indicated by the red colored arrow in Fig.1. When neglecting all particle masses in the final state, we have \[\Gamma_{\tilde{W}\to\tilde{B}HH^{*}} = \frac{1}{(2\pi)^{3}}\frac{1}{32M_{2}^{3}}\ \frac{1}{g_{\tilde{W}}}\sum_{\text{ internal d.o.f.}}\int dm_{12}^{2}\ dm_{23}^{2}\ |\mathcal{M}|^{2}_{\tilde{W}\to\tilde{B}HH^{*}} \tag{101}\] \[= \frac{1}{(2\pi)^{3}}\frac{1}{32M_{2}^{3}}\ \frac{1}{\sum_{b=1}^{3}(2s_{ \tilde{W}}+1)}\bigg{[}\sum_{b=1}^{3}\text{tr}\bigg{(}\frac{1}{2}\sigma^{b} \frac{1}{2}\sigma^{b}\bigg{)}\bigg{]}\] \[\times\bigg{[}Y_{H}^{2}\bigg{]}\bigg{[}\frac{32}{3}M_{2}^{6}\left( \frac{g_{1}g_{2}\sin\beta\cos\beta}{\mu}\right)^{2}\bigg{]}\] \[= \frac{1}{384\pi^{3}}\left(\frac{g_{1}g_{2}\sin\beta\cos\beta}{ \mu}\right)^{2}M_{2}^{3}\.\] \[\Gamma_{\tilde{W}\to\tilde{B}ff} = \frac{1}{(2\pi)^{3}}\frac{1}{32M_{2}^{3}}\ \frac{1}{g_{\tilde{W}}}\sum_{\text{ internal d.o.f.}}\int dm_{12}^{2}\ dm_{23}^{2}\ |\mathcal{M}|^{2}_{\tilde{W}\to\tilde{B}f\tilde{f}}\] (102) \[= \frac{1}{(2\pi)^{3}}\frac{1}{32M_{2}^{3}}\ \frac{1}{\sum_{b=1}^{3}(2s_{ \tilde{W}}+1)}\bigg{[}\sum_{b=1}^{3}\text{tr}\bigg{(}\frac{1}{2}\sigma^{b} \frac{1}{2}\sigma^{b}\bigg{)}\bigg{]}\] \[\times\bigg{[}N_{\text{flavor}}\bigg{(}Y_{L_{L}}^{2}+N_{\text{ color}}Y_{Q_{L}}^{2}\bigg{)}\bigg{]}\bigg{[}\frac{2}{3}M_{2}^{8}\left(\frac{g_{1}g_{2}}{M_ {\tilde{f}}^{2}}\right)^{2}\bigg{]}\] \[= \frac{1}{1536\pi^{3}}\left(\frac{g_{1}g_{2}}{M_{\tilde{f}}^{2}} \right)^{2}M_{2}^{5}\,\] \[\Gamma_{\tilde{G}\to\tilde{B}f\tilde{f}} = \frac{1}{(2\pi)^{3}}\frac{1}{32M_{3}^{3}}\ \frac{1}{g_{\tilde{G}}}\sum_{\text{ internal d.o.f.}}\int dm_{12}^{2}\ dm_{23}^{2}\ |\mathcal{M}|^{2}_{G\to\tilde{B}f\tilde{f}}\] (103) \[= \frac{1}{(2\pi)^{3}}\frac{1}{32M_{3}^{3}}\ \frac{1}{\sum_{a=1}^{8}(2s_{ \tilde{G}}+1)}\bigg{[}\sum_{a=1}^{8}\text{tr}\bigg{(}\frac{1}{2}\lambda^{a} \frac{1}{2}\lambda^{a}\bigg{)}\bigg{]}\] \[\times\bigg{[}N_{\text{flavor}}\bigg{(}N_{\text{iso},Q_{L}}Y_{Q_{L} }^{2}+Y_{u_{R}^{1}}^{2}+Y_{d_{R}^{1}}^{2}\bigg{)}\bigg{]}\bigg{[}\frac{2}{3} 
M_{2}^{8}\left(\frac{g_{1}g_{3}}{M_{\tilde{f}}^{2}}\right)^{2}\bigg{]}\] \[= \frac{11}{9216\pi^{3}}\left(\frac{g_{1}g_{3}}{M_{\tilde{f}}^{2}} \right)^{2}M_{2}^{5}\.\] where \(dm_{12}^{2},dm_{23}^{2}\) are defined in [64]. The calculation details of 2-body decay after EWSB As discussed in Section 4.1, we have the following \(1\to 2\) decay possibly affecting the cosmological BBN: \[\Gamma_{\tilde{\chi}_{2}^{0}\to\tilde{\chi}_{1}^{0}h} = \frac{1}{2s_{\tilde{\chi}_{2}^{0}}+1}\frac{1}{2M_{2}}\sum_{\text{ spin d.o.f.}}\int d\Pi_{2}\ |M|^{2}_{\tilde{\chi}_{2}^{0}\to\tilde{\chi}_{1}^{0}h}\] \[= \frac{1}{2}\frac{1}{2M_{2}}\ \int d\Pi_{2}\ \left(\frac{v}{\mu}g_{1 }g_{2}\sin\beta\cos\beta\right)^{2}\left[4(p_{\tilde{\chi}_{2}^{0}}\cdot p_{ \tilde{\chi}_{1}^{0}}+M_{\tilde{\chi}_{2}^{0}}M_{\tilde{\chi}_{1}^{0}})\right]\] \[\approx \frac{1}{2}\frac{1}{2M_{2}}\ \bigg{[}\int d\Omega\frac{1}{16\pi^{2}} \frac{|\vec{p}_{\tilde{\chi}_{1}^{0}}|}{M_{2}}\bigg{]}\ \left(\frac{v}{\mu}g_{1 }g_{2}\sin\beta\cos\beta\right)^{2}\bigg{[}4(M_{2}E_{\tilde{\chi}_{1}^{0}}+M_{ 2}M_{1})\bigg{]}\,\] where \[E_{\tilde{\chi}_{1}^{0}} = \frac{M_{2}^{2}+M_{1}^{2}-M_{h}^{2}}{2M_{2}}\approx\frac{M_{2}^{2 }+M_{1}^{2}}{2M_{2}}\, \tag{100}\] \[|\vec{p}_{\tilde{\chi}_{1}^{0}}| = \sqrt{E_{\tilde{\chi}_{1}^{0}}^{2}-M_{1}^{2}}=\frac{\left(M_{2}^{ 4}+M_{1}^{4}+M_{h}^{4}-2M_{2}^{2}M_{1}^{2}-2M_{2}^{2}M_{h}^{2}-2M_{1}^{2}M_{h }^{2}\right)^{\frac{1}{2}}}{2M_{2}}\] (101) \[\approx \frac{M_{2}^{2}-M_{1}^{2}}{2M_{2}}\.\] Finally, we have [44] \[\Gamma_{\tilde{\chi}_{2}^{0}\to\tilde{\chi}_{1}^{0}h} \approx \frac{1}{2}\frac{1}{2M_{2}}\Bigg{[}4\pi\frac{1}{16\pi^{2}}\frac{ 1}{2}\left(1-\frac{M_{1}^{2}}{M_{2}^{2}}\right)\Bigg{]}\left(\frac{v}{\mu}g_{1 }g_{2}\sin\beta\cos\beta\right)^{2}\bigg{[}4M_{2}\frac{(M_{2}+M_{1})^{2}}{2M_ {2}}\bigg{]} \tag{102}\] \[\approx M_{2}\frac{1}{16\pi}\left(\frac{v}{\mu}g_{1}g_{2}\sin\beta\cos \beta\right)^{2}\left(1-\frac{M_{1}^{2}}{M_{2}^{2}}\right)\left(1+\frac{M_{1}} {M_{2}}\right)^{2}\.\] Using the GET we would obtain the same results in the high energy limit for \(\Gamma_{\tilde{\chi}_{1}^{\pm}\to\tilde{\chi}_{1}^{0}W^{\pm}}\).
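As a rough numerical cross-check of the BBN discussion in Section 4.1 (an illustration added here, not taken from the paper), the 2-body width above can be evaluated for the benchmark masses \(M_{2}=2\) TeV and \(M_{1}=1\) TeV and converted into a lifetime, to be compared with the 0.3 s requirement; the gauge-coupling values are assumed illustrative low-scale inputs.

```python
# Quick numerical cross-check (not part of the paper) of the requirement
# tau(chi_2^0) < 0.3 s from Section 4.1, using the 2-body width derived above.
# Couplings are illustrative low-scale values; masses follow the benchmark
# M_2 = 2 TeV, M_1 = 1 TeV used in the text.
import math

HBAR = 6.582e-25                  # GeV * s
V_EW, G1, G2 = 246.0, 0.36, 0.65  # Higgs VEV [GeV]; assumed gauge couplings
SINCOS = 0.5                      # sin(beta) * cos(beta) for tan(beta) = 1

def lifetime(mu, M1, M2):
    """Lifetime [s] of chi_2^0 -> chi_1^0 h for higgsino mass scale mu [GeV]."""
    width = (M2 / (16 * math.pi)
             * (V_EW / mu * G1 * G2 * SINCOS) ** 2
             * (1 - M1**2 / M2**2) * (1 + M1 / M2) ** 2)
    return HBAR / width

M1, M2 = 1000.0, 2000.0
for mu in (5e13, 1e14, 2e14):
    tau = lifetime(mu, M1, M2)
    verdict = "allowed" if tau < 0.3 else "excluded by BBN"
    print(f"mu = {mu:.0e} GeV  ->  tau ~ {tau:.2f} s  ({verdict})")
# With these inputs the 0.3 s requirement is violated somewhat above
# mu ~ 1e14 GeV, in line with the upper bound on M_SUSY discussed in the text.
```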
2309.00764
On the Kronecker product of Schur functions of square shapes
Motivated by the Saxl conjecture and the tensor square conjecture, which states that the tensor squares of certain irreducible representations of the symmetric group contain all irreducible representations, we study the tensor squares of irreducible representations associated with square Young diagrams. We give a formula for computing Kronecker coefficients, which are indexed by two square partitions and a three-row partition, specifically one with a short second row and the smallest part equal to 1. We also prove the positivity of square Kronecker coefficients for particular families of partitions, including three-row partitions and near-hooks.
Chenchen Zhao
2023-09-02T00:04:22Z
http://arxiv.org/abs/2309.00764v1
# On the Kronecker product of Schur functions of square shapes

###### Abstract.

Motivated by the Saxl conjecture and the tensor square conjecture, which states that the tensor squares of certain irreducible representations of the symmetric group contain all irreducible representations, we study the tensor squares of irreducible representations associated with square Young diagrams. We give a formula for computing Kronecker coefficients, which are indexed by two square partitions and a three-row partition, specifically one with a short second row and the smallest part equal to \(1\). We also prove the positivity of square Kronecker coefficients for particular families of partitions, including three-row partitions and near-hooks.

## 1. Introduction

Given partitions \(\lambda,\mu\vdash n\), we can decompose the internal product of Schur functions as \[s_{\lambda}\ast s_{\mu}=\sum_{\nu\vdash n}g(\lambda,\mu,\nu)s_{\nu},\] where \(g(\lambda,\mu,\nu)\) are the Kronecker coefficients. The Kronecker coefficients can also be interpreted as the multiplicities of an irreducible module of \(S_{n}\) in the tensor product of irreducible modules of \(S_{n}\) corresponding to \(\lambda\) and \(\mu\). Therefore, the Kronecker coefficients are certainly non-negative integers, which naturally suggests that there may be a combinatorial interpretation of the coefficients. The problem of finding a non-negative combinatorial interpretation for the Kronecker coefficients was explicitly stated by Stanley in 2000 ([11] Problem 10) as a major open problem in Algebraic Combinatorics. The Kronecker coefficients have recently gained prominence within the context of algebraic complexity theory, particularly in the realm of Geometric Complexity Theory (GCT). However, as addressed by Panova in [14], despite the increasing interest in the problem, little progress has been made: the Kronecker product problem is still poorly understood, and deriving an explicit combinatorial formula for the Kronecker product remains an outstanding open problem in the field of Algebraic Combinatorics. The number of irreducible representations of the symmetric group \(S_{n}\) is equal to the number of conjugacy classes, which is the number of integer partitions of \(n\). Given \(\mu\vdash n\), let \(\mathbb{S}^{\mu}\) denote the Specht module of the symmetric group \(S_{n}\), indexed by the partition \(\mu\). It is worth noting that these Specht modules provide us with a way to study the irreducible representations, with each representation being uniquely indexed by an integer partition (see e.g. [1]). In [1], Heide, Saxl, Tiep, and Zalesski proved that, with a few exceptions, every irreducible character of a simple group of Lie type is a constituent of the tensor square of the Steinberg character. They conjectured that for \(n\geq 5\), there is an irreducible character \(\chi\) of \(A_{n}\) whose tensor square \(\chi\otimes\chi\) contains every irreducible character as a constituent. The following is the symmetric group analog of this conjecture:

**Conjecture 1.1** (Tensor Square Conjecture).: _For every \(n\) except 2,4,9, there exists an irreducible representation \(V\) of the symmetric group \(S_{n}\) such that the tensor square \(V\otimes V\) contains every irreducible representation of \(S_{n}\) as a summand with positive multiplicity.
In terms of the correspondence of partitions, there exists a partition \(\lambda\vdash n\) such that the Kronecker coefficient \(g(\lambda,\lambda,\mu)\) is positive for any \(\mu\vdash n\)._

In 2012, Jan Saxl conjectured that all irreducible representations of \(S_{n}\) over \(\mathbb{C}\) occur in the decomposition of the tensor square of the irreducible representation corresponding to the staircase shape partition [17]. This conjecture is as follows:

**Conjecture 1.2** (Saxl Conjecture).: _Let \(\rho_{m}\) denote the staircase partition of size \(n:=\binom{m+1}{2}\). Then \(g(\rho_{m},\rho_{m},\mu)>0\) for every \(\mu\vdash n\)._

Previous work made progress towards the Tensor Square Conjecture, and specifically towards the Saxl Conjecture, see e.g. [17], [18], [19], [20]. Attempts have also been made to understand the Kronecker coefficients from different aspects: combinatorial interpretations for some known special shapes, see e.g. [14], [15], [2], [16]; and the computational complexity of computing or deciding positivity of the Kronecker coefficients, see e.g. [1], [18], [19]. In 2020, Bessenrodt and Panova made the following conjecture concerned with the shape of partitions satisfying the tensor square conjecture:

**Conjecture 1.3** ([16], Bessenrodt-Panova 2020).: _For every \(n\), there exists \(k(n)\) such that the tensor square of every self-conjugate partition whose Durfee size is at least \(k(n)\) and is not the \(k\times k\) partition satisfies the Tensor Square Conjecture._

In [17], Pak, Panova, and Vallejo suggested that caret partitions may satisfy the tensor square conjecture. Many of the arguments on the staircase shape could also be adapted for caret shapes and chopped-square shapes. Most approaches to proving the positivity of a certain family of Kronecker coefficients use the semigroup property, see Section 2, which relies on breaking the partition triple into smaller partitions. The minimal elements in this procedure are the rectangular shapes, and thus understanding Kronecker positivity in general starts from understanding Kronecker coefficients of rectangular shapes. In this paper, we study the tensor squares of irreducible representations corresponding to square Young diagrams, denoted \(\square_{m}\). We show that the Kronecker coefficients \(g(\square_{m},\square_{m},\mu)\), in the case where \(\square_{m}=(m^{m})\) has square shape and \(\mu=(m^{2}-k,k-1,1)\), vanish if and only if \(k\leq 4\) when \(m\geq 5\). We give an explicit formula for \(g(\square_{m},\square_{m},\mu)\) when \(\mu=(m^{2}-k,k-1,1)\) has a short second row:

**Theorem 1.4** (Theorem 3.6).: _Let \(f(k)\) be the number of partitions of \(k\) with no parts equal to 1 or 2. Let \(\ell_{1}(\alpha)\) denote the number of different parts of a partition \(\alpha\).
Then for \(2\leq k\leq m\),_ \[g(\square_{m},\square_{m},(m^{2}-k,k-1,1))=\sum_{\begin{subarray}{c}\alpha\vdash k-1\\ \alpha_{1}=\alpha_{2}\end{subarray}}\ell_{1}(\alpha)-f(k).\]

We completely prove the forward direction of the following conjecture and make partial progress towards the other direction, including showing the positivity of square Kronecker coefficients for three-row partitions and near-hooks.

**Conjecture 1.5**.: _For \(m\geq 7\), \(g(\square_{m},\square_{m},\mu)=0\) if and only if \(\mu\in S\) or \(\mu^{\prime}\in S\), where_ \[S:=\{(m^{2}-3,2,1),(m^{2}-4,3,1),(m^{2}-j,1^{j})\mid j\in\{1,2,4,6\}\}.\]

We state our main results as follows:

**Theorem 1.6** (Corollary 3.6.1, 3.7.1, Theorem 4.2, 4.11).: _For every integer \(m\geq 7\) and every partition \(\mu\vdash m^{2}\) of length at most \(3\), we have \(g(\square_{m},\square_{m},\mu)>0\) if and only if \(\mu\notin\{(m^{2}-3,2,1),(m^{2}-4,3,1),(m^{2}-2,1,1),(m^{2}-1,1)\}\)._

**Theorem 1.7** (Corollary 5.7.1).: _Let \(m\) be an integer and assume that \(m\geq 20\). Define near-hook partitions \(\mu_{i}(k,m):=(m^{2}-k-i,i,1^{k})\). Then for every \(i\geq 8\), we have \(g(\square_{m},\square_{m},\mu_{i}(k,m))>0\) for all \(k\geq 0\)._

The rest of this paper is structured as follows. In Section 2, we equip the reader with some required background information and notation. In Section 3, we present the partitions that do not occur in tensor squares of square partitions. In Section 4 and Section 5, we present the results on the positivity of square Kronecker coefficients for certain families of partitions. In Section 6, we discuss some additional remarks and related further research.

### Acknowledgements

The author would like to thank her advisor, Greta Panova, for suggesting the problem and for helpful guidance and insightful discussions throughout the project.

## 2. Background

### Partitions

A partition \(\lambda\) of \(n\), denoted as \(\lambda\vdash n\), is a finite list of weakly decreasing positive integers \((\lambda_{1},\ldots,\lambda_{k})\) such that \(\sum_{i=1}^{k}\lambda_{i}=n\). Given a partition \(\lambda\), the size \(|\lambda|\) is defined to be \(\sum_{i=1}^{k}\lambda_{i}\). The length of \(\lambda\) is defined to be the number of parts of the partition and we denote it by \(\ell(\lambda).\) We use \(P(n)\) to denote the set of all partitions of \(n\). We associate each partition \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\vdash n\) with a Young diagram, which is a left-justified array of \(n\) boxes with \(\lambda_{i}\) boxes in row \(i.\) Denote by \(\lambda^{\prime}\) the conjugate (or transpose) of a partition \(\lambda\).
For instance, the partition \(\lambda=(5,3,2)\) has conjugate \(\lambda^{\prime}=(3,3,2,1,1)\). The Durfee size of a partition \(\lambda\), denoted by \(d(\lambda)\), is the number of boxes on the main diagonal of the Young diagram of \(\lambda\). For the sake of convenience, we will simply refer to the irreducible representation corresponding to \(\lambda\) as \(\lambda\).

**Definition 2.1**.: _For \(m\geq 1\), we define the square-shaped partition \(\square_{m}\vdash m^{2}\) to be \(\square_{m}:=(m^{m})\)._

For \(n\in\mathbb{N}\), we denote the symmetric group on \(n\) symbols by \(S_{n}\). Let \(\lambda,\mu\vdash n\). We say that \(\lambda\) dominates \(\mu\), denoted by \(\lambda\unrhd\mu\), if \(\sum_{i=1}^{j}\lambda_{i}\geq\sum_{i=1}^{j}\mu_{i}\) for all \(j\). Let \(p_{k}(a,b)\) denote the number of partitions of \(k\) that fit into an \(a\times b\) rectangle. We denote the number of partitions of \(k\) that fit into an \(m\times m\) square by \(P_{k}(m)\). Note that \(P_{k}(m)=p_{k}(m,m)\).

Given \(\mu\vdash n\), let \(\chi^{\mu}\) denote the irreducible character of the symmetric group \(S_{n}\) indexed by \(\mu\), and let \(\chi^{\mu}[\alpha]\) denote the value of \(\chi^{\mu}(\omega)\) on any permutation \(\omega\) of cycle type \(\alpha\). The characters can be computed using the Murnaghan-Nakayama Rule (see e.g. [10] for more details about the rule).

**Theorem 2.2** (Murnaghan-Nakayama Rule).: _We have_
\[\chi^{\lambda}[\alpha]=\sum_{T}(-1)^{ht(T)},\]
_summed over all border-strip tableaux \(T\) of shape \(\lambda\) and type \(\alpha\), where \(ht(T)\) is the sum of the heights of the border strips minus \(\ell(\alpha)\)._

### The Kronecker coefficients

When working over the field \(\mathbb{C}\), the Specht modules are irreducible, and they form a complete set of irreducible representations of the symmetric group. Polytabloids associated with the standard Young tableaux form a basis for the Specht modules and hence the Specht modules can be indexed by partitions. Given \(\mu\vdash n\), let \(\mathbb{S}^{\mu}\) denote the Specht module of the symmetric group \(S_{n}\) indexed by the partition \(\mu\) (see e.g. [11] for more details on the construction of Specht modules). The Kronecker coefficients \(g(\mu,\nu,\lambda)\) are defined as the multiplicity of \(\mathbb{S}^{\lambda}\) in the tensor product decomposition of \(\mathbb{S}^{\mu}\otimes\mathbb{S}^{\nu}\). In particular, for any \(\mu,\nu,\lambda\vdash n\), we can write
\[\mathbb{S}^{\mu}\otimes\mathbb{S}^{\nu}=\bigoplus_{\lambda\vdash n}\left(\mathbb{S}^{\lambda}\right)^{\oplus g(\mu,\nu,\lambda)}.\]
We can also write
\[\chi^{\mu}\chi^{\nu}=\sum_{\lambda\vdash n}g(\mu,\nu,\lambda)\chi^{\lambda},\]
and it follows that
\[g(\mu,\nu,\lambda)=\langle\chi^{\mu}\chi^{\nu},\chi^{\lambda}\rangle=\frac{1}{n!}\sum_{\omega\in S_{n}}\chi^{\mu}[\omega]\chi^{\nu}[\omega]\chi^{\lambda}[\omega].\]
It follows that the Kronecker coefficients are fully symmetric in their three parameters \(\mu,\nu,\lambda\vdash n\).
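For small \(n\), the character formula above can be evaluated directly. The following brute-force Python sketch is our own illustration (it is not the software behind the larger computer checks cited later): it implements the Murnaghan-Nakayama rule and computes \(g(\mu,\nu,\lambda)\) by summing \(\chi^{\mu}\chi^{\nu}\chi^{\lambda}\) over cycle types, weighted by centralizer sizes.

```python
from collections import Counter
from fractions import Fraction
from math import factorial


def partitions(n, max_part=None):
    """Generate all partitions of n (the cycle types of S_n) as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest


def mn_char(lam, rho):
    """Value chi^lam[rho] via the Murnaghan-Nakayama rule (beta-number formulation)."""
    lam = tuple(p for p in lam if p > 0)
    if not rho:
        return 1 if not lam else 0
    k, rest = rho[0], rho[1:]
    ell = len(lam)
    beta = [lam[i] + (ell - 1 - i) for i in range(ell)]   # distinct shifted parts
    bset, total = set(beta), 0
    for b in beta:
        # removing a border strip of size k corresponds to replacing b by b - k
        if b - k >= 0 and (b - k) not in bset:
            height = sum(1 for c in beta if b - k < c < b)
            new_beta = sorted([c for c in beta if c != b] + [b - k], reverse=True)
            m = len(new_beta)
            new_lam = tuple(new_beta[j] - (m - 1 - j) for j in range(m))
            total += (-1) ** height * mn_char(new_lam, rest)
    return total


def centralizer_size(rho):
    """z_rho = prod_i i^{m_i} * m_i!, the centralizer size of a permutation of cycle type rho."""
    out = 1
    for part, mult in Counter(rho).items():
        out *= part ** mult * factorial(mult)
    return out


def kronecker(mu, nu, lam):
    """g(mu, nu, lam) = <chi^mu chi^nu, chi^lam>, summing over cycle types of S_n."""
    n = sum(mu)
    total = sum(Fraction(mn_char(mu, r) * mn_char(nu, r) * mn_char(lam, r),
                         centralizer_size(r))
                for r in partitions(n))
    return int(total)


# The tensor square of the 2x2 square partition in S_4 contains (4), (2,2), (1^4) once each:
assert kronecker((2, 2), (2, 2), (2, 2)) == 1
assert kronecker((2, 2), (2, 2), (1, 1, 1, 1)) == 1
assert kronecker((2, 2), (2, 2), (3, 1)) == 0
```

Summing over cycle types weighted by \(1/z_{\rho}\), rather than over all \(n!\) permutations, keeps such sanity checks feasible for small \(n\); the approach is still exponential and only intended for small examples.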
Further, since \(1^{n}\) is the sign representation, we have \(\chi^{\mu}\chi^{1^{n}}=\chi^{\mu^{\prime}}\), and therefore the Kronecker coefficients have the transposition property, namely
\[g(\mu,\nu,\lambda)=g(\mu^{\prime},\nu^{\prime},\lambda)=g(\mu^{\prime},\nu,\lambda^{\prime})=g(\mu,\nu^{\prime},\lambda^{\prime}).\]

### Symmetric functions

For the main definitions and properties of symmetric functions, we refer to [10], Chapter 7. Let \(h_{\lambda}\) denote the complete homogeneous symmetric functions, \(e_{\lambda}\) the elementary symmetric functions, and \(s_{\lambda}\) the Schur functions. The Jacobi-Trudi Identity (see e.g. [10]) is a powerful tool in our work:

**Theorem 2.3** (Jacobi-Trudi Identity).: _Let \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\). Then_
\[s_{\lambda}=\det(h_{\lambda_{i}+j-i})_{1\leq i,j\leq n}\quad\text{and}\quad s_{\lambda^{\prime}}=\det(e_{\lambda_{i}+j-i})_{1\leq i,j\leq n}.\]

Let \(c_{\mu\nu}^{\lambda}\), where \(|\lambda|=|\mu|+|\nu|\), denote the Littlewood-Richardson coefficients. Using the Hall inner product on symmetric functions, one can define the Littlewood-Richardson coefficients as
\[c_{\mu\nu}^{\lambda}=\langle s_{\lambda},s_{\mu}s_{\nu}\rangle=\langle s_{\lambda/\mu},s_{\nu}\rangle.\]
Namely, the Littlewood-Richardson coefficient \(c_{\mu\nu}^{\lambda}\) is the multiplicity of \(s_{\lambda}\) in the decomposition of \(s_{\mu}\cdot s_{\nu}\). It is well known that the Littlewood-Richardson coefficients have a combinatorial interpretation in terms of certain semistandard Young tableaux (see e.g. [10], [21]).

Using the Frobenius map, one can define the Kronecker product of symmetric functions as
\[s_{\mu}*s_{\nu}=\sum_{\lambda\vdash n}g(\mu,\nu,\lambda)s_{\lambda}.\]
In [17], Littlewood proved the following identity, which is used frequently in our calculations:

**Theorem 2.4** (Littlewood's Identity).: _Let \(\mu,\nu,\lambda\) be partitions with \(|\lambda|=|\mu|+|\nu|\). Then_
\[s_{\mu}s_{\nu}*s_{\lambda}=\sum_{\gamma\vdash|\mu|}\sum_{\delta\vdash|\nu|}c_{\gamma\delta}^{\lambda}(s_{\mu}*s_{\gamma})(s_{\nu}*s_{\delta}),\]
_where \(c_{\gamma\delta}^{\lambda}\) is the Littlewood-Richardson coefficient._

Another useful tool to simplify our calculations is Pieri's rule:

**Theorem 2.5** (Pieri's rule).: _Let \(\mu\) be a partition. Then_
\[s_{\mu}s_{(n)}=\sum_{\lambda}s_{\lambda},\]
_where the sum is over all partitions \(\lambda\) obtained from \(\mu\) by adding \(n\) boxes, no two of them in the same column._

### Semigroup property

The semigroup property, which was proved in [14], has been used extensively to prove the positivity of Kronecker coefficients for various families of partitions. For two partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{k})\) and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{l})\) with \(k\leq l\), the horizontal sum of \(\lambda\) and \(\mu\) is defined as \(\lambda+_{H}\mu=\mu+_{H}\lambda=(\lambda_{1}+\mu_{1},\lambda_{2}+\mu_{2},\ldots,\lambda_{k}+\mu_{k},\mu_{k+1},\ldots,\mu_{l})\). The vertical sum of two partitions can be defined analogously, by adding the column lengths instead of the row lengths. We define the vertical sum \(\lambda+_{V}\mu\) of two partitions \(\lambda\) and \(\mu\) to be \((\lambda^{\prime}+_{H}\mu^{\prime})^{\prime}\).
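For concreteness, the horizontal and vertical sums can be implemented in a few lines; the sketch below (helper names `h_sum`, `v_sum`, and `conjugate` are ours) also checks one of the square decompositions used later in the paper, namely \(\square_{6}=(\square_{4}+_{V}(4,4))+_{H}(2^{6})\).

```python
def conjugate(lam):
    """Transpose of a partition (column lengths of its Young diagram)."""
    if not lam:
        return ()
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, lam[0] + 1))


def h_sum(lam, mu):
    """Horizontal sum lam +_H mu: add row lengths, padding the shorter partition with zeros."""
    rows = max(len(lam), len(mu))
    lam = tuple(lam) + (0,) * (rows - len(lam))
    mu = tuple(mu) + (0,) * (rows - len(mu))
    return tuple(a + b for a, b in zip(lam, mu))


def v_sum(lam, mu):
    """Vertical sum lam +_V mu, defined as (lam' +_H mu')'."""
    return conjugate(h_sum(conjugate(lam), conjugate(mu)))


# One of the decompositions for even side lengths in Section 4:
#   square_6 = (square_4 +_V (4,4)) +_H (2^6).
square_4, square_6 = (4,) * 4, (6,) * 6
assert h_sum(v_sum(square_4, (4, 4)), (2,) * 6) == square_6
```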
**Theorem 2.6** (Semigroup Property [14]).: _If \(g(\lambda^{1},\lambda^{2},\lambda^{3})>0\) and \(g(\mu^{1},\mu^{2},\mu^{3})>0\), then \(g(\lambda^{1}+_{H}\mu^{1},\lambda^{2}+_{H}\mu^{2},\lambda^{3}+_{H}\mu^{3})>0\)._

**Corollary 2.6.1**.: _If \(g(\lambda^{1},\lambda^{2},\lambda^{3})>0\) and \(g(\mu^{1},\mu^{2},\mu^{3})>0\), then \(g(\lambda^{1}+_{V}\mu^{1},\lambda^{2}+_{V}\mu^{2},\lambda^{3}+_{H}\mu^{3})>0\)._

Note that, by induction, we can extend the semigroup property to an arbitrary number of partitions, and a modified version of the semigroup property allows us to use an even number of vertical additions.

## 3. Missing partitions in tensor squares of square partitions

In this section, we will show the absence of partitions in the tensor squares of square partitions by discussing the occurrences of two special families of partitions. Note that it follows immediately that the square-shaped partition does not satisfy the Tensor Square Conjecture.

### Near two-row partitions \((m^{2}-k,k-1,1)\)

Recall that we let \(P_{k}(m)\) denote the number of partitions of \(k\) that fit into an \(m\times m\) square, and let \(n=m^{2}\). The following lemma is proved in [10], see also [11].

**Lemma 3.1** ([10],[11]).: _For \(1\leq k\leq n\), \(g(\square_{m},\square_{m},(n-k,k))=P_{k}(m)-P_{k-1}(m).\)_

Let \(\lambda^{*}\) denote the \(m^{n}\)-complement of \(\lambda\) with \(m=\lambda_{1}\) and \(n=\lambda_{1}^{\prime}\). We define the \(\pi\)-rotation of a partition \(\lambda\) to be the shape obtained by rotating \(\lambda\) by \(180^{\circ}\). Following Thomas and Yong ([13]), let the \(m^{n}\)-shortness of \(\lambda\) denote the length of the shortest straight line segment of the path of length \(m+n\) from the southwest to the northeast corner of the \(m\times n\) rectangle that separates \(\lambda\) from the \(\pi\)-rotation of \(\lambda^{*}\).

**Example 3.2**.: _Consider \(\lambda=(8,4,2,2,1)\), \(m=\lambda_{1}=8\) and \(n=\lambda_{1}^{\prime}=5\). Then \(\lambda^{*}=(7,6,6,4)\). The shortest straight line segment of the path of length \(m+n\) from the southwest to the northeast corner of the \(8\times 5\) rectangle that separates \((8,4,2,2,1)\) from the \(\pi\)-rotation of \((7,6,6,4)\) has length \(1\). Therefore, the \(8^{5}\)-shortness of \((8,4,2,2,1)\) is \(1\)._

**Corollary 3.4.1**.: _Let \(\lambda_{m}:=(m^{m-1},m-1)\vdash m^{2}-1\) be the square \(\square_{m}\) with one box removed, and let \(\beta\vdash k\) with \(k\leq m^{2}-1\). Then \(c_{\beta\mu}^{\lambda_{m}}\in\{0,1\}\) for any \(\mu\vdash m^{2}-1-k\)._

Proof.: Let \(\lambda_{m}^{*}\) denote the \(m^{m}\)-complement of \(\lambda_{m}\). Then \(\lambda_{m}^{*}=(1)\). The lengths of the straight line segments of the path from the southwest to the northeast corner that separates \(\lambda_{m}\) from \(\lambda_{m}^{*}\) are \(m-1,1,1,m-1\), and therefore the \(m^{m}\)-shortness of \(\lambda_{m}\) is \(1\). Let \(\beta\vdash k\) with \(k\leq m^{2}-1\). Then \(s_{\lambda_{m}/\beta}\) is a basic skew Schur function, as the difference between consecutive rows in \(\lambda_{m}\) is at most \(1\). By Theorem 3.4 A1, \(s_{\lambda_{m}/\beta}\) is multiplicity-free, which implies that \(c_{\beta\mu}^{\lambda_{m}}\in\{0,1\}\) for any \(\mu\vdash m^{2}-1-k\).

**Lemma 3.5**.: _Let \(\ell_{1}(\alpha)\) denote the number of different parts of a partition \(\alpha\). For \(1\leq k\leq m\),_
\[\sum_{\beta\vdash k-1}\sum_{\mu\vdash m^{2}-k}c_{\beta\mu}^{\lambda_{m}}=P_{k-1}(m)+\sum_{\beta\vdash k-1}\ell_{1}(\beta).\]

Proof.: Let \(1\leq k\leq m\). Note that \(c_{\beta\mu}^{\lambda_{m}}>0\) forces \(\beta,\mu\subseteq\lambda_{m}\subseteq\square_{m}\).
Let \(\beta^{*}\) and \(\lambda_{m}^{*}\) denote the complements of \(\beta\) and \(\lambda_{m}\) inside the \(m\times m\) square, respectively. Since \(c_{\beta\mu}^{\lambda_{m}}\) depends only on \(\mu\) and the skew partition \(\lambda_{m}/\beta\), and the skew partitions \(\lambda_{m}/\beta\) and \(\beta^{*}/\lambda_{m}^{*}\) are identical when rotated, we have \(c_{\beta\mu}^{\lambda_{m}}=c_{\lambda_{m}^{*}\mu}^{\beta^{*}}=c_{(1)\mu}^{\beta^{*}}\). By Pieri's rule (Theorem 2.5), \(c_{(1)\mu}^{\beta^{*}}=1\) if and only if \(\mu\) is a partition obtained from \(\beta^{*}\) by removing one box. Since the number of ways to obtain a partition by removing a box from \(\beta^{*}\) is \(\ell_{1}(\beta^{*})\), we have
\[\sum_{\beta\vdash k-1}\sum_{\mu\vdash m^{2}-k}c_{\beta\mu}^{\lambda_{m}}=\sum_{\begin{subarray}{c}\beta\vdash k-1\\ \beta\subseteq\lambda_{m}\end{subarray}}\sum_{\mu\vdash m^{2}-k}c_{(1)\mu}^{\beta^{*}}=\sum_{\begin{subarray}{c}\beta\vdash k-1\\ \beta\subseteq\lambda_{m}\end{subarray}}\ell_{1}(\beta^{*}).\]
Note that \(\ell_{1}(\beta^{*})=\ell_{1}(\beta)-1\) if \(\beta_{1}=\ell(\beta)=m\); \(\ell_{1}(\beta^{*})=\ell_{1}(\beta)\) if exactly one of \(\beta_{1},\ell(\beta)\) is \(m\); otherwise, \(\ell_{1}(\beta^{*})=\ell_{1}(\beta)+1\). Hence, when \(1\leq k\leq m\), we have
\[\sum_{\begin{subarray}{c}\beta\vdash k-1\\ \beta\subseteq\lambda_{m}\end{subarray}}\ell_{1}(\beta^{*})=P_{k-1}(m)+\sum_{\beta\vdash k-1}\ell_{1}(\beta).\]

**Proposition 3.6** (near two-row partitions).: _Let \(2\leq k\leq m\). Let \(f(k)\) denote the number of partitions of \(k\) with no parts equal to 1 or 2, and let \(\ell_{1}(\alpha)\) denote the number of different parts of a partition \(\alpha\). Then_
\[g(\square_{m},\square_{m},(n-k,k-1,1))=\sum_{\begin{subarray}{c}\alpha\vdash k-1\\ \alpha_{1}=\alpha_{2}\end{subarray}}\ell_{1}(\alpha)-f(k).\]

Proof.: Letting \(s_{\lambda}\) denote the Schur function indexed by a partition \(\lambda\), we have
\[g(\square_{m},\square_{m},(n-k,k-1,1))=\left\langle s_{\square_{m}},s_{(n-k,k-1,1)}*s_{\square_{m}}\right\rangle.\]
Observe that, by Pieri's rule (Theorem 2.5), we have
\[s_{(n-k,k-1)}s_{(1)}=s_{(n-k,k-1,1)}+s_{(n-k+1,k-1)}+s_{(n-k,k)}.\]
Rewriting the above identity gives
\[\left\langle s_{\square_{m}},s_{(n-k,k-1,1)}*s_{\square_{m}}\right\rangle=\left\langle s_{\square_{m}},(s_{(n-k,k-1)}s_{(1)})*s_{\square_{m}}\right\rangle-\left\langle s_{\square_{m}},s_{(n-k+1,k-1)}*s_{\square_{m}}\right\rangle-\left\langle s_{\square_{m}},s_{(n-k,k)}*s_{\square_{m}}\right\rangle.\]
We first note that the last two terms give the two Kronecker coefficients \(g(\square_{m},\square_{m},(n-k+1,k-1))\) and \(g(\square_{m},\square_{m},(n-k,k))\). Notice that by Lemma 3.1, we have
\[g(\square_{m},\square_{m},(n-k+1,k-1))=P_{k-1}(m)-P_{k-2}(m)\]
and
\[g(\square_{m},\square_{m},(n-k,k))=P_{k}(m)-P_{k-1}(m).\]
By Littlewood's Identity (Theorem 2.4),
\[(s_{(n-k,k-1)}s_{(1)})*s_{\square_{m}}=\sum_{\gamma\vdash n-1}c_{\gamma,(1)}^{\square_{m}}(s_{(n-k,k-1)}*s_{\gamma})(s_{(1)}*s_{(1)})=(s_{(n-k,k-1)}*s_{\lambda_{m}})(s_{(1)}),\]
as \(c_{\gamma,(1)}^{\square_{m}}=1\) if \(\gamma=\lambda_{m}\) and \(c_{\gamma,(1)}^{\square_{m}}=0\) for all the other partitions \(\gamma\) of size \(n-1\).
Taking inner product with \(s\,\square_{{}_{m}}\) on both sides, we have \[\left\langle s\,\square_{{}_{m}},(s_{(n-k,k-1)}s_{(1)})\ast s_{ \square}{}_{m}\right\rangle =\left\langle s\,\square_{{}_{m}},(s_{(n-k,k-1)}\ast s_{\lambda_{ m}})(s_{(1)})\right\rangle\] \[=\left\langle s\,\square_{{}_{m}(1)},(s_{(n-k,k-1)}\ast s_{ \lambda_{m}})\right\rangle\] \[=\left\langle s_{\lambda_{m}},s_{(n-k,k-1)}\ast s_{\lambda_{m}}\right\rangle.\] By Littlewood's Identity (2.4), Jacobi-Trudi Identity (2.3), together with Corollary 3.4.1, \(c_{\mu\beta}^{\lambda_{m}}\in\{0,1\}\), we have \[\left\langle s_{\lambda_{m}},s_{(n-k,k-1)}\ast s_{\lambda_{m}}\right\rangle =\sum_{\beta\vdash k-1}\sum_{\mu\vdash n-k}(c_{\mu\beta}^{\lambda _{m}})^{2}-\sum_{\alpha\vdash k-2}\sum_{\gamma\vdash n-k+1}(c_{\alpha\gamma}^ {\lambda_{m}})^{2}\] \[=\sum_{\beta\vdash k-1}\sum_{\mu\vdash n-k}c_{\mu\beta}^{\lambda _{m}}-\sum_{\alpha\vdash k-2}\sum_{\gamma\vdash n-k+1}c_{\alpha\gamma}^{ \lambda_{m}}.\] Putting the pieces together, we then have \[\begin{split}&\quad g(\,\square\,_{m},\,\square\,_{m},(n-k,k-1,1)) \\ =&\,\left\langle s_{\lambda_{m}},s_{(n-k,k-1)}\ast s_{ \lambda_{m}}\right\rangle-g(\,\square\,_{m},\,\square\,_{m},(n-k+1,k-1))-g( \,\square\,_{m},\,\square\,_{m},(n-k,k))\\ =&\,\left\langle s_{\lambda_{m}},s_{(n-k,k-1)}\ast s _{\lambda_{m}}\right\rangle-(P_{k-1}(m)-P_{k-2}(m))-(P_{k}(m)-P_{k-1}(m))\\ =&\,\sum_{\beta\vdash k-1}\sum_{\mu\vdash n-k}c_{ \mu\beta}^{\lambda_{m}}-\sum_{\alpha\vdash k-2}\sum_{\gamma\vdash n-k+1}c_{ \alpha\gamma}^{\lambda_{m}}-(P_{k}(m)-P_{k-2}(m))\\ =&\,P_{k-1}(m)+\sum_{\beta\vdash k-1}\ell_{1}(\beta) -\left(P_{k-2}(m)+\sum_{\alpha\vdash k-2}\ell_{1}(\alpha)\right)-(P_{k}(m)-P_{ k-2}(m))\\ =&\,\sum_{\beta\vdash k-1}\ell_{1}(\beta)-\sum_{ \alpha\vdash k-2}\ell_{1}(\alpha)-(P_{k}(m)-P_{k-1}(m))\\ =&\,\sum_{\begin{subarray}{c}\beta\vdash k-1\\ \beta_{1}=\beta_{2}\end{subarray}}\ell_{1}(\beta)+\left(\sum_{\begin{subarray} {c}\beta\vdash k-1\\ \beta_{1}>\beta_{2}\end{subarray}}\ell_{1}(\beta)-\sum_{\alpha\vdash k-2}\ell_ {1}(\alpha)\right)-(P_{k}(m)-P_{k-1}(m))\\ =&\,\sum_{\begin{subarray}{c}\beta\vdash k-1\\ \beta_{1}=\beta_{2}\end{subarray}}\ell_{1}(\beta)+\sum_{\begin{subarray}{c} \beta\vdash k-2\\ \beta_{1}=\beta_{2}\end{subarray}}1-(P_{k}(m)-P_{k-1}(m))\\ =&\,\sum_{\begin{subarray}{c}\beta\vdash k-1\\ \beta_{1}=\beta_{2}\end{subarray}}\ell_{1}(\beta)-\left(P_{k}(m)-P_{k-1}(m)-\sum _{\begin{subarray}{c}\beta\vdash k-2\\ \beta_{1}=\beta_{2}\end{subarray}}1\right)\\ =&\,\sum_{\begin{subarray}{c}\alpha\vdash k-1\\ \alpha_{1}=\alpha_{2}\end{subarray}}\ell_{1}(\alpha)-f(k).\end{split}\] The following result, which provides a necessary and sufficient condition for a near two-row partition with a short second row to vanish in the tensor square of square partitions, follows from Theorem 3.6. **Corollary 3.6.1**.: _Let \(2\leq k\leq m\). Then \(g(\,\square\,_{m},\,\square\,_{m},(n-k,k-1,1))=0\) if and only if \(k\leq 4\)._ Proof.: We can easily verify that \(\sum_{\alpha_{1}^{k}=k-1}\ell_{1}(\alpha)=f(k)\) for \(k\in\{2,3,4\}\). Then by Proposition 3.6, we conclude that \(g(\,\square\,_{m},\,\square\,_{m},(n-k,k-1,1))=0\) when \(k\leq 4\). Next, we consider the case when \(k\geq 5\). 
We can establish an injection from the set of all partitions of \(k\) whose parts are at least 3 to the set of partitions of \(k-1\) whose first two parts are the same, that is from \[S=\{\beta\vdash k\mid\beta_{i}\notin\{1,2\}\text{ for all }i\}\] to \[T=\{\alpha\vdash k-1\mid\alpha_{1}=\alpha_{2}\}.\] This injection is achieved by removing one box from the last row of \(\beta\in S\) and taking the transpose. When \(k\geq 5\), it follows that \(\sum_{\alpha\vdash k-1}\ell_{1}(\alpha)>|T|\geq|S|=f(k)\). Hence, we conclude that \(g(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figs/2012-12-12-12-12-12-12-12-12-12 -12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12 -12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-12-1 2-1-12-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1 **Proposition 4.4**.: _For every even number \(m\geq 6\), \(g(\bigboxplus_{m},\bigboxplus_{m},(m^{2}-k,k-1,1))>0\) for every \(m+1\leq k\leq\frac{m^{2}}{2}\)._ Proof.: For an even integer \(m\geq 6\), we can write \(m=2r\) where \(r\geq 3\). We shall proceed by induction on \(r\). Based on computational evidence, we observe that \(g(\bigboxplus_{6},\bigboxplus_{6},(6^{2}-k,k-1,1))>0\) for every \(7\leq k\leq 18\). Let \(r\geq 4\). Assume the inductive hypothesis that \(g(\bigboxplus_{2(r-1)},\bigplus_{2(r-1)},(4(r-1)^{2}-i,i-1,1))\) for any \(2r-1\leq i\leq 2(r-1)^{2}\). Let \(2(r+1)\leq k\leq 2r^{2}.\) We can decompose the square partition with side length \(2r\) as follows: \[\bigplus_{2r}=\Big{(}\bigboxplus_{2(r-1)}+_{V}(2r-2,2r-2)\Big{)}+_{H}(2^{2r}).\] Note that by Theorem 4.3 and the transposition property of Kronecker coefficients, we obtain that \(g((2^{2r}),(2^{2r}),(2(r+a),2(r-a)))>0\) for any \(0\leq a\leq r\), and \(g((2r-2,2r-2),(2r-2,2r-2),(2(r-1+b),2(r-1-b)))>0\) for any \(0\leq b\leq r-1\). Consider the following system of inequalities: \[\begin{cases}4r^{2}-k-2(r+a)\geq k-1-2(r-a)\\ 4r^{2}-k-2(r+a)-2(r-1+b)\geq k-1-2(r-a)-2(r-1-b)\geq 1\\ k-1-2(r-a)-2(r-1-b)\geq 5\end{cases}.\] Suppose that \(0\leq a\leq r\), \(0\leq b\leq r-1\) is a pair of solutions to the system. We define partition \(\alpha(a,b):=(4r^{2}-k-2(r+a)-2(r-1+b),k-1-2(r-a)-2(r-1-b),1)\). By inductive hypothesis, together with Corollary 3.6.1, \(g(\bigplus_{2(r-1)},\bigplus_{2(r-1)},\alpha(a,b))>0.\) Note that we can decompose the near two-row partition as \[(4r^{2}-k,k-1,1)=\alpha(a,b)+_{H}(2(r-1+b),2(r-1-b)+_{H}(2(r+a),2(r-a)).\] Then by semigroup property (Theorem 2.6), \(g\left(\bigboxplus_{2r},\bigplus_{2r},(4r^{2}-k,k-1,1)\right)>0\). By the Principle of Mathematical Induction, the statement holds for all even \(m\geq 6\). Hence, it suffices to show the system of inequalities has integral solutions \(0\leq a\leq r,0\leq b\leq r-1\). By simplifying and rearranging, we can further reduce this system of inequalities to: \[\begin{cases}a\leq r^{2}-\frac{k}{2}+\frac{1}{4}\\ 2r+2-\frac{k}{2}\leq a+b\leq r^{2}-\frac{k}{2}+\frac{1}{4}.\end{cases}\] Notice that when \(k\leq\frac{(2r-1)^{2}}{2}\), the values \(a=r\) and \(b=\max\{\lceil r+2-\frac{k}{2}\rceil,0\}\) provide a feasible solution to the system. When \(\frac{(2r-1)^{2}}{2}\leq k\leq 2r^{2}\), the values \(a=\lfloor r^{2}-\frac{k}{2}+\frac{1}{4}\rfloor\) and \(b=0\) provide a feasible solution to the system. **Example 4.5**.: _Let \(m=6\) and \(k=10\). Diagrams below illustrate a way to decompose partitions \(\bigboxplus_{6}\) and \((26,9,1)\). 
Since \(g(\bigboxplus_{4},\bigplus_{4},(8,7,1))>0\), \(g(2^{6},2^{6},(10,2))>0\) by Theorem 4.3 and \(g((4,4),(4,4),(8))>0\), we conclude that \(g(\bigbox[]{\rule{0.0pt}{12.0pt}}_{6},\bigbox[]{\rule{0.0pt}{12.0pt}}_{6},(26,9,1) )>0\) by semigroup property._ We will next prove the positivity of \(g(\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},(m^{2} -k,k-1,1))\) when \(m\) is odd using the semigroup property. **Proposition 4.6**.: _For every odd integer \(m\geq 7\) and \(k\geq 5\) such that \(k\leq\frac{(m-1)^{2}+1}{2}\), \(g(\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},(m^{2} -k,k-1,1))>0\)._ Proof.: Let \(m\geq 7\) and \(k\geq 5\). Note that when \(k\leq\frac{(m-1)^{2}+1}{2}\), we have \((m^{2}-k)-(k-1)\geq 2m-1\) and we can consider the decompositions \[(m^{2}-k,k-1,1)=(m^{2}-k-2m+1,k-1,1)+_{H}(m-1)+_{H}(m)\] and \[\bigbox[]{\rule{0.0pt}{12.0pt}}_{m}=(\bigbox[]{\rule{0.0pt}{12.0pt}}_{m-1}+_{V }(m-1))+_{H}(1^{m}).\] Then by semigroup property and Proposition 4.4, we have \(g(\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},(m^{2 }-k,k-1,1))>0\) in this case. Note that the previous proof only establishes the constituency of near two-row partitions with a relatively short second row in the tensor square of square partitions with an odd side length. Now we aim to demonstrate the constituency of near two-row partitions whose first part and second part have similar sizes. To accomplish this, we will first establish the constituency of an extreme case where the second row has a maximal length: **Lemma 4.7**.: _For every odd integer \(m\geq 3\), \(g\left(\bigbox[]{\rule{0.0pt}{12.0pt}}_{m},\bigbox[]{\rule{0.0pt}{12.0pt}}_{m}, \bigbox[]{\rule{0.0pt}{12.0pt}}_{m},\left(\frac{m^{2}-1}{2},\frac{m^{2}-1}{2}, 1\right)\right)>0\)._ Proof.: We can write odd integers \(m\) as \(m=2k+1\), and we will proceed with a proof by induction on \(k\geq 1\). We can verify the statement directly for \(m\in\{3,5,7\}\) through direct computations. When \(k=4\), we have \(m=2k+1=9\). In this case, the square partition \(\bigbox[]{\rule{0.0pt}{12.0pt}}_{9}\) can be expressed as \[\bigbox[]{\rule{0.0pt}{12.0pt}}_{9}=((5^{5})+_{V}(5^{4}))+_{H}(4^{9}).\] Furthermore, we can write \[(40,40,1)=(12,12,1)+_{H}(10,10)+_{H}(18,18).\] By assumption, we have \(g((5^{5}),(5^{5}),(12,12,1))>0\). Using computer software, we can verify the positivity of \(g((5^{4}),(5^{4}),(10,10))\) and \(g((4^{9}),(4^{9}),(18,18))\). Therefore, by the semigroup property, we conclude that \(g\left(\bigbox[]{\rule{0.0pt}{12.0pt}}_{9},\bigbox[]{\rule{0.0pt}{12.0pt}}_{9}, (40,40,1)\right)>0\). Now let \(k\geq 5\) and \(m=2k+1\). By the inductive hypothesis, we assume that \[g\left(\bigbox[]{\rule{0.0pt}{12.0pt}}_{m^{\prime}},\bigbox[]{\rule{0.0pt}{12. 0pt}}_{m^{\prime}},\left(\frac{{m^{\prime}}^{2}-1}{2},\frac{{m^{\prime}}^{2}- 1}{2},1\right)\right)>0,\] holds for all \(m^{\prime}=2k^{\prime}+1<2k+1\). 
We can express \(\raisebox{-0.5pt}{\includegraphics[]{images/2-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1 -1 Proof.: We shall prove the statement by induction on odd integers \(m\geq 7\). Note that we can check by semigroup property and computer that the statement holds for \(m=7\). Let \(m\geq 9\) be an odd integer. Suppose that the statement holds for \(m-2\). Consider the decomposition that \(\qed_{m}=(\qed_{m-2}+_{V}(m-2,m-2))+_{H}(2^{m})\). Let \(a:=(m^{2}-k)-(k-1)\). Since \(k\geq\frac{(m-1)^{2}}{2}+1\), we have \((m^{2}-k)-(k-1)\leq 2m-2.\) We will discuss three cases as follows. 1. If \(a=0\), by Lemma 4.7, we know that \(g\left(\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m-1},\qed_{2},1\right)>0\). 2. 
If \(a=2\), by Lemma 4.9, we know that \(g\left(\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m},\qed_{m}, \qed_{m},\qed_{m},\qed_{m},\qed_{m},\ Proof.: With the help of the computer, we can check that \(g(k^{3},k^{3},k^{3})>0\) for \(k\in\{3,4,5\}.\) For any \(k\geq 6\), we can write \(k=3j+r\) for some non-negative integers \(j,r\) such that \(r\in\{0,4,5\}\). Then, we can write the partition \((k,k,k)\) as a horizontal sum of \(j\) square partitions of side length \(3\), and the rectangular partition \((r,r,r)\). The generalized semigroup property shows that \(g(k^{3},k^{3},k^{3})>0\) for \(k\geq 6\). 
Furthermore, by the transposition property, we have \(g(3^{k},3^{k},k^{3})=g(k^{3},k^{3},k^{3})>0\) for \(k\geq 3\). Lemma 4.14, 4.15 and 4.16 will be used in the proof of Proposition 4.17. These specific cases are addressed individually due to their different decomposition approach, setting them apart from the remaining cases of the proposition's proof. **Lemma 4.14**.: _The Kronecker coefficient \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m}, \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m},\left(\frac{m^{2 }+2}{3},\frac{m^{2}-1}{3},\frac{m^{2}-1}{3}\right)\right)>0\) for any positive integer \(m\geq 5\) such that \(m\equiv 2\mod 3\)._ Proof.: For any positive integer \(m\geq 5\) such that \(m\equiv 2\mod 3\), we can write \(m=3r+2\) for some \(r\geq 1\). We will prove the proposition by induction on \(r\). When \(r=1\), \(3r+2=5\) and with the help of the computer, we can check that \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{5}, \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{5},\left(\frac{5^{2 }+2}{3},\frac{5^{2}-1}{3},\frac{5^{2}-1}{3}\right)\right)>0.\) Let \(r\geq 2\). Assume the statement is true for \(r-1\). We can decompose \(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{3r+2}\) as \[\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{3r+2}=\left( \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{3r-1}+_{V}(3r-1) \right)+_{H}(3^{3r+2}),\] and we can decompose the partition \(\left(\frac{(3r+2)^{2}+2}{3},\frac{(3r+2)^{2}-1}{3},\frac{(3r+2)^{2}-1}{3}\right)\) as \[\left(\frac{(3r+2)^{2}+2}{3},\frac{(3r+2)^{2}-1}{3},\frac{(3r+2)^ {2}-1}{3}\right) =\left(\frac{(3r-1)^{2}+2}{3},\frac{(3r-1)^{2}-1}{3},\frac{(3r-1 )^{2}-1}{3}\right)\] \[+_{H}(3r-1,3r-1,3r-1)\] \[+_{H}(3r+2,3r+2,3r+2).\] Then, by the inductive hypothesis, Lemma 4.13 and semigroup property, we can conclude that \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{3r+2}, \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{3r+2},\left(\frac{(3r +2)^{2}+2}{3},\frac{(3r+2)^{2}-1}{3},\frac{(3r+2)^{2}-1}{3}\right)\right)>0\). Thus, by the principle of mathematical induction, \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m}, \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m},\left(\frac{m^{2 }+2}{3},\frac{m^{2}-1}{3},\frac{m^{2}-1}{3}\right)\right)>0\) for every positive integer \(m\geq 5\) such that \(m\equiv 2\mod 3\). **Lemma 4.15**.: _For any positive integer \(m\geq 7\) such that \(m\equiv 1\mod 3\), the Kronecker coefficients \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m}, \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m},\lambda\right)>0\) for \(\lambda\) in the set_ \[\left\{\left(\frac{m^{2}+5}{3},\frac{m^{2}-1}{3},\frac{m^{2}-4}{3}\right), \left(\frac{m^{2}+5}{3},\frac{m^{2}+5}{3},\frac{m^{2}-10}{3}\right),\left( \frac{m^{2}+5}{3},\frac{m^{2}+2}{3},\frac{m^{2}-7}{3}\right)\right\}.\] Proof.: For any positive integer \(m\geq 7\) such that \(m\equiv 1\mod 3\), we can write \(m=3r+1\) for some \(r\geq 2\). We will prove the proposition by induction on \(r\). When \(r=2\), \(3r+1=7\), and with the help of the computer, we can verify the statement holds true for \(r=2.\) Let \(r\geq 3\), and assume that the statement is true for \(r-1\). 
We can decompose \(\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m}(r)\) as \[\raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m}(r)=\left( \raisebox{-0.5pt}{\includegraphics[height=5pt]{figures/1.eps}}_{m(r-1)}+_{V}(m(r-1) ^{3})\right)+_{H}3^{m(r)},\] and we can decompose the partition \(\left(\frac{m(r)^{2}+i}{3},\frac{m(r)^{2}+j}{3},\frac{m(r)^{2}+k}{3}\right)\) as \[\left(\frac{m(r)^{2}+i}{3},\frac{m(r)^{2}+j}{3},\frac{m(r)^{2}+k}{ 3}\right) =\left(\frac{m(r-1)^{2}+i}{3},\frac{m(r-1)^{2}+j}{3},\frac{m(r-1) ^{2}+k}{3}\right)\] \[+_{H}(m(r-1),m(r-1),m(r-1))\] \[+_{H}(m(r),m(r),m(r)),\] where \((i,j,k)\in\{(5,-1,-4),(5,5,-10),(5,2,-7)\}.\) Then, by the inductive hypothesis, Lemma 4.13 and semigroup property, we have \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics[ height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics[ height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics 
[height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics[height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0.0pt}{\includegraphics [height=5.5pt]{0. suffices to show that we can rewrite a non-rectangular three-row partition of \(m^{2}\) as a horizontal sum of a three-row partition of \((m-2)^{2}\) appearing in the tensor square of \(\yng(m-2)\) and a non-rectangular three-row partition \(\tau\vdash 8r-4\) whose parts are all even. We will consider the following cases for the partition \(\lambda=(\lambda_{1},\lambda_{2},\lambda_{3})\) with \(\lambda_{3}\geq 2\). 1. \(\lambda_{2}-\lambda_{3}\geq 4r-2\). In this case, we can write \(\lambda\) as a horizontal sum of \((4r-2,4r-2)\) and a partition \((\lambda_{1}-4r+2,\lambda_{2}-4r+2,\lambda_{3})\). 2. \(\lambda_{2}-\lambda_{3}<4r-2\) and \(\lambda_{1}-\lambda_{2}\geq 8r-4-4\lfloor\frac{\lambda_{2}-\lambda_{3}}{2}\rfloor\). 
If these conditions hold, we can define \(\tau=(8r-4-2\lfloor\frac{\lambda_{2}-\lambda_{3}}{2}\rfloor,2\lfloor\frac{ \lambda_{2}-\lambda_{3}}{2}\rfloor)+8r-4\). Then, we can write \(\lambda\) as a horizontal sum of \(\tau\) and a three-row partition of \((m-2)^{2}\). 3. \(\lambda_{2}-\lambda_{3}<4r-2\) and \(\lambda_{1}-\lambda_{2}<8r-4-4\lfloor\frac{\lambda_{2}-\lambda_{3}}{2}\rfloor\). In this case, we observe that \(2(\lambda_{2}-\lambda_{3})+(\lambda_{1}-\lambda_{2})<8r-4\) if \(\lambda_{2}-\lambda_{3}\) is even, and \(2(\lambda_{2}-\lambda_{3})+(\lambda_{1}-\lambda_{2})<8r-2\) if \(\lambda_{2}-\lambda_{3}\) is odd. Therefore, we can conclude that \(\lambda_{3}\geq\lfloor\frac{(m-2)^{2}}{3}\rfloor\) under the given conditions. We further consider the following subcases: 1. If \(3\mid(m-2)^{2}\), then we can write \((m-2)^{2}=3k\) for some \(k\) even. 1. If \(\lambda\) has all parts even, then consider \(\tau=(\lambda_{1}-k,\lambda_{2}-k,\lambda_{3}-k)\). 2. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, and \(\lambda_{1}>\lambda_{2}\), consider \(\tau=(\lambda_{1}-k-1,\lambda_{2}-k-1,\lambda_{3}-k+2)\). 3. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, and \(\lambda_{1}=\lambda_{2}\), then we must have \(\lambda_{2}-\lambda_{3}\geq 5\) as otherwise \(m^{2}\) or \(m^{2}+1\) is a multiple of \(3\), which is impossible. Consider \(\tau=(\lambda_{1}-k-1,\lambda_{2}-k-1,\lambda_{3}-k+2)\). 4. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, even, odd, respectively, consider \(\tau=(\lambda_{1}-k-1,\lambda_{2}-k,\lambda_{3}-k+1)\). 5. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are even, odd, odd, respectively, and \(\lambda_{2}>\lambda_{3}\), consider \(\tau=(\lambda_{1}-k-2,\lambda_{2}-k+1,\lambda_{3}-k+1)\). 6. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are even, odd, odd, respectively, \(\lambda_{2}=\lambda_{3}\), and \(\lambda_{1}-\lambda_{2}\geq 5\), we consider \(\tau=(\lambda_{1}-k-2,\lambda_{2}-k+1,\lambda_{3}-k+1)\). (Note that \(\lambda_{1}-\lambda_{2}\neq 3\) as \(m\equiv 2\mod 3\). 2. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are even, odd, odd, respectively, \(\lambda_{2}=\lambda_{3}\), and \(\lambda_{1}-\lambda_{2}=1\), then by Lemma 4.14, we can prove the positivity of \(g\left(\yng(m-2)^{2}\equiv 1\bmod 3\), then we can write \((m-2)^{2}=3k+1\) for some odd integer \(k\). 1. If \(\lambda\) has all parts even, consider \(\tau=(\lambda_{1}-k-1,\lambda_{2}-k-1,\lambda_{3}-k+1)\). 2. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, \(\lambda_{1}-\lambda_{2}>2\) or \(\lambda_{2}-\lambda_{3}>1\), consider \(\tau=(\lambda_{1}-k-2,\lambda_{2}-k,\lambda_{3}-k+1)\). 3. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, \(\lambda_{1}=\lambda_{2}+2=\lambda_{3}+3\), then \(m\equiv 1\mod 3\). By Lemma 4.15, we can obtain the positivity of \(g\left(\yng(m,\yng(m,\yng(m^{2}+5,\frac{m^{2}-1}{3},\frac{m^{2}-1}{3},\frac{m^{2 }-4}{3})\right)\right)\). 4. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, \(\lambda_{1}=\lambda_{2}\) and \(\lambda_{2}-\lambda_{3}=3\), then \(3\mid m\). By Lemma 4.16, we can obtain the positivity of \(g\left(\yng(m,\yng(m,\yng(m^{2}+3,\frac{m^{2}+3}{3},\frac{m^{2}-3}{3},\frac{m^{ 2}-6}{3})\right)\right)\) by semigroup property. 5. 
If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, \(\lambda_{1}=\lambda_{2}\) and \(\lambda_{2}-\lambda_{3}=5\), then \(m\equiv 1\mod 3\). By Lemma 4.15, we can obtain the positivity of \(g\left(\yng(m,\yng(m,\yng(m^{2}+5,\frac{m^{2}+5}{3},\frac{m^{2}-1}{3})\right)\right)\). 6. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, odd, even, respectively, \(\lambda_{1}=\lambda_{2}\) and \(\lambda_{2}-\lambda_{3}\geq 7\), consider \(\tau=(\lambda_{1}-k-2,\lambda_{2}-k-2,\lambda_{3}-k+3)\). 7. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, even, odd, respectively, and \(\lambda_{2}-\lambda_{3}>3\) or \(\lambda_{1}-\lambda_{2}>1\), consider \(\tau=(\lambda_{1}-k-2,\lambda_{2}-k-1,\lambda_{3}-k+2)\). 2. If the parities of \(\lambda_{1},\lambda_{2},\lambda_{3}\) are odd, even, odd, respectively, and \(\lambda_{1}=\lambda_{2}+1=\lambda_{3}+4\), then \(m\equiv 1\mod 3\). By Lemma 4.15, we can obtain the positivity of \(g\left(\raisebox{-0.5pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics[ height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics[ height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics 
[height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{100.0pt}{\includegraphics [height=5pt]{100.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[ height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics[height=5pt]{10.0pt}{\includegraphics [height=5pt]{10.0pt}{\includegraphics [height=5pt]{10. * If \(\mu_{1}-\mu_{2}\in\{0,1\}\), consider the decomposition \(\bigm{\bigm{\bigm{\square}}}_{m}=\left(\bigm{\bigm{\square}}_{m-4}+_{V}(m-4)^{4} \right)+_{H}4^{m}\). 
If \((m-4)^{2}\geq 3\mu_{3}\), we can decompose \(\mu\) as \[\mu=\mu^{1}+_{H}(2m,2m)+_{H}(2m-8,2m-8),\] where \(\mu^{1}:=\left(\left\lceil\frac{(m-4)^{2}-\mu_{3}}{2}\right\rceil,\left\lfloor \frac{(m-4)^{2}-\mu_{3}}{2}\right\rfloor,\mu_{3}\right).\) Otherwise, we know that \(\mu_{3}\geq 16\), and we can decompose \(\mu\) as \[\mu=\mu^{2}+_{H}(2m-2,2m-2,4)+_{H}(2m-8,2m-8),\] where \(\mu^{2}:=\left(\left\lceil\frac{(m-4)^{2}-\mu_{3}}{2}\right\rceil+2,\left\lfloor \frac{(m-4)^{2}-\mu_{3}}{2}\right\rfloor+2,\mu_{3}-4\right)\). Then we have the following: * By inductive hypothesis, we have \(g\left(\bigm{\square}_{m-4},\bigm{\bigm{\square}}_{m-4},\mu^{1}\right)>0\) and \(g\left(\bigm{\square}_{m-4},\bigm{\bigm{\square}}_{m-4},\mu^{2}\right)>0\). * By semigroup property, Theorem 4.3 and the fact that \(g((3,3,3,3),(3,3,3,3),(6,6))>0\), we have \(g((m-4)^{4},(m-4)^{4},(2m-8,2m-8))>0\), \(g(4^{m},4^{m},(2m,2m))>0\), and \(g(4^{m},4^{m},(2m-2,2m-2,4))>0\). Hence, by semigroup property, we can conclude that \(g(\bigm{\square}_{m},\bigm{\bigm{\square}}_{m},\mu)>0\) when \(\mu_{1}-\mu_{2}\in\{0,1\}\). * If \(\mu_{1}-\mu_{2}\in\{2,3\}\), consider the decomposition \(\bigm{\square}_{m}=\left(\bigm{\square}_{m-4}+_{V}(m-4)^{4}\right)+_{H}4^{m}\). If \((m-4)^{2}\geq 3(\mu_{3}-2)\), we can decompose \(\mu\) as \[\mu=\mu^{1}+_{H}(2m,2m-2,2)+_{H}(2m-8,2m-8),\] where \(\mu^{1}:=\left(\left\lceil\frac{(m-4)^{2}-\mu_{3}}{2}\right\rceil+1,\left\lfloor \frac{(m-4)^{2}-\mu_{3}}{2}\right\rfloor+1,\mu_{3}-2\right)\). Otherwise, it implies that \(\mu_{3}\geq 18\) and we can decompose \(\mu\) as \[\mu=\mu^{2}+_{H}(2m,2m-2,2)+_{H}(2m-10,2m-10,4),\] where \(\mu^{2}:=\left(\left\lceil\frac{(m-4)^{2}-\mu_{3}}{2}\right\rceil+3,\left\lfloor \frac{(m-4)^{2}-\mu_{3}}{2}\right\rfloor+3,\mu_{3}-6\right).\) Then we have the following: * By inductive hypothesis and Theorem 4.2, we have \(g\left(\bigm{\square}_{m-4},\bigm{\square}_{m-4},\mu^{1}\right)>0\) and \(g\left(\bigm{\square}_{m-4},\bigm{\square}_{m-4},\mu^{2}\right)>0\). * By semigroup property and Theorem 4.3, we have \(g((m-4)^{4},(m-4)^{4},(2m-8,2m-8))>0\), \(g((m-4)^{4},(m-4)^{4},(2m-10,2m-10,4))>0\), and \(g(4^{m},4^{m},(2m,2m-2,2))>0\). Hence by semigroup property, we can conclude that \(g(\bigm{\square}_{m},\bigm{\square}_{m},\mu)>0\) when \(\mu_{1}-\mu_{2}\in\{2,3\}\). * If \(a:=\mu_{1}-\mu_{2}\geq 4\), consider the decomposition \(\bigm{\square}_{m}=\left(\bigm{\square}_{m-2}+_{V}(m-2,m-2)\right)+_{H}2^{m}\) and we will decompose \(\mu\) as \[\mu=\left(\left\lceil\frac{(m-2)^{2}-\mu_{3}}{2}\right\rceil+\delta(a), \left\lfloor\frac{(m-2)^{2}-\mu_{3}}{2}\right\rfloor-\delta(a),\mu_{3}\right)\] \[\quad+_{H}(m+1+2x,m-1-2x)\] \[\quad+_{H}(m-1+2y,m-3-2y),\] where \(\delta(a):=\begin{cases}0&\text{if $a\equiv 0$ or $1\mod 4$}\\ 1&\text{if $a\equiv 2$ or $3\mod 4$}\end{cases}\) and \(x,y\) are non-negative integers such that \(4(x+y+1)=4\left\lfloor\frac{a}{4}\right\rfloor\). Then we have the following: * By inductive hypothesis, \[g\left(\bigm{\square}_{m-2},\bigm{\square}_{m-2},\left(\left\lceil\frac{(m-2)^{2} -\mu_{3}}{2}\right\rceil+\delta(a),\left\lfloor\frac{(m-2)^{2}-\mu_{3}}{2} \right\rfloor-\delta(a),\mu_{3}\right)\right)>0.\] * By Theorem 4.3, we have \(g((2m-2,2m-2),(2m-2,2m-2),(m-1+2y,m-3-2y))>0\) and \(g(2^{m},2^{m},(m+1+2x,m-1-2x))>0\). By semigroup property, we can conclude that \(g(\bigm{\square}_{m},\bigm{\square}_{m},\mu)>0\) when \(\mu_{1}-\mu_{2}\geq 4\). 
Hence, by induction, for any odd integer \(m\geq 5\), we have \(g(\bigsqcup_{m},\bigsqcup_{m},\mu)>0\) for any three-row partition \(\mu\vdash m^{2}\) with \(2\leq\mu_{3}\leq 2m-2\). We now have all the ingredients to prove our main theorem. proof of Theorem 4.11.: With the help of computer and semigroup property, we check that for \(m\in\{7,9,11,13,15,17\},g(\bigsqcup_{m},\bigsqcup_{m},\mu)>0\) for any three-row partition \(\mu\vdash m^{2}\) with \(2\leq\mu_{3}\leq 2m-2\) and \(\mu_{1}-\mu_{2}\leq 2m-2\), as shown in the appendix. The result then follows from Proposition 4.17, Proposition 4.18 and Proposition 4.19. ## 5. Constituency of near-hooks \((m^{2}-k-i,i,1^{k})\) In this section, we will discuss sufficient conditions for near-hooks to be constituents in tensor squares of square partitions In their work [17], Ikenmeyer and Panova employed induction and the semigroup property to demonstrate the constituency of near-hooks with a second row of at most \(6\) in the tensor square of a rectangle with large side lengths. **Theorem 5.1** ([17] Corollary 4.6).: _Fix \(w\geq h\geq 7\). We have that \(g(\lambda,h\times w,h\times w)>0\) for all \(\lambda=(hw-j-|\rho|,1^{j}+_{H}\rho)\) with \(\rho\neq\emptyset\) and \(|\rho|\leq 6\) for all \(j\in[1,h^{2}-R_{\rho}]\) where \(R_{\rho}=|\rho|+\rho_{1}+1\), except in the following cases: \(\lambda\in\{(hw-3,2,1),(hw-h^{2}+3,2,1^{h^{2}-5}),(hw-4,3,1),(hw-h^{2}+3,2,2,1 ^{h^{2}-7})\}\)._ The positivity of certain classes of near-hooks can be directly derived from Theorem 5.1. **Corollary 5.1.1**.: _Let \(m\geq 7\). For all \(\mu_{i}(k,m)=(m^{2}-k-i,i,1^{k})\) with \(i\in[2,7]\) and \(k\in[\), \(g(\bigsqcup_{m},\bigsqcup_{m},\mu_{i}(k,m))>0\) except in the following cases: (1) \(i=2\) with \(k=1\) or \(k=m^{2}-5\), (2) \(i=3\) and \(k=1\)._ We present an alternative proof approach for finding sufficient conditions for two classes of near-hooks to be constituents in tensor squares of square partitions using a tool aimed specifically at the Saxl conjecture was developed in [10] as follows: **Theorem 5.2** ([10] Main Lemma).: _Let \(\mu=\mu^{\prime}\) be a self-conjugate partition of \(n\), and let \(\nu=(2\mu_{1}-1,2\mu_{2}-3,2\mu_{3}-5,\dots)\vdash n\) be the partition whose parts are lengths of the principal hooks of \(\mu\). Suppose \(\chi^{\lambda}[\nu]\neq 0\) for some \(\lambda\vdash n\). Then \(\chi^{\lambda}\) is a constituent of \(\chi^{\mu}\bigotimes\chi^{\mu}\)._ Let \(\mu_{i}(k,m):=(m^{2}-k-i,i,1^{k})\) and \(\alpha_{m}=(2m-1,2m-3,\dots,1).\) By Theorem 5.2, that \(|\chi^{\mu(k,m)}(\alpha_{m})|\neq 0\) would imply \(g(\bigsqcup_{m},\bigsqcup_{m},\mu_{i}(k,m))>0\). In particular, we will discuss the number of rim-hook tableaux of shape \(\mu_{2}(k,m)=(m^{2}-k-2,2,1^{k})\) and weight \(\alpha_{m}\). To use the Murnaghan-Nakayama rule to compute the characters, we consider the construction of an arbitrary rim-hook tableau of shape \(\mu_{2}(k,m)\) and weight \((1,3,\dots,2m-1).\) Observe that the \(1-\)hook can only be placed at the upper left corner, and there are three ways to place the \(3-\)hook, as illustrated in the following diagrams: Let \(P_{R(m)}(k)\) denote the number of partitions of \(k\) whose parts are distinct odd integers from the set \[R(m)=\{5,7,\dots,2m-1\}.\] Observing the diagrams above, we can deduce that the height of any rim-hook tableau with the shape \(\mu_{2}(k,m)\) and weight \((1,3,\ldots,2m-1)\) is always an odd number. 
From left to right, the quantities of rim-hook tableaux corresponding to the three diagrams are \(P_{R(m)}(k)\), \(P_{R(m)}(k+2)\), and \(P_{R(m)}(k-2)\), respectively. Thus, by the Murnaghan-Nakayama rule, we have
\[\chi^{(n-k-2,2,1^{k})}(2m-1,2m-3,\ldots,3,1)=-P_{R(m)}(k)-P_{R(m)}(k+2)-P_{R(m)}(k-2).\]
Thus, \(g(\lambda_{m},\lambda_{m},(n-k-2,2,1^{k}))>0\) if \(P_{R(m)}(k)+P_{R(m)}(k+2)+P_{R(m)}(k-2)>0\), which is equivalent to
\[\max\{P_{R(m)}(k),P_{R(m)}(k+2),P_{R(m)}(k-2)\}>0.\]

**Lemma 5.3**.: _Let \(m\geq 8\) be fixed, \(0\leq k\leq m^{2}-4\) and let_
\[NK_{2}(m)=\{1,2,4,6,8,m^{2}-12,m^{2}-10,m^{2}-8,m^{2}-6,m^{2}-5\}.\]
_Then \(P_{R(m)}(k)+P_{R(m)}(k+2)+P_{R(m)}(k-2)>0\) if and only if \(k\notin NK_{2}(m)\)._

Proof.: By directly checking the values for \(k\in\{1,2,4,6,8\}\), we find that
\[P_{R(m)}(k)+P_{R(m)}(k+2)+P_{R(m)}(k-2)=0\]
holds true. Note that the sum of the elements in \(R(m)\) is \(m^{2}-4\) and therefore \(P_{R(m)}(k)=P_{R(m)}(m^{2}-4-k)\). It follows that if \(k\in\{m^{2}-12,m^{2}-10,m^{2}-8,m^{2}-6,m^{2}-5\}\), then \(P_{R(m)}(k)+P_{R(m)}(k+2)+P_{R(m)}(k-2)=0\). We shall prove the other direction by induction on \(m\). We can check that the statement is true for \(m=8\). Now, assuming that the statement is true for \(m\geq 8\), we will show that it holds true for \(m+1\). Due to the symmetry of \(P_{R(m+1)}(k)\), it suffices to demonstrate that \(P_{R(m+1)}(k)+P_{R(m+1)}(k+2)+P_{R(m+1)}(k-2)>0\) for any \(k\in[\lceil\frac{(m+1)^{2}-4}{2}\rceil]\setminus\{1,2,4,6,8\}\). Since \(R(m)\subset R(m+1)\) by construction, we can assert that
\[P_{R(m+1)}(k)+P_{R(m+1)}(k+2)+P_{R(m+1)}(k-2)\geq P_{R(m)}(k)+P_{R(m)}(k+2)+P_{R(m)}(k-2)>0\]
for any \(k\in[m^{2}-4]\setminus NK_{2}(m)\) by the inductive hypothesis. It is easy to see that \(\lceil\frac{(m+1)^{2}-4}{2}\rceil<m^{2}-12\) when \(m\geq 6\). Then by the inductive hypothesis, it follows that \(P_{R(m+1)}(k)+P_{R(m+1)}(k+2)+P_{R(m+1)}(k-2)>0\) for any \(k\notin NK_{2}(m+1)\), which completes the induction.

**Theorem 5.4**.: _Let \(m\geq 8\) be fixed and \(0\leq k\leq m^{2}-4\). Then \(g(\square_{m},\square_{m},\mu_{2}(k,m))>0\) if and only if \(k\notin\{1,m^{2}-5\}\)._

Proof.: (\(\Rightarrow\)) If \(k=1\), by Corollary 3.6.1, \(g(\square_{m},\square_{m},(m^{2}-3,2,1))=0\). If \(k=m^{2}-5\), by the transposition property, \(g(\square_{m},\square_{m},(m^{2}-k-2,2,1^{k}))=0\).

(\(\Leftarrow\)) If \(k\notin NK_{2}(m)\), the result follows from Theorem 5.2, the Murnaghan-Nakayama rule, and Lemma 5.3. If \(k=2,4,6,8\), we consider the decompositions \((m^{2}-4,2,1^{2})=(21,2,1,1)+_{H}(m^{2}-25)\), \((m^{2}-6,2,1^{4})=(19,2,1^{4})+_{H}(m^{2}-25)\), \((m^{2}-8,2,1^{6})=(17,2,1^{6})+_{H}(m^{2}-25)\), \((m^{2}-10,2,1^{8})=(15,2,1^{8})+_{H}(m^{2}-25)\), respectively. Then by the semigroup property, it follows that \(g(\square_{m},\square_{m},(m^{2}-k-2,2,1^{k}))>0\) for \(k\in\{2,4,6,8\}\).
By the transposition property of Kronecker coefficients, \(g(\raisebox{-0.5pt}{\includegraphics[height=5.690551pt]{figs/2.eps}}_{m}, \raisebox{-0.5pt}{\includegraphics[height=5.690551pt]{figs/2.eps}}_{m},(m^{2}-k,2,1^{k}))>0\) for \(k\in\{m^{2}-6,m^{2}-8,m^{2}-10,m^{2}-12\}.\) Similarly, there are only two ways to place the \(1\)-hook and \(3\)-hook into a rim-hook tableau of shape \(\mu_{3}(k,m)\) and weight \(\alpha_{m},\) as illustrated below. Therefore, we have \[\chi^{(n-k-3,3,1^{k})}(2m-1,2m-3,\ldots,3,1)=P_{R(m)}(k)+P_{R(m)}(k+3).\] It follows that \(g(\bigsqcup_{m},\bigsqcup_{m},(n-k-3,3,1^{k}))>0\) if \(P_{R(m)}(k)+P_{R(m)}(k+3)>0.\) **Lemma 5.5**.: _Let \(m\geq 5\) be fixed, \(0\leq k\leq m^{2}-7\) and let_ \[NK_{3}(m):=\{1,3,m^{2}-10,m^{2}-8\}.\] _Then \(P_{R(m)}(k)+P_{R(m)}(k+3)>0\) if and only if \(k\notin NK_{3}(m)\)._ Proof.: By directly checking the values for \(k\in\{1,3\}\), we find that \(P_{R(m)}(k)+P_{R(m)}(k+3)=0\) holds true. Note the sum of elements in \(R(m)\) is \(m^{2}-4\) and therefore \(P_{R(m)}(k)=P_{R(m)}(m^{2}-4-k)\). It follows that if \(k\in\{m^{2}-8,m^{2}-10\}\), then \(P_{R(m)}(k)+P_{R(m)}(k+3)=0\). We shall prove the other direction by induction on \(m\). It is easy to check that the statement is true for \(m=7.\) Now, assuming that the statement is true for \(m\geq 7\), we will show that it is also true for \(m+1.\) Due to the symmetry of \(P_{R(m+1)}(k)\), it suffices to demonstrate that \(P_{R(m+1)}(k)+P_{R(m+1)}(k+3)>0\) for any \(k\in[\lfloor\frac{(m+1)^{2}-7}{2}\rfloor]\setminus\{1,3\}\). Since \(R(m)\subset R(m+1)\) by construction, we can assert that \[P_{R(m+1)}(k)+P_{R(m+1)}(k+3)\geq P_{R(m)}(k)+P_{R(m)}(k+3)>0\] for any \(k\in[m^{2}-4]\setminus NK_{3}(m)\) by inductive hypothesis. We can verify that \(\lfloor\frac{(m+1)^{2}-7}{2}\rfloor<m^{2}-10\) when \(m\geq 5\). Then by the inductive hypothesis, we conclude that \(P_{R(m)}(k)+P_{R(m)}(k+3)>0\) for any \(k\notin NK_{3}(m+1)\), which completes the induction. **Theorem 5.6**.: _Let \(m\geq 7\) be fixed and \(0\leq k\leq m^{2}-6\). Then, \(g(\bigsqcup_{m},\bigsqcup_{m},\mu_{3}(k,m))>0\) if and only if \(k\neq 1\)._ Proof.: (\(\Rightarrow\)) If \(k=1\), by Corollary 3.6.1, \(g(\bigsqcup_{m},\bigsqcup_{m},(m^{2}-k-3,3,1^{k}))=0\). (\(\Leftarrow\)) Now assume that \(k\in\{0,2,3,\ldots,m^{2}-6\}\). If \(k=3\), we can decompose the partition \((m^{2}-6,3,1^{3})\) as \((3,3,1,1,1)+_{H}(\underline{m}^{2}-9)\). Since \(g((3,3,3),(3,3,3),(3,3,1,1,1))>0\), by semigroup property, it follows that \(g(\bigsqcup_{m},\bigsqcup_{m},(m^{2}-6,3,1^{3}))>0.\) If \(k=m^{2}-10\), we can decompose the partition \((7,3,1^{k})\) as \((7,3,1^{6})+_{V}(1^{(m^{2}-16)})\). Since \(g((4,4,4,4),(4,4,4),(7,3,1^{6}))>0\), by semigroup property,it follows that \(g(\bigsqcup_{m},\bigsqcup_{m},(7,3,1^{k}))>0.\) If \(k=m^{2}-8\), we can decompose the partition \((5,3,1^{k})\) as \((5,3,1^{8})+_{V}(1^{(m^{2}-16)})\). Since \(g((4,4,4,4),(4,4,4),(5,3,1^{8}))>0\), by semigroup property, it follows that \(g(\bigsqcup_{m},\bigsqcup_{m},(5,3,1^{k}))>0\). If \(k=m^{2}-6\), \(g(\bigsqcup_{m},\bigsqcup_{m},(3,3,1^{k}))=g(\bigsqcup_{m},\bigsqcup_{m},(m^{ 2}-4,2,2))>0\) by Theorem 4.11. If \(k<m^{2}-6\) and \(k\notin NK_{3}(m)\), the result follows from Theorem 5.2, Murnaghan-Nakayama rule, and Lemma 5.5. Next, we will discuss the constituency of near-hooks with a second row of length of at least \(8\). 
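The quantities \(P_{R(m)}(k)\) used in Lemmas 5.3 and 5.5 are ordinary partition counts with distinct parts drawn from \(R(m)=\{5,7,\dots,2m-1\}\) and are easy to tabulate. The following Python snippet is a minimal sketch of such a tabulation (our own illustration, not the original computation); in particular, the convention at \(k=0\) (whether the empty partition is admitted) should be matched to the rim-hook setup above, and the function names are ours.

```python
def P_R(m, k):
    """Number of partitions of k into distinct parts from R(m) = {5, 7, ..., 2m-1}."""
    if k < 0:
        return 0
    ways = [0] * (k + 1)
    ways[0] = 1  # empty partition; adjust this convention to match the rim-hook setup
    for part in range(5, 2 * m, 2):      # each odd part 5, 7, ..., 2m-1 used at most once
        for s in range(k, part - 1, -1):
            ways[s] += ways[s - part]
    return ways[k]

def mu2_count(m, k):
    """P(k) + P(k+2) + P(k-2): the combination entering the character for mu_2(k, m)."""
    return P_R(m, k) + P_R(m, k + 2) + P_R(m, k - 2)

def mu3_count(m, k):
    """P(k) + P(k+3): the combination entering the character for mu_3(k, m)."""
    return P_R(m, k) + P_R(m, k + 3)

if __name__ == "__main__":
    m = 8
    for k in range(0, m * m - 3):
        print(k, mu2_count(m, k), mu3_count(m, k))
```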
**Proposition 5.7**.: _For every \(i\geq 8\), we have \(g(\bigsqcup_{m},\bigsqcup_{m},\mu_{i}(k,m))>0\) for all \(m\geq 20\) and \(k\geq 0\)._ Proof.: Let \(i\geq 8\) be fixed. Suppose that \(m\geq 20\). If \(k\geq 7m+9-i\), we can decompose the transpose of partition \(\mu_{i}(k,m)\), that is \((k+2,2^{i-1},1^{m^{2}-2i-k})\) as \((k+2,2^{i-1},1^{m^{2}-2i-k})=(k_{1},1^{i-1})+_{H}(k+2-k_{1},1^{m^{2}-i-k-1})\) where \(k_{1}=7m-i+1\). Since \(k\geq 7m+9-i\), we have \(k+2-k_{1}\geq 10\). Then by Theorem 3.7, we have \(g(7^{m},7^{m},(k_{1},1^{i-1}))>0\) and \(g((m-7)^{m},(m-7)^{m},(k+2-k_{1},1^{m^{2}-i-k-1}))>0\). We can use the Semigroup property to add the partition triples, which implies that \(g(\bigsqcup_{m},\bigsqcup_{m},(k+2,2^{i-1},1^{m^{2}-2i-k}))>0\). Then by the transposition property, we have \(g(\bigsqcup_{m},\bigsqcup_{m},\mu_{i}(k,m))>0\). If \(k\leq 7m+8-i\), \(2i+k\leq 2(7m+8)\leq 15m\), we consider the decomposition \(\bigsqcup_{m}=\bigsqcup_{m_{1}}+_{V}(m_{1}^{m-m_{1}})+_{H}((m-m_{1})^{m})\), where \(m_{1}=\bigsqcap\sqrt{k+8}\bigrb\rfloor\). Since \(m_{1}\leq\bigsqcap\sqrt{k+8}\bigrb\rfloor\) and \(m\geq 16\), we have \(m_{1}^{2}-(k+8)\leq k+8\leq m^{2}-k-2i\), which implies that \(m^{2}-m_{1}^{2}-i+4\geq i-4\). Moreover, since \(k\leq 7m\) and \(m\geq 20\), we have \(m_{1}\leq m-8\). We will show that there exists a decomposition \(\mu_{i}(k,m)=\mu_{4}(k,m_{1})+_{H}(a+d_{1},a)+_{H}(b+d_{2},b)\) such that \((a+d_{1},a)\vdash m_{1}(m-m_{1})\), \((b+d_{2},b)\vdash m(m-m_{1})\) and \(a+b=i-4\). We consider the following two cases: 1. If \(m\) is odd, then \(m(m-m_{1})\) and \(m^{2}-m_{1}^{2}-2(i-4)\) always have the same parity. If \(m^{2}-m_{1}^{2}-2(i-4)=m(m-m_{1})-2\), let \(d_{2}=m(m-m_{1})-4\) and it follows that \(b=2\); otherwise, let \(d_{2}=\min(m(m-m_{1}),m^{2}-m_{1}^{2}-2(i-4))\). It is easy to check that \(a\neq 1\) and \(b\neq 1\) in this case. 2. If \(m\) is even, then \(m_{1}(m-m_{1})\) and \(m^{2}-m_{1}^{2}-2(i-4)\) always have the same parity. If \(m^{2}-m_{1}^{2}-2(i-4)=m_{1}(m-m_{1})-2\), let \(d_{1}=m_{1}(m-m_{1})-4\) and it follows that \(a=2\); otherwise, let \(d_{1}=\min(m_{1}(m-m_{1}),m^{2}-m_{1}^{2}-2(i-4))\). It is easy to check that \(a\neq 1\) and \(b\neq 1\) in this case. 
By Corollary 5.1.1, positivity holds for the pieces of this decomposition, and the semigroup property then gives \(g(\bigsqcup_{m},\bigsqcup_{m},\mu_{i}(k,m))>0\) in this case as well, which completes the proof.

## Appendix A Missing partitions in tensor square of square with a small side length

With the help of computer, we find all partitions \(\lambda\vdash m^{2}\) such that \(g(\bigsqcup_{m},\bigsqcup_{m},\lambda)=0\) for \(m=4,5,6,7\):
* \(g(\bigsqcup_{4},\bigsqcup_{4},\lambda)=0\) if and only if \(\lambda\) or \(\lambda^{\prime}\in\{(15,1),(14,1,1),(13,2,1),(12,3,1),(12,1,1,1,1),(11,5),(10,1,1,1,1,1),(9,7),(8,7,1),(8,2,1,1,1,1,1),(7,7,2),(7,5,4)\}\);
* \(g(\bigsqcup_{5},\bigsqcup_{5},\lambda)=0\) if and only if \(\lambda\) or \(\lambda^{\prime}\in\{(24,1),(23,1,1),(22,2,1),(21,3,1),(21,1,1,1,1),(19,1,1,1,1,1,1),(14,1,1,1,1,1,1,1,1,1)\}\);
* \(g(\bigsqcup_{6},\bigsqcup_{6},\lambda)=0\) if and only if \(\lambda\) or \(\lambda^{\prime}\in\{(35,1),(34,1,1),(33,2,1),(32,3,1),(32,1,1,1,1),(30,1,1,1,1,1),(23,1^{13}),(19,17)\}\);
* \(g(\bigsqcup_{7},\bigsqcup_{7},\lambda)=0\) if and only if \(\lambda\) or \(\lambda^{\prime}\in\{(48,1),(47,1,1),(46,2,1),(45,3,1),(45,1,1,1,\)
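The appendix entries above were obtained by computer. As an illustration of how such a search can be organized, the following self-contained Python sketch (ours, not the authors' code) computes symmetric group characters by the Murnaghan-Nakayama rule in beta-set form and assembles Kronecker coefficients from \(g(\lambda,\mu,\nu)=\frac{1}{n!}\sum_{\rho\vdash n}|C_{\rho}|\,\chi^{\lambda}(\rho)\chi^{\mu}(\rho)\chi^{\nu}(\rho)\); in pure Python it is only practical for the smallest case \(m=4\).

```python
from functools import lru_cache
from math import factorial

def partitions(n, largest=None):
    """All partitions of n as non-increasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def z_size(rho):
    """Centralizer order z_rho = prod_i i^{m_i} m_i!, so that |C_rho| = n!/z_rho."""
    z = 1
    for part in set(rho):
        mult = rho.count(part)
        z *= part ** mult * factorial(mult)
    return z

@lru_cache(maxsize=None)
def chi(lam, rho):
    """Character chi^lam(rho) via the Murnaghan-Nakayama rule using beta-numbers."""
    if not rho:
        return 1 if not lam else 0
    r, rest = rho[0], rho[1:]
    k = len(lam)
    beta = [lam[i] + (k - 1 - i) for i in range(k)]   # distinct beta-numbers of lam
    bset = set(beta)
    total = 0
    for b in beta:                                    # removing a rim hook of size r
        if b - r >= 0 and (b - r) not in bset:
            height = sum(1 for c in beta if b - r < c < b)
            new_beta = sorted((bset - {b}) | {b - r}, reverse=True)
            new_lam = tuple(nb - (k - 1 - i) for i, nb in enumerate(new_beta))
            new_lam = tuple(x for x in new_lam if x > 0)
            total += (-1) ** height * chi(new_lam, rest)
    return total

def kronecker(lam, mu, nu):
    """g(lam, mu, nu) = (1/n!) * sum over classes of |C_rho| * chi * chi * chi (slow but exact)."""
    n = sum(lam)
    nfact = factorial(n)
    total = sum((nfact // z_size(rho)) * chi(lam, rho) * chi(mu, rho) * chi(nu, rho)
                for rho in partitions(n))
    return total // nfact

if __name__ == "__main__":
    sq4 = (4, 4, 4, 4)                   # the square partition for m = 4, n = 16
    print(kronecker(sq4, sq4, (16,)))    # multiplicity of the trivial constituent, expected 1
    print(kronecker(sq4, sq4, (15, 1)))  # first entry of the m = 4 list above, expected 0
```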
2304.11695
Hankel determinant for a general subclass of m-fold symmetric bi-univalent functions defined by Ruscheweyh operator
Making use of the Hankel determinant and the Ruscheweyh derivative, in this work, we consider a general subclass of m-fold symmetric normalized bi-univalent functions defined in the open unit disk. Moreover, we investigate the bounds for the second Hankel determinant of this class and some consequences of the results are presented. In addition, to demonstrate the accuracy on some functions and conditions, most general programs are written in Python V.3.8.8 (2021).
Pishtiwan Othman Sabir, Ravi P. Agarwal, Shabaz Jalil MohammedFaeq, Pshtiwan Othman Mohammed, Nejmeddine Chorfi, Thabet Abdeljawad
2023-04-23T16:20:47Z
http://arxiv.org/abs/2304.11695v2
Hankel determinant for a general subclass of \(m\)-fold symmetric bi-univalent functions defined by Ruscheweyh operator

###### Abstract.

Making use of the Hankel determinant and the Ruscheweyh derivative, in this work, we consider a general subclass of \(m\)-fold symmetric normalized bi-univalent functions defined in the open unit disk. Moreover, we investigate the bounds for the second Hankel determinant of this class, and some interesting applications of the results pointed out are discussed.

Key words and phrases: Analytic and univalent functions, \(m\)-fold symmetric univalent functions, \(m\)-fold symmetric bi-univalent functions, Ruscheweyh derivative, Hankel determinant

2010 Mathematics Subject Classification: 30C45, 30C50

## 1. Introduction

Let \(\mathcal{A}\) denote the class of analytic functions \(f\) in the open unit disk \(\mathbb{U}=\{z\in\mathbb{C}:|z|<1\}\), normalized by the conditions \(f(0)=f^{\prime}(0)-1=0\) and having the Taylor-Maclaurin series expansion \[f(z)=z+\sum_{k=2}^{\infty}a_{k}z^{k}. \tag{1}\] Further, assume that \(\mathcal{S}\) denotes the subclass of \(\mathcal{A}\) which contains all univalent functions in \(\mathbb{U}\) satisfying (1), and that \(\mathcal{P}\) represents the class of all functions \(h(z)\) of the form \[h(z)=1+h_{1}z+h_{2}z^{2}+h_{3}z^{3}+\cdots, \tag{2}\] which are analytic in the open unit disk \(\mathbb{U}\) and satisfy \(\mathrm{Re}(h(z))>0\), \(z\in\mathbb{U}\). For a function \(f\in\mathcal{A}\) defined by (1), the Ruscheweyh derivative operator [22] is defined by \[\mathcal{R}^{\gamma}f(z)=z+\sum_{k=2}^{\infty}\frac{\Gamma(\gamma+k)}{\Gamma(k)\Gamma(\gamma+1)}a_{k}z^{k}\quad\left(\gamma\in\mathbb{N}_{0}=\{0,1,2,\ldots\},\,z\in\mathbb{U}\right).\] By the Koebe one-quarter theorem, every univalent function \(f\in\mathcal{S}\) has an inverse \(f^{-1}\) satisfying \(f^{-1}(f(z))=z\) \((z\in\mathbb{U})\) and \(f(f^{-1}(w))=w\) \((|w|<r_{0}(f),\,r_{0}(f)\geq 1/4)\), where \[g(w)=f^{-1}(w)=w-a_{2}w^{2}+\left(2a_{2}^{2}-a_{3}\right)w^{3}-\left(5a_{2}^{3}-5a_{2}a_{3}+a_{4}\right)w^{4}+\cdots. \tag{3}\] A function \(f\in\mathcal{A}\) is said to be bi-univalent in \(\mathbb{U}\) if both \(f\) and \(f^{-1}\) are univalent in \(\mathbb{U}\); the class of such functions is denoted by \(\Sigma\). Many authors have introduced and studied subclasses of the bi-univalent functions class \(\Sigma\), obtaining non-sharp bounds on the Taylor-Maclaurin coefficients \(\left|a_{2}\right|\) and \(\left|a_{3}\right|\) (see, for example, [16, 17, 27]). However, the coefficient estimate bounds of \(\left|a_{n}\right|\) \((n\in\left\{4,5,6,\ldots\right\})\) for a function \(f\in\Sigma\) defined by (1) are still an open problem. In fact, there is no natural way to obtain upper bounds for coefficients greater than three. In exceptional cases, there are some articles in which the Faber polynomial techniques were used for finding upper bounds for higher-order coefficients (see, for example, [3, 5]). The Hankel determinant, whose entries are coefficients of functions in the subclasses of \(\mathcal{S}\), is a valuable tool in studying univalent functions and has attracted much attention.
The Hankel determinants \(H_{q}(n)\)\((n,q\in\mathbb{N})\) of the function \(f\) are defined by (see [20]) \[H_{q}(n)=\left|\begin{array}{cccc}a_{n}&a_{n+1}&\cdots&a_{n+q-1}\\ a_{n+1}&a_{n+2}&\cdots&a_{n+q}\\ \vdots&\vdots&&\vdots\\ a_{n+q-1}&a_{n+q}&\cdots&a_{n}+2q-2\end{array}\right|\quad\left(a_{1}=1\right).\] It has been clear that \[H_{2}(1)=\left|\begin{array}{cc}a_{1}&a_{2}\\ a_{2}&a_{3}\end{array}\right|,\] and \[H_{2}(2)=\left|\begin{array}{cc}a_{2}&a_{3}\\ a_{3}&a_{4}\end{array}\right|.\] The estimating for the upper bounds of \(\left|H_{2}(1)\right|=\left|a_{3}-a_{2}^{2}\right|\) and \(\left|H_{2}(2)\right|=\left|a_{2}a_{4}-a_{3}^{2}\right|\) are called Fekete-Szego and second Hankel determinant problems, respectively. Additionally, Fekete and Szego [11] proposed the summarized functional \(a_{3}-\mu a_{2}^{2}\), which \(\mu\) is some real number. Lee et al. [14] presented a concise overview of the Hankel determinants for analytic univalent functions and obtained bounds for \(\mathrm{H}_{2}(2)\) for functions belonging to some classes defined by subordination. The estimation of \(\left|\mathrm{H}_{2}(2)\right|\) has been the focus of recent Hankel determinant papers (see, for example, [4, 9, 21, 23]). For each function \(f\in\mathcal{S}\), the function \[h(z)=\left(f\left(z^{m}\right)\right)^{\frac{1}{m}},\quad\left(z\in\mathbb{U},m\in \mathbb{N}\right) \tag{4}\] is univalent and maps the unit disk into a region with \(m\)-fold symmetry. A function \(f\) is said to be \(m\)-fold symmetric (see [13]) and indicated by \(\mathcal{A}_{m}\), if it has the following normalized form: \[f(z)=z+\sum_{k=1}^{\infty}a_{mk+1}z^{mk+1},\quad(z\in\mathbb{U},m\in\mathbb{N}). \tag{5}\] We denote by \(\mathcal{S}_{m}\) the class of \(m\)-fold symmetric univalent functions in \(\mathbb{U}\), which are normalized by the series expansion (5). In fact, the functions in class \(\mathcal{S}\) are 1-fold symmetric. In view of the work of Koepf [13] the \(m\)-fold symmetric function \(h\in\mathcal{P}\) is of the form \[h(z)=1+h_{m}z^{m}+h_{2m}z^{2m}+h_{3m}z^{3m}+\cdots. \tag{6}\] Analogous to the concept of \(m\)-fold symmetric univalent functions, Srivastava et al. [25] defined the concept of \(m\) fold symmetric bi-univalent function in a direct way. Each function \(f\in\Sigma\) generates an \(m\)-fold symmetric bi-univalent function for each \(m\in\mathbb{N}\). The normalized form of \(f\) is given as (5) and the extension \(g=f^{-1}\) is granted as follows: \[\begin{split}& g(w)=w-a_{m+1}w^{m+1}+\left[(m+1)a_{m+1}^{2}-a_{2m+ 1}\right]w^{2m+1}\\ &-\left[\frac{1}{2}(m+1)(3m+2)a_{m+1}^{3}-(3m+2)a_{m+1}a_{2m+1}+ a_{3m+1}\right]w^{3m+1}+\cdots.\end{split} \tag{7}\] We denote by \(\Sigma_{\rm m}\) the class of \(m\)-fold symmetric bi-univalent functions in \(\mathbb{U}\). For \(m=1\), the series (7) coincides with the series (3) of the class \(\Sigma\). Some examples of \(m\)-fold symmetric bi-univalent functions are given as follows: \[\left[\frac{z^{m}}{1-z^{m}}\right]^{\frac{1}{m}},\quad\left[-\log\left(1-z^{m }\right)\right]^{\frac{1}{m}}\quad\text{ and }\quad\left[\frac{1}{2}\log\left(\frac{1+z^{m}}{1-z^{m}}\right)\right]^{ \frac{1}{m}},\] with the corresponding inverse functions \[\left(\frac{w^{m}}{1+w^{m}}\right)^{\frac{1}{m}},\quad\left(\frac{e^{w^{m}}-1}{e^{ w^{m}}}\right)^{\frac{1}{m}}\quad\text{ and }\quad\left(\frac{e^{2w^{m}}-1}{e^{2w^{m}}+1}\right)^{\frac{1}{m}},\] respectively. 
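The coefficients of the inverse map in (7) can be confirmed by direct series reversion. The following sympy sketch (our own illustration) does this for the concrete choice \(m=2\), solving \(f(g(w))=w\) order by order; the symbol names are ours.

```python
import sympy as sp

m = 2                                   # concrete fold-symmetry order for the check
w = sp.symbols('w')
a1, a2, a3 = sp.symbols('a1 a2 a3')     # stand-ins for a_{m+1}, a_{2m+1}, a_{3m+1}
b1, b2, b3 = sp.symbols('b1 b2 b3')     # unknown coefficients of the inverse map

# f(z) = z + a_{m+1} z^{m+1} + a_{2m+1} z^{2m+1} + a_{3m+1} z^{3m+1} + ...
f = lambda s: s + a1 * s**(m + 1) + a2 * s**(2 * m + 1) + a3 * s**(3 * m + 1)
# ansatz g(w) = w + b1 w^{m+1} + b2 w^{2m+1} + b3 w^{3m+1} + ...
g = w + b1 * w**(m + 1) + b2 * w**(2 * m + 1) + b3 * w**(3 * m + 1)

expr = sp.expand(f(g)) - w              # impose f(g(w)) = w order by order
sol = sp.solve([expr.coeff(w, m + 1), expr.coeff(w, 2 * m + 1), expr.coeff(w, 3 * m + 1)],
               [b1, b2, b3], dict=True)[0]

# compare with the coefficients displayed in (7); each difference should simplify to 0
print(sp.simplify(sol[b1] - (-a1)))
print(sp.simplify(sol[b2] - ((m + 1) * a1**2 - a2)))
print(sp.simplify(sol[b3] - (-(sp.Rational(1, 2) * (m + 1) * (3 * m + 2) * a1**3
                               - (3 * m + 2) * a1 * a2 + a3))))
```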
Recently, authors have been interested in studying the \(m\)-fold symmetric bi-univalent functions class \(\Sigma_{\text{m}}\) (see, for example, [4, 7, 18, 26]) and obtaining non-sharp bounds estimates on the first two Taylor-Maclaurin coefficients \(\left|a_{m+1}\right|\) and \(\left|a_{2m+1}\right|\). In this respect, Altinkaya and Yalcin [2] obtained non-sharp estimates on the second Hankel determinant for the subclass \(H_{\Sigma_{m}}(\beta)\) of \(m\)-fold symmetric bi-univalent functions class \(\Sigma_{\text{m}}\). For a function \(f\in\mathcal{A}_{m}\) defined by (5), one can think of the \(m\)-fold Ruscheweyh derivative operator \(\mathcal{R}^{\gamma}:\mathcal{A}_{m}\rightarrow\mathcal{A}_{m}\), which is analogous to the Ruscheweyh derivative \(\mathcal{R}^{\gamma}:\mathcal{A}\rightarrow\mathcal{A}\), and can defined as follows: \[\mathcal{R}^{\gamma}f(z)=z+\sum_{k=1}^{\infty}\frac{\Gamma(\gamma+k+1)}{ \Gamma(k+1)\Gamma(\gamma+1)}a_{mk+1}z^{mk+1}\quad\left(\gamma\in\mathbb{N}_{0},m\in\mathbb{N},z\in\mathbb{U}\right).\] Considering a significant role of the Hankel determinant in recent years, the object of this paper, is to study estimates for \(\left|H_{2}(2)\right|\) of a general subclass of \(m\)-fold symmetric bi-univalent functions in \(\mathbb{U}\) in applying the \(m\)-fold Ruscheweyh derivative operator and obtain upper bounds on \(\left|a_{m+1}a_{3m+1}-a_{2m+1}^{2}\right|\) for functions in the subclass \(\Xi_{\Sigma_{m}}(\lambda,\gamma;\beta)\) and improve some recent works of many authors. In order to derive our main results, we need to the following lemmas that will be useful in proving the basic theorem of section 2. **Lemma 1**.: _[_10_]_ _If the function \(h\in\mathcal{P}\) is given by the series (2), then_ \[\left|h_{k}\right|\leq 2\quad(k\in\mathbb{N}), \tag{8}\] _and_ \[\left|h_{2}-\frac{h_{1}^{2}}{2}\right|\leq 2-\frac{\left|h_{2}\right|^{2}}{2}. \tag{9}\] **Lemma 2**.: _[_12_]_ _If the function \(h\in\mathcal{P}\) is given by the series (2), then_ \[2h_{2}=h_{1}^{2}+x\left(4-h_{1}^{2}\right), \tag{10}\] _and_ \[4h_{3}=h_{1}^{3}+2\left(4-h_{1}^{2}\right)h_{1}x-h_{1}\left(4-h_{1}^{2}\right)x ^{2}+2\left(4-h_{1}^{2}\right)\left(1-|x|^{2}\right)z, \tag{11}\] _for some \(x,z\) with \(|x|\leq 1\) and \(|z|\leq 1\)._ ## 2. The Main Result and Its Consequences Altinkaya and Yalcin [1] introduced the subclass \(T_{\Sigma}^{\lambda}(n;\beta)\) of bi-univalent functions class \(\Sigma\) and obtained non-sharp coefficient estimates on \(|a_{2}a_{4}-a_{3}^{2}|\) for functions in this subclass. In this respect, our main results in this section, is to study estimates for the second Hankel determinant of the subclass \(\Xi_{\Sigma_{m}}(\lambda,\gamma;\beta)\) of \(m\)-fold symmetric bi-univalent functions in \(\mathbb{U}\), and we have shown that our results are an improvement of the obtained estimates given in [1]. **Definition 1**.: A function \(f\in\Sigma_{m}\) given by (5) is called in the class \(\Xi_{\Sigma_{m}}(\lambda,\gamma;\beta)\) (\(\lambda\geq 1\), \(\gamma\in\mathbb{N}_{0},0\leq\beta<1\) and \(m\in\mathbb{N}\)) if it satisfies the conditions \[\operatorname{Re}\left\{(1-\lambda)\frac{\mathcal{R}^{\gamma}f(z)}{z}+\lambda \left(\mathcal{R}^{\gamma}f(z)\right)^{\prime}\right\}>\beta, \tag{12}\] and \[\operatorname{Re}\left\{(1-\lambda)\frac{\mathcal{R}^{\gamma}f(w)}{w}+\lambda \left(\mathcal{R}^{\gamma}f(w)\right)^{\prime}\right\}>\beta, \tag{13}\] where \(z,w\in\mathbb{U}\) and the function \(g=f^{-1}\) is given by (7). 
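Before stating the main result, it is easy to check symbolically that the operator \(\mathcal{R}^{\gamma}\) defined above turns the left-hand side of condition (12) into a series whose coefficients at \(z^{m}\), \(z^{2m}\) and \(z^{3m}\) are \((\gamma+1)(m\lambda+1)a_{m+1}\), \(\tfrac{1}{2}(\gamma+1)(\gamma+2)(2m\lambda+1)a_{2m+1}\) and \(\tfrac{1}{6}(\gamma+1)(\gamma+2)(\gamma+3)(3m\lambda+1)a_{3m+1}\), which is what the proof below relies on. A small sympy sketch (ours, for one concrete fold order):

```python
import sympy as sp

z, lam, gam = sp.symbols('z lambda gamma')
m = 3                                    # any concrete fold order works for this check
a1, a2, a3 = sp.symbols('a1 a2 a3')      # a_{m+1}, a_{2m+1}, a_{3m+1}

def c(k):
    # Gamma(gamma + k + 1) / (Gamma(k + 1) Gamma(gamma + 1)), as in the m-fold operator
    return sp.gamma(gam + k + 1) / (sp.gamma(k + 1) * sp.gamma(gam + 1))

Rf = z + sum(c(k) * a * z**(m * k + 1) for k, a in ((1, a1), (2, a2), (3, a3)))
lhs = sp.expand((1 - lam) * Rf / z + lam * sp.diff(Rf, z))

targets = {1: (gam + 1) * (m * lam + 1) * a1,
           2: sp.Rational(1, 2) * (gam + 1) * (gam + 2) * (2 * m * lam + 1) * a2,
           3: sp.Rational(1, 6) * (gam + 1) * (gam + 2) * (gam + 3) * (3 * m * lam + 1) * a3}
for k, target in targets.items():
    diff = sp.gammasimp(lhs.coeff(z, m * k)) - target
    print(k, sp.simplify(diff))          # each line should end with 0
```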
**Theorem 1**.: _Let \(f\in\Xi_{\Sigma_{m}}(\lambda,\gamma;\beta)\) be given by (5). Then_ \[\left|a_{m+1}a_{3m+1}-a_{2m+1}^{2}\right|\leq\left\{\begin{array}{c}\frac{4(1-\beta)^{2}}{(\gamma+1)^{2}(m\lambda+1)}\left[\frac{(m+1)^{2}(1-\beta)^{2}}{(\gamma+1)^{2}(m\lambda+1)^{3}}+\frac{6}{(\gamma+2)(\gamma+3)(3m\lambda+1)}\right],\qquad\beta\in[0,\tau]\\ \\ \frac{4(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)^{2}(2m\lambda+1)^{2}}\left[4-\frac{[\omega_{2}(1-\beta)+9\omega_{3}-4\omega_{4}]^{2}}{\omega_{4}[\omega_{1}(1-\beta)^{2}-2\omega_{2}(1-\beta)-12\omega_{3}+4\omega_{4}]}\right],\quad\beta\in[\tau,1)\end{array}\right.\] _where_ \[\omega_{1}:=(m+1)^{2}(\gamma+2)^{2}(\gamma+3)(2m\lambda+1)^{2}(3m\lambda+1), \tag{14}\] \[\omega_{2}:=m(\gamma+1)(\gamma+2)(\gamma+3)(m\lambda+1)^{2}(2m\lambda+1)(3m\lambda+1), \tag{15}\] \[\omega_{3}:=(\gamma+1)^{2}(\gamma+2)(m\lambda+1)^{3}(2m\lambda+1)^{2}, \tag{16}\] \[\omega_{4}:=(\gamma+1)^{2}(\gamma+3)(m\lambda+1)^{4}(3m\lambda+1), \tag{17}\] _and_ \[\tau:=1-\tfrac{\omega_{2}+\sqrt{\omega_{2}^{2}+12\omega_{1}\omega_{3}}}{2\omega_{1}}.\]

Proof.: It follows from (12) and (13) that there exist \(p\) and \(q\) in the class \(\mathcal{P}\) such that \[(1-\lambda)\frac{\mathcal{R}^{\gamma}f(z)}{z}+\lambda\left(\mathcal{R}^{\gamma}f(z)\right)^{\prime}=\beta+(1-\beta)p(z), \tag{18}\] and \[(1-\lambda)\frac{\mathcal{R}^{\gamma}g(w)}{w}+\lambda\left(\mathcal{R}^{\gamma}g(w)\right)^{\prime}=\beta+(1-\beta)q(w), \tag{19}\] where \(p\) and \(q\) are given by the series (6). We also find that \[\begin{split}(1-\lambda)&\frac{\mathcal{R}^{\gamma}f(z)}{z}+\lambda\left(\mathcal{R}^{\gamma}f(z)\right)^{\prime}\\ &=1+(\gamma+1)(m\lambda+1)a_{m+1}z^{m}+\frac{1}{2}(\gamma+1)(\gamma+2)(2\lambda m+1)a_{2m+1}z^{2m}\\ &+\frac{1}{6}(\gamma+1)(\gamma+2)(\gamma+3)(3\lambda m+1)a_{3m+1}z^{3m}+\cdots,\end{split} \tag{20}\] and \[\begin{split}&(1-\lambda)\frac{\mathcal{R}^{\gamma}g(w)}{w}+\lambda\left(\mathcal{R}^{\gamma}g(w)\right)^{\prime}\\ &=1-(\gamma+1)(m\lambda+1)a_{m+1}w^{m}+\frac{1}{2}(\gamma+1)(\gamma+2)(2m\lambda+1)\times\\ &\left[(m+1)a_{m+1}^{2}-a_{2m+1}\right]w^{2m}-\frac{1}{6}(\gamma+1)(\gamma+2)(\gamma+3)(3m\lambda+1)\times\\ &\left[\frac{1}{2}(m+1)(3m+2)a_{m+1}^{3}-(3m+2)a_{m+1}a_{2m+1}+a_{3m+1}\right]w^{3m}+\cdots.\end{split} \tag{21}\] Equating coefficients in (18) and (19) yields \[(\gamma+1)(m\lambda+1)a_{m+1}=(1-\beta)p_{m}, \tag{22}\] \[\frac{1}{2}(\gamma+1)(\gamma+2)(2\lambda m+1)a_{2m+1}=(1-\beta)p_{2m}, \tag{23}\] \[\frac{1}{6}(\gamma+1)(\gamma+2)(\gamma+3)(3m\lambda+1)a_{3m+1}=(1-\beta)p_{3m}, \tag{24}\] and \[-(\gamma+1)(m\lambda+1)a_{m+1}=(1-\beta)q_{m}, \tag{25}\] \[\frac{1}{2}(\gamma+1)(\gamma+2)(2m\lambda+1)\left[(m+1)a_{m+1}^{2}-a_{2m+1}\right]=(1-\beta)q_{2m}, \tag{26}\] \[-\frac{1}{6}(\gamma+1)(\gamma+2)(\gamma+3)(3m\lambda+1)\times\left[\frac{1}{2}(m+1)(3m+2)a_{m+1}^{3}-(3m+2)a_{m+1}a_{2m+1}+a_{3m+1}\right]=(1-\beta)q_{3m}. \tag{27}\] From (22) and (25), we get \[p_{m}=-q_{m}, \tag{28}\] and \[a_{m+1}=\frac{1-\beta}{(\gamma+1)(m\lambda+1)}p_{m}. \tag{29}\] Now, from (23), (26) and (29), we obtain \[a_{2m+1}=\frac{(m+1)(1-\beta)^{2}}{2(\gamma+1)^{2}(m\lambda+1)^{2}}p_{m}^{2}+\frac{(1-\beta)}{(\gamma+1)(\gamma+2)(2m\lambda+1)}\left(p_{2m}-q_{2m}\right).
\tag{30}\] Also, from (24), (27), (29) and (30), we find that \[\begin{split} a_{3m+1}&=\frac{(3m+2)(1-\beta)^{2}}{2( \gamma+1)^{2}(\gamma+2)(m\lambda+1)(2m\lambda+1)}p_{m}\left(p_{2m}-q_{2m}\right) \\ &+\frac{3(1-\beta)}{(\gamma+1)(\gamma+2)(\gamma+3)(3m\lambda+1)} \left(p_{3m}\right.\left.-q_{3m}\right).\end{split} \tag{31}\] Then, from (29), (30) and (31) we can establish that \[\begin{split}& a_{m+1}a_{3m+1}-a_{2m+1}^{2}=-\frac{(m+1)^{2}(1- \beta)^{4}}{4(\gamma+1)^{4}(m\lambda+1)^{4}}p_{m}^{4}\\ &+\frac{m(1-\beta)^{3}}{2(\gamma+1)^{3}(\gamma+2)(m\lambda+1)^{2 }(2m\lambda+1)}p_{m}^{2}\left(p_{2m}-q_{2m}\right)\\ &+\frac{3(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m \lambda+1)(3m\lambda+1)}p_{m}\left(p_{3m}-q_{3m}\right)\\ &-\frac{(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)^{2}(2m\lambda+1) ^{2}}\left(p_{2m}-q_{2m}\right)^{2}.\end{split} \tag{32}\] According to Lemma 2 and (28), we write \[p_{2m}-q_{2m}=\frac{4-p_{m}^{2}}{2}(x-y), \tag{33}\] and \[\begin{split} p_{3m}-q_{3m}=&\frac{p_{m}^{3}}{2}+ \frac{p_{m}\left(4-p_{m}^{2}\right)}{2}(x+y)-\frac{p_{m}\left(4-p_{m}^{2} \right)}{4}\left(x^{2}+y^{2}\right)\\ &+\frac{4-p_{m}^{2}}{2}\left[\left(1-|x|^{2}\right)z-\left(1-|y|^ {2}\right)w\right],\end{split} \tag{34}\] \[p_{2m}+q_{2m}=p_{m}^{2}+\frac{4-p_{m}^{2}}{2}(x+y), \tag{35}\] for some \(x,y,z\) and \(w\) with \(|x|\leq 1,|y|\leq 1,|z|\leq 1\) and \(|w|\leq 1\). Using (33) and (34) in (32), and taking the triangle inequality, we obtain \[\left|a_{m+1}a_{3m+1}-a_{2m+1}^{2}\right|\] \[= |-\frac{(m+1)^{2}(1-\beta)^{4}}{4(\gamma+1)^{4}(m\lambda+1)^{4}}p_{ m}^{4}+\frac{m(1-\beta)^{3}}{4(\gamma+1)^{3}(\gamma+2)(m\lambda+1)^{2}(2m \lambda+1)}p_{m}^{2}\left(4-p_{m}^{2}\right)(x-y)\] \[+\frac{3(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m \lambda+1)(3m\lambda+1)}p_{m}\left[\frac{p_{m}^{3}}{2}+\frac{p_{m}\left(4-p_{m }^{2}\right)}{2}(x+y)\right.\] \[-\frac{p_{m}\left(4-p_{m}^{2}\right)}{4}\left(x^{2}+y^{2}\right)+ \frac{4-p_{m}^{2}}{2}\left[\left(1-|x|^{2}\right)z-\left(1-|y|^{2}\right)w\right]\] \[-\frac{(1-\beta)^{2}}{4(\gamma+1)^{2}(\gamma+2)^{2}(2m\lambda+1) ^{2}}\left(4-p_{m}^{2}\right)^{2}(x-y)^{2}\mid\] \[\leq\frac{(m+1)^{2}(1-\beta)^{4}}{4(\gamma+1)^{4}(m\lambda+1)^{4}}p_{m}^{4}+ \frac{3(1-\beta)^{2}}{2(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m\lambda+1)(3m \lambda+1)}p_{m}^{4}\] \[+\frac{3(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m \lambda+1)(3m\lambda+1)}p_{m}\left(4-p_{m}^{2}\right)\] \[+\left[\frac{m(1-\beta)^{3}}{4(\gamma+1)^{3}(\gamma+2)(m\lambda+1 )^{2}(2m\lambda+1)}p_{m}^{2}\left(4-p_{m}^{2}\right)\right.\] \[\left.+\frac{3(1-\beta)^{2}}{2(\gamma+1)^{2}(\gamma+2)(\gamma+3)( m\lambda+1)(3m\lambda+1)}p_{m}^{2}\left(4-p_{m}^{2}\right)\right](|x|+|y|)\] \[+\left[\frac{3(1-\beta)^{2}}{4(\gamma+1)^{2}(\gamma+2)(\gamma+3)( m\lambda+1)(3m\lambda+1)}p_{m}^{2}\left(4-p_{m}^{2}\right)\right.\] \[\left.-\frac{3(1-\beta)^{2}}{2(\gamma+1)^{2}(\gamma+2)(\gamma+3)( m\lambda+1)(3m\lambda+1)}p_{m}\left(4-p_{m}^{2}\right)\right]\left(|x|^{2}+|y|^{2}\right)\] \[+\frac{(1-\beta)^{2}}{4(\gamma+1)^{2}(\gamma+2)^{2}(2m\lambda+1) ^{2}}\left(4-p_{m}^{2}\right)^{2}(|x|+|y|)^{2}.\] Since \(p\) in the class \(\mathcal{P}\), we have \(|p_{m}|\leq 2\). Letting \(p_{m}=\rho\), we may assume without loss of generality that \(\rho\in[0,2]\). 
Thus, for \(\mu_{1}=|x|\leq 1\) and \(\mu_{2}=|y|\leq 1\), we get \[\left|a_{m+1}a_{3m+1}-a_{2m+1}^{2}\right|\leq F_{1}+F_{2}\left(\mu_{1}+\mu_{2}\right)+F_{3}\left(\mu_{1}^{2}+\mu_{2}^{2}\right)+F_{4}\left(\mu_{1}+\mu_{2}\right)^{2}=F\left(\mu_{1},\mu_{2}\right),\] where \[F_{1}=F_{1}(\rho)=\frac{(m+1)^{2}(1-\beta)^{4}\rho^{4}}{4(\gamma+1)^{4}(m\lambda+1)^{4}}+\frac{3(1-\beta)^{2}\rho^{4}}{2(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m\lambda+1)(3m\lambda+1)}\] \[+\frac{3(1-\beta)^{2}\rho\left(4-\rho^{2}\right)}{(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m\lambda+1)(3m\lambda+1)}\geq 0,\] \[F_{2}=F_{2}(\rho)=\frac{m(1-\beta)^{3}\rho^{2}\left(4-\rho^{2}\right)}{4(\gamma+1)^{3}(\gamma+2)(m\lambda+1)^{2}(2m\lambda+1)}\] \[+\frac{3(1-\beta)^{2}\rho^{2}\left(4-\rho^{2}\right)}{2(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m\lambda+1)(3m\lambda+1)}\geq 0,\] \[F_{3}=F_{3}(\rho)=\frac{3(1-\beta)^{2}\rho^{2}\left(4-\rho^{2}\right)}{4(\gamma+1)^{2}(\gamma+2)(\gamma+3)(m\lambda+1)(3m\lambda+1)}\] \[-\frac{3(1-\beta)^{2}\rho\left(4-\rho^{2}\right)}{2(\gamma+1)^{2}(\gamma+2)(
\gamma+3)(m\lambda+1)(3m\lambda+1)}\leq 0,\] \[F_{4}=F_{4}(\rho)=\frac{(1-\beta)^{2}\left(4-\rho^{2}\right)^{2}}{4(\gamma+1) ^{2}(\gamma+2)^{2}(2m\lambda+1)^{2}}\geq 0.\] Now, we need to maximize \(F\left(\mu_{1},\mu_{2}\right)\) in the closed square \(\mathbb{S}=[0,1]\times[0,1]\) for \(\rho\in[0,2]\). We must investigate the maximum of \(F\left(\mu_{1},\mu_{2}\right)\) according to \(\rho\in(0,2),\rho=0\) and \(\rho=2\), taking in view the sign of \(F_{\mu_{1}\mu_{1}}F_{\mu_{2}\mu_{2}}-\left(F_{\mu_{1}\mu_{2}}\right)^{2}\). First, let \(\rho\in(0,2)\). Since \(F_{3}<0\) and \(F_{3}+2F_{4}>0\) for \(\rho\in(0,2)\), we deduce that \[F_{\mu_{1}\mu_{1}}F_{\mu_{2}\mu_{2}}-\left(F_{\mu_{1}\mu_{2}}\right)^{2}<0.\] Thus, the function \(F\) can not have a local maximum in the interior of the square \(\mathbb{S}\). Now, we investigate the maximum of \(F\) on the boundary of the square \(\mathbb{S}\). For \(\mu_{1}=0\) and \(\mu_{2}\in[0,1]\) (similarly for \(\mu_{2}=0\) and \(\mu_{1}\in[0,1]\) ), we obtain \[F\left(0,\mu_{2}\right)=G\left(\mu_{2}\right)=F_{1}+F_{2}\mu_{2}+\left(F_{3}+F _{4}\right)\mu_{2}^{2}.\] Case 1. Let \(F_{3}+F_{4}\geq 0\). In this case, for \(0<\mu_{2}<1\) and for any fixed \(\rho\) with \(\rho\in(0,2)\), it is clear that \[G^{\prime}\left(\mu_{2}\right)=F_{2}+2\left(F_{3}+F_{4}\right)\mu_{2}>0,\] that is, \(G\left(\mu_{2}\right)\) is an increasing function. Hence, for fixed \(\rho\in(0,2)\), the maximum of \(G\left(\mu_{2}\right)\) occurs at \(\mu_{2}=1\) and \[\max\left\{F\left(0,\mu_{2}\right)\right\}=\max\left\{G\left(\mu_{2}\right)\right\}=G (1)=F_{1}+F_{2}+F_{3}+F_{4}.\] Case 2. Let \(F_{3}+F_{4}<0\). Since \[F_{2}+2\left(F_{3}+F_{4}\right)\geq 0,\] for any fixed \(\rho\) with \(\rho\in\left(0,2\right)\), it is clear that \[F_{2}+2\left(F_{3}+F_{4}\right)<F_{2}+2\left(F_{3}+F_{4}\right)\mu_{2}<F_{2}, \quad\left(\mu_{2}\in\left(0,1\right)\right),\] and so \(G\left(\mu_{2}\right)>0\). Hence, for fixed \(\rho\in\left(0,2\right)\), the maximum of \(G\left(\mu_{2}\right)\) occurs at \(\mu_{2}=1\). Similarly, for \(\mu_{2}=0\) and \(\mu_{1}\in\left[0,1\right]\), we get \[\max\left\{F\left(\mu_{1},0\right)\right\}=\max\left\{G\left(\mu_{1}\right) \right\}=G(1)=F_{1}+F_{2}+F_{3}+F_{4}.\] Also, for \(\rho=2\), we obtain \[F\left(\mu_{1},\mu_{2}\right)=\frac{4(\gamma+2)(\gamma+3)(3m\lambda+1)(m+1)^{ 2}(1-\beta)^{4}+24(\gamma+1)^{2}(m\lambda+1)^{3}(1-\beta)^{2}}{(\gamma+1)^{4}( \gamma+2)(\gamma+3)(m\lambda+1)^{4}(3m\lambda+1)}. \tag{36}\] Keeping in view the value (36) and the above Cases 1 and 2 for \(\mu_{2}\in\left[0,1\right)\) and any fixed \(\rho\in\left[0,2\right]\), \[\max\left\{G\left(\mu_{2}\right)\right\}=G(1)=F_{1}+F_{2}+F_{3}+F_{4}.\] For \(\mu_{1}=1\) and \(\mu_{2}\in\left[0,1\right]\) (similarly for \(\mu_{2}=1\) and \(\mu_{1}\in\left[0,1\right]\) ), we obtain \[F\left(1,\mu_{2}\right)=H\left(\mu_{2}\right)=F_{1}+F_{2}+F_{3}+F_{4}+\left(F _{2}+2F_{4}\right)\mu_{2}+\left(F_{2}+F_{4}\right)\mu_{2}^{2}.\] Thus, from the above Cases 1 and 2 for \(F_{3}+F_{4}\), we get \[\max\left\{F\left(1,\mu_{2}\right)\right\}=\max\left\{H\left(\mu_{2}\right) \right\}=H(1)=F_{1}+2\left(F_{2}+F_{3}\right)+4F_{4}.\] Since \(G(1)\leq H(1)\) for \(\rho\in\left(0,2\right)\), we have \[\max\left\{F\left(\mu_{1},\mu_{2}\right)\right\}=F(1,1).\] On the boundary of the square \(\mathbb{S}\). Thus, the maximum of the function \(F\) occurs at \(\mu_{1}=1\) and \(\mu_{2}=1\) in the closed square \(\mathbb{S}\) and for \(\rho\in\left(0,2\right)\). 
Let \(K:(0,2)\rightarrow\mathbb{R}\) be given by \[K(\rho)=\max\left\{F\left(\mu_{1},\mu_{2}\right)\right\}=F(1,1)=F_{1}+2\left(F_{2} +F_{3}\right)+4F_{4}. \tag{37}\] Substituting the values of \(F_{1},F_{2},F_{3}\) and \(F_{4}\) in the function \(K\) defined by (37), yields \[K(\rho)=\frac{(1-\beta)^{2}}{4(\gamma+1)^{4}(\gamma+2)^{2}(\gamma +3)(1+m\lambda)^{4}(1+2m\lambda)^{2}(1+3m\lambda)}\times\] \[\left[\left[(m+1)^{2}(\gamma+2)^{2}(\gamma+3)(2m\lambda+1)^{2}(3m \lambda+1)(1-\beta)^{2}\right.\right.\] \[-2m(\gamma+1)(\gamma+2)(\gamma+3)(m\lambda+1)^{2}(2m\lambda+1)( 3m\lambda+1)(1-\beta)\] \[\left.\left.-12(\gamma+1)^{2}(\gamma+2)(m\lambda+1)^{3}(2m \lambda+1)^{2}+4(\gamma+1)^{2}(\gamma+3)(m\lambda+1)^{4}(3m\lambda+1)\right] \rho^{4}\right.\] \[+\left[8m(\gamma+1)(\gamma+2)(\gamma+3)(1+m\lambda)^{2}(1+2m \lambda)(1+3m\lambda)(1-\beta)\right.\] \[\left.+72(\gamma+1)^{2}(\gamma+2)(m\lambda+1)^{3}(2m\lambda+1)^{ 2}-32(\gamma+1)^{2}(\gamma+3)(m\lambda+1)^{4}(3m\lambda+1)\right]\rho^{2}\] \[\left.+64(\gamma+1)^{2}(\gamma+3)(1+m\lambda)^{4}(1+3m\lambda) \right].\] Assume that \(\max K(\rho)\in(0,2)\). Therefore, we find \[K^{\prime}(\rho)=\frac{(1-\beta)^{2}}{(\gamma+1)^{4}(\gamma+2)^{ 2}(\gamma+3)(1+m\lambda)^{4}(1+2m\lambda)^{2}(1+3m\lambda)}\times\] \[\left[\left[(m+1)^{2}(\gamma+2)^{2}(\gamma+3)(2m\lambda+1)^{2}(3 m\lambda+1)(1-\beta)^{2}\right.\right.\] \[-2m(\gamma+1)(\gamma+2)(\gamma+3)(m\lambda+1)^{2}(2m\lambda+1)(3 m\lambda+1)(1-\beta)\] \[\left.\left.-12(\gamma+1)^{2}(\gamma+2)(m\lambda+1)^{3}(2m\lambda +1)^{2}+4(\gamma+1)^{2}(\gamma+3)(m\lambda+1)^{4}(3m\lambda+1)\right]\rho^{3}\right.\] \[+\left[4m(\gamma+1)(\gamma+2)(\gamma+3)(\lambda m+1)^{2}(1+2m \lambda)(1+3m\lambda)(1-\beta)\right.\] \[\left.+36(\gamma+1)^{2}(\gamma+2)(m\lambda+1)^{3}(2m\lambda+1)^{ 2}-16(\gamma+1)^{2}(\gamma+3)(m\lambda+1)^{4}(3m\lambda+1)\right]\rho\right].\] Next, we conclude the following results Result 1. Let \[\omega_{1}(1-\beta)^{2}-2\omega_{2}(1-\beta)-12\omega_{3}+4\omega_{4}\geq 0,\] that is, \[\beta\in\left[0,1-\frac{\omega_{2}+\sqrt{\omega_{2}^{2}+\omega_{1}\left[12\omega_{3 }-4\omega_{4}\right]}}{\omega_{1}}\right],\] where \(\omega_{1},\omega_{2},\omega_{3}\) and \(\omega_{4}\) are given by (14), (15), (16) and (17), respectively. Therefore, \(K^{\prime}(\rho)>0\) for every \(\rho\in(0,2)\). Moreover, since \(K(\rho)\) is an increasing function in the interior of the interval \([0,2]\), it has no maximum value in this interval, that is, \(\max K(\rho)\) must be on the boundary of \(\rho\in(0,2]\). Thus, we have \[\max\{K(\rho):0<\rho\leq 2\}=K(2)=\frac{4(1-\beta)^{2}}{(\gamma+1)^{2 }(m\lambda+1)}\left[\frac{(m+1)^{2}(1-\beta)^{2}}{(\gamma+1)^{2}(m\lambda+1)^ {3}}\right.\] \[\left.+\frac{6}{(\gamma+2)(\gamma+3)(3m\lambda+1)}\right].\] Result 2. Let \[\omega_{1}(1-\beta)^{2}-2\omega_{2}(1-\beta)-12\omega_{3}+4\omega_{4}<0,\] that is, \[\beta\in\left(1-\frac{\omega_{2}+\sqrt{\omega_{2}^{2}+\omega_{1}\left[12 \omega_{3}-4\omega_{4}\right]}}{\omega_{1}},1\right).\] Then \(K^{\prime}(\rho)=0\) implies the real critical point \(\rho_{1}=0\) or \[\rho_{2}=\sqrt{\frac{16\omega_{4}-4\omega_{2}(1-\beta)-36\omega_{3}}{\omega_{ 1}(1-\beta)^{2}-2\omega_{2}(1-\beta)-12\omega_{3}+4\omega_{4}}}.\] When \[\beta\in\left(1-\frac{\omega_{2}+\sqrt{\omega_{2}^{2}+\omega_{1}\left[12 \omega_{3}-4\omega_{4}\right]}}{\omega_{1}},1-\frac{\omega_{2}+\sqrt{\omega_ {2}^{2}+12\omega_{1}\omega_{3}}}{2\omega_{1}}\right),\] we observe that \(\rho_{2}\geq 2\), that is, \(\rho_{2}\) is in the exterior of the interval \((0,2)\). 
Therefore, the maximum value of \(K(\rho)\) occurs at \(\rho_{1}=0\) or \(\rho_{2}=2\), which contradicts our assumption of having the maximum value in the interval \((0,2)\). When \[\beta\in\left(1-\frac{\omega_{2}+\sqrt{\omega_{2}^{2}+12\omega_{1}\omega_{3}}}{2\omega_{1}},1\right),\] we observe that \(\rho_{2}\in(0,2)\). Since \(K^{\prime\prime}\left(\rho_{2}\right)<0\), the maximum value of \(K(\rho)\) occurs at \(\rho=\rho_{2}\). Thus, we have \[\max\{K(\rho):\rho\in[0,2]\}=K\left(\rho_{2}\right)=\frac{4(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)^{2}(2m\lambda+1)^{2}}\times\] \[\left[4-\frac{[\omega_{2}(1-\beta)+9\omega_{3}-4\omega_{4}]^{2}}{\omega_{4}\left[\omega_{1}(1-\beta)^{2}-2\omega_{2}(1-\beta)-12\omega_{3}+4\omega_{4}\right]}\right].\] This completes the proof.

By setting \(\lambda=1\) and \(\gamma=0\) in Theorem 1, we obtain the following consequence.

**Corollary 1**.: _[2]_ _Let \(f\in\Xi_{\Sigma_{\mathrm{m}}}(\beta)\quad(0\leq\beta<1)\) be given by (5). Then_ \[\left|a_{m+1}a_{3m+1}-a_{2m+1}^{2}\right|\leq\left\{\begin{array}{cc}\frac{4(1-\beta)^{2}}{m+1}\left[\frac{(1-\beta)^{2}}{m+1}+\frac{1}{3m+1}\right],&\beta\in[0,v]\\ \\ \frac{(1-\beta)^{2}}{(2m+1)^{2}}\left[4-\frac{[m(1-\beta)\psi_{1}+3\psi_{2}-2\psi_{3}]^{2}}{\psi_{3}[(2m+1)(1-\beta)^{2}\psi_{1}-m(1-\beta)\psi_{1}+\psi_{3}-2\psi_{2}]}\right],&\beta\in[v,1)\end{array}\right.\] _where_ \[\psi_{1}:=(2m+1)(3m+1),\] \[\psi_{2}:=(m+1)(2m+1)^{2},\] \[\psi_{3}:=(m+1)^{2}(3m+1),\] _and_ \[v=\frac{(3m+1)(7m+4)-\sqrt{m^{2}(3m+1)^{2}+8\psi_{2}(3m+1)}}{4\psi_{1}}.\]

By taking \(m=1\) in Theorem 1, we conclude the following result.

**Corollary 2**.: _Let \(f\in\Xi_{\Sigma}(\lambda,\gamma;\beta)\quad(\lambda\geq 1,\gamma\in\mathbb{N}_{0},0\leq\beta<1)\) be given by (1). Then_ \[\left|a_{2}a_{4}-a_{3}^{2}\right|\leq\left\{\begin{array}{cc}\frac{8(1-\beta)^{2}}{(\gamma+1)^{2}(\lambda+1)}\left[\frac{2(1-\beta)^{2}}{(\gamma+1)^{2}(\lambda+1)^{3}}+\frac{3}{(\gamma+2)(\gamma+3)(3\lambda+1)}\right],&\beta\in[0,\xi]\\ \\ \frac{4(1-\beta)^{2}}{(\gamma+1)^{2}(\gamma+2)^{2}(2\lambda+1)^{2}}\left[4-\frac{[\vartheta_{2}(1-\beta)+9\vartheta_{3}-4\vartheta_{4}]^{2}}{\vartheta_{4}[4\vartheta_{1}(1-\beta)^{2}-2\vartheta_{2}(1-\beta)-12\vartheta_{3}+4\vartheta_{4}]}\right],&\beta\in[\xi,1)\end{array}\right.\] _where_ \[\vartheta_{1}:=(\gamma+2)^{2}(\gamma+3)(2\lambda+1)^{2}(3\lambda+1),\] \[\vartheta_{2}:=(\gamma+1)(\gamma+2)(\gamma+3)(\lambda+1)^{2}(2\lambda+1)(3\lambda+1),\] \[\vartheta_{3}:=(\gamma+1)^{2}(\gamma+2)(\lambda+1)^{3}(2\lambda+1)^{2},\] \[\vartheta_{4}:=(\gamma+1)^{2}(\gamma+3)(\lambda+1)^{4}(3\lambda+1),\] _and_ \[\xi:=1-\tfrac{\vartheta_{2}+\sqrt{\vartheta_{2}^{2}+48\vartheta_{1}\vartheta_{3}}}{8\vartheta_{1}}.\]

**Remark 1**.: Corollary 2 improves a result which was proven by Altinkaya and Yalcin [1, Theorem 3].

By putting \(\gamma=0\) in Corollary 2, we obtain the following result.

**Corollary 3**.: _Let \(f\in\Xi_{\Sigma}(\lambda;\beta)\quad(\lambda\geq 1,0\leq\beta<1)\) be given by (1). Then_ \[\left|a_{2}a_{4}-a_{3}^{2}\right|\leq\left\{\begin{array}{c}\frac{8(1-\beta)^{2}}{\lambda+1}\left[\frac{2(1-\beta)^{2}}{(\lambda+1)^{3}}+\frac{1}{2(3\lambda+1)}\right],\quad\beta\in[0,\epsilon]\\ \\ \frac{2(1-\beta)^{2}}{(2\lambda+1)^{2}}\left[4-\frac{[\eta_{2}(1-\beta)+3\eta_{3}-2\eta_{4}]^{2}}{\eta_{4}[8\eta_{1}(1-\beta)^{2}-2\eta_{2}(1-\beta)-4\eta_{3}+2\eta_{4}]}\right],\quad\beta\in[\epsilon,1)\end{array}\right.\] _where_ \[\eta_{1}:=(2\lambda+1)^{2}(3\lambda+1),\] \[\eta_{2}:=(\lambda+1)^{2}(2\lambda+1)(3\lambda+1),\] \[\eta_{3}:=(\lambda+1)^{3}(2\lambda+1)^{2},\] \[\eta_{4}:=(\lambda+1)^{4}(3\lambda+1),\] _and_ \[\epsilon:=1-\frac{(\lambda+1)^{2}(3\lambda+1)+\sqrt{(\lambda+1)^{4}(3\lambda+1)^{2}+32(\lambda+1)^{3}(2\lambda+1)^{2}(3\lambda+1)}}{16(2\lambda+1)(3\lambda+1)}.\]

**Remark 2**.: Corollary 3 improves a result which was obtained by Altinkaya and Yalcin [1, Corollary 5].
By setting \(\lambda=1\) in Corollary 3, we get the following consequence. **Corollary 4**.: _[_8_]_ _Let \(f\in\Xi_{\Sigma}(\beta)\quad(0\leq\beta<1)\) be given by (1). Then_ \[\left|a_{2}a_{4}-a_{3}^{2}\right|\leq\left\{\begin{array}{ll}(1-\beta)^{2} \left[(1-\beta)^{2}+\frac{1}{2}\right],&\beta\in\left[0,\frac{11-\sqrt{37}}{1 2}\right]\\ \\ \frac{(1-\beta)^{2}}{16}\left[\frac{60\beta^{2}-84\beta-25}{9\beta^{2}-15 \beta+1}\right],&\beta\in\left[\frac{11-\sqrt{37}}{12},1\right)\end{array} \right..\] **Remark 3**.: Corollary 4 provides a recovery over a result which was obtained by Altinkaya and Yalcin [1, Corollary 4]. ## 3. Conclusions In this investigation, we consider a constructed subclass \(\Xi_{\Sigma_{m}}(\lambda,\gamma;\beta)\) of the class \(\Sigma_{m}\) of \(m\)-fold symmetric bi-univalent functions and several interesting properties of the results are discussed. Moreover, by sufficient specialization of the parameters, some consequences of the class are mentioned and showed that the results presented improve some existing upper bounds for \(\mathrm{H}_{2}(2)\) on certain subclasses of 1-fold symmetric bi-univalent functions.
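As a quick numerical companion to the bounds above (in the spirit of the Python computations mentioned in the abstract), the following sketch simply evaluates the right-hand side of Theorem 1 for illustrative parameter values; the helper names and the sample values are ours and play no role in the proofs.

```python
import math

def theorem1_bound(beta, m, lam, gam):
    """Right-hand side of Theorem 1 for |a_{m+1} a_{3m+1} - a_{2m+1}^2| (illustrative)."""
    w1 = (m + 1)**2 * (gam + 2)**2 * (gam + 3) * (2*m*lam + 1)**2 * (3*m*lam + 1)
    w2 = m * (gam + 1) * (gam + 2) * (gam + 3) * (m*lam + 1)**2 * (2*m*lam + 1) * (3*m*lam + 1)
    w3 = (gam + 1)**2 * (gam + 2) * (m*lam + 1)**3 * (2*m*lam + 1)**2
    w4 = (gam + 1)**2 * (gam + 3) * (m*lam + 1)**4 * (3*m*lam + 1)
    tau = 1 - (w2 + math.sqrt(w2**2 + 12 * w1 * w3)) / (2 * w1)
    if beta <= tau:
        return (4 * (1 - beta)**2 / ((gam + 1)**2 * (m*lam + 1))
                * ((m + 1)**2 * (1 - beta)**2 / ((gam + 1)**2 * (m*lam + 1)**3)
                   + 6 / ((gam + 2) * (gam + 3) * (3*m*lam + 1))))
    num = (w2 * (1 - beta) + 9 * w3 - 4 * w4)**2
    den = w4 * (w1 * (1 - beta)**2 - 2 * w2 * (1 - beta) - 12 * w3 + 4 * w4)
    return (4 * (1 - beta)**2 / ((gam + 1)**2 * (gam + 2)**2 * (2*m*lam + 1)**2)) * (4 - num / den)

if __name__ == "__main__":
    for beta in (0.0, 0.25, 0.5, 0.75, 0.9):
        print(beta, theorem1_bound(beta, m=2, lam=1, gam=1))
```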
2304.00940
On the microstructure of higher-dimensional Reissner-Nordström black holes in quantum regime
Thermodynamic Riemannian geometry provides great insights into the microscopic structure of black holes (BHs). One such example is the Ruppeiner geometry which is the metric space comprising the second derivatives of entropy with respect to other extensive variables of the system. Reissner-Nordstr\"om black holes (RNBHs) are known to be endowed with a flat Ruppeiner geometry for all higher spacetime dimensions. However this holds true if one invokes classical gravity where the semi-classical Bekenstein-Hawking entropy best describes the thermodynamics of the system. If the much deeper quantum gravity and string theories entail modifications to BH entropy, this prompts the question whether the Ruppeiner flatness associated with higher dimensional RNBHs still persists. We investigate this problem by considering non-perturbative (exponential) and perturbative (logarithmic) modifications to BH entropy of a $5$D RNBH. We find that while the case is so for larger (classical) geometries, the situation is radically altered for smaller (quantum) geometries. Namely, we show surprising emergence of multiple phase transitions that depend on the choice of extent of corrections to BH entropy and charge. Our consideration involves differentiated extremal and non-extremal geometric scales corresponding to the validity regime of corrections to entropy. More emphasis is laid on the exponential case as the contributions become highly non-trivial on small scales. An essential critical mass scale arises in this case that marks the onset of these phase transitions while the BH diminishes in size via Hawking evaporation. We contend that this critical value of mass perhaps best translates as the epoch of a classical to quantum BH phase transition.
Syed Masood A. S. Bukhari, Behnam Pourhassan, Houcine Aounallah, Li-Gang Wang
2023-04-03T12:54:16Z
http://arxiv.org/abs/2304.00940v3
# On the microstructure of higher-dimensional Reissner-Nordstrom black holes in quantum regime ###### Abstract Thermodynamic Riemannian geometry provides great insights into the microscopic structure of black holes (BHs). One such example is the Ruppeiner geometry which is the metric space comprising the second derivatives of entropy with respect to other extensive variables of the system. Reissner-Nordstrom black holes (RNBHs) are known to be endowed with a flat Ruppeiner geometry for all higher spacetime dimensions. However this holds true if one invokes classical gravity where the semi-classical Bekenstein-Hawking entropy best describes the thermodynamics of the system. If the much deeper quantum gravity and string theories entail modifications to BH entropy, this prompts the question whether the Ruppeiner flatness associated with higher dimensional RNBHs still persists. We investigate this problem by considering non-perturbative (exponential) and perturbative (logarithmic) modifications to BH entropy of a 5D RNBH. We find that while the case is so for larger (classical) geometries, the situation is radically altered for smaller (quantum) geometries. Namely, we show surprising emergence of multiple phase transitions that depend on the choice of extent of corrections to BH entropy and charge. More emphasis is laid on the exponential case as the contributions become highly non-trivial on small scales. An essential critical mass scale arises in this case, that marks the onset of these phase transitions while the BH diminishes in size via Hawking evaporation. We contend that this critical value of mass perhaps best translates as the epoch of a classical to quantum BH phase transition. Introduction The powerful principle in Boltzmann's parlance: _"If you can heat it, it has microscopic structure"_[1], has proven so instrumental in understanding the microstructure of physical systems. Ever since the ground-breaking works by Bekenstein [2] and Hawking [3; 4], the study of black hole (BH) thermodynamics is thriving as one of the major paradigms of modern physics. One of main lessons due to this is the fact that entropy of a BH, \(S_{BH}=A/(4\ell_{p}^{2})\), where \(A\) is horizon area and \(\ell_{p}\) is the Planck length, scales with its area than volume, and this observation lies at the heart of _holographic principle_[5; 6]. This relation quantifies the amount of entropy to be associated with a BH as a thermodynamic system as perceived by an external observer, providing a basis for conceiving BH microstructures. As regards the final fate of BH shrinking via Hawking evaporation, one is forced to consider the quantum structure of spacetime geometry. Almost all known theories of quantum gravity necessitate the existence of a minimal length (often characterized by Planck length \(\ell_{p}\)) where classical geometry is plagued by quantum fluctuations [7]. This entails radical consequences for entropy-area relation for a BH as it approaches quantum regime [8; 9; 10], including holographic principle [11]. What happens is the modification of classical Bekenstein-Hawking relation, and this obliquely counts as deciphering the microscopic origin of BH entropy [12]. Numerous studies have elucidated the way one accounts for these modifications via different approaches, and interestingly these corrections enter the scenario either through a perturbative or a non-perturbative framework. 
Perturbative methods include the microstate counting in string theory and loop quantum gravity [13; 14; 15; 16; 17], generally manifesting as logarithmic corrections, while non-perturbative methods feature as exponential corrections [18; 19; 20; 21]. A prominent method to incorporate non-perturbative terms is by employing AdS/CFT correspondence [22] and using Kloosterman sums for massless supergravity fields near the horizon [23; 24; 20]. For a large BH, all these corrections are suppressed and hence can be ignored, implying that Bekenstein-Hawking relation suffices to discuss the thermodynamic behavior of horizon. On contrary, for a smaller black hole, where quantum fluctuations become relevant, the logarithmic and other expansion terms also contribute, however still in a perturbative manner. The most interesting situation arises from the exponential term which dominates non-perturbatively and dramatically changes the physics around this regime. A considerable volume of literatures are devoted for both perturbative and non-perturbative corrections in different contexts. For example, using holographic arguments based on AdS/CFT duality, quantum corrections to BH entropy have been computed in Refs. [25; 26; 27; 28], including extremal geometries of Reissner-Nordstrom [8] and rotating BHs [29]. The entropy of a conformal field theory can be obtained using Cardy's formula, and this approach has been used in Ref. [30] to compute a leading order (logarithmic) correction to a Banados-Teitelboim-Zanelli (BTZ) black hole. In Ref. [31], authors investigate and analyze the sub-leading correction terms to Bekenstein-Hawking relation within conformal field theory. For further insight into the problem, and many diverse aspects of (non-equilibrium) quantum thermodynamics of BHs, we refer the reader to the Refs. [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42] for a comprehensive look. The above formulations are all built on the notion of an existing gravitational system with a well defined geometry. However, it is quite possible that if one starts from a thermodynamic footing viz. entropy and Clausius relation, the result is an emergent geometry. A seminal paper by Jacobson [43] laid the foundation for a thermodynamic viewpoint on Einstein gravity. The central idea is the ubiquity of the Clausius relation, \(\delta Q=TdS\), where \(\delta Q\) is the matter-energy flux crossing a local Rindler horizon with an associated Unruh temperature \(T\), supplemented by entropy-area correspondence. Consequently, the equations of general relativity emerge as a thermodynamic equation of state in a natural way. Since then the original idea has been generalized in many ways and led to many new ideas and great insights [44; 45; 46; 47; 48; 49; 50]. For example, some higher-curvature gravity models have been shown to possess intriguing thermodynamic interpretation leading to an _emergent gravity paradigm_[51; 52; 53]. It is noteworthy that logarithmic corrections to entropy also arise due to thermal fluctuations around an equilibrium configuration without any need for an underlying quantum gravity theory [54]. However, an extension of Jacobson formalism relates thermal fluctuations to quantum geometry fluctuations [55]. 
In light of the realization that quantum gravity predicts corrections to classical thermodynamic variables, it is reasonable to assume that this thermodynamics which holds in both classical and quantum domains of spacetime geometry might be able to suggest the modifications to gravitational dynamics of Einstein equations. This is the very principle underlying the motivation of this work. However, the present work only investigates the modified thermodynamics including the phase transitions based on (non-)perturbative corrections to BH entropy [20; 21; 24], without going to compute corrections to spacetime geometry. We discuss the consequences of these non-perturbative (exponential) and perturbative (logarithmic) corrections to a five-dimensional (5D) Reissner-Nordstrom BH (RNBH). We address the question of thermodynamic (un-)stability via the information geometric approach. The paper is organized as follows. In Sec.II, we review the geometry of higher-dimensional RNBH and the modifications to BH entropy. Sec. III details the stability analysis of 5D RNBH based on modified heat capacity of a BH system. Sec. IV provides a discussion of Ruppeiner approach to thermodynamic geometry and hence we compute the associated curvature scalar for our system. The conclusion is drawn in Sec. V. ## II Conceptual aspects: higher-dimensional Reissner-Nordstrom geometry and corrections to entropy Theories beyond general relativity, in addition to many others, include a class of higher dimensional models of gravity that hold great scope from mathematical and physical point of view. The initiation is rooted in the ideas by Kaluza and Klein [56; 57] as a way to unify electromagnetism with gravity and currently occupies a special position in string theories [58]. The situation is akin to quantum field theory where one can chose an arbitrary field content beyond the existing boundaries of Standard Model, shedding light on many general features of quantum fields. The hope here is a D-dimensional extension of general relativity could lead to valuable insights into the theory and especially into one of its robust predictions, the BHs [59]. As is well known, the hallmarks of 4D BHs comprise spherical topology, dynamical stability, uniqueness, and satisfying a set of basic rules-the laws of BH mechanics. A growing understanding suggests that gravity offers much richer physics in D\(>4\) dimensions, as evinced by the discovery of dynamical instabilities in extended horizon geometries [60], and BHs endowed with non-spherical topologies not generally identified with uniqueness-a trait otherwise associated with their 4D counterparts [61]. A more fascinating result links higher dimensional BHs to fluid dynamics in the so-called _blackfold_ approach [62]. In addition, it has been shown that the behaviour of higher dimensional BH thermodynamics is affected in an energy-dependent background geometry [63]. The inclusion of extra dimensions in BHs dates back to Tangherlini [64], who formulated a D-dimensional solution for Schwarzschild and RNBH. The action is given by [65] \[S=-\frac{1}{16\pi G_{D}}\int d^{D}x\sqrt{-g}(R-F^{2}), \tag{1}\] where \(G_{D}\) is D-dimensional Newton's gravitational constant, \(g\) is the determinant of metric tensor \(g_{\mu\nu}\), \(R\) represents Ricci scalar, and \(F^{2}\) the electromagnetic Lagrangian, which yields a D-dimensional RN spacetime. 
The resultant metric, which represents a static and spherically symmetric spacetime, is given by \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{D-2}^{2}, \tag{2}\] where \(d\Omega_{D-2}\) is the metric on the unit \((D-2)\)-sphere. Here the metric function \(f(r)\) reads \[f(r)=1-\frac{2\mu}{r^{D-3}}+\frac{q^{2}}{r^{2(D-3)}}. \tag{3}\] The parameters \(\mu\) and \(q\) are constants related to the Arnowitt-Deser-Misner (ADM) mass \(M\) and electric charge \(Q\) of the BH via \[M=\left(\frac{D-2}{16\pi}W_{D-2}\right)\mu,\ Q=\left(\frac{\sqrt{2(D-2)(D-3)}}{8\pi}W_{D-2}\right)q, \tag{4}\] with \(W_{D}=2\pi^{\frac{D+1}{2}}/\Gamma\left(\frac{D+1}{2}\right)\). We deal with the D\(=5\) case, hence the above quantities read \[f(r)=1-\frac{2\mu}{r^{2}}+\frac{q^{2}}{r^{4}},\ \ M=\frac{3\pi}{4}\mu,\ \ Q=\frac{\sqrt{3\pi}}{4}q. \tag{5}\] Since there is a linear mapping between \((\mu,q)\) and \((M,Q)\), we can safely treat \(M\) and \(Q\) as our mass and charge throughout this work. For the non-extremal case \(M>Q\), the zeros of \(f(r)\) give two horizons located at \[r_{\pm}=\sqrt{M\pm\sqrt{M^{2}-Q^{2}}}, \tag{6}\] where \(r_{+}\) is the BH event horizon and \(r_{-}\) is the inner Cauchy horizon. The temperature is obtained from the metric function \(f(r)\) as \[T=\left(\tfrac{df(r)}{dr}\right)_{r=r_{+}}=\frac{4M}{r_{+}^{3}}-\frac{4Q^{2}}{r_{+}^{5}}, \tag{7}\] where we have dropped the constant factor \(1/4\pi\) for convenience. We substitute \(r_{+}\) from eq. (6) into the above equation to obtain \(T\) in terms of \(M\) and \(Q\). Hence we have \[T=\frac{4M}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}-\frac{4Q^{2}}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{5/2}}. \tag{8}\]

As regards the microscopic origin of BH entropy, which invokes a full theory of quantum gravity, the first computation of its kind was reported in the string theory framework [12]; interestingly, string theory itself rests on extra dimensions. As pointed out earlier, the modifications to the original Bekenstein-Hawking entropy become appreciable when the hole size approaches the quantum gravity scale; any underlying quantum gravity theory that does not reproduce the original entropy relation at leading order is surely incorrect. Quantum BH geometry is itself a model-dependent approach, and one would rather have to start from quantizing the gravitational action, which may prove a daunting task when accounting for non-perturbative corrections [66]. However, quantum corrections to the entropy provide another powerful way to study the end stage of a BH as its size decreases. The primary impetus comes from the Jacobson formalism [43], non-equilibrium thermodynamics [54], and the relation of thermal fluctuations to geometry [55]. The class of perturbative quantum corrections to BH entropy usually assumes the general form [14; 15; 33; 34] \[S_{p}=\alpha\ln\left(\frac{A}{4\ell_{p}^{2}}\right)+\frac{4\beta\ell_{p}^{2}}{A}+..., \tag{9}\] where \(A=4\pi r_{+}^{2}\) is the area of the event horizon, and \(\alpha\) and \(\beta\) are constants. The non-perturbative corrections are of the following form [20; 21; 24]: \[S_{np}=\eta e^{-A/4\ell_{p}^{2}}. \tag{10}\] The total BH entropy is the sum of the original entropy \(S_{0}=A/4\ell_{p}^{2}\), the perturbative and the non-perturbative terms, \[S_{BH}=S_{0}+S_{p}+S_{np}=\frac{A}{4\ell_{p}^{2}}+\alpha\ln\left(\frac{A}{4\ell_{p}^{2}}\right)+\frac{4\beta\ell_{p}^{2}}{A}+\eta e^{-A/4\ell_{p}^{2}}+... 
\tag{11}\] It is important to note here that the above functional form is valid for all 4D BHs, and the parameters \((\alpha,\beta,\eta)\) in the above equation signify the scale at which the corrections become relevant; they can be obtained from a quantum corrected action which yields the required 4D RNBH. For the ordinary 5D RNBH, the entropy is given by \[S_{0}=\tfrac{1}{2}\pi^{2}r_{+}^{3}, \tag{12}\] with the horizon area \(2\pi^{2}r_{+}^{3}\). Since the corrections apply to all BHs in general, we conjecture that a 5D RNBH also receives these corrections. Our focus here is to examine the effect of the exponential term given by [21] \[S_{exp} =\frac{1}{2}\pi^{2}r_{+}^{3}+\eta\exp\left(-\frac{1}{2}\pi^{2}r_{+}^{3}\right)\] \[=\frac{1}{2}\pi^{2}\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}+\eta e^{-\frac{1}{2}\pi^{2}\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}, \tag{13}\] and of the logarithmic term given by [54] \[S_{log} =S_{0}-\frac{1}{2}\alpha\log\left(S_{0}T^{2}\right)\] \[=\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}-\frac{1}{2}\alpha\log\Bigg{[}\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2} \tag{14}\] \[\times\left(\frac{4M}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}-\frac{4Q^{2}}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{5/2}}\right)^{2}\Bigg{]}, \tag{15}\] where the parameters \(\eta\) and \(\alpha\) characterize the extent of the exponential and logarithmic corrections, respectively. The range of \(\eta\) can be taken as large as is consistent with the original Bekenstein-Hawking contribution remaining dominant, i.e., the exponential terms are suppressed for bigger sizes. For \(\alpha\), the parameter range is usually taken as \(\alpha\in[0,1]\) [67]. We plot the entropy for exponential \((S_{exp})\) and logarithmic \((S_{log})\) corrections, respectively, in figures 1 and 2. We use these relations to discuss the thermodynamic stability and phase transitions of our BH system in the next section.

Figure 1 shows that the entropy \(S_{exp}\) does not vanish for the 5D RNBH as it evaporates to smaller sizes in the presence of non-perturbative corrections; this signals the onset of quantum fluctuations (scaled by \(\eta\)), while the entropy agrees with the Bekenstein-Hawking contribution for larger sizes. For unperturbed BHs (\(\eta=0\)), we recover the full Bekenstein-Hawking entropy for the 5D case. For logarithmic corrections, the entropy is plotted in figure 2. It is clear that, due to the log corrections, the BH possesses less entropy compared to the original one. It is also seen that, as the extremal limit (\(M=Q\)) is reached, there is a sudden rise in entropy; however, there is no singularity, just as is the case with exponential corrections. One can easily appreciate from figure 2 (a) that logarithmic corrections are universal for all BH sizes but contribute perturbatively. However, in the quantum regime, they show a significant impact, as seen in figure 2 (b). Note that the dashed boundary, related to a critical mass \(M_{0}\), separates the role of \(\alpha\) for the classical and quantum domains of the BH geometry. This would appropriately be treated as the point of the classical-to-quantum phase transition. \(M_{0}\) is the value of the BH mass such that the term inside the logarithm of eq. (14) is 1, i.e. whenever \[\left(\frac{4M}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}-\frac{4Q^{2}}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{5/2}}\right)^{2}=\frac{1}{\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}.\]

Figure 1: Entropy \(S_{exp}\) variation with mass \(M\) for a 5D RNBH with non-perturbative corrections. The correction parameter \(\eta\) greatly controls the entropy behavior as the BH shrinks to smaller sizes. 
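As a quick sanity check (not part of the original derivation), the condition above can be solved numerically for \(M_{0}\). The sketch below assumes standard SciPy root finding and uses the quantities exactly as displayed: \(r_{+}^{2}=M+\sqrt{M^{2}-Q^{2}}\) from eq. (6), the \(1/4\pi\)-stripped temperature of eq. (8), and \(S_{0}\sim r_{+}^{3}\) as it appears inside the logarithm of eq. (14).

```python
import numpy as np
from scipy.optimize import brentq

def r_plus_sq(M, Q):
    # r_+^2 = M + sqrt(M^2 - Q^2), eq. (6), with (M, Q) used in place of (mu, q)
    return M + np.sqrt(M**2 - Q**2)

def temperature(M, Q):
    # eq. (8): T = 4M/r_+^3 - 4Q^2/r_+^5 (constant factor 1/4*pi dropped, as in the text)
    r2 = r_plus_sq(M, Q)
    return 4.0*M/r2**1.5 - 4.0*Q**2/r2**2.5

def log_argument(M, Q):
    # term inside the logarithm of S_log as displayed: r_+^3 * T^2
    return r_plus_sq(M, Q)**1.5 * temperature(M, Q)**2

Q = 1.0
# M_0 solves r_+^3 * T^2 = 1; bracket the root slightly above the extremal limit M = Q
M0 = brentq(lambda M: log_argument(M, Q) - 1.0, Q + 1e-6, 5.0)
print(M0)   # ~ 1.0486, consistent with the value quoted in the text below
```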
Note that at the extremal limit, \(S_{exp}\) possesses a large (remnant) value and _does not_ diverge. Setting \(Q=1\), we numerically find the value \(M_{0}\approx 1.04861\). It is imperative to mention here that the usual picture of BH evaporation involves shrinkage of the whole BH horizon radius \(r_{+}\). However, as we work in a canonical ensemble paradigm, it is possible to treat \(Q\) as a constant parameter throughout the process, while the BH mass \(M\) dictates the size of our system. The caveat is that \(Q\) is extremely small, so as to help clearly differentiate the classical-quantum split as \(M\) decreases due to evaporation. Hence, following this logic, when \(M>Q\) our BH geometry is classical, and as \(M\to Q\) it possesses a quantum description. These two phases effectively coexist at the critical mass \(M_{0}\), which we indicated above for the logarithmic case. For the exponential case, we will define it from the heat capacity.

Figure 2: (a) Entropy \(S_{log}\) variation with mass \(M\) for logarithmic corrections, and (b) short-distance behaviour of \(S_{log}\) on log-log scale.

## III Thermodynamic stability and phase transition: the role of (non-) perturbative corrections

### A. Is 5D RNBH colder than its Schwarzschild counterpart?

We answer this question by considering the temperature \(T\) of a 5D RNBH. From eq. (8), one can determine the behavior of the temperature in terms of the BH mass \(M\) as the BH evaporates. Here it is safe to treat \(Q\) as a free parameter that remains fixed during this process, which means one assumes that horizon shrinkage is linked to \(M\) only. It is well known that BHs radiate via Hawking radiation at temperature \(T\). From figure 3, at first glance, it is evident that the temperature \(T\) of our BH (\(Q\neq 0\)) is less than that of its neutral \((Q=0)\) counterpart (Schwarzschild BH). Thus a charged BH is always colder than its neutral cousin. This in other words reflects the fact that a charged BH emits fewer neutral massless particles than an uncharged one. It is noteworthy that this result is well known for D\(=4\) spacetime dimensions [68], and interestingly it continues to hold even in the extra-dimensional case. Also note that \(T\) decays with increasing \(M\) for both the charged and uncharged cases due to competition between the two terms in eq. (8). For a large BH, that is, a non-extremal geometry \((M>Q)\), it is crucial to note that we did not invoke the effect of \(\eta\) and \(\alpha\) on \(T\). This can be argued on the basis of the simple realization from the AdS/CFT correspondence that all possible perturbative and non-perturbative corrections to the entropy can be expressed as functions of the original entropy and temperature [69; 70; 71; 72].

Figure 3: Temperature \(T\) vs BH mass \(M\): the role of the charge \(Q\) is to reduce the temperature of RNBHs compared to the Schwarzschild case.

### B. Remnant formation, phase structure and instabilities

A remnant is the leftover structure after a BH ceases its Hawking evaporation. Technically speaking, a remnant is a localized late-stage outcome of Hawking evaporation. This generally occurs in almost all quantum gravity and string theories where the spacetime structure approaches the Planck scale and is more suitably described by quantum geometry. This discussion is central to the BH information paradox [73]. Since one of the most intriguing predictions of string theories is the existence of extra dimensions, it is thus highly interesting to uncover the implications for the thermodynamic behaviour of the 5D RNBH. 
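A minimal numerical illustration of these two statements (a charged BH is colder than a neutral one, and \(T\) decreases with \(M\)) can be built from eq. (8) alone; this is only a sketch, with arbitrary sample values of \(M\) and \(Q\), and it uses the same conventions as above (the \(1/4\pi\) factor dropped, \((M,Q)\) standing in for \((\mu,q)\)).

```python
import numpy as np

def temperature(M, Q):
    # eq. (8): T = 4M/r_+^3 - 4Q^2/r_+^5 with r_+^2 = M + sqrt(M^2 - Q^2)
    r2 = M + np.sqrt(M**2 - Q**2)
    return 4.0*M/r2**1.5 - 4.0*Q**2/r2**2.5

M = np.linspace(1.5, 20.0, 50)
T_schw = temperature(M, 0.0)   # neutral (Schwarzschild-Tangherlini) case
T_rn   = temperature(M, 1.0)   # charged case, Q = 1

print(np.all(T_rn < T_schw))                                    # charged BH is colder
print(np.all(np.diff(T_schw) < 0), np.all(np.diff(T_rn) < 0))   # T decreases with M
```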
We now turn our attention to the study of thermodynamic stability and the conditions under which our system shows a phase transition. This is possible by studying the variation of different thermodynamic quantities or state functions in either the canonical or the grand canonical ensemble. Canonical ensemble theory assumes the charge \(Q\) to be a fixed parameter. Therefore the heat capacity, denoted by \(C_{Q}\), dictates the stability conditions for such a system. The positivity of \(C_{Q}\) guarantees a stable phase and vice versa, while a divergence signals a phase transition [74; 75]. In particular, a vanishing and a divergent heat capacity correspond to _first_ and _second order phase transitions_, respectively [76]. It has been argued, however, that the phase transition analysis is more plausible when utilizing the thermodynamic curvature rather than the heat capacity [77]. In general, \(C_{Q}\) is defined as \[C_{Q}=T\left(\frac{\partial S}{\partial T}\right)_{Q}=T\left(\frac{\partial S/\partial M}{\partial T/\partial M}\right)_{Q}. \tag{16}\]

#### Exponential corrections

For exponential corrections, we use eq. (13) to first compute \[\frac{\partial S_{exp}}{\partial M} =\frac{1}{4\sqrt{(M-Q)(M+Q)}}\left[3\pi^{2}e^{-\frac{1}{2}\pi^{2}\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}\right.\] \[\left.\times\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}\left(e^{\frac{1}{2}\pi^{2}\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}-\eta\right)\right], \tag{17}\] and \[\frac{\partial T}{\partial M}=\frac{6Q^{2}-2M\left(\sqrt{(M-Q)(M+Q)}+M\right)}{\sqrt{(M-Q)(M+Q)}\left(\sqrt{(M-Q)(M+Q)}+M\right)^{5/2}}. \tag{18}\] Substituting eqs. (17) and (18) into eq. (16) then gives \[C_{Q,exp} =\frac{1}{2\left(3Q^{2}-M\left(\sqrt{(M-Q)(M+Q)}+M\right)\right)}\left[3\pi^{2}e^{-\frac{1}{2}\pi^{2}\left(\sqrt{(M-Q)(M+Q)}+M\right)^{3/2}}\times\left(\sqrt{(M-Q)(M+Q)}+M\right)^{3/2}\right.\] \[\left.\times\left(M\left(\sqrt{(M-Q)(M+Q)}+M\right)-Q^{2}\right)\left(e^{\frac{1}{2}\pi^{2}\left(\sqrt{M^{2}-Q^{2}}+M\right)^{3/2}}-\eta\right)\right], \tag{19}\] which evidently incorporates the non-perturbative corrections parameterized by \(\eta\). By plotting \(C_{Q,exp}\) in figure 4, we perform a graphical analysis to infer what happens to the thermodynamic behavior of our BH as its size shrinks. The first thing we observe is that, in both the uncorrected (\(\eta=0\)) and corrected (\(\eta\neq 0\)) cases, \(C_{Q,exp}\) stays negative for larger BH sizes and suffers an infinite discontinuity as the BH shrinks further in size. At this point, it turns from negative to positive, thereby manifesting an unstable-to-stable phase transition. In particular, this transition represents a _second order phase transition_. This is somewhat peculiar to charged BHs, conceptualized by Davies [74], and is purely of geometric origin due to the presence of horizons in the spacetime. Once we enter the quantum domain, \(C_{Q,exp}\) then again turns negative through \(C_{Q}=0\) in the presence of \(\eta\), and it tends to be more negative (unstable) for larger \(\eta\). It finally goes to zero at the extremal limit (\(M=Q\)). This negativity of \(C_{Q}\) only occurs in the presence of \(\eta\), and is absent for the \(\eta=0\) case. Hence, \(\eta\) lends different behaviours to the end stages of our BH as it approaches the extremal geometry. \(C_{Q}=0\) signifies a _first order phase transition_. We thus conclude that, with \(\eta\), our BH always remains thermodynamically in an unstable phase for larger sizes and attains stability in some region before again becoming unstable. 
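This structure can be cross-checked directly from eq. (16) without transcribing the closed form: the sketch below (an illustration only, using the same conventions as above) builds \(C_{Q,exp}\) symbolically from eqs. (8), (12) and (13). The zeros of \(\partial S_{exp}/\partial M\) sit where \(e^{S_{0}}=\eta\), and the Davies-type divergence sits where \(\partial T/\partial M=0\), i.e. at \(M=3Q/\sqrt{5}\) (obtained by setting the numerator of eq. (18) to zero).

```python
import sympy as sp

M, Q, eta = sp.symbols('M Q eta', positive=True)

r2 = M + sp.sqrt(M**2 - Q**2)                          # r_+^2 from eq. (6)
S0 = sp.pi**2 * r2**sp.Rational(3, 2) / 2              # eq. (12)
S_exp = S0 + eta*sp.exp(-S0)                           # eq. (13)
T = 4*M/r2**sp.Rational(3, 2) - 4*Q**2/r2**sp.Rational(5, 2)   # eq. (8)

# eq. (16): C_Q = T (dS/dM) / (dT/dM) at fixed Q
C_Q_exp = T * sp.diff(S_exp, M) / sp.diff(T, M)

# classical regime (large M): C_Q,exp is negative, e.g. M = 2, Q = 1, eta = 0.1
print(C_Q_exp.subs({M: 2, Q: 1, eta: sp.Rational(1, 10)}).evalf())

# the second-order (Davies) divergence sits where dT/dM = 0, i.e. M = 3Q/sqrt(5)
print(sp.diff(T, M).subs({M: 3/sp.sqrt(5), Q: 1}).evalf())      # ~ 0
```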
So for classical geometries our BH is unstable, and it undergoes stable/unstable phase transitions in the quantum regime. Roots of \(C_{Q}\) indicate what are generally known as bound points, which separate physically acceptable positive-temperature solutions from negative (unphysical) ones [78]. However, in our case, in addition to temperature considerations, they aid in identifying a critical mass \(M_{c}\), corresponding to the first root of \(C_{Q,exp}\), which marks the onset of phase transitions. From the factor \(\left(e^{S_{0}}-\eta\right)\) in eqs. (17) and (19), this root satisfies \(e^{S_{0}(M_{c})}=\eta\), i.e. \[M_{c}=\frac{1}{2}\left[\left(\frac{2\ln\eta}{\pi^{2}}\right)^{2/3}+Q^{2}\left(\frac{2\ln\eta}{\pi^{2}}\right)^{-2/3}\right], \tag{20}\] which obviously depends on \(\eta\) and on the charge \(Q\) of the BH; this phase transition is absent in the original 5D RNBH and has its sole origin in the non-perturbative corrections to the entropy. Since non-perturbative corrections become relevant only at small (quantum) scales, it is appropriate to treat this as a large (classical) to small (quantum) BH phase transition, quite ubiquitous in BHs [79].

Figure 4: (a) Impact of \(\eta\): heat capacity \(C_{Q,exp}\) (linear scale) vs BH mass \(M\) (log scale), and (b) impact of the charge \(Q\) in the presence of \(\eta\).

The second zero of the heat capacity characterizes a BH that does not exchange energy with its surroundings. This means that the BH ceases evaporation at this stage and ends up as a black remnant. It is hard to ascertain from \(C_{Q,exp}\) alone what happens beyond this point, since our study only extends up to the extremal limit. As we will see later, thermodynamic geometry conveys a much richer structure than the heat capacity at the extremal limit. This situation finds its parallel in the role of \(Q\), as depicted in figure 4 (b): the larger the \(Q\), the larger the \(M\) at which evaporation stops; \(Q\) merely hastens the second-order phase transition. It is noteworthy from the right half of the plot in figure 4 (a) that, no matter how big the \(\eta\) parameter, all curves overlap and become indistinguishable, which shows that non-perturbative corrections play no role for macroscopic BHs. As a side remark, we juxtapose our thermodynamic observation with the gravitational instabilities of five-dimensional Reissner-Nordstrom BHs. It has been extensively argued that higher dimensional Reissner-Nordstrom BHs in D\(<7\) dimensions generally remain gravitationally stable against large values of \(Q\) [80; 81]. However, from the thermodynamic point of view, our BH shows an unstable phase for larger sizes as well as smaller sizes. Our findings conform to the arguments presented in Refs. [80; 81; 82] for sizes on either side of the discontinuity in \(C_{Q,exp}\), as seen from figure 4 (b); however, they do not do so for larger radii. It would be interesting to take this correlation further, which would perhaps span a separate work.

#### Logarithmic corrections

In this case, the heat capacity is given by \[C_{Q,log} =-\frac{\left(M\left(\sqrt{(M-Q)(M+Q)}+M\right)-Q^{2}\right)}{2\sqrt{(M-Q)(M+Q)}\left(M\left(\sqrt{(M-Q)(M+Q)}+M\right)-3Q^{2}\right)}\left[6M^{2}\sqrt{\sqrt{M^{2}-Q^{2}}+M}-4\alpha M-6Q^{2}\right.\] \[\left.\times\left(\sqrt{\sqrt{M^{2}-Q^{2}}+M}\right)+3\alpha\sqrt{(M-Q)(M+Q)}+6M\sqrt{(M-Q)(M+Q)}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\right],\] which is plotted in figure 5. 
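Both heat capacities can be probed numerically from eq. (16) alone; the sketch below (an illustration with arbitrary parameter values, not taken from the original figures) scans \(C_{Q,log}\) for sign changes, which flag either zeros of \(C_{Q}\) or the Davies-type divergence at \(M=3Q/\sqrt{5}\), and then cross-checks the critical mass of eq. (20) by verifying that \(S_{0}(M_{c})=\ln\eta\). Note that eq. (20) only yields a root above the extremal limit when \(\ln\eta\) exceeds the extremal value of \(S_{0}\), which is why the illustrative \(\eta\) below is large.

```python
import numpy as np

Q, alpha, eta = 1.0, 0.5, 2.0e3          # illustrative parameter choices only

def T(M):                                # eq. (8), with the 1/4*pi factor dropped
    r2 = M + np.sqrt(M**2 - Q**2)
    return 4*M/r2**1.5 - 4*Q**2/r2**2.5

def S_log(M):                            # eq. (14), with S_0 ~ r_+^3 as displayed there
    r2 = M + np.sqrt(M**2 - Q**2)
    return r2**1.5 - 0.5*alpha*np.log(r2**1.5 * T(M)**2)

def C_Q(M, S, h=1e-6):                   # eq. (16) with numerical derivatives
    dS = (S(M + h) - S(M - h)) / (2*h)
    dT = (T(M + h) - T(M - h)) / (2*h)
    return T(M)*dS/dT

masses = np.linspace(1.001, 3.0, 600)
C = np.array([C_Q(m, S_log) for m in masses])
# sign flips mark zeros of C_Q or the divergence near M = 3Q/sqrt(5) ~ 1.342
print(masses[np.where(np.diff(np.sign(C)) != 0)])

# cross-check of eq. (20): at M_c the exponential factor exp(S_0) equals eta
u   = (2*np.log(eta)/np.pi**2)**(2/3)            # r_+^2 at the critical point
M_c = 0.5*(u + Q**2/u)
r2  = M_c + np.sqrt(M_c**2 - Q**2)
print(M_c, 0.5*np.pi**2*r2**1.5 - np.log(eta))   # second number should be ~ 0
```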
We observe from figure 5 that, in all cases, our BH possesses a negative heat capacity \(C_{Q,log}\) (unstable) for larger sizes, and a particular dependence on \(\alpha\) is only manifested as one approaches the extremal limit (\(M=Q\)). The uncorrected case \(\alpha=0\) (blue line) approaches zero, and hence the BH remains in a stable phase until the remnant forms at \(M=Q\). For \(\alpha\neq 0\) with fixed \(Q\), shown in figure 5 (a), the infinite discontinuity, where the system turns from the unstable to the stable phase and which signifies a second order phase transition, occurs at the same value of \(M\) for all cases. Hence, in general, our BH is unstable and becomes stable before ending up as a remnant at \(M=Q\). Figure 5 (b) is a close-up view of figure 5 (a); one can see that for larger values of \(\alpha\) the heat capacity has an increasing trend before the infinite discontinuity, i.e. it tends to make the BH more stable. For positive \(C_{Q,log}\), after the discontinuity, larger \(\alpha\) lowers the heat capacity. Hence it seems that thermal fluctuations, embodied in \(\alpha\), tend to stabilize the BH for large sizes and destabilize it for smaller sizes. The underlying reason may be that, for smaller sizes, thermal fluctuations in the presence of \(\alpha\) make the geometry unstable. The infinite discontinuity point is, however, shifted towards higher \(M\) for different values of \(Q\), as shown in figure 5 (c), which signifies the competition between \(M\) and \(Q\). Note that, unlike the exponential case, the logarithmic modification does not produce a critical mass in \(C_{Q,log}\) that would indicate a large-to-small BH phase transition. Rather, \(C_{Q,log}\) possesses a zero only at \(M=Q\), which however represents a remnant. In that sense, the critical mass would correspond to the magnitude of \(Q\).

Figure 5: Heat capacity \(C_{Q,log}\) variation with mass \(M\) for logarithmic corrections: (a) impact of \(\alpha\) for fixed \(Q\), (b) zoomed-in view of (a), and (c) impact of \(Q\) for a fixed \(\alpha\).

## IV Thermodynamic Ruppeiner Geometry

Geometric ideas, as enshrined in thermodynamic geometry, have tremendously advanced our understanding of the thermodynamic structure of black holes. A scalar curvature (an invariant) defined in this parameter space helps us gain further insight into the phase transitions and microscopic structure of black holes. The ideas have been proposed in the context of thermal fluctuation theory, which leads to thermodynamic Riemannian geometry [83]. These so-called information-geometric approaches are expected to potentially provide lessons about the microscopic degrees of freedom of BHs [77]. To put it simply, if a BH has an associated thermodynamic behaviour just like ordinary gases or fluids, there must be underlying _micromolecules_ with typical interaction phenomena. We are fortunate that information geometry attempts to furnish a deep insight into this microstructure. The first construction of this kind was formulated by Weinhold [84; 85], where a metric is defined on the space of equilibrium states with components given by the Hessian of the internal energy. The metric is therefore given by \[g_{\mu\nu}^{W}=\partial_{\mu}\partial_{\nu}M(S,N^{i}), \tag{21}\] where \(M\) is the internal energy (in geometrized units \(c=1\)), \(S\) is the system's entropy, and the \(N^{i}\) constitute all other extensive parameters of the system, such as volume, charge, etc. The indices \(\mu,\nu=0,1,2,..\) run over the different extensive parameters. This construction gives the following line element \[ds_{W}^{2}=g_{\mu\nu}^{W}dx^{\mu}dx^{\nu}, \tag{22}\] from which one can define the curvature scalar (a Gaussian curvature). 
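As an aside (not part of the original text), the Weinhold construction in eq. (21) can be made concrete for the uncorrected 5D geometry: inverting eq. (12) gives \(r_{+}(S)\), and \(f(r_{+})=0\) gives \(M(S,Q)\), after which the Hessian is a short SymPy computation. This is only an illustrative sketch under the paper's identification of \((M,Q)\) with \((\mu,q)\) and with the uncorrected Bekenstein-Hawking entropy.

```python
import sympy as sp

S, Q = sp.symbols('S Q', positive=True)

# invert eq. (12): S = (pi^2/2) r_+^3  =>  r_+ = (2 S / pi^2)^(1/3)
r_plus = (2*S/sp.pi**2)**sp.Rational(1, 3)

# f(r_+) = 0 with f(r) = 1 - 2M/r^2 + Q^2/r^4  =>  M = (r_+^4 + Q^2) / (2 r_+^2)
M = (r_plus**4 + Q**2) / (2*r_plus**2)

# Weinhold metric, eq. (21): Hessian of the internal energy M(S, Q)
x = (S, Q)
gW = sp.Matrix(2, 2, lambda i, j: sp.simplify(sp.diff(M, x[i], x[j])))
sp.pprint(gW)
```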
Inspired by this, Ruppeiner [83; 86] introduced the entropy \(S\) in place of \(M\) and derived the corresponding line element, and it was found that this construction provides information about phase transitions. Since then there have been many attempts to extend this information-geometric approach to BH thermodynamics. A Legendre-invariant metric due to Quevedo [87; 88] attempted to resolve some of the issues surrounding the Weinhold/Ruppeiner formalism, while a more recent addition to this list is the Hendi-Panahiyan-Eslam Panah-Momennia (HPEM) metric [89]. Here, we apply the Ruppeiner formalism to our BH system as it evaporates to smaller sizes and attempt to reveal the underlying transformation as the hole shrinks to quantum scales. Previously, it has been found that all higher-dimensional variants of RNBHs manifest a flat Ruppeiner geometry (zero curvature), thereby indicating ideal-gas behaviour [90]. However, here we show that this is no longer the case when the hole size approaches the quantum regime dominated by perturbative or non-perturbative quantum corrections. The curvature scalar diverges in the exponential case and indicates a phase transition at smaller sizes, which coincides with the zero of \(C_{Q,exp}\) (at the extremal limit) reported earlier in Section III.

We begin from the well-known Boltzmann entropy relation \[S=k_{B}\ln\Omega, \tag{23}\] where \(k_{B}\) is the Boltzmann constant and \(\Omega\) denotes the number of microstates of the system. The inversion of \(\Omega\), \[\Omega=\exp{\left(\frac{S}{k_{B}}\right)}, \tag{24}\] acts as the starting point of thermodynamic fluctuation theory, from which the Ruppeiner approach emerges. Consider a set of parameters \(x^{0}\) and \(x^{1}\) which characterize a thermodynamic system (here the BH). The probability of finding this system in the intervals \([x^{0},x^{0}+dx^{0}]\) and \([x^{1},x^{1}+dx^{1}]\) is given by \[P(x^{0},x^{1})dx^{0}dx^{1}=\mathcal{A}\Omega(x^{0},x^{1})dx^{0}dx^{1}, \tag{25}\] where \(\mathcal{A}\) is a normalization constant. Upon using eq. (24), we can write \[P(x^{0},x^{1})\propto\exp{\left(\frac{S}{k_{B}}\right)}, \tag{26}\] and \[S(x^{0},x^{1})=S_{bh}(x^{0},x^{1})+S_{E}(x^{0},x^{1}), \tag{27}\] where \(S_{bh}\) is the BH entropy and \(S_{E}\) the environment entropy, such that \(S_{bh}<S_{E}\sim S\). For a small change in entropy around an equilibrium point \(x^{\mu}_{0}\) (where \(\mu,\nu=0,1\)), we can write the total entropy by expanding it around the equilibrium, \[S =S_{0}+\frac{\partial S_{bh}}{\partial x^{\mu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{bh}+\frac{\partial S_{E}}{\partial x^{\mu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{E}\] \[\quad+\frac{1}{2}\frac{\partial^{2}S_{bh}}{\partial x^{\mu}\partial x^{\nu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{bh}\Delta x^{\nu}_{bh}+\frac{1}{2}\frac{\partial^{2}S_{E}}{\partial x^{\mu}\partial x^{\nu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{E}\Delta x^{\nu}_{E}+....,\] where \(S_{0}\) is the equilibrium entropy at \(x^{\mu}_{0}\). 
Now, if one assumes a closed system where the extensive parameters of the BH and of the environment, \(x^{\mu}_{bh}\) and \(x^{\mu}_{E}\) respectively, have a conserved additive nature, such that \(x^{\mu}_{bh}+x^{\mu}_{E}=x^{\mu}_{total}=constant\), then we can write \[\frac{\partial S_{bh}}{\partial x^{\mu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{bh}=-\frac{\partial S_{E}}{\partial x^{\mu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{E}. \tag{28}\] This leads us to \[\Delta S=\frac{1}{2}\frac{\partial^{2}S_{bh}}{\partial x^{\mu}\partial x^{\nu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{bh}\Delta x^{\nu}_{bh}+\frac{1}{2}\frac{\partial^{2}S_{E}}{\partial x^{\mu}\partial x^{\nu}}\bigg{|}_{x^{\mu}=x^{\mu}_{0}}\Delta x^{\mu}_{E}\Delta x^{\nu}_{E}. \tag{29}\] As \(S_{E}\sim S\), the second term in eq. (29) is very small and can be ignored, which leaves behind only the BH system, with the probability given by \[P(x^{0},x^{1})\propto\exp{\left(-\frac{1}{2}\Delta l^{2}\right)}, \tag{30}\] where \(\Delta l^{2}\) is given by \[\Delta l^{2}=-\frac{1}{k_{B}}\frac{\partial^{2}S_{bh}}{\partial x^{\mu}\partial x^{\nu}}\Delta x^{\mu}\Delta x^{\nu}. \tag{31}\] If we set \(k_{B}=1\), we get \[\Delta l^{2}=g_{\mu\nu}\Delta x^{\mu}\Delta x^{\nu}, \tag{32}\] where \[g_{\mu\nu}=-\frac{\partial^{2}S_{bh}}{\partial x^{\mu}\partial x^{\nu}}. \tag{33}\] In eq. (32), \(\Delta l^{2}\) is a dimensionless, positive definite, invariant quantity, since the probability is a scalar quantity. The above line element closely resembles the one in Einstein gravity, and is conventionally interpreted as the thermodynamic length between two equilibrium fluctuation states: _thermodynamic states are further apart if the fluctuation probability is less_ [77]. This is in line with the familiar Le Chatelier's principle, which ensures local thermodynamic stability. The corresponding metric, after dropping the subscript \(bh\), reads \[g_{\mu\nu}=-\frac{\partial^{2}S}{\partial x^{\mu}\partial x^{\nu}}, \tag{34}\] which is the famous Ruppeiner metric. It is possible to define a curvature scalar for the above line element, similar to what one does in Riemannian geometry. For that matter, consider the Christoffel symbols \[\Gamma^{\sigma}_{\mu\nu}=\frac{1}{2}g^{\sigma\rho}\left(\partial_{\nu}g_{\rho\mu}+\partial_{\mu}g_{\rho\nu}-\partial_{\rho}g_{\mu\nu}\right), \tag{35}\] and the Riemann tensor \[R^{\sigma}_{\rho\mu\nu}=\partial_{\nu}\Gamma^{\sigma}_{\rho\mu}-\partial_{\mu}\Gamma^{\sigma}_{\rho\nu}+\Gamma^{\delta}_{\rho\mu}\Gamma^{\sigma}_{\delta\nu}-\Gamma^{\delta}_{\rho\nu}\Gamma^{\sigma}_{\delta\mu}, \tag{36}\] from which we define the Ricci tensor and scalar as \[R_{\mu\nu}=R^{\sigma}_{\mu\sigma\nu},\;\;R=g^{\mu\nu}R_{\mu\nu}. \tag{37}\] Applying the above method, one can define the curvature scalar for the Ruppeiner geometry. It turns out that, for a 2-dimensional space with a non-diagonal \(g_{\mu\nu}\), the Ricci curvature scalar reads [91] \[R=-\frac{1}{\sqrt{g}}\left[\frac{\partial}{\partial x^{0}}\left(\frac{g_{01}}{g_{00}\sqrt{g}}\frac{\partial g_{00}}{\partial x^{1}}-\frac{1}{\sqrt{g}}\frac{\partial g_{11}}{\partial x^{0}}\right)\right. \tag{38}\] \[\left.+\frac{\partial}{\partial x^{1}}\left(\frac{2}{\sqrt{g}}\frac{\partial g_{01}}{\partial x^{0}}-\frac{1}{\sqrt{g}}\frac{\partial g_{00}}{\partial x^{1}}-\frac{g_{01}}{g_{00}\sqrt{g}}\frac{\partial g_{00}}{\partial x^{0}}\right)\right], \tag{39}\] where \(g=\det g_{\mu\nu}=g_{00}g_{11}-g_{01}^{2}\). 
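The contraction in eqs. (35)-(37) is straightforward to automate. The sketch below (an illustration, not taken from the original work) computes the scalar curvature of an arbitrary two-dimensional metric with SymPy, using the standard sign convention for which the unit 2-sphere gives \(R=2\); overall sign conventions vary in the thermodynamic-geometry literature, and the zeros and divergences of \(R\) are convention-independent. The same routine can then be fed the Hessian metric of eq. (34) built from \(S_{exp}\) or \(S_{log}\), and it can be cross-checked against the closed-form 2D expression in eqs. (38)-(39).

```python
import sympy as sp

def ricci_scalar_2d(g, x):
    """Scalar curvature of a 2D metric g (SymPy 2x2 Matrix) in coordinates x = (x0, x1)."""
    ginv = g.inv()
    dim = 2
    # Christoffel symbols Gamma[s][m][n], eq. (35)
    Gamma = [[[0]*dim for _ in range(dim)] for _ in range(dim)]
    for s in range(dim):
        for m in range(dim):
            for n in range(dim):
                Gamma[s][m][n] = sum(
                    ginv[s, r]*(sp.diff(g[r, m], x[n]) + sp.diff(g[r, n], x[m])
                                - sp.diff(g[m, n], x[r]))
                    for r in range(dim))/2
    # Ricci tensor R_{mn} from the standard contraction of the Riemann tensor
    def ricci(m, n):
        return sum(sp.diff(Gamma[s][m][n], x[s]) - sp.diff(Gamma[s][m][s], x[n])
                   + sum(Gamma[s][s][l]*Gamma[l][m][n] - Gamma[s][n][l]*Gamma[l][m][s]
                         for l in range(dim))
                   for s in range(dim))
    R = sum(ginv[m, n]*ricci(m, n) for m in range(dim) for n in range(dim))
    return sp.simplify(R)

# sanity check: unit 2-sphere, ds^2 = dtheta^2 + sin^2(theta) dphi^2, gives R = 2
theta, phi = sp.symbols('theta phi')
print(ricci_scalar_2d(sp.diag(1, sp.sin(theta)**2), (theta, phi)))
```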
The Ruppeiner metric is \[g_{\mu\nu}=-\partial_{\mu}\partial_{\nu}S(M,N^{i}), \tag{40}\] where \(M\) is the BH mass and \(N^{i}\) is the set of other extensive parameters. Naturally, for our case we choose the charge \(Q\) as the second extensive variable. The line element therefore reads \[ds_{R}^{2}=g_{MM}dM^{2}+2g_{MQ}dMdQ+g_{QQ}dQ^{2}, \tag{41}\] with the metric \[g_{\mu\nu}=\left(\begin{array}{cc}g_{MM}&g_{MQ}\\ \\ g_{QM}&g_{QQ}\end{array}\right).\] The components of \(g_{\mu\nu}\) are given by \[g_{MM}=-\frac{\partial}{\partial M}\left(\frac{\partial S}{\partial M}\right),\;g_{MQ}=-\frac{\partial}{\partial M}\left(\frac{\partial S}{\partial Q}\right), \tag{42}\] \[g_{QM}=-\frac{\partial}{\partial Q}\left(\frac{\partial S}{\partial M}\right),\;\text{and}\;g_{QQ}=-\frac{\partial}{\partial Q}\left(\frac{\partial S}{\partial Q}\right), \tag{43}\] and are detailed in the Appendix. The curvature is \[R=-\frac{1}{\sqrt{g}}\left[\frac{\partial}{\partial M}\left(\frac{g_{MQ}}{g_{MM}\sqrt{g}}\frac{\partial g_{MM}}{\partial Q}-\frac{1}{\sqrt{g}}\frac{\partial g_{QQ}}{\partial M}\right)\right. \tag{44}\] \[\left.+\frac{\partial}{\partial Q}\left(\frac{2}{\sqrt{g}}\frac{\partial g_{MQ}}{\partial M}-\frac{1}{\sqrt{g}}\frac{\partial g_{MM}}{\partial Q}-\frac{g_{MQ}}{g_{MM}\sqrt{g}}\frac{\partial g_{MM}}{\partial M}\right)\right], \tag{45}\] where \(g=\det g_{\mu\nu}=g_{MM}g_{QQ}-g_{MQ}^{2}\) (see Appendix).

Before computing the Ruppeiner curvature, it is imperative to emphasize its interpretation. A zero Ruppeiner curvature has been associated with non-interacting BH molecules, much like an ideal gas. A non-zero Ruppeiner curvature depicts non-vanishing interactions between BH molecules. A negative curvature indicates attractive interactions and vice versa [77]. If that is the case, a negative curvature would allude to the existence of a stable system. Since the Ruppeiner curvature signifies interactions between BH constituents, one might expect a BH system to always have a large curvature, since it is a collapsed object of incredible density. However, following Ruppeiner's reasoning [77], it seems convincing to assume that the gravitational degrees of freedom responsible for holding up the system elements might have a non-statistical nature, since the gravitating particles have collapsed into the central singularity. The thermodynamic curvature merely reflects the interactions (perhaps non-gravitational) among the fluctuating thermodynamic constituents at the BH surface, originating from the underlying gravity-bound system. In that case, associating an ideal gas-like behaviour with zero curvature makes perfect sense.

### Exponential corrections

In this section, we discuss the thermodynamic geometry of the 5D RNBH in the presence of exponential corrections (\(\eta\)). Since the final expression for the Ruppeiner curvature \(R_{exp}\) turns out to be too long, henceforth we only carry out a graphical analysis by plotting \(R_{exp}\) for a range of parameters. It is possible to plot \(R_{exp}\) as a function of the mass \(M\) while keeping \(Q\) fixed; this is justified since, in the non-extremal case, \(M\) exceeds \(Q\), which means the horizon radius is governed mostly by \(M\) rather than \(Q\). We quantify the roles of \(\eta\) and \(Q\) separately. To this end, we present a 2d plot of \(R_{exp}\) for two different values of \(Q\) in figure 6; figure 6 (a) is for \(Q=1\) and figure 6 (b) for the case \(Q=2\). In both cases one can see that, for large sizes with bigger \(M\), \(R_{exp}\) is zero, and it changes radically as \(M\to Q\), the quantum domain. 
Thus our BH manifests a flat geometry for larger sizes and becomes curved (negatively or positively) while approaching the extremal limit. This, in other words, indicates an ideal-gas-like behaviour for larger sizes, while manifesting multiple phase transitions for smaller (quantum) sizes. At \(M=Q\), \(R_{exp}\rightarrow\pm\infty\) depending on the choice of \(\eta\) and \(Q\), signalling a phase transition. First consider the case \(Q=1\). As shown in figure 6 (a), for \(\eta=1\) (black curve) [see also figure 6 (c) for a clear view], \(R_{exp}\) diverges to \(-\infty\), whilst the rest of the cases show a positive divergence. Hence we conclude that for \(\eta=1\) our BH ends up in a stable phase, and in an unstable phase for the rest of the cases, where \(R_{exp}\rightarrow+\infty\). The \(\eta=1\) case possesses only two phase transitions, whereas the rest of the cases have more than two. The first phase transition is where \(R_{exp}\) turns from zero to negative, and the second one where it passes through zero to positive values, before diverging at \(M=Q\). So our BH changes from an ideal to a stable phase, then again (momentarily) to an ideal phase, before becoming unstable. Hence exponential corrections lend a region of stability to the BH before the final phase transition at \(M=Q\). Beyond \(M=Q\), \(R_{exp}\) becomes imaginary and cannot tell us anything about the system. For the case \(Q=2\), we find that all curves behave like the \(\eta=Q=1\) case, which means that the BH ends up in a stable phase. To better appreciate this scenario, we present our results using density plots in figures 7 and 8. One can see that they exactly corroborate the 2d analysis above. To be precise, we can see the divergence in \(R_{exp}\), beyond which it becomes imaginary. The original unmodified case corresponding to \(\eta=0\) [figure 7(a)] shows a flat curvature for both values of \(Q\). It is also important to mention here that the divergence points in \(R_{exp}\) match at least one root of \(C_{Q,exp}\), i.e. the extremal limit \(M=Q\). We also present a parameter space for \(R_{exp}\) in figure 9 with respect to \(M\) and \(\eta\). For classical geometries, \(R_{exp}\) is flat, as seen from figure 9 (b), whereas the situation changes in the quantum regime, where multiple phase transitions occur [see figure 9 (a)]. It has been previously shown in Ref. [90] that 5D RNBHs possess a flat Ruppeiner geometry for all sizes. Our findings show that this holds true only for classical geometries with the original Bekenstein-Hawking entropy. Once the quantum-gravity-inspired entropy is invoked, these results no longer hold for smaller BH sizes; most importantly, we rather find multiple phase transitions on small scales.

### Logarithmic corrections

It would be interesting to check the similar physics of the Ruppeiner geometry for logarithmic modifications to the BH entropy, which, as we said earlier, are universal in nature though more pronounced on smaller scales. Once again, as the expression for the curvature turns out to be too long, a graphical analysis suffices for what we need to unveil. Similar to the exponential case, the Ruppeiner geometry is flat, \(R_{log}=0\), for larger sizes and becomes positively curved (unstable) as \(M\) decreases. There is a positive divergence, \(R_{log}\rightarrow+\infty\), indicating a phase transition, before it goes to zero again (ideal phase). The occurrence of the divergence in \(R_{log}\) depends on the magnitude of the logarithmic corrections (\(\alpha\)), with the divergence point shifting towards higher \(M\) as \(\alpha\) increases.

Figure 6: Impact of \(\eta\) on the Ruppeiner curvature scalar \(R_{exp}\) for a fixed \(Q\). \(R_{exp}\) is zero for bigger hole sizes until \(M=Q\), where it diverges and shows a phase transition. 
Unlike the exponential case, there is no correspondence between the divergences in \(R_{log}\) and the divergences or zeros of the heat capacity. We conclude, from the Ruppeiner geometry analysis, that thermal fluctuations tend to make the 5D RNBH unstable in the quantum regime before the extremal limit.

Figure 10: Impact of \(\alpha\) on the Ruppeiner curvature scalar \(R_{log}\).

Figure 7: Density plot of \(R_{exp}\) showing different regions of stability/instability. Beyond \(M=Q\), \(R_{exp}\) is imaginary.

Figure 8: Density plot of \(R_{exp}\) showing different regions of stability/instability for the case \(Q=2\).

Figure 9: Parameter space of \(R_{exp}\) with the interplay of \(M\) and \(\eta\) for fixed \(Q\): (a) large BH size view, and (b) small BH size view. Multiple phase transitions can be seen here, which arise out of \(\eta\).

## V Conclusion

The semi-classical formulation of thermodynamics for BHs rests on the Bekenstein-Hawking entropy, which is inadequate to provide any clues about the microscopic origin of the thermodynamics. Since at present we have no sensible theory of quantum gravity, attempts to address this question of microstructure have ushered us in many directions. Thermodynamic Ruppeiner geometry is a robust candidate to investigate the microstructure of BHs. A curvature defined on the thermodynamic state space of the system tells us about the underlying interactions among the BH constituents. In particular, a positive Ruppeiner curvature shows an unstable system and vice versa, whereas a zero curvature indicates an ideal gas-like state. Here, we used Ruppeiner geometry to uncover the thermodynamic behaviour of an evaporating 5D RNBH in both the classical and quantum domains, when its entropy is modified by non-perturbative (exponential) and perturbative (logarithmic) contributions. Our findings suggest that our BH, under the influence of the corrections, may undergo several phase transitions as it approaches the extremal limit, where mass and charge balance each other. For exponential corrections, characterized by \(\eta\), whether the system is stable or unstable in the region near and at the extremal point depends solely on the choice of \(Q\) and \(\eta\). The first phase transition occurs around a critical mass scale which separates the ideal phase from a stable phase (the region where \(R_{exp}\) is negative). \(R_{exp}\) finally blows up positively (going via zero) or negatively at the extremal limit. For logarithmic modifications, quantified by \(\alpha\), the Ruppeiner curvature \(R_{log}\) diverges positively before the extremal limit while becoming zero at the extremal limit. The divergence point is shifted to larger sizes as \(\alpha\) increases. We emphasize here that, in the absence of quantum gravity modifications, the BH manifests zero curvature (Ruppeiner flat), completely agreeing with previous results that show a flat Ruppeiner geometry for RNBHs in all higher spacetime dimensions [90].

## Appendix A Computing \(g_{\mu\nu}\) for Ruppeiner curvature

### Exponential corrections

The components of the metric \(g_{\mu\nu}\) are given by 
\[g_{MM} =\frac{-3\pi^{2}}{16\left(M^{2}-Q^{2}\right)^{3/2}}\Bigg{[}2\sqrt{M+ \sqrt{M^{2}-Q^{2}}}\Bigg{\{}-3Q^{2}+M\left(M+\sqrt{M^{2}-Q^{2}}\right)\Bigg{\}}+ \eta e^{-\frac{\pi^{2}}{2}\left(M+\sqrt{M^{2}-Q^{2}}\right)^{3/2}}\] \[\times\Bigg{\{}12\pi^{2}M^{4}+3\pi^{2}Q^{4}+12\pi M^{3}\sqrt{M^{ 2}-Q^{2}}+6Q^{2}\sqrt{M+\sqrt{M^{2}-Q^{2}}}-M\sqrt{M^{2}-Q^{2}}\] \[\times\left(9\pi^{2}Q^{2}+2\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)-M^ {2}\left(15\pi^{2}Q^{2}+2\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)\Bigg{\}}\Bigg{]},\] \[g_{MQ} =\frac{-3\pi^{2}Qe^{-\frac{\pi^{2}}{2}\left(M+\sqrt{M^{2}-Q^{2} }\right)^{3/2}}}{16\left(M^{2}-Q^{2}\right)^{3/2}}\left[2e^{\frac{\pi^{2}}{2 }\left(M+\sqrt{M^{2}-Q^{2}}\right)^{3/2}}\left\{-Q^{2}+M\left(M+\sqrt{M^{2}-Q ^{2}}\right)\right\}\right.\] \[-\eta\Bigg{\{}6\pi^{2}M^{3}\sqrt{M+\sqrt{M^{2}-Q^{2}}}+2M\left( \sqrt{M^{2}-Q^{2}}-3\pi^{2}Q^{2}\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)+Q^{2}\left( 2-3\pi^{2}\sqrt{M^{2}-Q^{2}}\right.\] \[\times\sqrt{M+\sqrt{M^{2}-Q^{2}}}\Bigg{)}+M^{2}\left(2+6\pi^{2} \sqrt{M^{2}-Q^{2}}\times\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)\Bigg{\}}\Bigg{]},\] \[g_{QM} =\frac{-3\pi^{2}Q}{16\left(M^{2}-Q^{2}\right)^{3/2}}\Bigg{[}2 \left(-2M+\sqrt{M^{2}-Q^{2}}\right)\sqrt{M+\sqrt{M^{2}-Q^{2}}}+\eta e^{-\frac {\pi^{2}}{2}\left(M+\sqrt{M^{2}-Q^{2}}\right)^{3/2}}\] \[\times\Bigg{\{}-6\pi^{2}M^{3}\sqrt{M^{2}-Q^{2}}+M\left(6\pi^{2}Q ^{2}-4\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)+\sqrt{M^{2}-Q^{2}}\left(2-3\pi^{2} \sqrt{M^{2}-Q^{2}}\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)\Bigg{\}}\Bigg{]},\] \[g_{QQ} =\frac{-3\pi^{2}e^{-\frac{\pi^{2}}{2}\left(M+\sqrt{M^{2}-Q^{2}} \right)^{3/2}}}{16\left(M^{2}-Q^{2}\right)^{3/2}\sqrt{M+\sqrt{M^{2}-Q^{2}}}} \Bigg{[}e^{\frac{\pi^{2}}{2}\left(M+\sqrt{M^{2}-Q^{2}}\right)^{3/2}}\left(-4M^ {2}-4M^{2}\sqrt{M^{2}-Q^{2}}+2Q^{2}\sqrt{M^{2}-Q^{2}}\right)\] \[+\eta\Bigg{\{}4M^{3}-2Q^{2}\sqrt{M^{2}-Q^{2}}-3\pi^{2}Q^{4}\sqrt{ M+\sqrt{M^{2}-Q^{2}}}+3\pi^{2}MQ^{2}\sqrt{M^{2}-Q^{2}}\sqrt{M+\sqrt{M^{2}-Q^{2}}}\] \[+M^{2}\left(4\sqrt{M^{2}-Q^{2}}+3\pi^{2}Q^{2}\sqrt{M+\sqrt{M^{2} -Q^{2}}}\right)\Bigg{\}}\Bigg{]},\] with the determinant \[g =\frac{-9\pi^{4}\left(1-\eta e^{-\frac{\pi^{2}}{2}\left(M+\sqrt{ M^{2}-Q^{2}}\right)^{3/2}}\right)}{64\left(M^{2}-Q^{2}\right)^{2}\sqrt{M+\sqrt{M^{2}-Q^ {2}}}}\Bigg{[}2e^{\frac{\pi^{2}}{2}\left(M+\sqrt{M^{2}-Q^{2}}\right)^{3/2}} \sqrt{M+\sqrt{M^{2}-Q^{2}}}\left(2M^{3}-2MQ^{2}+2M^{2}\sqrt{M^{2}-Q^{2}}-Q^{2} \sqrt{M^{2}-Q^{2}}\right)\] \[+\eta\Bigg{\{}24\pi^{2}M^{5}+24\pi^{2}M^{4}\sqrt{M^{2}-Q^{2}}+4 MQ^{2}\left(3\pi^{2}Q^{2}+\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)-4M^{2}\sqrt{M^{2}-Q^{2}}\] \[\times\left(6\pi^{2}Q^{2}+\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right)+Q^ {2}\sqrt{M^{2}-Q^{2}}\left(3\pi^{2}+2\sqrt{M+\sqrt{M^{2}-Q^{2}}}\right) \Bigg{\}}\Bigg{]}.\] ### Logarithmic corrections The metric elements \(g_{\mu\nu}\) read as \[g_{MM} =\frac{1}{4\left(M^{2}-Q^{2}\right)^{2}}\Bigg{[}-M^{2}\left(4 \alpha+3\sqrt{M^{2}-Q^{2}}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\right)+3M\left(\alpha\sqrt {M^{2}-Q^{2}}+Q^{2}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\right)\] \[+Q^{2}\left(9\sqrt{M^{2}-Q^{2}}\sqrt{\sqrt{M^{2}-Q^{2}}+M}-4 \alpha\right)-3M^{3}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\Bigg{]},\] \[g_{MQ} =-\frac{1}{4\left(M^{2}-Q^{2}\right)^{2}}\Bigg{[}Q\Bigg{(}-3M^{2} \sqrt{\sqrt{M^{2}-Q^{2}}+M}+6M\sqrt{(M-Q)(M+Q)}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\] \[+3Q^{2}\sqrt{\sqrt{M^{2}-Q^{2}}+M}-8\alpha M+3\alpha\sqrt{(M-Q)( M+Q)}\Bigg{)}\Bigg{]},\] \[g_{QM} =-\frac{1}{4\left(M^{2}-Q^{2}\right)^{2}}\Bigg{[}Q\Bigg{(}-3M^{2 }\sqrt{\sqrt{M^{2}-Q^{2}}+M}+6M\sqrt{(M-Q)(M+Q)}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\] 
\[+3Q^{2}\sqrt{\sqrt{M^{2}-Q^{2}}+M}-8\alpha M+3\alpha\sqrt{(M-Q)( M+Q)}\Bigg{)}\Bigg{]},\] \[g_{QQ} =\frac{1}{4\left(Q^{3}-M^{2}Q\right)^{2}}\Bigg{[}3\alpha M^{4}-3 M^{3}\left(\alpha\sqrt{M^{2}-Q^{2}}+Q^{2}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\right)+M^{2}Q ^{2}\left(9\sqrt{M^{2}-Q^{2}}\sqrt{\sqrt{M^{2}-Q^{2}}+M}-10\alpha\right)\] \[+3M\Big{(}2\alpha Q^{2}\sqrt{M^{2}-Q^{2}}+Q^{4}\sqrt{\sqrt{M^{2} -Q^{2}}+M}\Big{)}-Q^{4}\left(\alpha+3\sqrt{M^{2}-Q^{2}}\sqrt{\sqrt{M^{2}-Q^{2} }+M}\right)\Bigg{]},\] with the determinant \[g =\frac{1}{16\left(Q^{3}-M^{2}Q\right)^{2}}\Bigg{[}-36M^{3}Q^{2}-3 M^{2}\left(7\alpha^{2}+12Q^{2}\sqrt{M^{2}-Q^{2}}\right)+\alpha Q^{2}\left(13 \alpha-15\sqrt{M^{2}-Q^{2}}\sqrt{\sqrt{M^{2}-Q^{2}}+M}\right)\] \[+3M\left(7\alpha^{2}\sqrt{M^{2}-Q^{2}}+8\alpha Q^{2}\sqrt{\sqrt{ M^{2}-Q^{2}}+M}+12Q^{4}\right)+18Q^{4}\sqrt{M^{2}-Q^{2}}\Bigg{]}.\] ## Acknowledgments SMASB is supported by the CSC Scholarship of China at Zhejiang University.
2310.02010
Rings of functions which are discontinuous on a finite set with countable range
Consider the ring $C_c(X)_F$ of real valued functions which are discontinuous on a finite set with countable range. We discuss $(\mathcal{Z}_c)_F$-filters on $X$ and $(\mathcal{Z}_c)_F$-ideals of $C_c(X)_F$. We establish an analogous version of Gelfand-Kolmogoroff theorem in our setting. We prove some equivalent conditions when $C_c(X)_F$ is a Baer-ring and a regular ring. Lastly, we talk about the zero divisor graph on $C_c(X)_F$.
Achintya Singha, D. Mandal, Samir Ch Manda, Sagarmoy Bag
2023-10-03T12:31:41Z
http://arxiv.org/abs/2310.02010v1
# Rings of functions which are discontinuous on a finite set with countable range

###### Abstract.

Consider the ring \(C_{c}(X)_{F}\) of real valued functions which are discontinuous on a finite set with countable range. We discuss \((\mathcal{Z}_{c})_{F}\)-filters on \(X\) and \((\mathcal{Z}_{c})_{F}\)-ideals of \(C_{c}(X)_{F}\). We establish an analogous version of the Gelfand-Kolmogoroff theorem in our setting. We prove some equivalent conditions for \(C_{c}(X)_{F}\) to be a Baer-ring and a regular ring. Lastly, we discuss the zero divisor graph on \(C_{c}(X)_{F}\).

Key words and phrases: \((\mathcal{Z}_{c})_{F}\)-filters, \((\mathcal{Z}_{c})_{F}\)-ideals, structure spaces, Baer-ring, \(F_{c}P\)-space, Zero divisor graph

2010 Mathematics Subject Classification: Primary 54C30; Secondary 54C40

* The Corresponding Author, Email: [email protected]

## 1. Introduction

We start with a \(T_{1}\) topological space \((X,\tau)\). Let \(C_{c}(X)_{F}\) be the collection of all real valued functions on \(X\) which are discontinuous on a finite set with countable range. Then \(C_{c}(X)_{F}\) is a commutative ring with unity, where addition and multiplication are defined pointwise. We define, for \(f,g\in C_{c}(X)_{F}\), \(f\leq g\) if and only if \(f(x)\leq g(x)\) for all \(x\in X\). Then \(f\lor g=\frac{f+g+|f-g|}{2}\in C_{c}(X)_{F}\) and \(f\wedge g=-(-f\lor-g)\in C_{c}(X)_{F}\). Thus \((C_{c}(X)_{F},+,\cdot,\leq)\) is a lattice ordered ring. Clearly, \(C_{c}(X)_{F}\subseteq C(X)_{F}\) (\(\equiv\) the ring of functions which are discontinuous on a finite set, studied briefly in [1, 5]). It is interesting to see that, taking an infinite set \(X\) with the co-finite topology (or any irreducible topological space), we get \(C_{c}(X)_{F}=C(X)_{F}\). Let \(C_{c}(X)\) be the ring of all real valued continuous functions with countable range. We see that the ring \(C_{c}(X)_{F}\) properly contains the ring \(C_{c}(X)\). Our intention in this paper is to study some ring properties of \(C_{c}(X)_{F}\) and to interpret certain topological behaviours of \(X\) via \(C_{c}(X)_{F}\).

In Section 2, we define \(\mathcal{F}_{c}\)-completely separated subsets of \(X\) and establish that two subsets are \(\mathcal{F}_{c}\)-completely separated if and only if they are contained in two disjoint zero sets. In Section 3, we develop a connection between \((\mathcal{Z}_{c})_{F}\)-filters and ideals of \(C_{c}(X)_{F}\). We define \((\mathcal{Z}_{c})_{F}\)-ideals, give some equivalent conditions for \((\mathcal{Z}_{c})_{F}\)-prime ideals of \(C_{c}(X)_{F}\) (see Theorem 3.7), and conclude that \(C_{c}(X)_{F}\) is a Gelfand ring. Also, we develop conditions under which every ideal of \(C_{c}(X)_{F}\) is fixed. We prove that an ideal of \(C_{c}(X)_{F}\) is an essential ideal if and only if it is free, and also show that the set of all \((\mathcal{Z}_{c})_{F}\)-ideals and the set of all \(z^{\circ}\)-ideals are identical (see Corollary 3.15). Moreover, we establish some results related to the socle of \(C_{c}(X)_{F}\). In the next section, we discuss structure spaces of \(C_{c}(X)_{F}\) and prove that the set of all maximal ideals of \(C_{c}(X)_{F}\) with the hull-kernel topology is homeomorphic to the set of all \((\mathcal{Z}_{c})_{F}\)-ultrafilters on \(X\) with the Stone topology (see Theorem 4.3), and we also establish an analogous version of the Gelfand-Kolmogoroff theorem (see Theorem 4.4). In Example 4.5, we show that \(\beta_{\circ}X\) and the structure space of \(C_{c}(X)_{F}\) are not homeomorphic. 
In the next section, we establish that the ring \(C_{c}(X)_{F}\) properly contains the ring \(C_{c}(X)\) and discuss some relations between \(C_{c}(X)\) and \(C_{c}(X)_{F}\). In Section 6, we furnish some equivalent conditions for \(C_{c}(X)_{F}\) to be a Baer-ring. A space \(X\) is called an \(F_{c}P\)-space if \(C_{c}(X)_{F}\) is regular, and in Theorem 7.6 some equivalent conditions for \(F_{c}P\)-spaces are proved. Finally, we introduce and study the main features of the zero divisor graph of \(C_{c}(X)_{F}\) in Section 8.

## 2. Definitions and Preliminaries

For any \(f\in C_{c}(X)_{F}\), \(Z(f)=\{x\in X:f(x)=0\}\) is called the zero set of the function \(f\), and \(Z[C_{c}(X)_{F}]\) denotes the collection of all zero sets in \(X\). Then \(Z[C_{c}(X)_{F}]=Z[C_{c}^{*}(X)_{F}]\), where \(C_{c}^{*}(X)_{F}=\{f\in C_{c}(X)_{F}:f\mbox{ is bounded}\}\). Now we can easily check the following properties of zero sets:

**Properties 2.1**.: Let \(f,g\in C_{c}(X)_{F}\) and, for any \(r\in\mathbb{R}\), let \(\underline{r}\) stand for the constant function on \(X\) with value \(r\). Then

* \(Z(f)=Z(|f|)=Z(|f|\wedge\underline{1})=Z(f^{n})\) (for all \(n\in\mathbb{N}\)).
* \(Z(\underline{0})=X\) and \(Z(\underline{1})=\emptyset\).
* \(Z(f^{2}+g^{2})=Z(f)\cap Z(g)=Z(|f|+|g|)\).
* \(Z(f\cdot g)=Z(f)\cup Z(g)\).
* \(\{x\in X:f(x)\geq r\}\) and \(\{x\in X:f(x)\leq r\}\) are zero sets in \(X\).

Any two subsets \(A\) and \(B\) of a topological space \(X\) are called completely separated [see 1.15, [8]] if there exists a continuous function \(f:X\to[0,1]\) such that \(f(A)=\{0\}\) and \(f(B)=\{1\}\). Analogously, we define \(\mathcal{F}_{c}\)-completely separated as follows:

**Definition 2.2**.: Two subsets \(A\) and \(B\) of a topological space \(X\) are said to be \(\mathcal{F}_{c}\)-completely separated in \(X\) if there exists an element \(f\in C_{c}(X)_{F}\) such that \(f(A)=\{0\}\) and \(f(B)=\{1\}\).

**Theorem 2.3**.: _Two subsets \(A\) and \(B\) of a topological space \(X\) are \(\mathcal{F}_{c}\)-completely separated if and only if they are contained in disjoint members of \(Z[C_{c}(X)_{F}]\)._

Proof.: Let \(A,B\) be two \(\mathcal{F}_{c}\)-completely separated subsets of \(X\). Then there exists \(f\in C_{c}(X)_{F}\) with \(f:X\to[0,1]\) (replacing \(f\) by \((f\vee\underline{0})\wedge\underline{1}\) if necessary) such that \(f(A)=\{0\}\) and \(f(B)=\{1\}\). Take \(Z_{1}=\{x\in X:f(x)\leq\frac{1}{5}\}\) and \(Z_{2}=\{x\in X:f(x)\geq\frac{1}{3}\}\). Then \(Z_{1},Z_{2}\) are disjoint zero sets in \(Z[C_{c}(X)_{F}]\) and \(A\subseteq Z_{1},B\subseteq Z_{2}\). Conversely, let \(A\subseteq Z(f),B\subseteq Z(g)\), where \(Z(f)\cap Z(g)=\phi\) and \(f,g\in C_{c}(X)_{F}\). Let \(h=\frac{f^{2}}{f^{2}+g^{2}}:X\to[0,1]\). Now \(Z(f)\cap Z(g)=Z(f^{2}+g^{2})=\phi\), so \(f^{2}+g^{2}\) never vanishes and \(h\in C_{c}(X)_{F}\). Also \(h(A)=\{0\}\) and \(h(B)=\{1\}\). This shows that \(A,B\) are \(\mathcal{F}_{c}\)-completely separated in \(X\).

**Corollary 2.4**.: Any two disjoint zero sets in \(Z[C_{c}(X)_{F}]\) are \(\mathcal{F}_{c}\)-completely separated in \(X\).

**Theorem 2.5**.: _If two disjoint subsets \(A\) and \(B\) of \(X\) are \(\mathcal{F}_{c}\)-completely separated, then there exists a finite subset \(F\) of \(X\) such that \(A\setminus F\) and \(B\setminus F\) are completely separated in \(X\setminus F\)._

Proof.: Let \(A\) and \(B\) be \(\mathcal{F}_{c}\)-completely separated in \(X\). Then by Theorem 2.3, there exist two disjoint zero sets \(Z(f_{1})\) and \(Z(f_{2})\) such that \(A\subseteq Z(f_{1})\) and \(B\subseteq Z(f_{2})\). Since \(f_{1},f_{2}\in C_{c}(X)_{F}\), there is a finite subset \(F\) such that \(f_{1}\) and \(f_{2}\) are continuous on \(X\setminus F\). 
Now \(A\setminus F\subseteq Z(f_{1})\setminus F\) and \(B\setminus F\subseteq Z(f_{2})\setminus F\). Also, \(Z(f_{1})\setminus F\) and \(Z(f_{2})\setminus F\) are disjoint zero sets in \(X\setminus F\). Thus, by Theorem 1.15 in [8], \(A\setminus F\) and \(B\setminus F\) are completely separated in \(X\setminus F\).

We recall that \(C_{c}(X)\) is the ring of all real valued continuous functions with countable range and \(C_{c}^{*}(X)=\{f\in C_{c}(X):f\text{ is bounded}\}\). Then we have the following lemma.

**Lemma 2.6**.: _For a topological space \(X\), the following statements hold._

1. \(C_{c}(X)_{F}\) _is a reduced ring._
2. _An element_ \(f\in C_{c}(X)_{F}\) _is a unit if and only if_ \(Z(f)=\emptyset\)_._
3. _Every element of_ \(C_{c}(X)_{F}\) _is either a zero divisor or a unit._
4. \(C_{c}(X)_{F}=C_{c}^{*}(X)_{F}\) _if and only if, for any finite subset_ \(F\) _of_ \(X\)_,_ \(C_{c}(X\setminus F)=C_{c}^{*}(X\setminus F)\)_._

Proof.: (i) It is trivial.

(ii) Let \(f\in C_{c}(X)_{F}\) be a unit. Then there exists \(g\in C_{c}(X)_{F}\) such that \(fg=\underline{1}\). Therefore \(Z(f)=\emptyset\). Conversely, let \(Z(f)=\emptyset\). Then \(\frac{1}{f}\in C_{c}(X)_{F}\) is the inverse of \(f\).

(iii) Let \(f\in C_{c}(X)_{F}\) and \(Z(f)=\emptyset\). Then \(f\) is a unit. If \(Z(f)\neq\emptyset\), then for \(x\in Z(f)\), \(\chi_{\{x\}}\in C_{c}(X)_{F}\) and \(f\cdot\chi_{\{x\}}=0\), i.e., \(f\) is a zero divisor.

(iv) Suppose that \(F\) is a finite subset of \(X\) and \(f\in C_{c}(X\setminus F)\). Now we define \(g\) as \[g(x)=\left\{\begin{array}{ll}0,&if\ x\in F\\ f(x),&otherwise.\end{array}\right.\] Then \(g\in C_{c}(X)_{F}=C_{c}^{*}(X)_{F}\) and \(g|_{X\setminus F}=f\); thus \(f\) is bounded, and hence \(C_{c}(X\setminus F)=C_{c}^{*}(X\setminus F)\). Conversely, let \(f\in C_{c}(X)_{F}\). Then there exists a finite subset \(F\) of \(X\) such that \(f\) is continuous on \(X\setminus F\). By hypothesis, \(f\) is bounded on \(X\setminus F\); since \(F\) is finite, \(f\) is bounded on \(F\) as well. Therefore \(f\in C_{c}^{*}(X)_{F}\).

## 3. \((\mathcal{Z}_{c})_{F}\)-filters and ideals of \(C_{c}(X)_{F}\)

Throughout the article, an ideal of \(C_{c}(X)_{F}\) (or \(C_{c}^{*}(X)_{F}\)) always stands for a proper ideal.

**Definition 3.1**.: A non-empty family \(\mathcal{F}\) of subsets of \(Z[C_{c}(X)_{F}]\) is called a \((\mathcal{Z}_{c})_{F}\)-filter on \(X\) if it satisfies the following three conditions:

1. \(\phi\notin\mathcal{F}\).
2. \(Z_{1},Z_{2}\in\mathcal{F}\) implies \(Z_{1}\cap Z_{2}\in\mathcal{F}\).
3. If \(Z\in\mathcal{F}\) and \(Z^{\prime}\in Z[C_{c}(X)_{F}]\) such that \(Z\subseteq Z^{\prime}\), then \(Z^{\prime}\in\mathcal{F}\).

A \((\mathcal{Z}_{c})_{F}\)-filter on \(X\) which is not properly contained in any \((\mathcal{Z}_{c})_{F}\)-filter on \(X\) is called a \((\mathcal{Z}_{c})_{F}\)-ultrafilter. A straightforward use of Zorn's lemma ensures that a \((\mathcal{Z}_{c})_{F}\)-filter on \(X\) can be extended to a \((\mathcal{Z}_{c})_{F}\)-ultrafilter on \(X\). There is an expected duality between ideals (maximal ideals) in \(C_{c}(X)_{F}\) and the \((\mathcal{Z}_{c})_{F}\)-filters (\((\mathcal{Z}_{c})_{F}\)-ultrafilters) on \(X\). This is realized by the following theorem.

**Theorem 3.2**.: _For the ring \(C_{c}(X)_{F}\), the following statements are true._

1. _If_ \(I\) _is an ideal of_ \(C_{c}(X)_{F}\)_, then_ \(Z[I]=\{Z(f):f\in I\}\) _is a_ \((\mathcal{Z}_{c})_{F}\)_-filter on_ \(X\)_. 
Dually, for any_ \((\mathcal{Z}_{c})_{F}\)_-filter_ \(\mathcal{F}\) _on_ \(X\)_,_ \(Z^{-1}[\mathcal{F}]=\{f\in C_{c}(X)_{F}:Z(f)\in\mathcal{F}\}\) _is a (proper) ideal in_ \(C_{c}(X)_{F}\)_._
2. _If_ \(M\) _is a maximal ideal of_ \(C_{c}(X)_{F}\)_, then_ \(Z[M]\) _is a_ \((\mathcal{Z}_{c})_{F}\)_-ultrafilter on_ \(X\)_. If_ \(\mathcal{U}\) _is a_ \((\mathcal{Z}_{c})_{F}\)_-ultrafilter on_ \(X\)_, then_ \(Z^{-1}[\mathcal{U}]\) _is a maximal ideal of_ \(C_{c}(X)_{F}\)_. Furthermore, the assignment_ \(M\mapsto Z[M]\) _defines a bijection between the set of all maximal ideals in_ \(C_{c}(X)_{F}\) _and the aggregate of all_ \((\mathcal{Z}_{c})_{F}\)_-ultrafilters on_ \(X\)_._

Like the notion of \(z\)-ideal in \(C(X)\) (see 2.7 in [8]), we now define a \((\mathcal{Z}_{c})_{F}\)-ideal in \(C_{c}(X)_{F}\).

**Definition 3.3**.: An ideal \(I\) of \(C_{c}(X)_{F}\) is called a \((\mathcal{Z}_{c})_{F}\)-ideal if \(Z^{-1}Z[I]=I\).

It follows from Theorem 3.2(ii) that each maximal ideal of \(C_{c}(X)_{F}\) is a \((\mathcal{Z}_{c})_{F}\)-ideal; the converse of this statement is false, as is shown by the following example.

**Example 3.4**.: Let \(X=\mathbb{R}\) and \(I=\{f\in C_{c}(X)_{F}:f(0)=f(1)=0\}\). Then \(I\) is a \((\mathcal{Z}_{c})_{F}\)-ideal in \(C_{c}(X)_{F}\) which is not even a prime ideal in the ring \(C_{c}(X)_{F}\).

The next theorem describes the intersection of all maximal ideals of \(C_{c}(X)_{F}\) containing a given element.

**Theorem 3.5**.: _For any \(f\in C_{c}(X)_{F}\), we have \(M_{f}=\{g\in C_{c}(X)_{F}:Z(f)\subseteq Z(g)\}\), where \(M_{f}\) denotes the intersection of all maximal ideals of \(C_{c}(X)_{F}\) containing \(f\)._

Proof.: Let \(g\in M_{f}\) and suppose \(x\in Z(f)\setminus Z(g)\). Now \(M_{x}=\{h\in C_{c}(X)_{F}:x\in Z(h)\}\) is a maximal ideal of \(C_{c}(X)_{F}\) that contains \(f\) but does not contain \(g\), a contradiction. Thus \(Z(f)\subseteq Z(g)\). For the reverse inclusion, let \(Z(f)\subseteq Z(g)\) for some \(g\in C_{c}(X)_{F}\) and let \(M\) be any maximal ideal of \(C_{c}(X)_{F}\) which contains \(f\). Then \(Z(f)\in Z[M]\), so \(Z(g)\in Z[M]\), and this implies that \(g\in Z^{-1}Z[M]\). Since \(M\) is a \((\mathcal{Z}_{c})_{F}\)-ideal, \(g\in M\). Hence \(g\in M_{f}\).

**Corollary 3.6**.: An ideal \(I\) of \(C_{c}(X)_{F}\) is a \((\mathcal{Z}_{c})_{F}\)-ideal if and only if whenever \(Z(f)\subseteq Z(g)\), where \(f\in I\) and \(g\in C_{c}(X)_{F}\), then \(g\in I\).

The following two results are analogous to Theorem 2.9 and Theorem 2.11, respectively, in [8] and thus we state them without proof.

**Theorem 3.7**.: _The following four statements are equivalent for a \((\mathcal{Z}_{c})_{F}\)-ideal \(I\) in \(C_{c}(X)_{F}\):_

* \(I\) _is a prime ideal._
* \(I\) _contains a prime ideal in_ \(C_{c}(X)_{F}\)_._
* _For all_ \(f,g\in C_{c}(X)_{F}\)_,_ \(fg=0\Rightarrow f\in I\) _or_ \(g\in I\)_._
* _Given_ \(f\in C_{c}(X)_{F}\)_, there exists_ \(Z\in Z[I]\) _on which_ \(f\) _does not change its sign._

**Corollary 3.8**.: Each prime ideal in \(C_{c}(X)_{F}\) is contained in a unique maximal ideal; in other words, \(C_{c}(X)_{F}\) is a Gelfand ring.

**Theorem 3.9**.: _The sum of any two \((\mathcal{Z}_{c})_{F}\)-ideals in \(C_{c}(X)_{F}\) is a \((\mathcal{Z}_{c})_{F}\)-ideal._

Proof.: Let \(I\) and \(J\) be two \((\mathcal{Z}_{c})_{F}\)-ideals of \(C_{c}(X)_{F}\). Let \(f\in I\), \(g\in J\), \(h\in C_{c}(X)_{F}\) and \(Z(f+g)\subseteq Z(h)\). Then by Corollary 3.6, it is enough to prove that \(h\in I+J\). Now we can find a finite subset \(F\) such that \(f,g,h\) are continuous on \(X\setminus F\). Define \[k(x)=\left\{\begin{array}{ll}0,&if\ x\in(Z(f)\cap Z(g))\setminus F\\ \frac{hf^{2}}{f^{2}+g^{2}},&if\ x\in(X\setminus F)\setminus(Z(f)\cap Z(g)). 
\end{array}\right.\] \[l(x)=\left\{\begin{array}{ll}0,&if\ x\in(Z(f)\cap Z(g))\setminus F\\ \frac{hg^{2}}{f^{2}+g^{2}},&if\ x\in(X\setminus F)\setminus(Z(f)\cap Z(g)).\end{array}\right.\] Now we show that \(k\) and \(l\) are continuous on \(X\setminus F\). For this, it is enough to show that \(k\) and \(l\) are continuous at the points of \((Z(f)\cap Z(g))\setminus F\). For \(x\in(Z(f)\cap Z(g))\setminus F\), \(h(x)=0\), and for any \(\epsilon>0\) there exists a neighbourhood \(U\) of \(x\) such that \(h(U)\subseteq(-\epsilon,\epsilon)\). On the other hand, \(|k(x)|\leq|h(x)|\) and \(|l(x)|\leq|h(x)|\) for all \(x\in U\). Hence \(k\) and \(l\) are continuous on \(X\setminus F\). Define \(k^{*},l^{*}\) on \(X\) by \(k^{*}|_{X\setminus F}=k\), \(k^{*}|_{F}=h|_{F}\) and \(l^{*}|_{X\setminus F}=l\), \(l^{*}|_{F}=0\). Then \(k^{*},l^{*}\in C_{c}(X)_{F}\), \(Z(f)\subseteq Z(k)\subseteq Z(k^{*})\), \(Z(g)\subseteq Z(l)\subseteq Z(l^{*})\) and \(h=k^{*}+l^{*}\). Since \(I\) and \(J\) are \((\mathcal{Z}_{c})_{F}\)-ideals of \(C_{c}(X)_{F}\), \(k^{*}\in I\) and \(l^{*}\in J\). Therefore \(h\in I+J\).

**Corollary 3.10**.: Suppose that \(\{I_{k}\}_{k\in S}\) is a collection of \((\mathcal{Z}_{c})_{F}\)-ideals of \(C_{c}(X)_{F}\). Then \(\sum\limits_{k\in S}I_{k}=C_{c}(X)_{F}\) or \(\sum\limits_{k\in S}I_{k}\) is a \((\mathcal{Z}_{c})_{F}\)-ideal of \(C_{c}(X)_{F}\).

In a reduced ring, every minimal prime ideal is also a \(z\)-ideal (as proved in [11]). Now, using this result, Theorem 3.7 and the above corollary, we have the following corollary.

**Corollary 3.11**.: Let \(\{P_{i}\}_{i\in I}\) be a collection of minimal prime ideals of \(C_{c}(X)_{F}\). Then \(\sum\limits_{i\in I}P_{i}=C_{c}(X)_{F}\) or \(\sum\limits_{i\in I}P_{i}\) is a prime ideal of \(C_{c}(X)_{F}\).

**Definition 3.12**.: An ideal \(I\) of \(C_{c}(X)_{F}\) is called fixed if \(\cap Z[I]\neq\emptyset\). Otherwise it is called free.

**Theorem 3.13**.: _For any topological space \(X\), the following statements are equivalent._

1. _The space_ \(X\) _is finite._
2. _Every proper ideal of_ \(C_{c}(X)_{F}\) _(or_ \(C_{c}^{*}(X)_{F}\)_) is fixed._
3. _Every maximal ideal of_ \(C_{c}(X)_{F}\) _(or_ \(C_{c}^{*}(X)_{F}\)_) is fixed._
4. _Each_ \((\mathcal{Z}_{c})_{F}\)_-filter on_ \(X\) _is fixed._
5. _Each_ \((\mathcal{Z}_{c})_{F}\)_-ultrafilter on_ \(X\) _is fixed._

Let \(M_{A}=\{f\in C_{c}(X)_{F}:A\subseteq Z(f)\}\), for a subset \(A\) of \(X\). Then \(M_{A}\) is an ideal of \(C_{c}(X)_{F}\) and \(M_{A}=\bigcap\limits_{x\in A}M_{x}\), where \(M_{x}=\{f\in C_{c}(X)_{F}:x\in Z(f)\}\) is a fixed maximal ideal of \(C_{c}(X)_{F}\).

**Theorem 3.14**.: _The following statements are true._

1. _For two ideals_ \(I\) _and_ \(J\) _of_ \(C_{c}(X)_{F}\)_,_ \(Ann(I)\subseteq Ann(J)\) _if and only if_ \(\bigcap Z[I]\subseteq\bigcap Z[J]\) _if and only if_ \(\bigcup COZ[J]\subseteq\bigcup COZ[I]\)_._
2. _For any subset_ \(S\) _of_ \(C_{c}(X)_{F}\)_, we have_ \(Ann(S)=M_{\bigcup COZ[S]}=\{f\in C_{c}(X)_{F}:\bigcup COZ[S]\subseteq Z(f)\}\)_._

Proof.: (i) Let \(x\in\bigcap\limits_{f\in I}Z(f)\). Then \(h=\chi_{\{x\}}\in C_{c}(X)_{F}\) and \(x\in X\setminus Z(h)\subseteq\bigcap\limits_{f\in I}Z(f)\). Hence \(fh=0\) for all \(f\in I\). Then \(h\in Ann(I)\subseteq Ann(J)\). Therefore \(gh=0\) for each \(g\in J\). Thus \(x\in X\setminus Z(h)\subseteq\bigcap\limits_{g\in J}Z(g)\). Conversely, let \(h\in Ann(I)\). Then \(hf=0\) for all \(f\in I\). This implies that \(X\setminus Z(h)\subseteq\bigcap\limits_{f\in I}Z(f)\). 
Then by given hypothesis, \(X\setminus Z(h)\subseteq Z(g)\) for each \(g\in J\). Thus \(gh=0\), for each \(g\in J\), implies \(Ann(I)\subseteq Ann(J)\). (ii) Let \(f\in Ann(S)\). Then \(fg=0\), for all \(g\in S\). This shows that \(\bigcup COZ[S]\subseteq Z(f)\) i.e., \(f\in M_{\bigcup COZ[S]}\). For the reverse part, let \(f\in M_{(\bigcup COZ[S])}\). Then \(X\setminus Z(g)\subseteq\bigcup COZ[S]\subseteq Z(f)\) for each \(g\in S\). Thus \(f\in Ann(S)\). A non-zero ideal in a commutative ring is said to be essential if it intersects every non-zero ideals non-trivially. Let \(R\) be a commutative ring with unity. For \(a\in R\), let \(P_{a}\) be the intersection of all minimal prime ideals of \(R\) containing \(a\). Then an ideal \(I\) of \(R\) is called a \(z^{\circ}\)-ideal of \(R\) if for each \(a\in I\), \(P_{a}\subseteq I\). We now state a well-known result that if \(I\) is an ideal of a commutative reduced ring \(R\), then \(I\) is an essential ideal if and only if \(Ann(I)=\{r\in R:rI=0\}=0\) (see [6, 9] and Lemma 2.1 in [14]). **Corollary 3.15**.: The following statement hold. 1. An ideal \(I\) of \(C_{c}(X)_{F}\) is an essential ideal if and only if \(I\) is a free ideal. 2. The set of all \((\mathcal{Z}_{c})_{F}\)-ideals and \(z^{\circ}\)-ideals of \(C_{c}(X)_{F}\) are identical. Proof.: (i) It follows trivially from the above Theorem 3.14. (ii) Clearly, every \(z^{\circ}\)-ideal is a \((\mathcal{Z}_{c})_{F}\)-ideal. Now let \(I\) be a \((\mathcal{Z}_{c})_{F}\)-ideal and \(Ann(f)\subseteq Ann(g)\). Then using Theorem 3.14, we have \(Z(f)\subseteq Z(g)\). Therefore \(g\in I\). This completes the proof. It is well known that the intersection of all essential ideals or sum of all minimal prime ideals in a commutative ring with unity is called socle (see [9]). **Proposition 3.16**.: _In a commutative ring with unity the following statements are true._ 1. _A non-zero ideal_ \(I\) _of_ \(C_{c}(X)_{F}\) _is minimal if and only if_ \(I\) _is generated by_ \(\chi_{\{a\}}\)_, for some_ \(a\in X\)_._ 2. _A non-zero ideal_ \(I\) _of_ \(C_{c}(X)_{F}\) _is minimal if and only if_ \(|Z[I]|=2\)_._ 3. _The socle of_ \(C_{c}(X)_{F}\) _consists of all functions which vanish everywhere except on a finite subset of_ \(X\)_._ 4. _The socle of_ \(C_{c}(X)_{F}\) _is an essential ideal which is also free._ Proof.: (i) Let \(I\) be a non-zero ideal of \(C_{c}(X)_{F}\) and \(f\) be a non-zero element of \(I\). Then there exists \(a\in X\) such that \(f(a)\neq 0\). Now \(\chi_{\{a\}}=\frac{1}{f(a)}\chi_{\{a\}}f\in I\). This shows that \(I\) is generated by \(\chi_{\{a\}}\). Conversely, let \(a\in X\). Then the ideal generated by \(\chi_{\{a\}}\) is the set of all constant multiple of \(\chi_{\{a\}}\), which is clearly a minimal ideal. (ii) Let us assume \(|Z[I]|=2\) and \(0\neq f\in I\) with \(f(a)\neq 0\) for some \(a\in X\). Then \(\chi_{\{a\}}\in I\) and for any non-zero element \(g\in I\), \(Z(g)=Z(\chi_{\{a\}})=X\setminus\{a\}\). Thus \(g=g(a)\chi_{\{a\}}\) and hence \(I\) is generated by \(\chi_{\{a\}}\). Hence by (i), \(I\) is minimal and the remaining part of the proof follows immediately. (iii) From \((i)\), we show that the socle of \(C_{c}(X)_{F}\) is equal to the ideal generate by \(\chi_{\{a\}}\)'s which is equal to the set of all functions that vanishes everywhere except on a finite set. (iv) Clearly from \((i)\), any non-zero function \(f\) has a non-zero multiple which is in the socle of \(C_{c}(X)_{F}\). This implies that socle is essential. 
Then by Corollary 3.15, the socle is a free ideal. **Corollary 3.17**.: For a topological space \(X\), the following statements are equivalent: 1. \(X\) is a finite set. 2. \(C_{c}(X)_{F}=Soc(C_{c}(X)_{F})\), where \(Soc(C_{c}(X)_{F})\) is the socle of \(C_{c}(X)_{F}\). Proof.: \((i)\implies(ii):\) Let \(X=\{x_{1},x_{2},\cdots,x_{n}\}\). Then \(1=\sum\limits_{i=1}^{n}\chi_{\{x_{i}\}}\in Soc(C_{c}(X)_{F})\), using result \((iii)\) of the above proposition. Therefore \(C_{c}(X)_{F}=Soc(C_{c}(X)_{F})\). \((ii)\implies(i):\) Let \(C_{c}(X)_{F}=Soc(C_{c}(X)_{F})\). Then \(\underline{1}\in Soc(C_{c}(X)_{F})\). Hence from the above proposition, \(X=X\backslash Z(\underline{1})\) is a finite set. This completes the proof. **Proposition 3.18**.: _The following statements are true._ 1. _Any fixed maximal ideal of_ \(C_{c}(X)_{F}\) _is generated by an idempotent._ 2. _Any non-maximal prime ideal of_ \(C_{c}(X)_{F}\) _is an essential ideal._ Proof.: (i) Let \(\alpha\in X\) and consider the maximal ideal \(M_{\alpha}\). Then for any \(f\in M_{\alpha}\), \(f=f(1-\chi_{\{\alpha\}})\). This shows that \(M_{\alpha}\) is generated by the idempotent \(1-\chi_{\{\alpha\}}\). (ii) Let \(P\) be a non-maximal prime ideal of \(C_{c}(X)_{F}\). For each \(\alpha\in X\), the ideal generated by \(1-\chi_{\{\alpha\}}\) is a maximal ideal (since the ideal generated by \(\chi_{\{\alpha\}}\) is minimal), so \(1-\chi_{\{\alpha\}}\notin P\). Thus \(\chi_{\{\alpha\}}\in P\) and hence \(P\) contains the socle. Therefore \(P\) is essential. The following theorem is an analogous version of Theorem 2.2 in [7]. **Theorem 3.19**.: _Let \(J_{1}=\{f\in C_{c}(X)_{F}:\) for all \(g,Z(1-fg)\) is finite\(\}\). Then \(J_{1}\) is equal to the intersection of all essential maximal ideals (free maximal ideals) of \(C_{c}(X)_{F}\). Also, for all \(f\in J_{1}\), \(COZ(f)\) is a countable set._ Proof.: Let \(f\in J_{1}\) and \(M\) be an essential maximal ideal in \(C_{c}(X)_{F}\) such that it does not contain \(f\). Then for some \(g\in C_{c}(X)_{F}\) and \(m\in M\), we have \(gf+m=1\). This implies that \(m=1-gf\) and hence \(Z(m)\) is finite. Take \(h=m+\chi_{Z(m)}\). Then \(h\in M\), since \(m\in M\) and \(\chi_{Z(m)}\in Soc(C_{c}(X)_{F})\subseteq M\). On the other hand, \(h\) is invertible, a contradiction. Thus \(f\) belongs to each essential maximal ideal of \(C_{c}(X)_{F}\). Therefore \(J_{1}\subseteq\bigcap\limits_{M\in S}M\), where \(S\) is the collection of all essential maximal ideals of \(C_{c}(X)_{F}\). Next, let \(f\) be any element in the intersection of all essential maximal ideals of \(C_{c}(X)_{F}\). Let \(g\in C_{c}(X)_{F}\) be such that \(Z(1-gf)\) is infinite. This implies that for any \(s\in Soc(C_{c}(X)_{F})\) and any \(t\in C_{c}(X)_{F}\), the function \(s+t(1-gf)\) has a zero and thus it cannot be equal to \(1\). Then the ideal \(Soc(C_{c}(X)_{F})+<1-gf>\) is a proper essential ideal. Thus there exists an essential maximal ideal \(M\) containing it. Therefore \(1-gf\in M\) and \(f\in M\), a contradiction. Hence \(Z(1-gf)\) is finite. This completes the proof. For the second part, we define \(F_{n}=\{x\in X:|f(x)|\geq\frac{1}{n}\}\), for each \(n\in\mathbb{N}\). Since \(COZ(f)=\bigcup\limits_{n=1}^{\infty}F_{n}\), it is enough to show that \(F_{n}\) is a finite set for any \(n\in\mathbb{N}\). If possible, let \(F_{n}\) be infinite for some \(n\in\mathbb{N}\). Let \(g:\mathbb{R}\rightarrow\mathbb{R}\) be a continuous function such that \(g(x)=\frac{1}{x}\) if \(|x|\geq\frac{1}{n}\) and take \(h=g\circ f\).
Then \(h\in C_{c}(X)_{F}\) and \(F_{n}\subseteq Z(1-hf)\). This implies that \(Z(1-fh)\) is infinite, a contradiction. ## 4. Structure space of \(C_{c}(X)_{F}\) Let \(Max(C_{c}(X)_{F})\) be the structure space of \(C_{c}(X)_{F}\), i.e., \(Max(C_{c}(X)_{F})\) is the set of all maximal ideals of \(C_{c}(X)_{F}\) equipped with the hull-kernel topology. Then \(\{\mathcal{M}_{f}:f\in C_{c}(X)_{F}\}\) forms a base for closed sets in this hull-kernel topology (see 7M [8]), where \(\mathcal{M}_{f}=\{M\in Max(C_{c}(X)_{F}):f\in M\}\). Using Theorem 1.2 of [10], we have that \(Max(C_{c}(X)_{F})\) is a Hausdorff compact space. It can be checked that the structure space of \(C_{c}(X)_{F}\) is identical with the set of all \((\mathcal{Z}_{c})_{F}\)-ultrafilters on \(X\) with the Stone topology. Let \(\beta_{\circ}^{F}X\) be an index set for the family of all \((\mathcal{Z}_{c})_{F}\)-ultrafilters on \(X\), i.e., for each \(p\in\beta_{\circ}^{F}X\), there exists a \((\mathcal{Z}_{c})_{F}\)-ultrafilter on \(X\), which is denoted by \(\mathcal{U}^{p}\). For any \(p\in X\), we can find a fixed \((\mathcal{Z}_{c})_{F}\)-ultrafilter \(\mathcal{U}_{p}\) and set \(\mathcal{U}_{p}=\mathcal{U}^{p}\). Then we may regard \(X\) as a subset of \(\beta_{\circ}^{F}X\). Now we wish to define a topology on \(\beta_{\circ}^{F}X\). Let \(\beta=\{\overline{Z}:Z\in Z[C_{c}(X)_{F}]\}\), where \(\overline{Z}=\{p\in\beta_{\circ}^{F}X:Z\in\mathcal{U}^{p}\}\). Then \(\beta\) is a base for closed sets for some topology on \(\beta_{\circ}^{F}X\). Since \(X\) belongs to every \((\mathcal{Z}_{c})_{F}\)-ultrafilter on \(X\), we have \(\overline{X}=\beta_{\circ}^{F}X\). We can easily check that \(\overline{Z}\cap X=Z\) and that for \(Z_{1},Z_{2}\in Z[C_{c}(X)_{F}]\) with \(Z_{1}\subseteq Z_{2}\), \(\overline{Z_{1}}\subseteq\overline{Z_{2}}\). This leads to the following result. **Theorem 4.1**.: _For \(Z\in Z[C_{c}(X)_{F}]\), \(\overline{Z}=Cl_{\beta_{\circ}^{F}X}Z\)._ Proof.: Let \(Z\in Z[C_{c}(X)_{F}]\) and \(\overline{Z_{1}}\in\beta\) be such that \(Z\subseteq\overline{Z_{1}}\). Then \(Z\subseteq\overline{Z_{1}}\cap X=Z_{1}\). This implies \(\overline{Z}\subseteq\overline{Z_{1}}\). Therefore \(\overline{Z}\) is the smallest basic closed set containing \(Z\). Hence \(\overline{Z}=Cl_{\beta_{\circ}^{F}X}Z\). **Corollary 4.2**.: \(X\) is a dense subset of \(\beta_{\circ}^{F}X\). Proof.: Since \(X\) is a member of every \((\mathcal{Z}_{c})_{F}\)-ultrafilter on \(X\), it follows that \(\overline{X}=\beta_{\circ}^{F}X\). Now, we want to show that \(Max(C_{c}(X)_{F})\) and \(\beta_{\circ}^{F}X\) are homeomorphic. **Theorem 4.3**.: _The map \(\phi:Max(C_{c}(X)_{F})\rightarrow\beta_{\circ}^{F}X\), defined by \(\phi(M)=p\), is a homeomorphism, where \(Z[M]=\mathcal{U}^{p}\)._ Proof.: The map \(\phi\) is bijective by Theorem 3.2 (ii). A basic closed set of \(Max(C_{c}(X)_{F})\) is of the form \(\mathcal{M}_{f}=\{M\in Max(C_{c}(X)_{F}):f\in M\}\), for some \(f\in C_{c}(X)_{F}\). Now \(M\in\mathcal{M}_{f}\Leftrightarrow f\in M\Leftrightarrow Z(f)\in Z[M]\) (since every maximal ideal is a \((\mathcal{Z}_{c})_{F}\)-ideal) \(\Leftrightarrow Z(f)\in\mathcal{U}^{p}\Leftrightarrow p\in\overline{Z(f)}\). Thus \(\phi(\mathcal{M}_{f})=\overline{Z(f)}\). Therefore \(\phi\) interchanges basic closed sets of \(Max(C_{c}(X)_{F})\) and \(\beta_{\circ}^{F}X\). Hence \(Max(C_{c}(X)_{F})\) is homeomorphic to \(\beta_{\circ}^{F}X\). Now we prove the following theorem, which is an analogous version of the Gelfand-Kolmogoroff Theorem 7.3 [8].
**Theorem 4.4**.: _Every maximal ideal of \(C_{c}(X)_{F}\) is of the form \(M^{p}=\{f\in C_{c}(X)_{F}:p\in Cl_{\beta_{\circ}^{F}X}Z(f)\}\), for some \(p\in\beta_{\circ}^{F}X\)._ Proof.: Let \(M\) be any maximal ideal of \(C_{c}(X)_{F}\). Then \(Z[M]\) is a \((\mathcal{Z}_{c})_{F}\)-ultrafilter on \(X\). Thus \(Z[M]=\mathcal{U}^{p}\), for some \(p\in\beta_{\circ}^{F}X\). So, \(f\in M\Leftrightarrow Z(f)\in Z[M]\) as \(M\) is a \((\mathcal{Z}_{c})_{F}\)-ideal \(\Leftrightarrow Z(f)\in Z[M]=\mathcal{U}^{p}\Leftrightarrow p\in\overline{Z(f) }=Cl_{\beta_{\circ}^{F}X}Z(f)\). Hence \(M=\{f\in C_{c}(X)_{F}:p\in Cl_{\beta_{\circ}^{F}X}Z(f)\}\) and so we can write \(\{f\in C_{c}(X)_{F}:p\in Cl_{\beta_{\circ}^{F}X}Z(f)\}=M^{p}\), \(p\in\beta_{\circ}^{F}X\). This completes the proof. We know that the structure space of \(C_{c}(X)\) is homeomorphic to \(\beta_{\circ}X\) (\(\equiv\) the Banaschewski compactification of a zero dimensional space \(X\)). Also, it is interesting to note that the structure space of \(C_{c}(X)\) and the structure space of \(C_{c}(X)_{F}\) are same if \(X\) is equipped with discrete topology. The following example shows that these spaces may not be homeomorphic to each other. **Example 4.5**.: Take \(X=\{\frac{1}{n}:n\in\mathbb{N}\}\cup\{0\}\). Consider \((X,\tau_{u})\), where \(\tau_{u}\) is the subspace topology on \(X\) of the real line. Since \(X\) is a zero dimensional space (it has a base of clopen sets), then the Stone-\(\check{C}\)ech compactification of \(X\), \(\beta X=\beta_{\circ}X\) (see [13], subsection 4.7). Again \(X\) is a compact space implies \(X\) is homeomorphic to \(\beta X=\beta_{\circ}X\). On the other hand since \(X\) contains only one non-isolated point and it is a countable set, \(C_{c}(X)_{F}=\mathbb{R}^{X}=C(X,\tau_{d})\), where \(C(X,\tau_{d})\) is the rings of continuous functions with discrete topology \(\tau_{d}\). Hence \(\beta_{\circ}^{F}X\) is the Stone-\(\check{C}\)ech compactification of \(X\) equipped with the discrete topology and the cardinality of \(\beta_{\circ}^{F}X\) is equal to \(|\beta\mathbb{N}|=2^{c}\) (see 9.3 in [8]), where \(\beta\mathbb{N}\) is the Stone-\(\check{C}\)ech compactification of the set \(\mathbb{N}\) of all natural numbers. Now the cardinality of \(\beta X\) is \(\aleph_{\circ}\) implies that \(\beta_{\circ}^{F}X\) is not homeomorphic to \(\beta X=\beta_{\circ}X\). ## 5. \(C_{c}(X)_{F}\) and \(C_{c}(X)\) In this section, we shall discuss relation between \(C_{c}(X)_{F}\) and \(C_{c}(X)\). It is interesting to see that the ring \(C_{c}(X)_{F}\) properly contains the ring \(C_{c}(X)\). In fact, for any topological space \(X\), let \(x\) be a non-isolated point of \(X\). Then \(\chi_{\{x\}}\in C_{c}(X)_{F}\), but \(\chi_{\{x\}}\not\in C_{c}(X)\). Now we recall that an over ring \(S\) of a reduced ring \(R\) is called a quotient ring of \(R\) if for any non-zero element \(s\in S\), there is an element \(r\in R\) such that \(0\neq sr\in R\). **Theorem 5.1**.: _For any topological space \(X\), the following statements are equivalent._ 1. \(C_{c}(X)_{F}=C_{c}(X)\)_._ 2. \(X\) _is a discrete space._ 3. \(C_{c}(X)_{F}\) _is a quotient ring of_ \(C_{c}(X)\)_._ Proof.: \((i)\Leftrightarrow(ii)\) Let \(X\) be a discrete space, then clearly \(C_{c}(X)_{F}=C_{c}(X)\). Next we assume that \(C_{c}(X)_{F}=C_{c}(X)\). Then for each \(x\in X\), \(\chi_{\{x\}}\in C_{c}(X)_{F}=C_{c}(X)\), a continuous map. Then \(\{x\}\) is an isolated point. Therefore \(X\) is a discrete space. \((ii)\Rightarrow(iii)\) It is trivial. 
\((iii)\Rightarrow(ii)\) Let \(x_{\circ}\in X\). Then \(\chi_{\{x_{\circ}\}}\in C_{c}(X)_{F}\). Then by the given hypothesis there exists a function \(f\in C_{c}(X)\) such that \(0\neq f\cdot\chi_{\{x_{\circ}\}}\in C_{c}(X)\). Now \(f(x)\chi_{\{x_{\circ}\}}(x)=f(x_{\circ})\chi_{\{x_{\circ}\}}(x)\) for all \(x\in X\). Hence \(\chi_{\{x_{\circ}\}}\) is a continuous function. This implies that \(\{x_{\circ}\}\) is an isolated point. Therefore \(X\) is a discrete space. **Lemma 5.2**.: _Let \(\phi:C_{c}(X)_{F}\to C_{c}(Y)\) be a ring isomorphism, where \(X\) and \(Y\) are two topological spaces. Then the following statements are true._ 1. _Both_ \(\phi\) _and_ \(\phi^{-1}\) _are order preserving functions._ 2. _Both_ \(\phi\) _and_ \(\phi^{-1}\) _preserve constant functions (and their values)._ 3. _For any_ \(x_{\circ}\in X\) _there is an element_ \(y_{\circ}\in Y\) _such that_ \(\phi(\chi_{\{x_{\circ}\}})=\chi_{\{y_{\circ}\}}\) _and_ \(\phi(f)(y_{\circ})=f(x_{\circ})\) _for any_ \(f\in C_{c}(X)_{F}\)_._ Proof.: (i) Let \(f\in C_{c}(X)_{F}\) and \(f\geq 0\); then there exists an element \(g\in C_{c}(X)_{F}\) such that \(f=g^{2}\). Hence \(\phi(f)=(\phi(g))^{2}\geq 0\). Thus \(\phi\) is order preserving. Similarly, \(\phi^{-1}\) is also order preserving. (ii) We can easily check that \(\phi(1)=1\); using this, one can show that \(\phi\) maps any constant function with rational value to the same constant, and together with the order preserving property of \(\phi\) this shows that \(\phi\) preserves constant functions. Similarly, constant functions are preserved by \(\phi^{-1}\). (iii) For \(x_{\circ}\in X\), \(\phi(\chi_{\{x_{\circ}\}})\) is an idempotent element of \(C_{c}(Y)\). Then there is a clopen subset \(A\) of \(Y\) such that \(\phi(\chi_{\{x_{\circ}\}})=\chi_{A}\). Clearly, \(A\) is non-empty. Now, we prove that \(A\) is a singleton set. If possible, let \(y,z\in A\). Then there exists a continuous function \(f:Y\to[0,1]\) such that \(f(y)=0\) and \(f(z)=1\). Take \(g=min\{f,\chi_{A}\}\in C_{c}(Y)\). Then \(0\leq g\leq\chi_{A}\) and we have \(0\leq\phi^{-1}(g)\leq\chi_{\{x_{\circ}\}}\), consequently \(\phi^{-1}(g)=k\chi_{\{x_{\circ}\}}\) for some real number \(k\). Hence \(g=k\chi_{A}\). Now \(g(y)=k\chi_{A}(y)\) implies that \(k=0\) and \(g(z)=\chi_{A}(z)\) implies that \(k=1\), a contradiction. Therefore \(A\) is a singleton set. To prove the second part, let \(x_{\circ}\in X\), \(f\in C_{c}(X)_{F}\) and \(\phi(\chi_{\{x_{\circ}\}})=\chi_{\{y_{\circ}\}}\) for some \(y_{\circ}\in Y\). If possible, let \(\phi(f-f(x_{\circ}))(y_{\circ})\neq 0\). Then we have \(\phi(f-f(x_{\circ}))^{2}\geq\frac{d^{2}}{3}\chi_{\{y_{\circ}\}}=\phi(\frac{d^{2}}{3}\chi_{\{x_{\circ}\}})\), where \(d=\phi(f-f(x_{\circ}))(y_{\circ})\). Thus \((f-f(x_{\circ}))^{2}\geq\frac{d^{2}}{3}\chi_{\{x_{\circ}\}}\), a contradiction when evaluated at \(x_{\circ}\). Therefore \(\phi(f)(y_{\circ})=f(x_{\circ})\). **Theorem 5.3**.: _For a topological space \(X\), there exists a topological space \(Y\) such that \(C_{c}(X)_{F}\cong C_{c}(Y)\) if and only if the set of all non-isolated points of \(X\) is finite._ Proof.: Let \(\{x_{1},x_{2},\cdots,x_{n}\}\) be the set of all non-isolated points of \(X\). Set \(Y=X\setminus\{x_{1},x_{2},\cdots,x_{n}\}\). Then for each \(f\in C_{c}(X)_{F}\), we have \(f|_{Y}\in C_{c}(Y)\) and \(f\mapsto f|_{Y}\) is the required isomorphism. For the reverse part, let \(\phi:C_{c}(X)_{F}\to C_{c}(Y)\) be a ring isomorphism.
If possible, let \(\{x_{1},x_{2},\cdots\}\) be an infinite set of non-isolated points and, without loss of generality, for each \(i\) let \(x_{i}\) be a limit point of \(X\setminus\{x_{1},x_{2},\cdots\}\). Then by Lemma 5.2(iii), we get \(\{y_{1},y_{2},\cdots\}\) such that \(\phi(\chi_{\{x_{i}\}})=\chi_{\{y_{i}\}}\) for each \(i\). Let \(g=\sum\limits_{i=1}^{\infty}\chi_{\{y_{i}\}}\) (which is an element of \(C_{c}(Y)\)) and \(f\) be an inverse image of \(g\) under \(\phi\). Then \(f(x)=0\) if and only if \(x\notin\{x_{1},x_{2},\cdots\}\) and hence the discontinuity set of \(f\) is infinite, which contradicts that \(f\in C_{c}(X)_{F}\). ## 6. \(C_{c}(X)_{F}\) as a Baer-ring In this section, we give a characterization of \(C_{c}(X)_{F}\) as a Baer-ring. A ring \(R\) is called a Baer-ring if the annihilator of every non-empty ideal is generated by an idempotent. A ring \(R\) is said to be a \(SA\)-ring if for any two ideals \(I\) and \(J\), there exists an ideal \(K\) of \(R\) such that \(Ann(I)\cap Ann(J)=Ann(K)\). A ring \(R\) is called an \(IN\)-ring if for any two ideals \(I\) and \(J\), \(Ann(I\cap J)=Ann(I)\cap Ann(J)\). Clearly, any \(SA\)-ring is always an \(IN\)-ring. The next lemma states a characterization of an \(IN\)-ring when it is also a reduced ring. **Lemma 6.1**.: _([15]) Let \(R\) be a reduced ring. Then the following statements are equivalent._ 1. _For any two orthogonal ideals_ \(I\) _and_ \(J\) _of_ \(R\)_,_ \(Ann(I)+Ann(J)=R\)_._ 2. _For any two ideals_ \(I\) _and_ \(J\) _of_ \(R\)_,_ \(Ann(I)+Ann(J)=Ann(I\cap J)\)_._ Furthermore, the next lemma gives some equivalent conditions for a Baer-ring. **Lemma 6.2**.: _([4]) Let \(R\) be a commutative reduced ring. Then the following statements are equivalent._ 1. \(R\) _is a Baer-ring._ 2. \(R\) _is a_ \(SA\)_-ring._ 3. _The space of prime ideals of_ \(R\) _is extremally disconnected._ 4. \(R\) _is an_ \(IN\)_-ring._ Now we wish to establish some equivalent conditions for when \(C_{c}(X)_{F}\) is an \(IN\)-ring, \(SA\)-ring or a Baer-ring. For this purpose we first prove the following lemma. **Lemma 6.3**.: _For any subset \(A\) of a space \(X\), there exists a subset \(S\) of \(C_{c}(X)_{F}\) such that \(A=\bigcup COZ[S]=\bigcup\{COZ(f):f\in S\}\)._ Proof.: This follows immediately, since \(A=\bigcup\{COZ(\chi_{\{x\}}):x\in A\}\) and \(\chi_{\{x\}}\in C_{c}(X)_{F}\) for all \(x\in X\). Now, in this situation, we are ready to prove the following equivalent conditions. **Theorem 6.4**.: _The following statements are equivalent._ 1. _Any two disjoint subsets of_ \(X\) _are_ \(\mathcal{F}_{c}\)_-completely separated._ 2. \(C_{c}(X)_{F}\) _is an_ \(IN\)_-ring._ 3. \(C_{c}(X)_{F}\) _is a_ \(SA\)_-ring._ 4. \(C_{c}(X)_{F}\) _is a Baer-ring._ 5. _The space of prime ideals of_ \(C_{c}(X)_{F}\) _is an extremally disconnected space._ 6. _Any subset of_ \(X\) _is of the form_ \(COZ(e)\) _for some idempotent_ \(e\in C_{c}(X)_{F}\)_._ 7. _For any subset_ \(A\) _of_ \(X\)_, there exists a finite subset_ \(F\) _of_ \(X\) _such that_ \(A\setminus F\) _is clopen in_ \(X\setminus F\)_._ Proof.: From Lemma 6.2, statements \((ii),(iii),(iv)\) and \((v)\) are equivalent. \((i)\implies(ii)\): Let \(I\) and \(J\) be two orthogonal ideals of \(C_{c}(X)_{F}\). Then \(IJ=0\) and \(\bigcup COZ[I]\), \(\bigcup COZ[J]\) are two disjoint subsets of \(X\). Now, by the given hypothesis there exist two elements \(f_{1},f_{2}\in C_{c}(X)_{F}\) such that \(\bigcup COZ[I]\subseteq Z(f_{1})\) and \(\bigcup COZ[J]\subseteq Z(f_{2})\) with \(Z(f_{1})\cap Z(f_{2})=\emptyset\).
This implies that \(f_{1}\in Ann(I)\), \(f_{2}\in Ann(J)\) and \(Z(f_{1}^{2}+f_{2}^{2})=\emptyset\). Then \(f_{1}^{2}+f_{2}^{2}\) is a unit element in \(Ann(I)+Ann(J)\). Hence \(Ann(I)+Ann(J)=C_{c}(X)_{F}\). Thus by Lemma 6.1, \(C_{c}(X)_{F}\) is an \(IN\)-ring. \((ii)\implies(i)\): Suppose that \(A\) and \(B\) are two disjoint subsets of \(X\). Then by Lemma 6.3, there are two subsets \(S_{1}\) and \(S_{2}\) of \(C_{c}(X)_{F}\) such that \(A=\bigcup COZ[S_{1}]\) and \(B=\bigcup COZ[S_{2}]\). Let \(I\) and \(J\) be the two ideals generated by \(S_{1}\) and \(S_{2}\) respectively. Then we have \(\bigcup COZ[I]\cap\bigcup COZ[J]=A\cap B=\emptyset\). This implies that \(IJ=0\), i.e., \(I\) and \(J\) are orthogonal ideals of \(C_{c}(X)_{F}\). Then by Lemma 6.1, \(Ann(I)+Ann(J)=C_{c}(X)_{F}\). So there exist \(h_{1}\in Ann(I)\) and \(h_{2}\in Ann(J)\) such that \(1=h_{1}+h_{2}\), a unit element. Then \(Z(h_{1})\) and \(Z(h_{2})\) are disjoint. Since \(h_{1}\in Ann(I)\), \(A=\bigcup COZ[S_{1}]=\bigcup COZ[I]\subseteq Z(h_{1})\). Similarly, \(B\subseteq Z(h_{2})\). This proves \((i)\). \((iv)\implies(vi)\): Let \(A\) be a subset of \(X\). Then by Lemma 6.3, there exists a subset \(S\) of \(C_{c}(X)_{F}\) such that \(A=\bigcup COZ[S]\). Let \(I\) be the ideal generated by \(S\). Then by the given hypothesis, there exists an idempotent \(e\in C_{c}(X)_{F}\) such that \(Ann(e)=Ann(I)\). Thus by using Theorem 3.14, we have \(A=\bigcup COZ[I]=COZ(e)\). \((vi)\implies(iv)\): Suppose that \(I\) is an ideal of \(C_{c}(X)_{F}\). Then by the given hypothesis there is an idempotent \(e\in C_{c}(X)_{F}\) such that \(\bigcup COZ[I]=COZ(e)\). Then by Theorem 3.14, \(Ann(I)=Ann(e)=(1-e)C_{c}(X)_{F}\). Hence \(C_{c}(X)_{F}\) is a Baer-ring. \((vi)\implies(vii)\): Let \(A=COZ(e)\), for some idempotent \(e\in C_{c}(X)_{F}\). Then \(e\in C(X\setminus F)\) for some finite subset \(F\) of \(X\). Now \(A\setminus F=COZ(e)\cap(X\setminus F)\) is clopen in \(X\setminus F\). \((vii)\implies(vi)\): Trivial. ## 7. \(F_{c}P\)-space **Definition 7.1**.: A commutative ring \(R\) with unity is said to be a von Neumann regular ring or simply a regular ring if for each \(a\in R\), there exists \(r\in R\) such that \(a=a^{2}r\). We recall that \(X\) is a \(P\)-space if and only if \(C(X)\) is a regular ring (see 4J [8]). Now we define \(F_{c}P\)-spaces as follows. **Definition 7.2**.: A space \(X\) is called an \(F_{c}P\)-space if \(C_{c}(X)_{F}\) is regular. **Example 7.3**.: Consider the space \(X=\{0,1,\frac{1}{2},\frac{1}{3},\cdots\}\) (endowed with the subspace topology from the usual topology on the real line \(\mathbb{R}\)). It is clear that \(C_{c}(X)_{F}=\mathbb{R}^{X}\), which means that \(X\) is an \(F_{c}P\)-space. On the other hand, by 4K.1 [8], \(X\) is not a \(P\)-space. The next example shows that a \(P\)-space \(X\) need not be an \(F_{c}P\)-space, i.e., \(C_{c}(X)_{F}\) may fail to be regular. **Example 7.4**.: Let the set of rational numbers \(\mathbb{Q}\) be the subspace of the real line \(\mathbb{R}\). Let \(X=\mathbb{Q}^{*}=\mathbb{Q}\cup\{\infty\}\), the one-point compactification of \(\mathbb{Q}\). Then every continuous function \(f:\mathbb{Q}^{*}\rightarrow\mathbb{R}\) is a constant function. Hence \(C(\mathbb{Q}^{*})\) is isomorphic to \(\mathbb{R}\), a regular ring. Hence \(X\) is a \(P\)-space. But we wish to show that \(C_{c}(X)_{F}\) is not a regular ring. Let \(f:\mathbb{Q}^{*}\rightarrow\mathbb{R}\) be defined as \[f(x)=\left\{\begin{array}{ll}\cos(\frac{\pi x}{2}),&if\ x\in\mathbb{Q}\\ 2,&if\ x=\infty.\end{array}\right.\] Then \(f\in C_{c}(X)_{F}\).
If possible, let there exist a \(g\in C_{c}(X)_{F}\) such that \(f=f^{2}g\). Then \(g(x)=\frac{1}{f(x)}\) when \(f(x)\neq 0\), and the discontinuity set of \(g\) satisfies \(D_{g}\supseteq\{1,-1,3,-3,5,-5,\cdots\}\). This shows that \(g\notin C_{c}(X)_{F}\). Hence \(C_{c}(X)_{F}\) is not regular, i.e., \(X\) is not an \(F_{c}P\)-space. However, if we consider \(X\) as a Tychonoff space, then the following statement is true. **Theorem 7.5**.: _If \(X\) is a P-space, then it is also an \(F_{c}P\)-space._ Proof.: Let \(f\in C_{c}(X)_{F}\). Then \(f\in C(X\setminus F)\) for a finite subset \(F\) of \(X\). Since a subspace of a \(P\)-space is a \(P\)-space (see 4K.4 in [8]), \(X\setminus F\) is a \(P\)-space. So \(C(X\setminus F)\) is regular. This implies that \((f|_{X\setminus F})^{2}.g=f|_{X\setminus F}\), for some \(g\in C(X\setminus F)\). Now, we define \(g^{\star}:X\rightarrow\mathbb{R}\) by \[g^{\star}(x)=\left\{\begin{array}{ll}g(x),&if\ x\in(X\setminus Z(f))\setminus F\\ 0,&if\ x\in(Z(g)\cap Z(f))\setminus F\\ 0,&if\ x\in(Z(f)\setminus Z(g))\setminus F\\ \frac{1}{f(x)},&if\ x\in F\setminus Z(f)\\ 0,&if\ x\in Z(f)\cap F.\end{array}\right.\] Since \(f|_{X\setminus F}\) is continuous, \((X\setminus Z(f))\setminus F\) is open in \(X\setminus F\). Also, \(X\setminus F\) is open in \(X\) and hence \((X\setminus Z(f))\setminus F\) is open in \(X\). Now, \((Z(g)\cap Z(f))\setminus F\) is a \(G_{\delta}\)-set in \(X\setminus F\) and \(X\setminus F\) is a \(G_{\delta}\)-set in \(X\), so \((Z(g)\cap Z(f))\setminus F\) is a \(G_{\delta}\)-set in \(X\). Hence it is open in \(X\) (see 4J.4 [8]). Again, \((Z(f)\setminus Z(g))\setminus F=(Z(f)\cap(Z(g))^{c})\setminus F\) is a \(G_{\delta}\)-set in \(X\setminus F\) and hence open in \(X\) (using 4J.4 [8]). Then using the pasting lemma, we can easily observe that \(g^{\star}\) is well defined and continuous on \(X\setminus F\). Thus \(g^{\star}\in C_{c}(X)_{F}\) and \(f=f^{2}.g^{\star}\). So \(X\) is an \(F_{c}P\)-space. **Theorem 7.6**.: _The following statements are equivalent._ 1. _The space_ \(X\) _is an_ \(F_{c}P\)_-space._ 2. _For any_ \(Z\in Z[C_{c}(X)_{F}]\)_, there exists a finite subset_ \(F\) _in_ \(X\) _such that_ \(Z\setminus F\) _is a clopen subset in_ \(X\setminus F\)_._ 3. \(C_{c}(X)_{F}\) _is a_ \(PP\)_-ring, that is, the annihilator of every element is generated by an idempotent._ Proof.: \((i)\implies(ii)\): Let \(Z(f)\in Z[C_{c}(X)_{F}]\). By \((i)\), there exists \(g\in C_{c}(X)_{F}\) such that \(f^{2}g=f\). Also, there is a finite subset \(F\) of \(X\) such that \(f,g\in C(X\setminus F)\). Hence for any \(x\in X\setminus F\) we have \(f(x)^{2}g(x)=f(x)\). Therefore \(Z(f|_{X\setminus F})\cup Z((1-fg)|_{X\setminus F})=X\setminus F\). On the other hand, \(Z(f|_{X\setminus F})\cap Z((1-fg)|_{X\setminus F})=\emptyset\). This shows that \(Z(f|_{X\setminus F})=Z(f)\setminus F\) is clopen in \(X\setminus F\). \((ii)\implies(iii)\): Let \(f\in C_{c}(X)_{F}\). Then there is a finite subset \(F\) of \(X\) such that \(Z(f)\setminus F\) is clopen in \(X\setminus F\). So \(Z(f)\setminus F=Z(e)\) for some idempotent \(e\in C_{c}(X\setminus F)\). Therefore \(Z(f)=Z(e^{\star})\), where \(e^{\star}|_{X\setminus F}=e\), \(e^{\star}\) is zero on \(Z(f)\cap F\) and \(e^{\star}\) is equal to 1 on \(F\setminus Z(f)\). Then by Theorem 3.14, we have \(Ann(f)=Ann(e^{\star})=(1-e^{\star})C_{c}(X)_{F}\), i.e., \(C_{c}(X)_{F}\) is a \(PP\)-ring. \((iii)\implies(i)\): Assume that \(f\in C_{c}(X)_{F}\). By hypothesis, there is an idempotent \(e\in C_{c}(X)_{F}\) such that \(Ann(e)=Ann(f)\).
By Theorem 3.14, \(Z(e)=Z(f)\). Now, \(F\) is a finite subset such that \(f,e\in C_{c}(X\setminus F)\). Then \(Z(f)\setminus F=Z(e)\setminus F\) is a clopen subset in \(X\setminus F\). Now, we define \(f^{\star}:X\rightarrow\mathbb{R}\) by \[f^{\star}(x)=\left\{\begin{array}{ll}0,&if\ x\in Z(f)\\ \frac{1}{f(x)},&otherwise.\end{array}\right.\] Then \(f^{\star}\in C_{c}(X)_{F}\) and \(f^{2}f^{\star}=f\). Thus \(C_{c}(X)_{F}\) is a regular ring i.e., X is an \(F_{c}P\)-space. ## 8. Zero divisor graph of \(C_{c}(X)_{f}\) Consider the graph \(\Gamma(C_{c}(X)_{F})\) of the ring \(C_{c}(X)_{F}\) with the vertex set \(V,\) the collection of all nonzero zero divisors in the ring \(C_{c}(X)_{F}\) and two vertices \(f,g\) are adjacent if and only if \(fg=0\) on \(X\). We recall that for \(f\in C_{c}(X)_{F}\), \(Z(f)=\{x\in X:f(x)=0\}\) is called the zero set of \(f\). For a \(T_{1}\) space \(X\) and \(x\in X\), the characteristic function \(\chi_{\{x\}}\) defined by \(\chi_{\{x\}}(y)=0\) if \(y\neq x\) and \(\chi_{\{x\}}(x)=1\) is a member of \(C_{c}(X)_{F}\). The following result provides a condition of the adjacency of two vertices of \(\Gamma(C_{c}(X)_{F})\) in terms of their zero sets. **Lemma 8.1**.: _Two vertices \(f,g\) in the graph \(\Gamma(C_{c}(X)_{F})\) are adjacent if and only if \(Z(f)\cup Z(g)=X\)._ Proof.: To begin with, let us assume that \(Z(f)\cup Z(g)=X\). Then \(fg=0\). So \(f\) and \(g\) are adjacent in \(\Gamma(C_{c}(X)_{F})\). Conversely, let \(f\) and \(g\) be adjacent in \(\Gamma(C_{c}(X)_{F})\). Then \(fg=0\). This implies \(Z(0)=Z(fg)=Z(f)\cup Z(g)=X\). **Lemma 8.2**.: _For any two vertices \(f,g\) there is another vertex \(h\) in the graph \(\Gamma(C_{c}(X)_{F})\) which is adjacent to both \(f\) and \(g\) if and only if \(Z(f)\cap Z(g)\neq\emptyset\)._ Proof.: Firstly, we consider that there is a vertex \(h\) such that \(h\) is adjacent to both \(f\) and \(g\). Then \(hf=0\) and \(hg=0\). As \(h\) is non-zero, there exists a point \(x_{\circ}\in X\) such that \(h(x_{\circ})\neq 0\). Then obviously, \(f(x_{\circ})=0\) and \(g(x_{\circ})=0\). Hence \(x_{\circ}\in Z(f)\cap Z(g)\). Thus \(Z(f)\cap Z(g)\neq\emptyset\). Conversely, let \(Z(f)\cap Z(g)\neq\emptyset\) and \(y\in Z(f)\cap Z(g)\). Take \(h=\chi_{\{y\}}\). Then \(h\in C(X)_{F}\) and both \(hf=0\) and \(hg=0\). So \(h\) is adjacent to both \(f\) and \(g\) in the graph \(\Gamma(C_{c}(X)_{F})\). **Lemma 8.3**.: _For any two vertices \(f,g\) there are distinct vertices \(h_{1}\) and \(h_{2}\) in \(\Gamma(C_{c}(X)_{F})\) such that \(f\) is adjacent to \(h_{1}\), \(h_{1}\) is adjacent to \(h_{2}\) and \(h_{2}\) is adjacent to \(g\) if \(Z(f)\cap Z(g)=\emptyset\)._ Proof.: Let us choose \(x\in Z(f)\) and \(y\in Z(g)\). Consider two functions \(h_{1}=\chi_{\{x\}}\) and \(h_{2}=\chi_{\{y\}}\). Then \(Z(h_{1})=X\setminus\{x\}\) and \(Z(h_{2})=X\setminus\{y\}\). So \(Z(h_{1})\cup Z(f)=X\), \(Z(h_{2})\cup Z(g)=X\) and \(Z(h_{1})\cup Z(h_{2})=X\). Hence by Lemma 8.1, we can say that \(f\) is adjacent to \(h_{1}\), \(h_{1}\) is adjacent to \(h_{2}\) and \(h_{2}\) is adjacent to \(g\). **Definition 8.4**.: For two vertices \(f,g\) in any graph \(G\), \(d(f,g)\) is defined as the length of the smallest path between \(f\) and \(g\). **Theorem 8.5**.: _For any two vertices \(f,g\) in the graph \(\Gamma(C_{c}(X)_{F})\), we have the following outputs:_ 1. \(d(f,g)=1\) _if and only if_ \(Z(f)\cup Z(g)=X\)_._ 2. \(d(f,g)=2\) _if and only if_ \(Z(f)\cup Z(g)\neq X\) _and_ \(Z(f)\cap Z(g)\neq\emptyset\)_._ 3. 
\(d(f,g)=3\) _if and only if_ \(Z(f)\cup Z(g)\neq X\) _and_ \(Z(f)\cap Z(g)=\emptyset\)_._ Proof.: (i) It follows from Lemma 8.1. (ii) Let \(d(f,g)=2\). So \(f\) and \(g\) are not adjacent to each other. Then by Lemma 8.1, \(Z(f)\cup Z(g)\neq X\). Moreover, there is a vertex \(h\in\Gamma(C(X)_{F})\) such that \(h\) is adjacent to both \(f\) and \(g\). Hence by Lemma 8.2, we have \(Z(f)\cap Z(g)\neq\emptyset\). Conversely, let \(Z(f)\cup Z(g)\neq X\) and \(Z(f)\cap Z(g)\neq\emptyset\). Then by Lemma 8.1 and 8.2, \(f\) and \(g\) are not adjacent and there is a third vertex \(h\), adjacent to both \(f\) and \(g\). Hence \(d(f,g)=2\). (iii) Let \(d(f,g)=3\). Then by Lemmas 8.1 and 8.2, we get \(Z(f)\cup Z(g)\neq X\) and \(Z(f)\cap Z(g)=\emptyset\). Conversely, let \(Z(f)\cup Z(g)\neq X\) and \(Z(f)\cap Z(g)=\emptyset\). Then by Lemma 8.1 and 8.2, \(f\) and \(g\) are not adjacent to each other and there is no common vertex \(h\) which is adjacent to both \(f\) and \(g\). Hence \(d(f,g)\geq 3\). Since \(Z(f)\cap Z(g)=\emptyset\), applying Lemma 8.3, there are two distinct vertices \(h_{1}\) and \(h_{2}\) such that \(f\) is adjacent to \(h_{1}\), \(h_{1}\) is adjacent to \(h_{2}\) and \(h_{2}\) is adjacent to \(g\). As a consequence, \(d(f,g)=3\) **Definition 8.6**.: The maximum of all possible \(d(f,g)\) is called the diameter of a graph \(G\) and it is denoted by \(diam(G)\). Also, the length of the smallest cycle in the graph \(G\) is called the girth of the graph \(G\) and it is denoted by \(gr(G)\). If there does not exist any cycle in the graph \(G\), we declare \(gr(G)=\infty\). **Theorem 8.7**.: _If a space \(X\) contains at least three elements, then \(diam(\Gamma(C_{c}(X)_{F}))=gr(\Gamma(C_{c}(X)_{F}))=3.\)_ Proof.: Let us take three distinct points \(x,y,z\) in \(X\). Consider the functions \(f=1-\chi_{\{x\}}\) and \(g=1-\chi_{\{y\}}\). Then \(Z(f)=\{x\}\) and \(Z(g)=\{y\}\). Thus \(Z(f)\cup Z(g)\neq X\) because \(z\notin Z(f)\cup Z(g).\) As \(Z(f)\cap Z(g)=\emptyset\), by Theorem 8.5(iii), \(d(f,g)=3.\) But we know that \(d(f,g)\leq 3\) for all vertices \(f,g\) in \(\Gamma(C_{c}(X)_{F})\). Hence we have \(diam(\Gamma(C_{c}(X)_{F}))=3.\) For the girth of the graph, take \(h_{1}=\chi_{\{x\}}\), \(h_{2}=\chi_{\{y\}}\) and \(h_{3}=\chi_{\{z\}}\). Then the union of any two zero sets among \(Z(h_{1})\), \(Z(h_{2})\) and \(Z(h_{3})\) is \(X\). Thus \(h_{1},h_{2}\) and \(h_{3}\) form a triangle. Since there is no loop in the graph \(\Gamma(C_{c}(X)_{F})\), the girth \(gr(\Gamma(C_{c}(X)_{F}))=3.\) **Theorem 8.8**.: \(diam(\Gamma(C_{c}(X)_{F}))=2\) _if and only if \(gr(\Gamma(C_{c}(X)_{F}))=\infty\) if and only if \(|X|=2\)._ Proof.: Let \(X=\{x,y\}\). Then for any vertex \(f\) of \(\Gamma(C_{c}(X)_{F})\), \(Z(f)\) must be singleton. Let us consider \(f=\chi_{\{x\}}\) and \(g=\chi_{\{y\}}\). Then \(f\) and \(2f\) are not adjacent to each other whereas \(g\) is adjacent to both \(f\) and \(2f\). Now for two vertices \(f\) and \(g\), if their zero sets are same, then they must be constant multiple of each other and thus they cannot be adjacent and their distance is \(2\) and if their zero sets are not same then they are adjacent to each other. Hence for any two vertices \(f,g\), \(d(f,g)\) is either \(1\) or \(2\). Thus we conclude that \(diam(\Gamma(C_{c}(X)_{F}))=2\). Since there are only two distinct zero sets, there cannot exist any cycle in the graph \(\Gamma(C_{c}(X)_{F})\). Thus the girth \(gr(\Gamma(C_{c}(X)_{F}))=\infty\). 
Now suppose \(diam(\Gamma(C_{c}(X)_{F}))=2\) or the girth \(gr(\Gamma(C_{c}(X)_{F}))=\infty\). By Theorem 8.7, we see that if \(X\) contains more than two points then the diameter and the girth are both \(3\). Hence we have \(|X|=2\), because if \(X\) is a singleton, then there is no zero divisor. **Definition 8.9**.: For a vertex \(f\) in a graph \(G\), the associated number \(e(f)\) is defined by \(e(f)=\max\{d(f,g):g(\neq f)\) is a vertex in \(G\}\). The vertex \(g\) with the smallest associated number is called a centre of the graph. The associated number of the centre vertex in \(G\) is called the radius of the graph and it is denoted by \(\rho(G)\). The following result is about the associated number of any vertex in the graph \(\Gamma(C_{c}(X)_{F})\). **Lemma 8.10**.: _For any vertex \(f\) in the graph \(\Gamma(C_{c}(X)_{F})\), we have_ \[e(f)=\left\{\begin{array}{l}2\mbox{ if }X\setminus Z(f)\mbox{ is a singleton}\\ 3\mbox{ otherwise.}\end{array}\right.\] Proof.: Suppose \(X\setminus Z(f)=\{x_{\circ}\}\). Let \(g\) be any vertex in \(\Gamma(C_{c}(X)_{F})\) such that \(g\neq f\). Then there are only two possibilities, namely \(x_{\circ}\in Z(g)\) or \(x_{\circ}\notin Z(g)\). If \(Z(g)\) contains \(x_{\circ}\) then \(fg=0\). In this case \(f\) and \(g\) are adjacent to each other. Thus \(d(f,g)=1\). On the other hand, if \(Z(g)\) does not contain \(x_{\circ}\) then \(Z(g)\subseteq Z(f)\). This implies that \(Z(f)\cap Z(g)=Z(g)\neq\emptyset\) and \(Z(f)\cup Z(g)=Z(f)\neq X\). Therefore by Theorem 8.5, \(d(f,g)=2.\) Hence we have \(e(f)=2\). On the other hand, let \(X\setminus Z(f)\) contain at least two points, say \(x_{\circ}\) and \(y_{\circ}\). By Theorem 8.5, we see that \(e(f)\leq 3.\) Now choose \(g=1-\chi_{\{x_{\circ}\}}\). Then \(Z(g)=\{x_{\circ}\}\). Clearly, \(Z(f)\cap Z(g)=\emptyset\) and \(Z(f)\cup Z(g)\neq X\) because \(y_{\circ}\) does not belong to the union. Hence by Theorem 8.5, for this particular \(g\), we get \(d(f,g)=3\). Thus we obtain that \(e(f)=3\). **Corollary 8.11**.: The radius \(\rho(\Gamma(C_{c}(X)_{F}))\) of the graph \(\Gamma(C_{c}(X)_{F})\) is always \(2\). Proof.: We can always consider a vertex \(f\) with \(e(f)=2\); for example, take \(f=\chi_{\{x_{\circ}\}}.\) Then \(X\setminus Z(f)\) is a singleton. So the radius of \(\Gamma(C_{c}(X)_{F})\) is \(2\). **Definition 8.12**.: A graph \(G\) is said to be 1. triangulated if every vertex of the graph \(G\) is a vertex of a triangle. 2. hyper-triangulated if every edge of the graph \(G\) is an edge of a triangle. **Theorem 8.13**.: _The graph \(\Gamma(C_{c}(X)_{F})\) is neither triangulated nor hyper-triangulated._ Proof.: At first, we prove that the graph \(\Gamma(C_{c}(X)_{F})\) is not triangulated. For this, let us consider \(x_{\circ}\in X.\) Now define \(f=1-\chi_{\{x_{\circ}\}}.\) Then \(Z(f)=\{x_{\circ}\}.\) We claim that there is no triangle containing \(f\) as a vertex. If possible, let \(g,h\) be two vertices such that \(f,g,h\) make a triangle. Then by Lemma 8.1, \(Z(g)=X-\{x_{\circ}\}=Z(h).\) But again by Lemma 8.1, \(g\) and \(h\) cannot be adjacent. This contradicts the assumption that \(f,g,h\) form a triangle. Now, to show that the graph is not hyper-triangulated, let us take a point \(x_{\circ}\in X\) and consider the two functions \(f=\chi_{\{x_{\circ}\}}\) and \(g=1-\chi_{\{x_{\circ}\}}\). Then \(Z(f)\cup Z(g)=X\) and \(Z(f)\cap Z(g)=\emptyset\). Then by Lemma 8.2, it is not possible to get a triangle that contains the edge connecting \(f\) and \(g\).
So the graph \(\Gamma(C_{c}(X)_{F})\) is neither triangulated nor hyper-triangulated. The above mentioned result is totally different from the case of \(C(X)\). In fact, we have **Proposition 8.14** ([2]).: _The following results are true._ 1. \(\Gamma(C(X))\) _is triangulated if and only if_ \(X\) _does not contain any non-isolated points._ 2. \(\Gamma(C(X))\) _is hyper-triangulated if and only if_ \(X\) _is a connected middle_ \(P\)_-space._ For definition of middle \(P\)-space see [2]. **Definition 8.15**.: For two vertices \(f\) and \(g\) in any graph \(G\), we denote by \(c(f,g)\) the length of the smallest cycle containing \(f\) and \(g\). If there is no cycle containing \(f\) and \(g\), we declare \(c(f,g)=\infty\). In the following theorem, we shall discuss all possible values of \(c(f,g)\) in the graph \(\Gamma(C_{c}(X)_{F})\). **Theorem 8.16**.: _Let \(f\) and \(g\) be two vertices in the graph \(\Gamma(C_{c}(X)_{F})\). Then_ 1. \(c(f,g)=3\) _if and only if_ \(Z(f)\cup Z(g)=X\) _and_ \(Z(f)\cap Z(g)\neq\emptyset\)_._ 2. \(c(f,g)=4\) _if and only if_ \(Z(f)\cup Z(g)=X\) _and_ \(Z(f)\cap Z(g)=\emptyset\) _or_ \(Z(f)\cup Z(g)\neq X\) _and_ \(Z(f)\cap Z(g)\neq\emptyset\)_._ 3. \(c(f,g)=6\) _if and only if_ \(Z(f)\cup Z(g)\neq X\) _and_ \(Z(f)\cap Z(g)=\emptyset\)_._ Proof.: (i) Suppose \(Z(f)\cup Z(g)=X\) and \(Z(f)\cap Z(g)\neq\emptyset\). Thus by Lemma 8.1 and 8.2, \(f\) and \(g\) are adjacent to each other and there is another vertex \(h\) adjacent to both \(f\) and \(g\). Hence we obtain a triangle with vertices \(f,g\) and \(h\). This shows that \(c(f,g)=3\). Conversely, if \(c(f,g)=3\) then there exists a triangle with \(f,g\) and \(h\) as its vertices for some other vertex \(h\). Now using Lemma 8.1 and 8.2, we find that \(Z(f)\cup Z(g)=X\) and \(Z(f)\cap Z(g)\neq\emptyset\). (ii) Consider \(Z(f)\cup Z(g)=X\) and \(Z(f)\cap Z(g)=\emptyset\). Then using Lemma 8.1, \(f\) and \(g\) are adjacent to each other. Now by Lemma 8.3, there are vertices \(h_{1}\) and \(h_{2}\) such that \(f\) is adjacent to \(h_{1}\), \(h_{1}\) is adjacent to \(h_{2}\) and \(h_{2}\) is adjacent to \(g\). Thus we get a cycle of length \(4\) with vertices in order, \(f,h_{1},h_{2}\) and \(g\). As \(Z(f)\cap Z(g)=\emptyset\), by Lemma 8.2, there is no triangle containing \(f\) and \(g\) as its vertices. Thus \(c(f,g)=4\). Now suppose \(Z(f)\cup Z(g)\neq X\) and \(Z(f)\cap Z(g)\neq\emptyset\). Then using Lemma 8.1, \(f\) and \(g\) are not adjacent to each other. By Lemma 8.2, there exists a vertex \(h\) such that \(h\) is adjacent to both \(f\) and \(g\). Then \(2h\) is also adjacent to both \(f\) and \(g\). Thus we get a quadrilateral containing vertices in order \(f,h,g\) and \(2h\). Again condition \(Z(f)\cup Z(g)\neq X\) implies that it is not possible to have a triangle containing \(f\) and \(g\) as its vertices. So \(c(f,g)=4\). To prove the converse, let \(c(f,g)=4.\) Now \(Z(f)\cup Z(g)=X\), then we must have \(Z(f)\cap Z(g)=\emptyset\), otherwise we have a triangle having vertices \(f\) and \(g\). If we have \(Z(f)\cup Z(g)\neq X\), then \(f\) and \(g\) are not adjacent to each other. But there is a quadrilateral containing \(f\) and \(g\). So there must exist two functions \(h_{1}\) and \(h_{2}\) such that both \(h_{1}\) and \(h_{2}\) are adjacent to both \(f\) and \(g\). So by Lemma 8.2, we have \(Z(f)\cap Z(g)\neq\emptyset\). (iii) Let \(Z(f)\cup Z(g)\neq X\) and \(Z(f)\cap Z(g)=\emptyset\). Then \(f\) and \(g\) are not adjacent to each other. 
As \(Z(f)\cap Z(g)=\emptyset\), by Lemma 8.3, there are two vertices \(h_{1}\) and \(h_{2}\) in \(\Gamma(C(X)_{F})\) such that there is a path connecting \(f,h_{1},h_{2}\) and \(g\) in order. So immediately there is another path connecting \(g,2h_{2},2h_{1}\) and \(f\). So we get a cycle of length \(6\), namely \(f,h_{1},h_{2},g,2h_{2},2h_{1}\) and \(f\). Let us make it clear that with the given condition it is not possible to get a cycle of length \(5\). As \(f\) and \(g\) are not adjacent to each other, to have a cycle of length \(5\), we must have a path of length \(2\) joining \(f\) and \(g\) which is not possible as \(Z(f)\cap Z(g)=\emptyset.\) This implies that \(c(f,g)=6\). Conversely, let \(c(f,g)=6.\) Then by proof of (i) and (ii), we have \(Z(f)\cup Z(g)\neq X\) and \(Z(f)\cap Z(g)=\emptyset\).
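As a concrete illustration of the combinatorial criteria above, one may take the simplest non-trivial case \(X=\{0,1,2\}\) with the discrete topology, where \(C_{c}(X)_{F}=\mathbb{R}^{X}\). The short Python sketch below is only illustrative (the restriction to \(0\)-\(1\) valued zero divisors and the use of Python are our own choices); since the paths used in the proofs of Lemmas 8.2, 8.3 and Theorem 8.7 only involve characteristic functions, the distances are not affected by this restriction. It checks Theorem 8.5 by breadth-first search and confirms that the diameter and girth equal \(3\):

```python
from itertools import product, combinations
from collections import deque

# X = {0, 1, 2} with the discrete topology; vertices are the 0-1 valued
# nonzero zero divisors of C_c(X)_F, i.e. indicators of proper nonempty subsets.
X = (0, 1, 2)
verts = [f for f in product((0, 1), repeat=len(X)) if any(f) and not all(f)]
Z = {f: {x for x in X if f[x] == 0} for f in verts}                  # zero sets
# Lemma 8.1: f and g are adjacent iff Z(f) u Z(g) = X (equivalently fg = 0)
adj = {f: [g for g in verts if g != f and Z[f] | Z[g] == set(X)] for f in verts}

def dist(f, g):
    """Graph distance by breadth-first search."""
    seen, queue = {f}, deque([(f, 0)])
    while queue:
        v, d = queue.popleft()
        if v == g:
            return d
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))

# Theorem 8.5: the distance only depends on Z(f) u Z(g) and Z(f) n Z(g)
for f, g in combinations(verts, 2):
    expected = 1 if Z[f] | Z[g] == set(X) else (2 if Z[f] & Z[g] else 3)
    assert dist(f, g) == expected

# Theorem 8.7: diameter 3, and chi_{0}, chi_{1}, chi_{2} form a triangle (girth 3)
print(max(dist(f, g) for f, g in combinations(verts, 2)))            # -> 3
chis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert all(g in adj[f] for f, g in combinations(chis, 2))
```

Only Lemmas 8.1–8.3 and Theorems 8.5 and 8.7 are exercised here; the cycle lengths of Theorem 8.16 also involve scalar multiples such as \(2h\), which are not covered by this restricted vertex set.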
2303.05647
Electronic states in quantum wires on a Möbius strip
We study the properties of a two-dimensional non-relativistic electron gas (TDEG) constrained on wires along a M\"{o}bius strip. We considered wires around the strip and along the transverse direction, across the width of the strip. For each direction, we investigate how the curvature modifies the electronic states and their corresponding energy spectrum. At the center of the strip, the wires around the surface form quantum rings whose spectrum depends on the strip radius $a$. For wires at the edge of the strip, the inner edge turns into the outer edge. Accordingly, the curvature yields localized states in the middle of the wire. Along the strip width, the effective potential exhibits a parity symmetry breaking leading to the localization of the bound state on one side of the strip.
J. J. L. R. Pinto, J. E. G. Silva, C. A. S. Almeida
2023-03-10T01:49:08Z
http://arxiv.org/abs/2303.05647v2
# Electronic states in quantum wires on the Mobius strip. ###### Abstract In this work, we study the properties of an electron constrained on wires along the Mobius strip. We considered wires around the strip and along the transverse direction, across the width of the strip. For each direction, we investigate how the curvature modifies the electronic states and their corresponding energy spectrum. At the center of the strip, the wires around the surface form quantum rings whose spectrum depends on the strip radius \(a\). For wires at the edge of the strip, the inner edge turns into the outer edge. Accordingly, the curvature yields localized states in the middle of the wire. Along the strip width, the effective potential exhibits a parity symmetry breaking leading to the localization of the bound state on one side of the strip. ## I Introduction In recent decades, the investigation of low-dimensional systems has attracted much attention due to their unconventional properties [1; 2]. Geometry has crucial importance in two-dimensional systems, such as in flat graphene, carbon nanotubes, or even in one-dimensional quantum wires and quantum rings. Indeed, the curvature of the graphene surface modifies the electronic, elastic, and thermal properties of the material [1; 2; 3]. This new field, named _curvatronics_, explores new features and phenomena driven by the curvature of low-dimensional samples. Among the two-dimensional geometries proposed, a graphene Mobius strip has been studied both theoretically and experimentally [4; 5; 6; 7; 8]. The graphene Mobius strip is a single-sided surface built by gluing the two ends of a graphene ribbon after rotating one end by \(180^{o}\). If one starts at one edge of the Mobius strip and takes a \(360^{o}\) rotation, one ends up on the other edge of the strip. Thus, besides the curvature, the asymmetry of the Mobius strip ought to influence the electronic properties. The influence of curvature on the quantum dynamics of a particle constrained on surfaces is a long-standing issue [9; 10; 11]. One widely used approach is the so-called squeezing method, wherein the Schrodinger equation on the surface is obtained by attributing a small width \(\epsilon\) to the surface and taking the limit \(\epsilon\to 0\). As a result, a curvature-dependent potential is obtained, the so-called da Costa potential [11; 12]. The effects of the da Costa potential have been studied on several surfaces, such as the catenoid[13], helicoid[12], and nanotorus[14], among others. The electronic features of the whole Mobius strip have been previously explored in other works, for instance, in Ref. [15]. In Refs. [16; 17] only the minimal coupling was considered. More recently, the Laplace-Beltrami operator on the curved Mobius strip was investigated in the limit where the strip width tends to zero, i.e., in the thin-strip limit [18]. In this work, we analyze the features of an electron constrained in a wire on the Mobius strip. In fact, we can use the curvature of the Mobius strip to build one-dimensional structures on this surface and analyze the electronic states of an electron restricted to these quantum wires. An electron constrained to move in the longitudinal direction moves in a quantum wire around the Mobius strip and is influenced by the bending of the strip along that direction, whereas an electron constrained in the transverse direction moves in a quantum wire across the width of the strip, also feeling the influence of the curvature in that direction.
Thus, we consider two possible wires: one along the length of the strip and one along the width of the strip. For the former, we vary the angular variable \(\theta\) while keeping the width variable \(u\) constant. For the latter, we fix \(\theta\) and vary \(u\). For each wire, we obtain the effective potential containing the curvature influence of that wire, and we analyze the wave functions and their corresponding energy levels. In addition, the symmetries inherited or broken by the geometry upon the electronic states are discussed. This paper is organized as follows: In section II, we present a brief review of the differential geometry of the Mobius strip. In section III, we obtain the effective Hamiltonian containing the da Costa potential and discuss its symmetries and the appropriate boundary conditions of our problem. In section IV, we define the wires around the strip and along the width of the strip. For each wire, we study the properties of the eigenfunctions and their respective spectrum. We also discuss some limiting cases for each wire, e.g., a width large compared to the strip radius. Finally, additional considerations and future perspectives are presented in section V. ## II The Mobius Strip In this section, we provide a brief geometric description of the Mobius strip and construct a curvilinear coordinate system on its surface. In cylindrical coordinates, a Mobius strip with inner radius \(a\) and width \(2w\) can be parameterized as \[\vec{\mathbf{r}}(u,\theta)=\left(a+u\cos\frac{\theta}{2}\right)\hat{\rho}+u\sin\frac{\theta}{2}\hat{k} \tag{1}\] where \(u\) is the coordinate that measures the distance between a point on the strip and its inner circle, measured along the strip, with \(-w<u<w\), and \(\theta\) runs around the strip, that is, \(0\leq\theta\leq 2\pi\). The vectors tangent to the surface are given by \[\begin{cases}\hat{\mathbf{e}}_{u}=\frac{\partial\vec{\mathbf{r}}}{\partial u}=\cos\frac{\theta}{2}\hat{\rho}+\sin\frac{\theta}{2}\hat{k}\\ \hat{\mathbf{e}}_{\theta}=\frac{\partial\vec{\mathbf{r}}}{\partial\theta}=-\frac{u}{2}\sin\frac{\theta}{2}\hat{\rho}+(a+u\cos\frac{\theta}{2})\hat{\theta}+\frac{u}{2}\cos\frac{\theta}{2}\hat{k},\end{cases} \tag{2}\] and the normal vector is given by \[\hat{\mathbf{n}}=\frac{\hat{\mathbf{e}}_{u}\times\hat{\mathbf{e}}_{\theta}}{|\hat{\mathbf{e}}_{u}\times\hat{\mathbf{e}}_{\theta}|}, \tag{3}\] that is, \[\hat{\mathbf{n}}=\frac{1}{\beta}[-\sin\frac{\theta}{2}(a+u\cos\frac{\theta}{2})\hat{\rho}-\frac{u}{2}\hat{\theta}+\cos\frac{\theta}{2}(a+u\cos\frac{\theta}{2})\hat{k}], \tag{4}\] where \[\beta(u,\theta)=\sqrt{\frac{u^{2}}{4}+\left(a+u\cos\frac{\theta}{2}\right)^{2}}. \tag{5}\] In this way, we can build a coordinate system on the Mobius strip determined by the mutually orthogonal vectors \(\hat{\mathbf{e}}_{u}\), \(\hat{\mathbf{e}}_{\theta}\), and \(\hat{\mathbf{n}}\). From the tangent vectors, we define the metric tensor of the surface, \(g_{ij}=\hat{\mathbf{e}}_{i}\cdot\hat{\mathbf{e}}_{j}\). In matrix notation, the metric tensor takes the form \[g_{ij}=\begin{pmatrix}1&0\\ 0&\beta^{2}(u,\theta)\end{pmatrix}. \tag{6}\] It is worthwhile to mention that the metric is not invariant under a parity transformation \((u,\theta)\rightarrow(-u,-\theta)\). However, \(\beta\) is invariant under an inversion along the width combined with a rotation by a \(2\pi\) angle, i.e., \(\beta(-u,\theta+2\pi)=\beta(u,\theta)\). A short symbolic check of eqs. (2), (5) and (6), and of this symmetry, is sketched below.
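The metric can be recomputed from the Cartesian form of the embedding (1), using \(\hat{\rho}=(\cos\theta,\sin\theta,0)\) and \(\hat{k}=(0,0,1)\). The following SymPy sketch is only an illustrative cross-check (the symbolic tool and the particular verification steps are our own choices, not part of the original derivation):

```python
import sympy as sp

u, th = sp.symbols('u theta', real=True)
a = sp.symbols('a', positive=True)

# Cartesian form of eq. (1): rho-hat = (cos th, sin th, 0), k-hat = (0, 0, 1)
rho = a + u*sp.cos(th/2)
r = sp.Matrix([rho*sp.cos(th), rho*sp.sin(th), u*sp.sin(th/2)])

e_u, e_th = r.diff(u), r.diff(th)          # tangent vectors of eq. (2)
beta2 = u**2/4 + rho**2                    # square of eq. (5)

print(sp.simplify(e_u.dot(e_u)))           # g_uu            -> 1
print(sp.simplify(e_u.dot(e_th)))          # g_{u theta}     -> 0
print(sp.simplify(e_th.dot(e_th) - beta2)) # g_{theta theta} - beta^2 -> 0

# symmetry quoted above: beta(-u, theta + 2*pi) = beta(u, theta)
flipped = beta2.subs([(u, -u), (th, th + 2*sp.pi)], simultaneous=True)
print(sp.simplify(flipped - beta2))        # -> 0
```

All four outputs should reduce to the quoted values, confirming that \((u,\theta)\) are orthogonal coordinates with \(g_{uu}=1\) and \(g_{\theta\theta}=\beta^{2}\).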
In addition, the metric is invariant under the transformation \((u,\theta)\rightarrow(-u,2\pi-\theta)\). These symmetries can be seen in fig.(2). The curvature of the surface is characterized by the mean and Gaussian curvatures, which are obtained from the first and second fundamental forms [19]. The elements of the second fundamental form are obtained by means of \(h_{ij}=\partial_{i}\hat{\mathbf{e}}_{j}\cdot\hat{\mathbf{n}}\), from which we obtain the elements of the Weingarten matrix [19]: \[h_{j}^{i}=\frac{1}{\beta^{2}}\begin{pmatrix}0&\frac{a}{2\beta}\\ \frac{a}{2\beta}&\frac{\sin\frac{\theta}{2}(4\beta^{2}+u^{2})}{4\beta}\end{pmatrix}, \tag{7}\] where \(h_{j}^{i}=g^{ik}h_{kj}\). The Gaussian curvature is defined as \(K=det(h_{j}^{i})\), and the mean curvature is given by \(H=\frac{1}{2}Tr(h_{j}^{i})\). For the Mobius strip, the mean curvature reads \[H(u,\theta)=\frac{\sin(\theta/2)(4\beta^{2}+u^{2})}{8\beta^{3}}, \tag{8}\] and the Gaussian curvature is \[K(u,\theta)=-\frac{a^{2}}{4\beta^{4}}. \tag{9}\] Figure 1: Möbius strip with a local reference frame determined by the tangent vectors \(\hat{\mathbf{e}}_{u}\), \(\hat{\mathbf{e}}_{\theta}\), and the normal vector \(\hat{\mathbf{n}}\). The behavior of the mean curvature \(H\) and the Gaussian curvature \(K\) with respect to \(u\) and \(\theta\) is shown in fig.(3) and fig.(4), respectively. Note that the curvatures exhibit the symmetry under the transformation \((u,\theta)\rightarrow(-u,2\pi-\theta)\). Moreover, both curvatures are greater around the middle of the strip, near the angles \(\theta=\pi/w\) and \(\theta=\frac{3\pi}{2}\). ## III Electron on Mobius strip In this section, we describe the dynamics of a constrained electron on the surface of a Mobius strip. An electron restricted to a surface, in the absence of external fields, is governed by the Hamiltonian [11; 13] \[\hat{\mathcal{H}}=\frac{1}{2m^{*}}g^{ij}\hat{P}_{i}\hat{P}_{j}+V_{dC}, \tag{10}\] where \(m^{*}\) is the effective mass of the electron, \(\hat{P}_{i}:=-i\hbar\nabla_{i}\) is the momentum operator, and \(V_{g}\) is a potential of geometric origin that accounts for the contribution of the curvature to the Hamiltonian of the electron. The covariant derivative \(\nabla_{i}\) of the momentum operator is given by \(\nabla_{i}V^{j}:=\partial_{i}V^{j}+\Gamma^{j}_{ik}V^{k}\), where \(\Gamma^{j}_{ik}=\frac{g^{jm}}{2}(\partial_{i}g_{mk}+\partial_{k}g_{mi}-\partial_{m}g_{ik})\) is the Christoffel symbol. Thus, the spinless stationary Schrodinger equation is \[\frac{1}{2m^{*}}g^{ij}\{-\hbar^{2}[\partial_{i}\partial_{j}\psi-\Gamma^{k}_{ij}\partial_{k}\psi]\}+V_{g}\psi=E\psi. \tag{11}\] The geometric potential \(V_{g}\), known as the da Costa potential, depends on the Gaussian curvature \(K\) and the mean curvature \(H\) through \(V_{g}=V_{dC}=-\frac{\hbar^{2}}{2m^{*}}\big{(}H^{2}-K\big{)}\)[11]. Thus, the da Costa potential on the Mobius strip is given by \[V_{dC}=-\frac{\hbar^{2}}{2m^{*}}\Big{[}\frac{\sin^{2}(\theta/2)}{64\beta^{6}}(4\beta^{2}+u^{2})^{2}+\frac{a^{2}}{4\beta^{4}}\Big{]}, \tag{12}\] whose behavior is sketched in fig.(5). It is worthwhile to mention that the geometric potential exhibits the same symmetries and behavior as the mean and Gaussian curvatures. Before proceeding to the study of the Schrodinger equation, let us analyze the symmetries of the Hamiltonian.
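Before doing so, eqs. (8), (9) and (12) can be spot-checked numerically by rebuilding the shape operator directly from the embedding (1). The sketch below is only a rough consistency check (the sample points, the use of SymPy and the unit convention \(\hbar^{2}/2m^{*}=1\) are our own choices), with signs fixed by the normal of eq. (3):

```python
import sympy as sp

u, th = sp.symbols('u theta', real=True)
a = 1  # strip radius used in the figures

rho = a + u*sp.cos(th/2)
r = sp.Matrix([rho*sp.cos(th), rho*sp.sin(th), u*sp.sin(th/2)])   # eq. (1)
ru, rth = r.diff(u), r.diff(th)
n = ru.cross(rth)
n = n / sp.sqrt(n.dot(n))                                         # eq. (3)

g = sp.Matrix([[ru.dot(ru), ru.dot(rth)], [rth.dot(ru), rth.dot(rth)]])
h = sp.Matrix([[r.diff(u, 2).dot(n), r.diff(u).diff(th).dot(n)],
               [r.diff(th).diff(u).dot(n), r.diff(th, 2).dot(n)]])
S = g.inv() * h                   # Weingarten (shape) operator, cf. eq. (7)
H, K = S.trace()/2, S.det()

beta = sp.sqrt(u**2/4 + rho**2)                                    # eq. (5)
H_paper = sp.sin(th/2)*(4*beta**2 + u**2)/(8*beta**3)              # eq. (8)
K_paper = -a**2/(4*beta**4)                                        # eq. (9)
V_paper = -(H_paper**2 - K_paper)       # eq. (12) with hbar^2/2m* = 1

for pt in ({u: 0.3, th: 1.0}, {u: -0.6, th: 4.2}):
    print([sp.N(e.subs(pt), 6) for e in (H, H_paper, K, K_paper,
                                         -(H**2 - K), V_paper)])
```

At each sample point the paired entries for \(H\), \(K\) and the geometric potential should coincide, which is a quick sanity check on the curvature expressions above.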
Figure 4: Gaussian curvature of a Möbius strip according to equation (11), where the result is varied for a Möbius strip of internal radius \(a=1\), \(-5\leq u\leq 5\), and \(0\leq\theta\leq 2\pi\). Figure 5: Geometric potential on the Möbius strip. Figure 3: Mean curvature of a Möbius strip according to equation (10), where the result is varied for a Möbius strip of internal radius \(a=1\), \(-5\leq u\leq 5\), and \(0\leq\theta\leq 2\pi\). ### Symmetries The Hamiltonian can be cast into the form \[\hat{\mathcal{H}}=\frac{1}{2m}\Big{[}\hat{P}_{u}^{2}+\frac{1}{\beta^{2}}\hat{P}_{ \theta}^{2}-i\hbar\frac{\partial_{u}\beta}{\beta}\hat{P}_{u}+i\hbar\frac{ \partial_{\theta}\beta}{\beta^{3}}\hat{P}_{\theta}\Big{]}+V_{dC}. \tag{13}\] It is worthwhile to mention that, unlike the catenoid, nanotubes, helicoid, and torus among others, the Mobius strip has no axial symmetry, i.e., the surface does not remain invariant under the transformation \(\theta\rightarrow-\theta\). This asymmetry is inherited by the Hamiltonian (13), which now depends explicitly on \(\theta\). The Hamiltonian dependence on \(\theta\) means that the angular momentum with respect to the \(z\) axis \(L_{z}\) is no longer conserved. Accordingly, the \(\hat{L}_{z}\) operator \(\hat{L}_{z}=-\frac{i\hbar}{\beta^{2}}\frac{d}{d\theta}\) no longer commutes with the Hamiltonian (13). As a result, we cannot separate the wave function as \(\Psi(u,\theta)=e^{im\theta}\psi(u)\). Another noteworthy feature of the Hamiltonian (13) is that the presence of the first-derivative terms turns the Hamiltonian non-Hermitian. In fact, \[\Big{[}\frac{i\hbar}{2m}\frac{\partial_{u}\beta}{\beta}\hat{P}_{u}\Big{]}^{ \dagger}=\frac{-\hbar^{2}}{2m}\partial_{u}\left(\frac{\partial_{u}\beta}{ \beta}\right)-\frac{i\hbar}{2m}\frac{\partial_{u}\beta}{\beta}\hat{P}_{u} \tag{14}\] and similarly \[\Big{[}\frac{i\hbar}{2m}\frac{\partial_{\theta}\beta}{\beta^{3}}\hat{P}_{ \theta}\Big{]}^{\dagger}=\frac{-\hbar^{2}}{2m}\partial_{\theta}\left(\frac{ \partial_{\theta}\beta}{\beta^{3}}\right)-\frac{i\hbar}{2m}\frac{\partial_{ \theta}\beta}{\beta^{3}}\hat{P}_{\theta} \tag{15}\] Since the Hamiltonian does not depend on the time, the Hamiltonian is invariant under the time-reversal symmetry \(\mathcal{T}\), where \(t\rightarrow-t\). This symmetry leads to the conservation of energy. Although the Hamiltonian is not invariant under parity transformation \((-u,-\theta)\rightarrow(u,\theta)\), the Hamiltonian inherits the combined symmetry \((-u,2\pi-\theta)\rightarrow(u,\theta)\). This symmetry can be considered as a modified Mobius parity transformation \(\mathcal{P}_{M}\). Therefore, the Mobius strip is invariant under \(\mathcal{P}_{M}\mathcal{T}\) transformation. The non-hermiticity of the Hamiltonian (13) could lead to imaginary energy eigenvalues. However, as shown by some authors[20; 21], the \(\mathcal{PT}\) symmetry allows the existence of a real energy spectrum. ## IV Quantum Rings and Quantum Wires on a Mobius Strip In the previous sections, we review the basic features of the Mobius strip geometry and explored the properties of the electron Hamiltonian. In this section, we define wires on the Mobius strip wherein the electron will be free to move. The effects of the curvature of these wires upon the electron states will be investigated. Let us start with a wire along the \(\theta\) direction. ### Electron in a quantum ring on the Mobius strip. 
By fixing the variable \(u\), the stationary Schrödinger equation \(H\Psi=E\Psi\) with the Hamiltonian given by eq.(13) yields
\[-\frac{1}{\beta^{2}}\frac{d^{2}\psi}{d\theta^{2}}+\frac{\partial_{\theta}\beta}{\beta^{3}}\frac{d\psi}{d\theta}-(H^{2}-K)\psi=\epsilon\psi, \tag{16}\]
where \(\epsilon=\frac{2mE}{\hbar^{2}}\). Although it contains only derivatives with respect to \(\theta\), eq.(16) depends on the fixed value of \(u\). By performing the change of variable
\[v(u_{0},\theta)=\int\beta(u_{0},\theta)d\theta, \tag{17}\]
the Schrödinger equation becomes
\[-\frac{d^{2}\psi(v)}{dv^{2}}+U(u_{0},v)\psi(v)=\epsilon\psi(v), \tag{18}\]
where \(U(v,u_{0})=-\left(H^{2}(u_{0},v)-K(u_{0},v)\right)\) is the effective potential. The variable \(v\) is the arc length for a given \(u\). In fig.(6) we plot the effective potential along the angular wire for several fixed values of \(u\). It is worthwhile to mention that the effective potential is just the da Costa potential. Hence, in a minimal-coupling prescription, in which the geometric (da Costa) potential is absent, the effective potential for the angular wire vanishes. In addition, note that the potential is asymmetric under the change \(u_{0}\rightarrow-u_{0}\). Thus, the choice of \(u_{0}\) leads to the localization of the electron on one side or the other of the strip.

#### iv.1.1 Quantum ring at the center of the Mobius strip

Let us start our analysis with the most symmetric case, i.e., \(u=0\). This wire forms a quantum ring around the Mobius strip. Indeed, for \(u=0\), the metric component \(\beta(0,\theta)=a\).

Figure 6: Effective potentials on the Möbius strip for fixed values of \(u\) for a Möbius strip of length \(L=5\) and inner radius \(a=1\).

For \(u=0\), the effective Schrödinger equation eq.(18) becomes
\[-\ddot{\psi}(v)+\left[-\frac{1}{4a^{2}}\sin^{2}\left(\frac{v}{2a}\right)-\frac{1}{4a^{2}}\right]\psi(v)=\epsilon\psi(v). \tag{19}\]
Thus, the effective potential for a wire on the Mobius strip at \(u=0\) is given by
\[U(v)=-\frac{1}{4a^{2}}-\frac{1}{4a^{2}}\sin^{2}\left(\frac{v}{2a}\right), \tag{20}\]
whose behavior is sketched in fig.(7). Note that the first term in eq.(20) represents the potential due to the curvature of the ring, whereas the second term stems from the Mobius strip curvature. By redefining the variable \(x=\frac{v}{2a}\), the Schrödinger equation along the ring at \(u=0\) can be cast into a Mathieu equation
\[\frac{d^{2}\psi(x)}{dx^{2}}+[p-2q\cos(2x)]\psi(x)=0, \tag{21}\]
with
\[\begin{cases}p=4a^{2}\epsilon+\frac{3}{2},\\ q=\frac{1}{4},\end{cases} \tag{22}\]
where \(p\) and \(q\) are parameters. Mathieu's equation has even-index solutions denoted by \(Ce_{2m}(a_{2m},q,x)\) and \(Se_{2m}(b_{2m},q,x)\), \(m=1,2,3,\ldots\), periodic in \(\pi\), and odd-index solutions denoted by \(Ce_{2m+1}(a_{2m+1},q,x)\) and \(Se_{2m+1}(b_{2m+1},q,x)\), \(m=1,2,3,\ldots\), periodic in \(2\pi\)[22]. The complete solution to our problem is therefore
\[\psi(x)=C_{1}Ce_{2m+1}(q,x)+C_{2}Se_{2m+1}(q,x). \tag{23}\]
Since the potential is periodic with a period of \(2\pi\), we adopt the following boundary conditions
\[\psi(0)=\psi(2\pi)=0. \tag{24}\]
With these conditions, the solution depends only on the functions \(Se_{2m+1}\).
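The spectrum implied by Eqs. (19)–(24) can be cross-checked numerically. The sketch below is only an illustration (not code from the paper): it diagonalizes a finite-difference version of Eq. (19) with the Dirichlet conditions of Eq. (24), assuming \(a=1\) and units \(\hbar^{2}/2\mu=1\), and compares the result with the Mathieu characteristic values entering Eq. (22), evaluated with SciPy.

```python
import numpy as np
from scipy.special import mathieu_b
from scipy.linalg import eigh_tridiagonal

# Finite-difference check of Eq. (19) on the ring u = 0 (illustrative sketch).
# Units: hbar^2/(2*mu) = 1, inner radius a = 1, Dirichlet conditions psi(0) = psi(2*pi) = 0.
a = 1.0
N = 4000                                    # number of interior grid points
v = np.linspace(0.0, 2.0 * np.pi, N + 2)    # arc length along the ring
h = v[1] - v[0]
U = -1.0 / (4 * a**2) - np.sin(v[1:-1] / (2 * a))**2 / (4 * a**2)   # Eq. (20)

diag = 2.0 / h**2 + U
off = -np.ones(N - 1) / h**2
eps_fd = eigh_tridiagonal(diag, off, eigvals_only=True, select="i", select_range=(0, 3))

# Eq. (22) relates the eigenvalue to the Mathieu characteristic value: p = 4*a^2*eps + 3/2, q = 1/4,
# so eps_m = (b_m(1/4) - 3/2) / (4*a^2), with b_m the characteristic value of se_m.
eps_mathieu = [(mathieu_b(m, 0.25) - 1.5) / (4 * a**2) for m in range(1, 5)]

print("finite differences:", np.round(eps_fd, 5))
print("Mathieu b_m:       ", np.round(eps_mathieu, 5))
```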
Finally, we arrive at the following normalized solution for the range \(v\in[0,2\pi]\):
\[\psi(v)=\frac{1}{\sqrt{\pi}}Se_{2m+1}\left(\frac{v}{2a},\frac{1}{4}\right). \tag{25}\]
The energy spectrum of the particle is related to the characteristic values \(b_{2n+1}\) by
\[E_{2n+1}=\frac{\hbar^{2}}{2\mu a^{2}}\left(\frac{b_{2n+1}}{4}-\frac{3}{8}\right), \tag{26}\]
where \(n=1,2,3,\ldots\). Hence, the ground state has energy \(E_{0}=-\frac{3\hbar^{2}}{16\mu a^{2}}\). For comparison, the wave function and energy spectrum of a particle of mass \(\mu\) on the usual circular quantum ring are
\[\psi(\theta)=\frac{e^{im\theta}}{\sqrt{2\pi}},\quad E_{m}^{(ring)}=\frac{\hbar^{2}}{2\mu a^{2}}m^{2},\quad(m\in\mathbb{Z}), \tag{27}\]
where \(m\) corresponds to the eigenvalue of the angular momentum of the particle in the \(z\) direction. It is interesting to note that the spectrum of the particle at the center of the Mobius strip is not degenerate at any level, in contrast to the circular ring, whose levels above the ground state are all doubly degenerate. We plot the probability density functions of the first three eigenstates in figure (7).

Figure 7: In graph (a) we have the effective potential for a particle at the center of the Möbius strip (\(u=0\)), with inner radius \(a=1\) and length \(L=2\pi a\). In (b) we have the probability density for the first four eigenstates of the wave function of equation (25).

#### iv.1.2 Quantum wire at the edges of the Mobius strip

Let us now consider the motion of the electron in a wire at the edge of the strip. For a strip of width \(w=2\), \(u\) ranges from \(-1\) to \(1\), and the extreme values of \(u\) therefore represent the edge of the strip. The wire is not a closed loop (ring), due to the \(4\pi\) periodicity of the Mobius strip. The Schrödinger equation (16) for \(u=\pm 1\) is therefore
\[\begin{split}-\frac{1}{\beta_{u=\pm 1}^{2}}\frac{d^{2}\psi}{d\theta^{2}}-\frac{\pm(1\pm\cos\frac{\theta}{2})\sin\frac{\theta}{2}}{2\beta_{u=\pm 1}^{4}}\frac{d\psi}{d\theta}\\ -\left[\frac{1}{4\beta_{u=\pm 1}^{4}}-\frac{(1+4\beta_{u=\pm 1}^{2})^{2}\sin^{2}\frac{\theta}{2}}{64\beta_{u=\pm 1}^{6}}\right]\psi=\epsilon\psi,\end{split} \tag{28}\]
where \(\beta_{u=\pm 1}^{2}=\sqrt{\frac{1}{4}+(1\pm\cos\frac{\theta}{2})^{2}}\). The Schrödinger equation (28) also has a term that depends on the first-order derivative in \(\theta\). Performing the change of variable (17), we obtain a Schrödinger equation in the variable \(v\). Since the expression of the effective potential in \(v\) is rather cumbersome, we present only the plots of the potentials and their respective wave functions. In fig.(10), the effective potential is plotted for \(a=1\) and \(L=5\). The potential exhibits a symmetric well around the origin \(v=0\). The ground state has a bell shape localized at the origin. On the other hand, the first excited state is localized at two points displaced symmetrically from the origin, as shown in fig.(11). The potential changes drastically as we reduce the inner radius \(a\). In fig.(12), we plot the effective potential for \(a=0.5\), where a barrier arises near the origin. As a result, the wave function of the ground state becomes shifted away from \(v=0\).

#### iv.1.3 Quantum wire along the Mobius strip width

Now let us consider the electron constrained to move along wires in the \(u\) direction. We investigate how the wave function is modified by the Mobius strip curvature. By fixing the coordinate \(\theta\), the Schrödinger equation reads
\[-\frac{d^{2}\psi}{du^{2}}-\frac{\partial_{u}\beta}{\beta}\frac{d\psi}{du}-(H^{2}-K)\psi=\epsilon\psi. \tag{29}\]
This equation can be further simplified by the following change of the wave function
\[\psi(u)=\psi(u,\theta_{0})=\frac{1}{\sqrt{\beta(u,\theta_{0})}}\phi(u). \tag{30}\]
The resulting Schrödinger equation for the function \(\phi(u)\) has the form
\[-\frac{d^{2}\phi(u)}{du^{2}}+W(u,\theta_{0})\phi(u)=\epsilon\phi(u), \tag{31}\]
where the effective potential is given by
\[\begin{split} W(u,\theta_{0})=\left(\frac{\partial_{u}\beta}{2\beta}\right)_{|\theta=\theta_{0}}^{2}+\partial_{u}\left(\frac{\partial_{u}\beta}{2\beta}\right)_{|\theta=\theta_{0}}\\ -\left(H^{2}(u,\theta_{0})-K(u,\theta_{0})\right).\end{split} \tag{32}\]
We plot the effective potential \(W(u,\theta_{0})\) of eq.(32) along the \(u\) direction for some fixed values of \(\theta\). In fig.(14), the wire is located at \(\theta=0\). The potential exhibits a single well shifted to the left, towards the inner portion of the strip. The respective wave function is then localized around this inner point. For \(\theta=\pi\), shown in fig.(15), the potential well is symmetric with respect to the origin \(u=0\). The wider potential allows the wave function to spread along the wire. For \(\theta=2\pi\), both the potential well and the wave function are shifted to the right of the origin, as seen in fig.(16). The behavior described above shows that the directional dependence of the Mobius strip geometry enables us to devise quantum wires wherein the density of states is controlled by the position and the curvature of the strip.

Figure 8: Quantum wire on the edge of the Möbius strip.

Figure 9: Quantum wire along the width of the Möbius strip.

## V Final remarks and perspectives

In this work, we studied how the curvature of quantum wires constrained to the Mobius strip affects the electron properties. The advantage of considering the wire on the surface stems from the fact that the geometric potential depends not only on the wire curvature but on the surface curvature as well. Moreover, the electronic properties are modified by the twist and the symmetries of the Mobius strip. By considering a wire along the length of the strip at the center, i.e., for \(u=0\), the wire forms a ring whose effective potential has a ground state localized symmetrically around \(\theta=\pi\). On the other hand, the excited states are more affected by the ring curvature \(1/a\). For a wire on the edge of the strip, the states are localized around \(\theta=0\). If the inner radius \(a\) is reduced compared to the strip width \(w\), the ground state gets localized at two symmetric points around the origin. We also considered wires along the strip width, i.e., for fixed \(\theta\). The anisotropy of the Mobius strip makes the effective potential highly dependent on the chosen angle. For \(\theta=0\), the ground state is localized on the inner side of the strip, i.e., \(u<0\). As the angle \(\theta\) varies from 0 to \(2\pi\), the wave function is shifted from the inner to the outer side, i.e., to \(u>0\). The present work suggests further investigations. For instance, the effects of the curvature of a strip with multiple twists, or the inclusion of external magnetic or electric fields, could provide a way to tune and control the density of states at other points along the strip. Moreover, the effects of the spin, by means of the Dirac or the Pauli equation, seem promising.

Figure 10: (a) Effective potential at the edge of a Möbius strip (solid black line), and energy value of the first restricted state (dashed blue line). (b) The probability density function for the first bound state on the edge of the Möbius strip (solid blue line). Results for a Möbius strip with inner radius \(a=1\) and length \(L=5\).

Figure 11: (a) Effective potential at the edge of a Möbius strip (solid black line), and energy value of the second restricted state (dashed red line). (b) The probability density function for the second bound state on the edge of the Möbius strip (solid red line). Results for a Möbius strip with inner radius \(a=1\) and length \(L=5\).

Figure 12: (a) Effective potential at the edge of a Möbius strip (solid black line), and energy value of the first restricted state (dashed blue line). (b) The probability density function for the first bound state on the edge of the Möbius strip (solid blue line). Results for a Möbius strip with inner radius \(a=0.5\) and length \(L=5\).

Figure 14: (a) Effective potential at the width of the Möbius strip for \(\theta_{0}=0\) (solid black line) and energy value of the bound state (dashed blue line). (b) Probability density of the bound state along the width of the Möbius strip. We use \(a=1\) and \(w=10\).

Figure 15: (a) Effective potential at the width of the Möbius strip for \(\theta_{0}=\pi\) (solid black line) and energy value of the bound state (dashed blue line). (b) Probability density of the bound state along the width of the Möbius strip. We use \(a=1\) and \(w=10\).

## Acknowledgements

The authors thank the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), grants nº 162277/2021-0 (J.J.L.R.), nº 312356/2017-0 (JEGS), and nº 309553/2021-0 (CASA), for financial support.

## Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
2302.12012
Novel Epileptic Seizure Detection Techniques and their Empirical Analysis
An Electroencephalogram (EEG) is a non-invasive exam that records the brain's electrical activity. This is used to help diagnose conditions such as different brain problems. EEG signals are taken for epilepsy detection, and with Discrete Wavelet Transform (DWT) and machine learning classifier, they perform epilepsy detection. In Epilepsy seizure detection, machine learning classifiers and statistical features are mainly used. The hidden information in the EEG signal helps detect diseases affecting the brain. Sometimes it is complicated to identify the minimum changes in the EEG in the time and frequency domain's purpose. The DWT can give a suitable decomposition of the signals in different frequency bands and feature extraction. We use the tri-dimensionality reduction algorithm, Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA). Finally, features are selected by using a fusion rule and at the last step, three different classifiers, Support Vector Machine (SVM), Naive Bayes (NB), and K-Nearest-Neighbor (KNN) have been used individually for the classification. The proposed framework is tested on the Bonn dataset. The simulation results provide 100% accuracy for the LDA and NB combination outperforming accuracy with other classifiers combinations, including 89.17% for LDA and SVM, 80.42% for LDA and KNN, 89.92% for PCA and NB, 85.58% PCA and SVM, 80.42% PCA and KNN, 82.33% for ICA and NB, 90.42% for ICA and SVM, 90% for ICA and KNN. Also, the LDA and NB combination shows the sensitivity, specificity, accuracy, Precision, and Recall of 100%, 100%, 100%, 100%, and 100%. The results prove the effectiveness of this model.
Rabel Guharoy, Nanda Dulal Jana, Suparna Biswas, Lalit Garg
2023-02-17T20:58:17Z
http://arxiv.org/abs/2302.12012v4
Detection of Epilepsy Seizure using Different Dimensionality Reduction Techniques and Machine Learning on Transform Domain ###### Abstract: An Electroencephalogram (EEG) is a non-invasive exam that records electrical activity of the brain. This exam is used to help diagnose conditions such as different brain problems. EEG signals are taken for the purpose of epilepsy detection and with Discrete Wavelet Transform (DWT) and machine learning classifier, they perform epilepsy detection. In Epilepsy seizure detection, mainly machine learning classifiers and statistical features are used. The hidden information in the EEG signal is useful for detecting diseases affecting the brain. Sometimes it is very difficult to identify the minimum changes in the EEG in time and frequency domain's purpose. The DWT can give good decomposition of the signals in different frequency bands and feature extraction. We use the tri-dimensionality reduction algorithm.; Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Linear Discriminant Analysis (LDA). Finally, features are selected by using a fusion rule and at the last step three different classifiers Support Vector Machine (SVM), Naive Bayes (NB) and K-Nearest-Neighbor (KNN) has been used for the classification. The proposed framework is tested on the Bonn dataset and the simulation results provide the maximum accuracy for the combination of LDA and NB for 10-fold cross validation technique. It shows the maximum average sensitivity, specificity, accuracy, Precision and Recall of 100%, 100%, 100%, 100% and 100%. The results prove the effectiveness of this model. E lectroencephalogram (EEG), Epilepsy, Discrete Wavelet Transform (DWT), Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Naive Bayes (NB) and K-Nearest-Neighbor (KNN). ## 1 Introduction: Epilepsy is an abnormal electrical brain activity as well as called seizure, like an electrical storm inside the brain. Chronic neurological disorder is epilepsy. Inside the brain, electrical activity disturbance is the main cause [3, 5, 6]. It could be originated by different reasons [7] such as information, it may be low sugar level and shortage of oxygen during childbirth in blood [8, 9]. In the world, epilepsy affects 50 million people approximately, with 100 million being affected one time in their lifetime [5, 10]. Overall, 1% of the world's burden of epilepsy, and the control rate is 0.5%-1% [4, 11]. It is detected by brain neurons through analysing the brain signals. Generate the signals through the neuron's connection with each neuron in a difficult way to share with human organs. Electroencephalogram (EEG) and Electrocorticography (ECoG) media monitoring of the same brain signal. These signals are complex, non-linear, non-stationary, noisy and produce big data. Seizure detection and discovery is a challenging work for the brain-related knowledge. Classify EEG data and detect seizures with sensible patterns without compromising performance through the machine learning classifiers. Main challenges are selecting better classifiers and features. Last few years mainly focus on the machine learning classifiers and taxonomy of statistical features- 'black-box' and 'non-black-box'. Mainly focus on state-of-the-art methods and ideas will give a better understanding of seizure detection and classification [2]. 
As per nuero expert seizures can be divided to two types based on symptoms, one is partial and another one is generalized [7, 13]. DWT are used for different groups of epilepsy types of electroencephalogram (EEG) signals. Discrete wavelet transform is better for the feature extraction step because it efficiently works in this field. Used feature selection to get minimized dimensionality without irrelevant features. It is used for differential evolution purposes. In this research work considering feature extraction and selection using efficient models for EEG classification. seven varieties of wavelets were tested. Few kinds of discrete wavelet transforms are used to process a spacious difference of features. In raw data, using multi-level DWT and a number of sub-bands to extract the features. In features, applied for seven statistical functions. These functions include SD, AVP, MAV, mean, variance, Shannon entropy and skewness. Choose better features to use these function values as an input to DE come in order to classify the signals, using three matching metrics, six supervised machine learning and two ensemble learning methods. A method to classify cases of epilepsy by brain signals better accuracy. For the diagnosis of epilepsy used DE with DWT to obtain clear and better results. It builds a model for an effective classification process. In result, various performance of supervised classifiers in terms of accuracy, recall measures, and precision. In this research SVM were better for accuracy, NB and KNN are better in convergent results. In these three algorithms, SVM is better performed in EEG signals classification. Also applying F-measure purpose on different data sets and various classifiers, SVM gets the highest percentage of that and accuracy also [1]. Various methods are already given. The EEG signals found on the Brain-computer Interface (BCI) equipment are non-linear, non-stationary, weak and time-varying, improving the accuracy through an effective feature extraction method. The process of EEG signals acquisition is some common noises like electrooculogram and electrocardiogram artifacts, it is most affecting the extraction of main information. Combining independent component analysis (ICA), common spatial pattern (CSP) and wavelet transform (WT) to process methods on EEG signals. ICA algorithm for breaking the EEG signals into independent components. Then these components are decomposed by WT to reach the wavelet coefficient of each independent source. The two-compromise threshold function is used to action the wavelet packet coefficients. After that CSP algorithm is used to extract the features of the denoised EEG data. Lastly four classification algorithms are used for classification to check the effectiveness of the better algorithm. Result is better identified and remove EOG and ECG artifacts from the data and preserve the neural activity [12]. From literature review it is observed most of the existing works are very time consuming due to high dimensional feature space. So in this work we have tried to reduce the feature dimension in two steps. At the first step after applying DWT, three different dimensionality reduction techniques are used to reduce the feature dimension. In the next step a feature level fusion technique has been used for further reduction of feature dimension. Finally at the last step three different classifiers have been used to detect epilepsy with high accuracy. 
Rest of the paper is organized as follows: Section 2 provides a brief review on epilepsy detection method, Section 3 provides all the techniques, Section 4 provides our proposed methodology and algorithm, Section 5 provides the results and discussion. ## 2 Literature review: As per previous research, Applications of machine learning are significantly seen on health and biological data sets for better outcomes [15, 16]. Researchers/scientists in different areas, specifically, data mining and machine learning, are actively involved in proposing solutions for better seizure detection. Machine learning has been significantly applied to discover sensible and meaningful patterns from different domain datasets [25, 26]. It plays a significant and potential role in solving the problems of various disciplines like healthcare [25, 27, 28, 29, 30, 31, 32, 33]. Applications of machine learning can also be seen on brain datasets for seizure detection, epilepsy lateralization, differentiating seizure sates, and localization [26, 27, 28, 29]. This has been done by various machine learning classifiers such as ANN, SVM, decision tree, decision forest, and random forest [26, 28]. Amin et al. [18] proposed a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. Discrete wavelet transform was applied on EEG signals and the relative wavelet energy was calculated from the detailed coefficients and approximation coefficients of the last decomposition level. The EEG dataset used in their work consisted of two classes: (1) EEG recorded during the Raven's advanced progressive metric test and (2) EEG recorded in resting condition with eyes open. An accuracy of 98 % was obtained in their work on using the support vector machine with approximation (A4) and detailed coefficients (D4). It was observed that their feature extraction approach had the potential to classify the EEG signals recorded during a complex cognitive task and also achieved a high accuracy rate. Al-Qerem et al. [1] developed a Wavelet family and differential evolution are proposed for categorization of epilepsy cases based on electroencephalogram (EEG) signals. Discrete wavelet transform is widely used in feature extraction because it efficiently works in this field, as confirmed by the results of previous studies. The feature selection step is used to minimize dimensionality by excluding irrelevant features. This step is conducted using differential evolution. This article presents an efficient model for EEG classification by considering feature extraction and selection. Seven different types of common wavelets were tested in our research work. These are Discrete Meyer (dmey), Reverse biorthogonal (rbio), Biorthogonal (bior), Daubechies (db), Symlets (sym), Coiflets (coif), and Haar (Haar). Different types of feature extraction are used to transform different types of discrete wavelets. Afterwards, we use differential evolution to choose appropriate features that will achieve the best performance of signal classification. For the classification step, we have used Bonn databases to build the classifiers and test their performance. The results prove the effectiveness of the proposed model. Epilepsy is a serious chronic neurological disorder, which can be detected by analyzing the brain signals produced by brain neurons. Neurons are connected to each other in a complex way to communicate with human organs and generate signals. 
The monitoring of these brain signals is commonly done using Electroencephalogram (EEG) and Electrocorticography (ECoG) media. These signals are complex, noisy, non-linear, non-stationary and produce a high volume of data. Hence, the detection of seizures and discovery of the brain-related knowledge is a challenging task. Machine learning classifiers are able to classify EEG data and detect seizures along with revealing relevant sensible patterns without compromising performance. As such, various researchers have developed a number of approaches to seizure detection using machine learning classifiers and statistical features. The main challenges are selecting appropriate classifiers and features. The aim of this paper is to present an overview of the wide varieties of these techniques over the last few years based on the taxonomy of statistical features and machine learning classifiers--'black-box' and 'non-black-box'. The presented state-of-the-art methods and ideas will give a detailed understanding about seizure detection and classification, and research directions in the future [2]. Electrocardiogram (ECG) is the P-QRS-T wave, representing the cardiac function. The information concealed in the ECG signal is useful in detecting the disease afflicting the heart. It is very difficult to identify the subtle changes in the ECG in time and frequency domains. The Discrete Wavelet Transform (DWT) can provide good time and frequency resolutions and is able to decipher the hidden complexities in the ECG. In this study, five types of beat classes of arrhythmia as recommended by Association for Advancement of Medical Instrumentation (AAMI) were analysed namely: non-ectopic beats, supra-ventricular ectopic beats, ventricular ectopic beats, fusion betas and unclassifiable and paced beats. Three dimensionality reduction algorithms; Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA) were independently applied on DWT sub bands for dimensionality reduction. These dimensionality reduced features were fed to the Support Vector Machine (SVM), neural network (NN) and probabilistic neural network (PNN) classifiers for automated diagnosis. ICA features in combination with PNN with spread value of 0.03 performed better than the PCA and LDA. It has yielded an average sensitivity, specificity, positive predictive value (PPV) and accuracy of 99.97%, 99.83%, 99.21% and 99.28% respectively using a ten-fold cross validation scheme [20]. In this paper, a new binary harmony search (BHS) is proposed to select the optimal channel sets and optimize the system accuracy. The BHS is implemented on the training data sets to select the optimal channels and the test data sets are used to evaluate the classification performance on the selected channels. The sparse representation-based classification, linear discriminant analysis, and support vector machine are performed on the common spatial pattern (CSP) features for motor imagery (MI) classification. Two public EEG datasets are employed to validate the proposed BHS method. The paired t-test is conducted on the test classification performance between the BHS and traditional CSP with all channels. The results reveal that the proposed BHS method significantly improved classification accuracy as compared to the conventional CSP method (p \(<\) 0.05) [19]. 
The electroencephalogram (EEG) signals based on the Brian-computer Interface (BCI) equipment is weak, non-linear, non-stationary and time-varying, so an effective feature extraction method is the key to improving the recognition accuracy. Electrooculogram and electrocardiogram artifacts are common noises in the process of EEG signals acquisition, it seriously affects the extraction of useful information. This paper proposes a processing method on EEG signals by combining independent component analysis (ICA), wavelet transform (WT) and common spatial pattern (CSP). First, the independent component analysis algorithm is used to break the EEG signals into independent components; and then these independent components are decomposed by WT to obtain the wavelet coefficient of each independent source. The soft and hard compromise threshold function is used to process the wavelet packet coefficients. Then the CSP algorithm is used to extract the features of the denoised EEG data. Finally, four common classification algorithms are used for classification to verify the effectiveness of the improved algorithm. The experimental results show that the EEG signals processed by the proposed method has obvious advantages in identifying and removing electrooculogram (EOG) and electrocardiogram (ECG) artifacts; meanwhile, it can preserve the neural activity that is missed in the noise component. Cross-comparison experiments also proved that the proposed method has higher classification accuracy than other algorithms [12]. ## 3 Background: This section provides the description of all the tools and techniques which are used in this work. At first we have explained the importance and function of DWT and in the next subsection all the dimensionality techniques such as PCA,ICA and LDA are explained. ### Discrete Wavelet Transform (DWT): Frequency domain techniques are very popular feature extraction techniques for different classification problems. Wavelet Transform (WT) is one of the frequency-based feature extraction approaches which shows the property of time-frequency localization [WT1][WT2] and is suitable for the analysis of non-stationary signals. ECG signals are basically non-stationary in nature. That's why WT is one of the effective tools to analyze the ECG signals [39][40]. DWT decomposes a one-dimensional signal into two subbands, such as high and low-frequency subbands. This high-frequency sub-band is called detail and the low-frequency subband is called approximation. let x is a one-dimensional signal. In the case of DWT, at first, these samples are passed through a low-pass filter with impulse g and a high-pass filter h. \[\text{y[n]}=\text{(x * g) [n]}=\sum\limits_{k=-\infty}^{\infty}x[k]g[n - k] \tag{1}\] This equation result and signal is decomposed and continues using a high-pass filter h. The outputs give the detail coefficients from high-pass filter and approximation coefficients from low-pass filter g. These two filters are related two each other and this is DWT decomposition of a signal using the filter band shown Fig 1. This signal passes through the filter h(n) and down factor of 2, to obtain the approximation coefficient in the first level. By following the same process, a signal is passed through another filter g(n) and a down sample of the same factor of 2 obtains the detail coefficients. In the second level approximation coefficients are passed through the same filter h(n) and g(n) and the down sample obtains the coefficients in second level. 
i.e.:
\[\mathrm{y}_{\mathrm{low}}[\mathrm{n}]=\sum_{k=-\infty}^{\infty}x[k]g[2n-k] \tag{2}\]
\[\mathrm{y}_{\mathrm{high}}[\mathrm{n}]=\sum_{k=-\infty}^{\infty}x[k]h[2n-k] \tag{3}\]
From the above equations, each output contains half the frequency band of the input signal, so the frequency resolution is doubled. The subsampling operator \(\downarrow\) is defined as
\[(\mathrm{y}\downarrow k)[\mathrm{n}]=\mathrm{y}[kn] \tag{4}\]
The above sums can then be written more compactly as
\[\mathrm{y}_{\mathrm{low}}=(x*g)\downarrow 2 \tag{5}\]
\[\mathrm{y}_{\mathrm{high}}=(x*h)\downarrow 2 \tag{6}\]

Figure 1: 2nd level of coefficients.

### Principal Component Analysis (PCA):

Principal Component Analysis (PCA) is a linear dimensionality reduction method that projects the data towards the directions of maximum variance. It is widely used to reduce a large data set into a smaller one that retains most of the information of the original, and it is also used for improving the performance of different machine learning algorithms. The method works by representing the data in a space that best captures the variation in terms of sum-squared error. The method is also useful for separating signals from different sources.

Step 1: Compute the covariance matrix from the data as
\[C=(X-\bar{x})(X-\bar{x})^{T} \tag{7}\]
where X is the data matrix of DWT coefficients in a sub band of dimension N \(\times\) 100, N is the total number of patterns, and \(\bar{x}\) represents the mean vector of X.

Step 2: Compute the matrix of eigenvectors V and the diagonal matrix of eigenvalues D as
\[\mathrm{V}^{-1}\mathrm{CV}=\mathrm{D} \tag{8}\]

Step 3: The eigenvectors in V are sorted in descending order of the eigenvalues in D, and the data is projected on these eigenvector directions by taking the inner product between the data matrix and the sorted eigenvector matrix as
\[\text{Projected data}=[V^{\text{T}}\left(X-\bar{x}\right)^{\text{T}}]^{\text{T}} \tag{9}\]

### Independent Component Analysis (ICA):

Independent component analysis (ICA) can be applied to mixed signals. Independence relates to the inability to guess one component from the information carried by the others. Statistically, it means that the joint probability of independent quantities is obtained as the product of the probability of each component. ICA is a statistical technique that is effective for removing noise from signals such as EEG. ICA-based techniques have previously been used for identifying and removing contamination in EEG recordings [16][17]. Such techniques mainly focus on minimizing the effect of surrounding influences on the EEG signals. Let x be a vector of mixtures {x1, x2,..., xn}, let s be the source vector {s1, s2,..., sn}, and let A denote the mixing matrix with elements aij. The ICA model assumes that the observed signal x (the DWT coefficients in a sub band) is a linear mixture of the source signals. The ICA model is given by
\[x=A s\qquad\text{or}\qquad x=\sum_{i=1}^{n}a_{i}s_{i} \tag{10}\]
The problem is to determine both the matrix A and the independent components s, knowing only the measured variables x. The only assumption the method makes is that the components are independent. It has also been shown that the components must have a non-Gaussian distribution [10].
ICA looks a lot like the "blind source separation" (BSS) problem or blind signal separation: a source is in the ICA problem an original signal, so an independent component. In the ICA case there is also no information about the independent components, like in the BSS problem. Whitening can be performed via eigenvalue decomposition of the covariance matrix: \[\text{VDV}^{\text{T}}=\text{E}[\tilde{x}\tilde{x}^{\text{T}}] \tag{11}\] where V is the matrix of orthogonal eigenvectors and D is a diagonal matrix with the corresponding eigenvalues. The whitening is done by multiplication with the transformation matrix P: \[\tilde{x}=\text{P}\tilde{x} \tag{12}\] \[\text{P}=\text{VD}^{1/2}\text{V}^{\text{T}} \tag{13}\] The matrix for extracting the independent components from \(\tilde{x}\) is \(\tilde{W}\), where \(\text{P}=\tilde{W}\text{P}\). ### Linear Discriminant Analysis (LDA): Linear Discriminant Analysis (LDA) technique is used to find a linear combination of features that separates or characterizes two or more classes of objects or events. The goal of LDA is to find a feature subspace that maximally separates the groups. LDA generates a new variable which is a joining of the original predictors. This is achieved by maximizing the differences between the predefined groups, with respect to the new variable. The predictor scores are combined in such a way that a single new discriminant score is obtained. It can also be visualized as a data dimension reduction method that gives a one-dimensional line for p-dimensional predictors. Mainly based on linear score function, a function of a class, \(\mu_{i}\), and the pooled variance-covariance matrix. The Linear Score Function is defined as: \[\mathrm{s^{L}_{i}(X)=-\frac{1}{2}\,\mathrm{\mu_{\Sigma\,\mu i}^{-1}}+\mathrm{\mu_{ \Sigma\,x}^{-1}}+\mathrm{logP(\Pi_{i})=d_{i0}+\sum\limits_{j=1}^{p}dijxj+logP( \Pi_{i})=d^{L}_{i}(X)+logP(\Pi_{i})\quad\quad\ldots} \tag{14}\] Where \(\mathrm{d_{i0}=-\frac{1}{2}\,\mathrm{\mu_{\Sigma\,\mu i}^{-1}}\) and \(\mathrm{d_{ij}=jth}\) element of \(\mathrm{\mu_{\Sigma\,\mathrm{And}}^{-1}}\).The linear discrimination function. As we can see from the above formula, the far right-hand expression is similar to a linear regression with intercept \(\mathrm{d}\mathrm{i}\mathrm{o}\) and coefficients \(\mathrm{d}\mathrm{i}\mathrm{j}\). ### contributions The main contributions of this work are presented as follows: i)In this paper we have presented three different methods of epilepsy detection techniques by combining 5th level of DWT, three different dimensionality reduction techniques PCA,LDA and ICA and at the last step three different classifiers SVM,KNN and NB. ii) Most of the existing work has used a large dimension of feature space which indirectly increases the time complexity. But in this work we have integrated feature extraction with feature selection to reduce the dimension of features. iii) The proposed method can detect epilepsy with very high accuracy. ## 4 Proposed Methodology: Schematic overview of our proposed methodology is depicted in Fig.2. The working of each and every block is explained below: In the present work EEG signal is decomposed by DWT up to level five using Daubechies (\(\mathrm{d}\mathrm{B}\mathrm{I}\)) as shown in Fig 3. Figure 2: Block Diagram of the proposed method. 
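As an illustration of these first stages of the pipeline, the sketch below performs a five-level DWT decomposition with the db1 wavelet and then projects one sub-band onto its first L principal directions, in the spirit of Eqs. (7)–(9). It is a hypothetical minimal example (array shapes, the random input data, and the feature length L are illustrative), not the authors' implementation.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4097))          # illustrative: 100 EEG segments of 4097 samples (Bonn-like)

# Five-level DWT with the db1 (Haar) wavelet: returns [CA5, CD5, CD4, CD3, CD2, CD1]
coeffs = [pywt.wavedec(x, "db1", level=5) for x in X]
CA5 = np.array([c[0] for c in coeffs])        # 5th-level approximation coefficients, one row per segment

# PCA projection of the CA5 sub-band onto the first L principal directions (cf. Eqs. (7)-(9))
L = 10
mean = CA5.mean(axis=0)
C = (CA5 - mean).T @ (CA5 - mean)             # covariance-like matrix of the sub-band
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]             # sort eigenvectors by descending eigenvalue
V = eigvecs[:, order[:L]]
CA5_L = (CA5 - mean) @ V                      # projected features of length L per segment
print(CA5_L.shape)                            # (100, L)
```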
After this 5th-level decomposition we have worked with 6 subbands, which are CD1 (1st-level detail coefficients), CD2 (2nd-level detail coefficients), CD3 (3rd-level detail coefficients), CD4 (4th-level detail coefficients), CD5 (5th-level detail coefficients), and CA5 (5th-level approximation coefficients). All these 6 subbands for one sample of EEG signal are shown in Fig. 4.

Figure 4: 6 subbands for one sample of EEG signal.

Figure 3: Block Diagram of 5th level Decomposition of EEG signal.

After the 5th-level wavelet decomposition, we have applied 3 different dimensionality reduction techniques, PCA, LDA and ICA, on the six selected subbands by setting the dimension of the features to length L. At first we select any one dimensionality reduction technique and apply it to the approximation coefficients (CA5) and the detail coefficients CD1, CD2, CD3, CD4, CD5, and extract the features CA5_L, CD1_L, CD2_L, CD3_L, CD4_L, CD5_L of length L. Then, for further reduction of the dimension, a feature-level fusion technique has been applied. The whole algorithm is explained below in Algorithm 1. For the feature selection we have used the max function and then a linear combination rule as described in Step 5 of Algorithm 1. Finally, we combine the features by applying the fusion rule as discussed in Step 6. Here the values of \(\mu\)1 and \(\mu\)2 have been selected by a trial and error method, following the linear combination rule \(\mu\)1+\(\mu\)2=1. For this method \(\mu\)1 and \(\mu\)2 are set as 0.7 and 0.3. Finally, at the last stage, three different classifiers, Support Vector Machine (SVM), Naive Bayes (NB) and K-Nearest-Neighbor (KNN), have been applied for classification. The whole algorithm of feature extraction and feature selection is explained in Algorithm 1.

**Algorithm 1:**

Step 1: Select a dimensionality reduction technique PCA/LDA/ICA

Step 2: Set the dimension of the features (L)

Step 3: Apply the dimensionality reduction technique on the approximation coefficients (CA5) and extract the features CA5_L

Step 4: Apply the dimensionality reduction technique on the detail coefficients CD1, CD2, CD3, CD4, CD5 and extract the features CD1_L, CD2_L, CD3_L, CD4_L, CD5_L

Step 5: Compute CD_L = MAX (CD1_L, CD2_L, CD3_L, CD4_L, CD5_L)

Step 6: Compute F = \(\mu\)1 * CA5_L + \(\mu\)2 * CD_L

Step 7: Select any one classifier (KNN or SVM or NB)

Step 8: Obtain the class.

## 5 Result and Discussion:

**Data sets:** In this work, we have used the Bonn dataset, which was recorded at Bonn University. This dataset is widely used for the detection of epilepsy [41]. The dataset is publicly available and sampled at 173.6 Hz with a 23.6 s duration. It consists of a total of 500 EEG signals of five different classes named S, F, N, O, and Z. Each category has 100 different EEG signals [42]. All signals were recorded with the same 128-channel amplifier system. Each set was acquired under different circumstances, such as eyes open, eyes closed, seizure-free status, seizure-free status inside the five epileptogenic zones, and seizure activity. Details of the Bonn dataset are given in Table 1.
\begin{table} \begin{tabular}{l l l l} \hline Set name & Annotation of data & Size & Acquisition circumstances \\ \hline Set A & Z000.txt—Z100.txt & 564 KB & Five healthy subjects with open eye \\ Set B & O000.txt—O100.txt & 611 KB & Five healthy subjects with close eye \\ Set C & N000.txt—N100.txt & 560 KB & Five epileptics with seizure free status \\ Set D & F000.txt—F100.txt & 569 KB & Five epileptics with seizure-free status, inside five epileptogenic zone \\ \hline \end{tabular} \end{table} Table 1: Samples of data in normal and seizure cases. We have applied 10-fold cross validation technique for the partitioning of training and testing of classifiers. Tables 2, 3, and 4 represent the various performance measures for the proposed method by selecting the dimensionality reduction technique ICA with KNN, SVM and NB. These tables are focused on relative similarity between the various performance measures. As per table data, we have achieved the maximum average accuracy of 100% for the combination of ICA with NB in the 2-class classification between A-E dataset. We have also computed other performance measures like F-measure, Recall, Specificity,sensitivity and precision, which are given in Tables 2,3 and 4. The proposed method achieved the maximum average sensitivity of 100% for all the 3 combinations (ICA\(+\)KNN),(ICA\(+\)NB) and (ICA\(+\)SVM) in case of A-E data set. From Table 3 it is also observed that for the dataset B-E also we have achieved the maximum average sensitivity of 100%. Comparing Table 2,3 and 4, it is noticed that maximum average Specificity and F-measure value is obtained for the dataset A-E. Tables 5,6, and 7 represent the various performance measures for the dimensionality reduction technique PCA with 3 different classifiers NB, SVM and KNN respectively. As per the table data, we have noticed that the highest accuracy 100% is obtained in case of A-E dataset for NB classifier. For the dataset A-E we have obtained the highest average value of recall, and F-measures is 100%. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline CASE & Accuracy & Sensitivity & Specificity & Precision & Recall & F-measure \\ \hline A-C & 72.00 & 82.65 & 61.52 & 67.89 & 82.65 & 0.74 \\ \hline A-D & 72.50 & 97.03 & 47.76 & 65.55 & 97.03 & 0.78 \\ \hline A-E & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline B-C & 82.00 & 67.22 & 95.48 & 95.60 & 67.22 & 0.78 \\ \hline B-D & 68.00 & 91.43 & 45.27 & 62.48 & 91.42 & 0.74 \\ \hline B-E & 99.50 & 99.23 & 100.00 & 100.00 & 99.23 & 0.99 \\ \hline \end{tabular} \end{table} Table 4: Results of proposed model for the combination of ICA with NB. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline CASE & Accuracy(\%) & Sensitivity(\%) & Specificity(\%) & Precision(\%) & Recall(\%) & F-measure \\ \hline A-C & 88.50 & 93.99 & 81.14 & 85.24 & 93.99 & 0.89 \\ \hline A-D & 83.50 & 82.65 & 83.87 & 83.99 & 82.65 & 0.82 \\ \hline A-E & 93.00 & 100.00 & 86.01 & 88.36 & 100.00 & 0.94 \\ \hline B-C & 91.50 & 93.33 & 89.83 & 90.47 & 93.33 & 0.92 \\ \hline B-D & 91.50 & 93.31 & 90.37 & 90.12 & 93.31 & 0.91 \\ \hline B-E & 92.00 & 100.00 & 84.07 & 86.32 & 100.00 & 0.92 \\ \hline \end{tabular} \end{table} Table 2: Results of proposed model for the combination of ICA with KNN. 
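For reference, the performance measures reported in these tables can be computed from a confusion matrix as in the sketch below. This is a generic illustration of 10-fold cross-validation with scikit-learn, not the authors' code; the feature matrix `F` and labels `y` are placeholders standing in for the fused features of Algorithm 1 and the class labels of a two-set comparison (e.g. A vs. E).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

def binary_measures(y_true, y_pred):
    # Confusion-matrix based measures for a 2-class problem.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)              # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f_measure

# F: fused feature matrix (placeholder), y: labels (0 = non-seizure set, 1 = seizure set)
F = np.random.default_rng(1).standard_normal((200, 10))
y = np.repeat([0, 1], 100)

scores = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(F, y):
    clf = GaussianNB().fit(F[train], y[train])
    scores.append(binary_measures(y[test], clf.predict(F[test])))
print("mean accuracy, sensitivity, specificity, precision, F-measure:",
      np.round(np.mean(scores, axis=0), 4))
```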
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline CASE & Accuracy(\%) & Sensitivity(\%) & Specificity(\%) & Precision(\%) & Recall(\%) & F-measure \\ \hline A-C & 88.50 & 93.99 & 81.14 & 85.24 & 93.99 & 0.89 \\ \hline A-D & 83.50 & 82.65 & 83.87 & 83.99 & 82.65 & 0.82 \\ \hline A-E & 93.00 & 100.00 & 86.01 & 88.36 & 100.00 & 0.94 \\ \hline B-C & 91.50 & 93.33 & 89.83 & 90.47 & 93.33 & 0.92 \\ \hline B-D & 91.50 & 93.31 & 90.37 & 90.12 & 93.31 & 0.91 \\ \hline B-E & 92.00 & 100.00 & 84.07 & 86.32 & 100.00 & 0.92 \\ \hline \end{tabular} \end{table} Table 3: Results of proposed model for the combination of ICA with SVM. Specificity values of 100% were achieved for the datasets A-E and B-E for SVM and KNN respectively. Tables 8,9, and 10 represent the various performance measures owing to the NB, SVM and KNN classifiers using LDA dimensionality reduction technique. Comparing the tables Table 2, Table3, Table4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 it is noticed that the combination (LDA+NB) provides the best result with respect to all the measures and for all the data sets The NB classifier achieved an accuracy of 100% for the dataset: A-C, A-D, A-E, B-C, B-D, and B-E. Specificity and precision values of 100% are achieved for all the datasets combinations (A-C, A-D, A-E, B-C, B-D, and B-E). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline CASE & Accuracy & Sensitivity & Specificity & Precision & Recall & F-measure \\ \hline A-C & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline A-D & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline A-E & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline B-C & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline B-D & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline B-E & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline \end{tabular} \end{table} Table 10: Results of proposed model LDA with NB Algorithm. Figure 5: Confusion matrix for PCA with KNN Algorithm. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline CASE & Accuracy & Sensitivity & Specificity & Precision & Recall & F-measure \\ \hline A-C & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline A-D & 72.00 & 72.48 & 73.36 & 72.75 & 72.48 & 0.71 \\ \hline A-E & 96.00 & 99.09 & 90.63 & 95.56 & 99.09 & 0.97 \\ \hline B-C & 91.00 & 86.70 & 94.02 & 93.57 & 86.70 & 0.90 \\ \hline B-D & 100.00 & 100.00 & 100.00 & 100.00 & 100.00 & 1.00 \\ \hline B-E & 76.00 & 88.38 & 63.88 & 74.31 & 88.38 & 0.80 \\ \hline \end{tabular} \end{table} Table 9: Results of proposed model LDA with SVM Algorithm. Figure 6: Confusion matrix for PCA with SVM Algorithm. Fig. 8-10 shows the average accuracy for different folds in case of ICA,PCA and LDA for the three different classifiers SVM, NB, KNN. From Figure 8 it is observed that (ICA+SVM) combination provides the better result compared to (ICA+NB) and(ICA+KNN). From Figure 9 also it is observed that SVM classifier provides the better result compared to KNN and NB for the case of PCA. But for LDA, it is noticed from Fig.10 that (LDA+NB) combination provides the better result compared to (LDA+KNN) and (LDA+SVM) combinations. Figure 8: Fold wise accuracy using ICA and SVM, NB, KNN Figure 7: Confusion matrix for PCA with NB Algorithm. Figure 10: Fold wise accuracy using LDA and SVM, NB, KNN Figure 9: Fold wise accuracy using PCA and SVM, NB, KNN Figure 11. ROC plot for PCA, ICA, LDA and SVM, NB, KNN Fig. 11 shows the ROC plots for PCA, ICA, LDA and SVM, NB, and KNN. 
All the ROC plots look the same because almost all of the results are close to 100%, so the curves coincide. The ROC plots were obtained with the three classifiers on the five different sets of EEG signals.

## 6 Conclusion:

EEG signals are widely used for the detection and diagnosis of epileptic seizures. Clinical diagnosis of epilepsy is a time-consuming and expensive process, so it is desirable to design a simple and fast technique for the classification of EEG signals. In this work a new epilepsy detection method has been presented by integrating DWT with three different dimensionality reduction techniques, PCA, LDA and ICA. In the next step, the feature dimensions are further reduced by applying a feature-level fusion technique. In our proposed method we have used three different classifiers for detecting epileptic seizures. This method provides a maximum accuracy of 100% for the combination of LDA with NB. The dimension of the feature space of our proposed method is very low, so it would be a practical technique for supporting the diagnosis of epilepsy. In our future work we want to apply optimization algorithms to select the most important features for the detection of epilepsy.
2301.04320
Rethinking complex-valued deep neural networks for monaural speech enhancement
Despite multiple efforts made towards adopting complex-valued deep neural networks (DNNs), it remains an open question whether complex-valued DNNs are generally more effective than real-valued DNNs for monaural speech enhancement. This work is devoted to presenting a critical assessment by systematically examining complex-valued DNNs against their real-valued counterparts. Specifically, we investigate complex-valued DNN atomic units, including linear layers, convolutional layers, long short-term memory (LSTM), and gated linear units. By comparing complex- and real-valued versions of fundamental building blocks in the recently developed gated convolutional recurrent network (GCRN), we show how different mechanisms for basic blocks affect the performance. We also find that the use of complex-valued operations hinders the model capacity when the model size is small. In addition, we examine two recent complex-valued DNNs, i.e. deep complex convolutional recurrent network (DCCRN) and deep complex U-Net (DCUNET). Evaluation results show that both DNNs produce identical performance to their real-valued counterparts while requiring much more computation. Based on these comprehensive comparisons, we conclude that complex-valued DNNs do not provide a performance gain over their real-valued counterparts for monaural speech enhancement, and thus are less desirable due to their higher computational costs.
Haibin Wu, Ke Tan, Buye Xu, Anurag Kumar, Daniel Wong
2023-01-11T05:59:50Z
http://arxiv.org/abs/2301.04320v1
# Rethinking Complex-Valued Deep Neural Networks for Monaural Speech Enhancement ###### Abstract Despite multiple efforts made towards adopting complex-valued deep neural networks (DNNs), it remains an open question whether complex-valued DNNs are generally more effective than real-valued DNNs for monaural speech enhancement. This work is devoted to presenting a critical assessment by systematically examining complex-valued DNNs against their real-valued counterparts. Specifically, we investigate complex-valued DNN atomic units, including linear layers, convolutional layers, long short-term memory (LSTM), and gated linear units. By comparing complex- and real-valued versions of fundamental building blocks in the recently developed gated convolutional recurrent network (GCRN), we show how different mechanisms for basic blocks affect the performance. We also find that the use of complex-valued operations hinders the model capacity when the model size is small. In addition, we examine two recent complex-valued DNNs, i.e. deep complex convolutional recurrent network (DCCRN) and deep complex U-Net (DCUNET). Evaluation results show that both DNNs produce identical performance to their real-valued counterparts while requiring much more computation. Based on these comprehensive comparisons, we conclude that complex-valued DNNs do not provide a performance gain over their real-valued counterparts for monaural speech enhancement, and thus are less desirable due to their higher computational costs. Haibin Wu, Ke Tan, Buye Xu, Anurag Kumar, Daniel Wong+Meta Reality Labs Research, USA Monaural speech enhancement, complex-valued neural networks, computational cost, deep learning Footnote †: This work was done while H. Wu was a research scientist intern at Meta. ## 1 Introduction Recent years have witnessed promising performance improvement of monaural speech enhancement models in the complex domain, given the importance of phase for speech quality [1, 2, 3, 4, 5, 6, 7, 8, 9]. A recent study [10] develops the key atomic components for complex-valued DNNs and claim that complex-valued parameters have various merits from computational, biological, and signal processing perspectives. Complex-valued DNNs, which operates with complex-valued arithmetic, seems to be advantageous for complex-domain speech enhancement, where DNNs are trained to learn complex spectrograms. Motivated by such an intuition, multiple efforts [3, 11, 12, 13, 14, 15, 16] adopted complex-valued DNNs for monaural speech enhancement. However, to the best of our knowledge, none of these studies has justified a performance gain provided by complex-valued DNNs over their real-valued counterparts with the same network structure and model size. Drude et al. [17] compared real- and complex-valued DNNs with fully-connected layers for beamforming, and found that the complex-valued DNN does not yield superior performance to the real-valued DNN while more computationally expensive. For monaural speech enhancement, despite the promising performance improvement produced by recent complex-valued DNNs, it remains unclear whether it is the complex-valued nature that fundamentally brings the merits. A recent notable model named DCCRN [7] extends the convolutional recurrent network in [18] by replacing convolutional and LSTM layers with their complex-valued counterparts to estimate the ideal complex ratio mask. 
The DCCRN exhibits competitive performance over earlier works, which has drawn the community's attention to the efficacy of complex-valued DNNs for speech enhancement. However, we believe that it is premature to ascertaine the performance improvement to the use of complex-valued operations due to the lack of systematic comparisons between DCCRN and its real-valued counterpart, in which only the complex-valued layers are replaced by the corresponding real-valued layers while all other configurations remain unaltered, including input features, training targets, training objectives, network structure and model size. Without such apples-to-apples comparisons, it is difficult to justify the attribution of the improvement achieved by complex-valued DNNs. This study presents a critical assessment by systematically examining complex-valued DNNs against their real-valued counterparts through comprehensive comparisons: 1. Based on the principles of complex-valued computation [10], we formulate complex-valued DNN atomic units for investigation, including linear layers, convolutional/deconvolutional layers, LSTM, and gated linear units. We compare their performance with that of their real-valued counterparts on monaural speech enhancement. 2. To comprehensively investigate complex-valued operations in different types of layer topology, we adopt GCRN - a real-valued DNN originally developed for complex-domain speech enhancement, which integrates a variety of layer types. We enumerate all the different versions of fundamental building blocks of GCRN, and show how different computing mechanisms in basic blocks affect the performance. We observe that the models with complex-valued components do not outperform the real-valued counterparts. In addition, given the fact that many real-world applications require a computationally efficient model, we conduct the same comparisons with a setting where the model size is very small. We find that, with such a setting, complex-valued operations even hinders speech enhancement performance compared to real-valued operations. 3. Two recent compelling models based on complex-valued operations, DCCRN [7] and DCUNET [3], have shown promising performance for monaural speech enhancement. In this work, we evaluate their real-valued versions with the same parameter count, and conduct investigation with different loss functions, learning rates and minibatch sizes, in terms of both enhancement performance and training stability. The experimental results reveal that the complex-valued versions do not outperform their real-valued counterparts while they have higher computational costs. This is consistent with the observation in [19]. ## 2 Methodology This section introduces the basic building blocks for complex-valued DNNs, followed by the case study design. ### Building blocks #### 2.1.1 Linearity Fully connected layers, convolution layers and deconvolution layers are composed of matrix multiplications. We omit the bias to simplify the description. Taking the input complex-valued feature matrix as \(X=X_{r}+jX_{i}\) and the complex-valued parameter matrix as \(W=W_{r}+jW_{i}\), the complex-valued output can be elaborated as: \[Y=(X_{r}W_{r}-X_{i}W_{i})+j(X_{r}W_{i}+X_{i}W_{r}), \tag{1}\] where \(Y\) denotes the output feature of the complex-valued layer, the subscripts \(r\) and \(i\) denote real and imaginary parts respectively. 
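To make the comparison concrete, a complex-valued linear layer implementing Eq. (1) can be written with two real-valued weight matrices, as in the minimal PyTorch sketch below. This is an illustrative rendering consistent with Eq. (1), not the exact code used in the paper; the layer sizes and input shape are assumptions.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Complex-valued linear layer: Y = (Xr@Wr - Xi@Wi) + j(Xr@Wi + Xi@Wr), cf. Eq. (1)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.real = nn.Linear(in_features, out_features, bias=False)  # Wr
        self.imag = nn.Linear(in_features, out_features, bias=False)  # Wi

    def forward(self, x_r, x_i):
        y_r = self.real(x_r) - self.imag(x_i)
        y_i = self.imag(x_r) + self.real(x_i)
        return y_r, y_i

# Example: a batch of complex spectral features with an assumed 161 frequency bins.
x_r, x_i = torch.randn(8, 161), torch.randn(8, 161)
layer = ComplexLinear(161, 406)     # 406 hidden units, matching the complex model of the Basic Unit study
y_r, y_i = layer(x_r, x_i)
print(y_r.shape, y_i.shape)         # torch.Size([8, 406]) twice
```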
#### 2.1.2 Activation function Given a complex-valued representation \(z\), the activation function operates on the real and imaginary parts independently as: \[a=f(Re\ z)+jf(Im\ z), \tag{2}\] where \(a\) is the output representation, \(Re\) and \(Im\) extract the real and imaginary parts respectively, and \(f\) denotes the activation function. #### 2.1.3 LSTM For LSTM layers, we have two versions: **Quasi complex-valued LSTM** In [7], the complex LSTM operation is treated as two separate operations on the real and imaginary parts. To be specific, they initialize two real-valued sub-LSTM layers, namely LSTM\({}_{r}\) and LSTM\({}_{i}\), corresponding to the real and imaginary LSTM respectively. Given the input feature \(X=X_{r}+jX_{i}\), the output feature can be derived as: \[F_{rr}=\text{LSTM}_{r}(X_{r}),F_{ir}=\text{LSTM}_{r}(X_{i}), \tag{3}\] \[F_{ri}=\text{LSTM}_{i}(X_{r}),F_{ii}=\text{LSTM}_{i}(X_{i}),\] \[F_{out}=(F_{rr}-F_{ii})+j(F_{ri}+F_{ir}),\] where \(F_{out}\) is the output feature. **Fully complex-valued LSTM** In addition to the quasi complex-valued LSTM, which does not perform complex-valued operations within its sub-LSTM layers, we also investigate the fully complex-valued LSTM, which fully follows the arithmetic of complex numbers. Each matrix multiplication and activation function in this LSTM strictly follows the arithmetic in Sections 2.1.1 and 2.1.2. #### 2.1.4 Gated linear unit The gated linear unit [20] is a widely used layer topology, which consists of two separate convolutional layers and one gating operation. The two separate convolutional layers process the same input, and produce their outputs \(F^{(1)}\) and \(F^{(2)}\), respectively. A sigmoid function is applied to \(F^{(2)}\) to derive a gate, which is then multiplied element-wise with \(F^{(1)}\) to yield the output of the gated linear unit. In a complex-valued gated linear unit, let \(F^{(1)}=F^{(1)}_{r}+jF^{(1)}_{i}\) and \(F^{(2)}=F^{(2)}_{r}+jF^{(2)}_{i}\) be the outputs of the two convolutional layers. We derive two gating mechanisms, i.e. separate gating and magnitude gating. **Separate gating** For separate gating, we apply a sigmoid function to the real and imaginary parts of \(F^{(2)}\) separately, which amounts to a complex-valued gate. The real and imaginary parts of this gate are multiplied element-wise with \(F^{(1)}_{r}\) and \(F^{(1)}_{i}\), respectively. **Magnitude gating** Unlike separate gating, magnitude gating calculates a real-valued gate \(F^{(g)}\) from the magnitude of the complex feature map \(F^{(2)}\): \[F^{(g)}=(\sigma(|F^{(2)}|)-0.5)\times 2, \tag{4}\] where \(\sigma\) denotes the sigmoid function, and \(|\cdot|\) extracts the magnitude of a complex feature map. Since the magnitude is nonnegative, applying the sigmoid function to the magnitude always results in values ranging from 0.5 to 1. Hence we use an affine transformation to normalize the gating value to the range of 0 to 1. The resulting gate is applied to both the real and imaginary parts of \(F^{(1)}\). Such magnitude gating preserves the phase of \(F^{(1)}\) [21]. ### 2.2 Case study design In this section, we carefully design the case studies, and elaborate the rationales and objectives of each case study. In these case studies, all pairs of real- and complex-valued models for comparison have the same configurations, including input features, training targets, training objectives, network structure and model size.
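For concreteness, the two gating variants of Section 2.1.4 can be written in a few lines. The sketch below assumes the outputs \(F^{(1)}\) and \(F^{(2)}\) of the two convolutional layers are given as separate real and imaginary tensors; it is an illustration, not the original implementation:
```
# Sketch of separate vs. magnitude gating for a complex-valued GLU (Section 2.1.4).
import torch


def separate_gating(f1_r, f1_i, f2_r, f2_i):
    # Complex-valued gate: sigmoid on the real and imaginary parts of F^(2) separately,
    # multiplied element-wise with the matching parts of F^(1).
    return f1_r * torch.sigmoid(f2_r), f1_i * torch.sigmoid(f2_i)


def magnitude_gating(f1_r, f1_i, f2_r, f2_i):
    # Eq. (4): real-valued gate from |F^(2)|, rescaled from (0.5, 1) to (0, 1);
    # the same gate scales both parts of F^(1), so the phase of F^(1) is preserved.
    gate = (torch.sigmoid(torch.sqrt(f2_r ** 2 + f2_i ** 2)) - 0.5) * 2.0
    return f1_r * gate, f1_i * gate


if __name__ == "__main__":
    f1_r, f1_i, f2_r, f2_i = (torch.randn(1, 16, 10, 20) for _ in range(4))
    out_r, out_i = magnitude_gating(f1_r, f1_i, f2_r, f2_i)
    # Magnitude gating leaves the phase angle of F^(1) numerically unchanged.
    print(torch.allclose(torch.atan2(out_i, out_r), torch.atan2(f1_i, f1_r), atol=1e-5))
```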
**Basic Unit** This case study compares different complex layers defined in Section 2.1 with their real-valued counterparts, in terms of enhancement performance and computational costs. Specifically, we compare: 1) a model with a stack of three complex-valued linear layers and its corresponding real-valued model, where each of the two hidden layers has 406 units in the complex-valued model and 512 units in the real-valued model, respectively. Such a configuration ensures that the two models have almost the same number of parameters. Note that each hidden layer is followed by a rectified linear unit function; 2) quasi complex-valued LSTM, fully complex-valued LSTM, and real-valued LSTM, each of which contains three LSTM layers followed by a linear output layer. In these three models, each LSTM layer contains 732, 732 and 1024 units, respectively. The implementations described in Section 2.1.3 are adopted for the quasi complex-valued LSTM and the fully complex-valued LSTM; 3) DCUNET, a convolutional encoder-decoder model developed in [3], and its real-valued counterpart (RUNET), in which all complex-valued convolutional, deconvolutional and linear layers are replaced by their real-valued counterparts. Akin to 1) and 2), we slightly adjust hyperparameters (e.g. the number of output channels in convolutional layers) for RUNET, such that its model size is almost the same as that of DCUNET. Note that all these models are trained to learn complex spectral mapping. **GCRN** GCRN [5] is a representative model for our investigation, because it consists of different types of layers including convolutional/deconvolutional layers, gated linear units, LSTM layers, and linear layers. The original GCRN has two decoders, one for real part estimation and the other for imaginary part estimation. We instead use a single shared decoder for both real and imaginary parts, corresponding to two output channels in the last deconvolutional layer of the decoder. Such an architecture can be naturally converted into complex-valued versions for comparison by replacing each layer with its complex-valued counterpart. In this case study, we aim to investigate: 1) whether replacing specific layers of GCRN with their complex-valued counterparts can lead to better performance; 2) how the use of complex-valued operations affects speech enhancement performance when the model is constrained to a relatively small number of parameters; 3) which gating mechanism in Section 2.1.4 is the better choice, from both training stability and enhancement performance aspects. Note that regarding the bottleneck LSTM in GCRN, we adopt the quasi complex-valued LSTM for investigation. **DCCRN** In [7], the performance gain achieved by DCCRN is attributed by the authors to the complex multiplication constraint, which they believe can help DNNs learn complex representations more effectively. However, they did not compare DCCRN with its real-valued counterpart using the same configurations. Thus it is difficult to justify the attribution of the performance improvement, which is likely due to either the use of complex-valued operations or other components in the model design. The objective of this case study is to show whether DCCRN can outperform its real-valued counterpart with the same number of parameters. Specifically, we adopt the "DCCRN-E" configuration, which achieves the best performance in [7].
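The parameter matching used in these case studies (e.g., 406 complex versus 512 real hidden units in the Basic Unit study) can be sanity-checked with a short script. The sketch below assumes a 161-bin spectral input per frame, which is not stated above, and that the real-valued model takes the concatenated real and imaginary parts:
```
# Rough, bias-free parameter count for a stack of complex vs. real linear layers.
# The 161-bin feature dimension is an assumption made only for illustration.
def linear_params(dims, complex_valued):
    factor = 2 if complex_valued else 1  # a complex layer stores both W_r and W_i
    return sum(factor * d_in * d_out for d_in, d_out in zip(dims[:-1], dims[1:]))


if __name__ == "__main__":
    n_bins = 161
    complex_stack = linear_params([n_bins, 406, 406, n_bins], complex_valued=True)
    real_stack = linear_params([2 * n_bins, 512, 512, 2 * n_bins], complex_valued=False)
    print(complex_stack, real_stack)  # both near 0.59 M under this assumption
```
Under this assumption both stacks land near 0.59 M parameters, in line with the counts reported later in Table 1, while the complex-valued stack requires roughly twice as many multiply-accumulate operations.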
To derive the corresponding real-valued version, we simply replace the complex-valued layers with their real-valued counterparts, and reduce the channel numbers in the encoder to [32, 64, 64, 64, 128, 256] to maintain the number of parameters. ## 3 Experiments ### Experimental setup In our experiments, the Interspeech2020 DNS Challenge training speech dataset [22] is used to create our training, validation and test sets, which contains roughly 65000 speech signals uttered by 1948 speakers in total. We randomly split these speakers into three distinct sets for training, validation and test sets, which include 1753 (\(\sim\)90%), 97 (\(\sim\)5%) and 98 (\(\sim\)5%) speakers, respectively. Similarly, we partition the DNS Challenge noise dataset with around 65000 signals into 90%, 5% and 5% for training, validation and test sets, respectively. By randomly pairing speech and noise signals, we create a training set with 500000 noisy mixtures and a validation set with 1000 noisy mixtures, in both of which the signal-to-noise ratio (SNR) is randomly sampled between -5 and 5 dB. Following the same procedure, three test sets are created at different SNR levels, i.e. -5, 0 and 5 dB. Note that all speech and noise signals are randomly truncated to 10 seconds before mixing. We additionally use the synthetic test set released by DNS Challenge for evaluation. All signals are sampled at 16 kHz. Short-time Fourier transform is performed to obtain spectrograms. We adopt the Adam optimizer to train all models. Multiple metrics are employed to measure the speech enhancement performance, including wide-band perceptual evaluation speech quality (WB-PESQ) [23], short-time objective intelligibility (STOI) [24], scale-invariant signal-to-distortion ratio (SI-SDR) [25], DNSMOS P. 835 [26] and NORESQA-MOS [27]. ### Experimental results **Basic Unit** In Table 1, 1). columns (1a), (1b), (1c) denote the fully complex-valued LSTM, quasi complex-valued LSTM and real-valued LSTM. 2). Real-valued LSTM has half as many MACs as both complex-valued LSTMs. Among the three models, the quasi complex-valued LSTM achieves the best performance, while its improvement over the real-valued LSTM is marginal. Columns (1d) \begin{table} \begin{tabular}{l c|c|c c c|c c|c c} \hline \hline & SNR & Noisy & (1a),C-LSTM & (1b),Quasi C-LSTM & (1c),LSTM & (1d),C-Linear & (1e),R-Linear & (1f),DCUNET & (1g),RUNET \\ \hline \multirow{3}{*}{STOI} & -5 dB & 0.69 & 0.85 & 0.86 & 0.86 & 0.61 & 0.61 & 0.85 & 0.85 \\ & 0 dB & 0.78 & 0.90 & 0.91 & 0.91 & 0.70 & 0.70 & 0.90 & 0.90 \\ & 5 dB & 0.85 & 0.94 & 0.94 & 0.94 & 0.76 & 0.76 & 0.94 & 0.94 \\ \hline \multirow{3}{*}{WB-PESQ} & -5 dB & 1.11 & 1.65 & 1.71 & 1.69 & 1.12 & 1.12 & 1.64 & 1.70 \\ & 0 dB & 1.15 & 1.95 & 2.02 & 2.00 & 1.17 & 1.28 & 1.92 & 2.00 \\ & 5 dB & 1.24 & 2.29 & 2.35 & 2.34 & 1.24 & 1.25 & 2.27 & 2.36 \\ \hline \multirow{3}{*}{SI-SDR (dB)} & -5 dB & -5.00 & 10.80 & 11.10 & 10.87 & 0.92 & 1.23 & 10.80 & 10.87 \\ & 0 dB & 0.05 & 13.62 & 13.92 & 13.78 & 4.69 & 4.94 & 13.79 & 13.86 \\ & 5 dB & 5.01 & 16.36 & 16.64 & 16.55 & 7.19 & 7.57 & 16.74 & 16.84 \\ \hline \(\#\) Para & - & - & 23.35 M & 23.35 M & 23.62 M & 0.59 M & 0.59 M & 3.10 M & 3.12 M \\ \hline \(\#\) MACs & - & - & 5.90 G & 5.90 G & 2.98 G & 119.59 M & 59.88 M & 56.69 G & 19.87 G \\ \hline \hline \end{tabular} \end{table} Table 1: Investigation of different basic units, where the number of multiply-accumulate (MAC) operations is measured on a 1-second signal. 
\begin{table} \begin{tabular}{l l|c c c|c c|c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{**STOI**} & \multicolumn{2}{c|}{**NB-PESQ**} & \multicolumn{2}{c|}{**SI-SDR (dB)**} & \multicolumn{2}{c|}{**\# Para**} & \multicolumn{2}{c}{**\# MACs**} \\ & & -5 dB & dB & 5 dB & 0 dB & 5 dB & -5 dB & 0 dB & 5 dB & 0 dB & 5 dB \\ \hline \multirow{3}{*}{(2a)} & Noisy & 0.69 & 0.78 & 0.85 & 1.11 & 1.15 & 1.24 & -5 & 0.01 & 5.01 & - & - \\ \hline (2a) & GCRN (real-valued model) & 0.84 & 0.90 & 0.94 & 1.57 & 1.87 & 2.24 & 8.30 & 11.29 & 14.13 & 9.25 M & 1.72 G \\ (2b) & GCRN + \(\bigtriangleup\) & 0.83 & 0.90 & 0.94 & 1.55 & 1.85 & 2.22 & 11.17 & 13.98 & 9.25 M & 2.57 G \\ (2c) & GCRN + \(\bigloup\) - Separate & 0.83 & 0.90 & 0.93 & 1.53 & 1.80 & 2.15 & 7.64 & 10.51 & 13.22 & 9.12 M & 1.72 G \\ (2d) & GCRN + \(\bigtriangleup\) - Magnitude & 0.83 & 0.90 & 0.94 & 1.56 & 1.85 & 2.23 & 7.66 & 10.63 & 13.43 & 9.12 M & 1.72 G \\ (2e) & GCRN + \(\bigtriangleup\) - Separate & 0.83 & 0.90 & 0.93 & 1.52 & 1.81 & 2.16 & 8.14 & 11.16 & 14.02 & 9.12 M & 2.57 G \\ (2f) & GCRN + \(\bigtriangleup\) - Magnitude & 0.84 & 0.90 & 0.94 & 1.56 & 1.87 & 2.24 & 7.89 & 10.89 & 13.76 & 9.12 M & 2.57 G \\ (2g) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) & 0.83 & 0.93 & 1.53 & 1.83 & 2.20 & 7.95 & 10.96 & 13.88 & 8.83 M & 1.72 G \\ (2h) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Magnitude & 0.83 & 0.90 & 0.94 & 1.54 & 1.85 & 2.23 & 7.67 & 10.75 & 13.79 & 8.83 M & 1.72 G \\ (2i) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Separate & 0.82 & 0.89 & 0.93 & 1.52 & 1.80 & 2.15 & 7.62 & 10.70 & 13.52 & 8.83 M & 2.57 G \\ (2j) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Magnitude & 0.83 & 0.90 & 0.94 & 1.57 & 1.88 & 2.27 & 7.65 & 10.87 & 13.82 & 8.83 M & 2.57 G \\ (2A) & GCRN \(\bigtriangledown\) (real-valued model) & 0.83 & 0.89 & 0.93 & 1.50 & 1.79 & 2.16 & 7.28 & 10.33 & 13.40 & 9.25 M & 1.72 G \\ (2j) & GCRN + \(\bigtriangleup\) + \(\bigtriangledown\) - Magnitude \(\bigtriangleup\) & 0.82 & 0.89 & 0.93 & 1.47 & 1.74 & 2.10 & 7.25 & 10.33 & 13.46 & 8.83 M & 2.57 G \\ \hline \hline \end{tabular} \end{table} Table 2: Investigation of different complex-valued components in GCRN. \(\bigtriangleup\), \(\bigtriangledown\) denote using the quasi complex-valued LSTM in the bottleneck, complex-valued convolutional layers, complex-valued deconvolutional layers, respectively. “- Separate” and “- Magnitude” denote using separate and magnitude gating mechanisms in GLUs, respectively, and \(\odot\) denotes the model performs complex ratio masking rather than complex spectral mapping originally used in [5]. \begin{table} \begin{tabular}{l l|c and (1e) denote the complex- and real-valued DNNs consisting of linear layers. Although the real-valued DNN only has half of the MAC number in the complex-valued DNN, it still produces slightly better performance than the latter. 3). (1f) and (1g) denote the DCNNET and its corresponding real-valued version respectively We see that the real-valued UNET outperforms DCNNET in terms of both enhancement performance and computational efficiency. **GCRN** In Table 2, (2a) is the original real-valued GCRN. (2b)-(2j) are the models where some components are replaced by the corresponding complex-valued version. Moreover, (2A) and (2J) have the same model structure as (2a) and (2j), but are trained to perform complex ratio masking rather than complex spectral mapping. In Table 3, we reduce the model size to roughly 2 M and 0.6 M, where "CGCRN" denotes the same configuration as (2j). 
We can observe: 1). Replacing the components of GCRN with their complex-valued versions does not yield any performance gain, as shown in (2a)-(2j). 2). In the comparison between the models trained for complex ratio masking, i.e. (2A) and (2J), the real-valued model performs slightly better than the complex-valued model. 3). Although the magnitude gating and separate gating lead to similar performance, the training loss curve of the former is much more stable than that of the latter. It is likely because the magnitude gating preserves phase information, which could help stabilize the training. 4). In the small model setting, the real-valued models consistently outperform the complex-valued counterparts. Furthermore, their performance gap increases as the model size becomes smaller. **DCCRN** Tables 4 and 5 compare the DCCRN with its real-valued counterpart on our simulated test set and the DNS Challenge synthetic test set, respectively. The following observations are obtained: 1). With three different training objectives, i.e. SI-SDR, L\({}_{1}\) and MSE, the real- and complex-valued models yield almost identical performance in all the metrics on both datasets. Take, for example, the -5 dB case with the SI-SDR training loss in Table 4. The STOI, WB-PESQ and SI-SDR improvements over noisy mixtures are 0.18, 0.67 and 16.01 dB for the complex-valued model, and 0.18, 0.69 and 16.06 dB for the real-valued model, respectively. 2). As shown in Table 5, the real-valued model yields slightly better scores than the complex-valued model in both DNSMOS and NORESQA-MOS, i.e. two metrics that highly correlate with subjective quality scores. 3). We have also made comparisons under settings with different learning rates and minibatch sizes. We find that DCCRN is less robust than its real-valued counterpart against different learning rates. In addition, both models produce very similar performance with different minibatch sizes. However, we do not show these comparison results due to the page limit. 4). The real-valued model has only one-third of the MAC count of the complex-valued model. Specifically, the number of MACs for the complex-valued model is 14.38 G, while it is only 4.84 G for the real-valued model. Given that the two models yield almost the same performance, the complex-valued model is less efficient for real-world applications. ## 4 Concluding Remarks Through the extensive experiments, we draw the following conclusions for monaural speech enhancement: 1). Complex-valued DNNs yield similar performance to their real-valued counterparts with the same number of parameters. 2). When the model size is relatively small, the use of complex-valued operations is detrimental to the enhancement performance. 3). The performance gain achieved by DCCRN and DCUNET is not attributed to the use of complex-valued operations. Furthermore, complex-valued DNNs require more MACs than their real-valued counterparts, without any performance gain, since a complex-number multiplication decomposes into four real-number multiplications. Based on our systematic comparisons, we believe that real-valued DNNs have the capacity to achieve comparable performance to their complex-valued counterparts with the same model size and network structure. Although complex-valued DNNs intuitively seem a more natural choice than real-valued DNNs for processing complex spectrograms, they are more computationally expensive and thus an inferior choice for real applications that are efficiency-sensitive.
We believe that there is no sufficient evidence justifying the superiority of complex-valued DNNs over real-valued DNNs for monaural speech enhancement. This study demonstrates that it is more than nontrivial to rethink the efficacy of complex-valued operations in speech enhancement systems. \begin{table} \begin{tabular}{l c|c|c c|c c|c c} \hline \hline & \multicolumn{1}{c|}{Noisy} & \multicolumn{1}{c|}{DCCRN-SISDR} & \multicolumn{1}{c|}{DCCRN-Real-SISDR} & \multicolumn{1}{c|}{DCCRN-L\({}_{1}\)} & \multicolumn{1}{c|}{DCCRN-Real-L\({}_{1}\)} & \multicolumn{1}{c}{DCCRN-MSE} & \multicolumn{1}{c}{DCCRN-Real-MSE} \\ \hline & -5 dB & 0.69 & 0.87 & 0.87 & 0.86 & 0.86 & 0.85 & 0.85 \\ STOI & 0 dB & 0.78 & 0.92 & 0.92 & 0.91 & 0.91 & 0.90 & 0.90 \\ & 5 dB & 0.85 & 0.95 & 0.95 & 0.95 & 0.95 & 0.94 & 0.94 \\ \hline & -5 dB & 1.11 & 1.78 & 1.80 & 1.73 & 1.69 & 1.55 & 1.56 \\ WB-PESQ & 0 dB & 1.15 & 2.13 & 2.14 & 2.05 & 2.00 & 1.83 & 1.86 \\ & 5 dB & 1.24 & 2.51 & 2.54 & 2.43 & 2.38 & 2.16 & 2.19 \\ \hline & -5 dB & -5.00 & 11.01 & 11.06 & 8.36 & 8.28 & 8.09 & 8.18 \\ SI-SDR (dB) & 0 dB & 0.05 & 14.00 & 14.06 & 11.25 & 11.19 & 11.20 & 11.27 \\ & 5 dB & 5.01 & 16.99 & 17.05 & 14.36 & 14.27 & 14.41 & 14.54 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparisons between real- and complex-valued versions of DCCRN with different training objectives. “Real” means the real-valued version of DCCRN. “SISDR”, “L\({}_{1}\)”, “-MSE” denote using SI-SDR, L\({}_{1}\) and mean squared error (MSE) losses for training, respectively, where both L\({}_{1}\) and MSE losses are computed on the clean and estimated real, imaginary and magnitude spectrograms. \begin{table} \begin{tabular}{l|c|c c c|c c|c c} \hline \hline & Noisy & \multicolumn{1}{c|}{DCCRN-SISDR} & \multicolumn{1}{c|}{DCCRN-Real-SISDR} & \multicolumn{1}{c|}{DCCRN-L\({}_{1}\)} & \multicolumn{1}{c|}{DCCRN-Real-L\({}_{1}\)} & \multicolumn{1}{c}{DCCRN-MSE} & \multicolumn{1}{c}{DCCRN-Real-MSE} \\ \hline STOI & 0.92 & 0.97 & 0.97 & 0.97 & 0.97 & 0.97 & 0.97 \\ WB-PESQ & 1.58 & 2.92 & 2.89 & 2.92 & 2.86 & 2.61 & 2.64 \\ SI-SDR (dB) & 9.23 & 19.60 & 19.54 & 17.11 & 17.13 & 17.33 & 17.55 \\ DNSMOS (OVRL) & 2.48 & 3.30 & 3.33 & 3.28 & 3.30 & 3.19 & 3.20 \\ NORESQA-MOS & 1.90 & 4.31 & 4.34 & 4.27 & 4.31 & 3.80 & 3.96 \\ \hline \(\#\) Para & - & 3.67 M & 3.64 M & 3.67 M & 3.64 M & 3.67 M & 3.64 M \\ \hline \(\#\) MACs & - & 14.38 G & 4.84 G & 14.38 G & 4.84 G & 14.38 G & 4.84 G \\ \hline \hline \end{tabular} \end{table} Table 5: Comparisons between real- and complex-valued versions of DCCRN with different training objectives on the DNS Challenge synthetic test set without reverberation.
2308.08011
Shortcut-V2V: Compression Framework for Video-to-Video Translation based on Temporal Redundancy Reduction
Video-to-video translation aims to generate video frames of a target domain from an input video. Despite its usefulness, the existing networks require enormous computations, necessitating their model compression for wide use. While there exist compression methods that improve computational efficiency in various image/video tasks, a generally-applicable compression method for video-to-video translation has not been studied much. In response, we present Shortcut-V2V, a general-purpose compression framework for video-to-video translation. Shortcut-V2V avoids full inference for every neighboring video frame by approximating the intermediate features of a current frame from those of the previous frame. Moreover, in our framework, a newly-proposed block called AdaBD adaptively blends and deforms features of neighboring frames, which makes more accurate predictions of the intermediate features possible. We conduct quantitative and qualitative evaluations using well-known video-to-video translation models on various tasks to demonstrate the general applicability of our framework. The results show that Shortcut-V2V achieves performance comparable to the original video-to-video translation model while saving 3.2-5.7x computational cost and 7.8-44x memory at test time.
Chaeyeon Chung, Yeojeong Park, Seunghwan Choi, Munkhsoyol Ganbat, Jaegul Choo
2023-08-15T19:50:38Z
http://arxiv.org/abs/2308.08011v2
# Shortcut-V2V: Compression Framework for Video-to-Video Translation ###### Abstract Video-to-video translation aims to generate video frames of a target domain from an input video. Despite its usefulness, the existing networks require enormous computations, necessitating their model compression for wide use. While there exist compression methods that improve computational efficiency in various image/video tasks, a generally-applicable compression method for video-to-video translation has not been studied much. In response, we present Shortcut-V2V, a general-purpose compression framework for video-to-video translation. Shortcut-V2V avoids full inference for every neighboring video frame by approximating the intermediate features of a current frame from those of the previous frame. Moreover, in our framework, a newly-proposed block called AdaBD adaptively blends and deforms features of neighboring frames, which makes more accurate predictions of the intermediate features possible. We conduct quantitative and qualitative evaluations using well-known video-to-video translation models on various tasks to demonstrate the general applicability of our framework. The results show that Shortcut-V2V achieves comparable performance compared to the original video-to-video translation model while saving 3.2-5.7\(\times\) computational cost and 7.8-44\(\times\) memory at test time. Our code and videos are available at [https://shortcut-v2v.github.io/](https://shortcut-v2v.github.io/). ## 1 Introduction Video-to-video translation is a task of generating temporally consistent and realistic video frames of a target domain from a given input video. Recent studies on video-to-video translation present promising performance in various domains such as inter-modality translation between labels and videos [37, 36, 20], and intra-modality translation between driving scene videos [35] or face videos [1]. Despite enhanced usefulness, video-to-video translation networks usually require substantial computational cost and memory usage, which limits their applicability. For instance, multiply-accumulates (MACs) of a widely-used video translation model, vid2vid [37], is 2066.69G, while the basic convolutional neural networks, ResNet v2 50 [10] and Inception v3 [31], are 4.12G and 6G, respectively. Furthermore, temporally redundant computations for adjacent video frames also harm the cost efficiency of a video-to-video translation network. Performing full inference for every neighboring video frame that contains common visual features inevitably entails redundant operations [22, 21]. In this regard, Fast-Vid2vid [44] proposes a compression framework for vid2vid [37] based on spatial input compression and temporal redundancy reduction. However, it cannot be applied to other video-to-video translation models since it is designed specifically for vid2vid. Moreover, Fast-Vid2vid does not support real-time inference since it requires future frames to infer a current one. Alternatively, one can apply model compression approaches for image-to-image translation [15, 24, 13] directly to video-to-video translation, considering video frames as separate images. However, these approaches are not designed to consider the correlation among adjacent video frames during the compression. This may result in unrealistic output quality in video-to-video translation, where the inherent temporal coherence of an input video needs to be preserved in the outputs. 
Also, frame-by-frame inference without temporal redundancy reduction involves unnecessary computations, resulting in computational inefficiency. In this paper, we propose _Shortcut-V2V_, a general-purpose framework for improving the computational efficiency of video-to-video translation based on temporal redundancy reduction. Shortcut-V2V allows the original video-to-video translation model to avoid temporally redundant computations by approximating the decoding layer features of the current frame with largely reduced computations. To enable lightweight estimation, our framework leverages features from the previous frame (_i.e_., reference features), which have high visual similarity with the current frame. We also exploit current frame features from the encoding layer to handle newly-appeared regions in the current frame. Specifically, we first globally align the previous frame features with the current frame features, and our novel _Adaptive Blending and Deformation block (AdaBD)_ in Shortcut-V2V blends features of neighboring frames while performing detailed deformation. AdaBD adaptively integrates the features regarding their redundancy in a lightweight manner. In this way, our model significantly improves the test-time efficiency of the original network while preserving its original performance. Shortcut-V2V is easily applicable to a pretrained video-to-video translation model to save computational cost and memory usage. Our framework is also suitable for real-time inference since we do not require future frames for the current frame inference. To the best of our knowledge, this is the first attempt at a general-purpose model compression approach for video-to-video translation. We demonstrate the effectiveness of our approach using well-known video-to-video translation models, Unsupervised RecycleGAN [35] (Unsup) and vid2vid [37]. Shortcut-V2V reduces 3.2-5.7\(\times\) computational cost and 7.8-44\(\times\) memory usage while achieving comparable performance to the original model. Since there is no existing general-purpose compression method, we compare our method with Fast-Vid2Vid [44] and the compression methods for image-to-image translation. Our model presents superiority over the existing approaches in both quantitative and qualitative evaluations. Our contributions are summarized as follows: * We introduce a novel, general-purpose model compression framework for video-to-video translation, Shortcut-V2V, that enables the original network to avoid temporally redundant computations. * We present AdaBD that exploits features from neighboring frames via adaptive blending and deformation in a lightweight manner. * Our framework saves up to 5.7\(\times\) MACs and 44\(\times\) parameters across various video-to-video translation tasks, achieving comparable performance to the original networks. ## 2 Related Work ### 2.1 Video-to-Video Translation Recent video-to-video translation networks are generally classified into pix2pixHD-based [38] and CycleGAN-based [42] generators. vid2vid [37] proposes a pix2pixHD-based sequential generation framework that synthesizes a current output given the previous outputs as additional guidance. As follow-up work, few-shot vid2vid [36] achieves few-shot generalization of vid2vid based on attention modules, and world-consistent vid2vid [20] is proposed to improve long-term temporal consistency of vid2vid. While pix2pixHD-based models require paired annotated videos, RecycleGAN [1] and MocycleGAN [4] propose CycleGAN-based video translation models.
They exploit spatio-temporal consistency losses to generate realistic videos using unpaired datasets. STC-V2V [23] leverages optical flow for semantic/temporal consistency to improve the output quality of the existing models. Unsupervised RecycleGAN [35] achieves state-of-the-art performance among CycleGAN-based frameworks with a pseudo-supervision by the synthetic flow. Although the existing video-to-video translation networks achieve decent performance, they commonly demand a non-trivial amount of computational costs and memory usage. Also, frame-by-frame inference necessarily causes redundant operations due to temporal redundancy among adjacent frames. Figure 1: Overview of Shortcut-V2V. (a) is an overall framework of Shortcut-V2V, and (b) shows a detailed architecture of Shortcut block. \(\uparrow\)\(2\) and \(\downarrow\)\(2\) refer to upsampling and downsampling by a factor of 2, respectively. G in Offset G and Offset/Mask G indicates a generator. ### 2.2 Model Compression Model compression for video-related tasks has been actively proposed in various domains [21, 22, 18, 9, 30, 8, 16, 32], such as object detection, action recognition, semantic segmentation, and super-resolution. Several studies [21, 9, 22] exploit temporal redundancy among video frames to improve efficiency during training or inference. For instance, Habibian _et al_. [9] distill only the residual between adjacent frames from a teacher model to a student to speed up the inference. Also, Fast-Vid2Vid [44] firstly proposes a compression framework for video-to-video translation based on spatial and temporal compression. However, Fast-Vid2Vid focuses on vid2vid [37], limiting its application to other video-to-video translation networks. Also, temporal redundancy reduction via motion compensation in Fast-Vid2Vid requires the future frame to infer the current frame, which is not suitable for real-time inference. Alternatively, model-agnostic compression methods for image-to-image translation can also be applied to video-to-video translation models. The existing approaches for image synthesis mainly tackle channel pruning [15], knowledge distillation [13, 24], NAS [15], etc. For instance, CAT [13] compresses the teacher network with one-step pruning to satisfy the target computation budget, while OMGD [24] conducts a single-stage online distillation in which the teacher generator supports the student generator to be refined progressively. However, image-based compression methods cannot consider temporal coherence among neighboring video frames, which may induce unrealistic results in video translation tasks. Also, performing full model inference for each video frame still poses computational inefficiency due to the temporal redundancy. ### 2.3 Deformable Convolution Deformable convolution [6] is originally proposed to enhance the transformation capability of a convolutional layer by adding the estimated offsets to a regular convolutional kernel in vision tasks such as object detection or semantic segmentation. Besides its original application, recent studies [33, 7, 39, 2, 12, 17] demonstrate that deformable convolution is also capable of aligning adjacent video frames. TDAN [33] utilizes deformable convolution to capture implicit motion cues between consecutive frames by dynamically predicted offsets in video super-resolution. In addition, EDVR [39] stacks several deformable convolution blocks to estimate large and complex motions in video restoration.
In this paper, we also leverage a deformable convolution to adaptively align adjacent video frames in a lightweight manner. ## 3 Shortcut-V2V In this paper, we propose Shortcut-V2V, a general compression framework to improve the test-time efficiency in video-to-video translation. As illustrated in Fig. 1(a), given \(\{\mathbf{I}_{t}\}_{t=0}^{N_{T}-1}\) as input video frames, we first use full teacher model \(T\) to synthesize the output of the first frame. Then, for the next frames, our newly-proposed Shortcut block efficiently approximates \(\mathbf{f}_{t}\), the features from the \(l_{d}\)-th decoding layer of the teacher model. This is achieved by leveraging the \(l_{e}\)-th encoding layer features \(\mathbf{a}_{t}\) along with reference features, \(\mathbf{a}_{ref}\) and \(\mathbf{f}_{ref}\), from the previous frame. Here, \(l_{d}\) and \(l_{e}\) correspond to layer indices of the teacher model. Lastly, predicted features \(\mathbf{\hat{f}}_{t}\) are injected into the following layers of the teacher model to synthesize the final output \(\mathbf{\hat{O}}_{t}\). To avoid error accumulation, we conduct full teacher inference and update the reference features at every max interval \(\alpha\). We provide the detailed inference process of Shortcut-V2V in Algorithm 1. ``` 1:Input: Input video \(\{\mathbf{I}_{t}\}_{t=0}^{N_{T}-1}\) of length \(N_{T}\), teacher model \(T\), layer index of encoder \(l_{e}\) and decoder \(l_{d}\), Shortcut block \(S\), max interval \(\alpha\) 2:Output: Output video \(\{\mathbf{O}_{t}\}_{t=0}^{N_{T}-1}\) 3:for\(t=0\) to \(N_{T}-1\)do 4:\(\mathbf{a}_{t}=T_{[x_{t}]}(\mathbf{I}_{t})\) 5:if\(t\%\alpha=0\)then 6:\(\mathbf{f}_{t}=T_{[l_{e}+1:l_{d}-1]}(\mathbf{a}_{t})\) 7:\(\mathbf{a}_{\text{ref}}\), \(\mathbf{f}_{\text{ref}}\leftarrow\mathbf{a}_{t}\), \(\mathbf{f}_{t}\)\(\triangleright\) Update the reference features 8:else 9:\(\mathbf{f}_{t}=S(\mathbf{f}_{\text{ref}},\mathbf{a}_{\text{ref}},\mathbf{a}_{t})\) 10:endif 11:\(\mathbf{O}_{t}=T_{[l_{d}:]}(\mathbf{f}_{t})\) 12:endfor ``` **Algorithm 1** Shortcut-V2V Inference The architecture of our model is mainly inspired by Deformable Convolutional Network (DCN) [6, 43], which we explain in the next section. ### Deformable Convolutional Network DCN [6, 43] is initially introduced to improve the transformation capability of a convolutional layer in image-based vision tasks, _e.g_., object detection and semantic segmentation. In the standard convolutional layer, a \(3\times 3\) kernel with dilation 1 samples points over input features using a sampling position \(\mathbf{p}_{k}\in\{(-1,-1),(-1,0),...,(0,1),(1,1)\}\). Given input feature maps \(\mathbf{x}\), DCN predicts additional offsets \(\Delta\mathbf{p}\in\mathbb{R}^{2N_{\mathbf{p}}\times H\times W}\) to augment each sampling position along x-axis and y-axis. Here, \(N_{\mathbf{p}}\) is the number of sampling positions in a kernel, and \(H\) and \(W\) are the height and width of output feature maps, respectively. For further manipulation of input feature amplitudes over the sampled points, DCNv2 [43] introduces a modulated deformable convolution with \(\mathbf{m}\in\mathbb{R}^{N_{\mathbf{p}}\times H\times W}\) consisting of learnable modulation scalars. The deformed output feature maps \(\mathbf{x}^{\prime}\) by DCNv2 are defined as: \[\mathbf{x}^{\prime}=f_{dc}(\mathbf{w},\mathbf{x},\Delta\mathbf{p},\mathbf{m}), \tag{1}\] where \(f_{dc}\) indicates a deformable convolution. 
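The modulated deformable convolution \(f_{dc}\) in Eq. (1) is available as a standard operator, so a compact illustration is possible before the per-point formula is spelled out below. This is a sketch with random tensors and assumed shapes, using torchvision's built-in op rather than the paper's implementation:
```
# Illustration of the modulated deformable convolution f_dc from Eq. (1).
import torch
from torchvision.ops import deform_conv2d

n, c_in, c_out, h, w, k = 1, 8, 16, 32, 32, 3
x = torch.randn(n, c_in, h, w)
weight = torch.randn(c_out, c_in, k, k)

# One (dy, dx) offset and one modulation scalar per kernel position and output location.
offset = torch.randn(n, 2 * k * k, h, w)            # Delta p
mask = torch.sigmoid(torch.randn(n, k * k, h, w))   # m, in [0, 1]

x_deformed = deform_conv2d(x, offset, weight, padding=1, mask=mask)
print(x_deformed.shape)  # torch.Size([1, 16, 32, 32])
```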
Specifically, a single point \(\mathbf{p_{o}}\) of \(\mathbf{x}^{\prime}\) is obtained as: \[\mathbf{x}^{\prime}(\mathbf{p_{o}})=\sum_{k=1}^{N_{\mathbf{p}}}\mathbf{w}( \mathbf{p}_{k})\cdot\mathbf{x}(\mathbf{p_{o}}+\mathbf{p}_{k}+\Delta\mathbf{p}( \mathbf{p}_{k}))\cdot\mathbf{m}(\mathbf{p}_{k}), \tag{2}\] where \(\mathbf{w}(\mathbf{p}_{k})\), \(\mathbf{p}(\mathbf{p}_{k})\), and \(\mathbf{m}(\mathbf{p}_{k})\) are convolutional layer weights, offsets, and modulation scalars between \(0\) and \(1\), respectively, for the \(k\)-th sampling position. Taking advantage of the enhanced transformation capability, we also leverage deformable convolution to align features of adjacent frames only with a few convolution-like operations, instead of using heavy flow estimation networks. ### Shortcut Block As described in Fig. 1(b), Shortcut block \(S\) estimates the current frame features \(\mathbf{\hat{f}}_{t}\) given \(\mathbf{a}_{t}\) and the reference frame features \(\mathbf{f}_{ref}\) and \(\mathbf{a}_{ref}\) as inputs: \[\mathbf{\hat{f}}_{t}=S(\mathbf{f}_{ref},\mathbf{a}_{ref},\mathbf{a}_{t}). \tag{3}\] Our block effectively obtains rich information from \(\mathbf{f}_{ref}\) via coarse-to-fine alignment referring to alignment between \(\mathbf{a}_{ref}\) and \(\mathbf{a}_{t}\). Also, during the fine alignment, our newly-proposed AdaBD simultaneously performs adaptive blending of the aligned \(\mathbf{f}_{ref}\) and the current frame feature \(\mathbf{a}_{t}\). Here, \(\mathbf{a}_{t}\) supports the synthesis of newly-appeared areas in the current frame. **Coarse-to-Fine Alignment.** To handle a wide range of misalignments between the frames, our model aligns \(\mathbf{f}_{ref}\) with the current frame in a coarse-to-fine manner. Our global/local alignment module consists of an offset generator to estimate offsets, and deformable convolution layers to deform features based on the predicted offsets. Following TDAN [33], an offset generator estimates sampling offsets given the adjacent frame features. For global alignment, we first downsample the given inputs to enlarge the receptive fields of the corresponding convolutional layers in a lightweight manner. The downsampled \(\mathbf{a}_{ref}\) and \(\mathbf{a}_{t}\) are concatenated and fed into a global offset generator to generate global offsets \(\Delta\mathbf{p}_{g}\in\mathbb{R}^{2\times\frac{H}{2}\times\frac{W}{2}}\). Since we only need to capture coarse movement, \(\Delta\mathbf{p}_{g}\) includes a single offset for each kernel, unlike the original DCN. Each offset is identically applied to all the sampling positions within the kernel. Then, the deformed features are upsampled back to the original size to obtain \(\mathbf{f}^{\prime}_{ref}\) as follows: \[\mathbf{f}^{\prime}_{ref}=(f_{dc}(\mathbf{w}_{g},(\mathbf{f}_{ref})^{\downarrow 2 },\Delta\mathbf{p}_{g},\mathbf{1}))^{\uparrow 2}, \tag{4}\] where \(\mathbf{w}_{g}\) denotes weights of global deformable convolution, and \((\cdot)^{\uparrow 2}\) and \((\cdot)^{\downarrow 2}\) refer to upsampling and downsampling by a factor of 2 through bilinear interpolation, respectively. \(\mathbf{1}\) indicates a vector filled with 1 so that no modulation is applied here. While local alignment of the coarsely-aligned feature \(\mathbf{f}^{\prime}_{ref}\) follows the process of global alignment, the difference lies in that each sampling point of each kernel has a unique offset, and the alignment operation is conducted in the original resolution. 
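Before turning to the local offsets, the global alignment step in Eq. (4) can be sketched as follows. The offset generator here is a placeholder convolution and all shapes are assumptions; the sketch only illustrates how a single per-location offset is broadcast to every kernel position at half resolution:
```
# Sketch of the global alignment in Eq. (4): downsample, deform with one shared
# (dy, dx) offset per output location, then upsample back; no modulation (m = 1).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

n, c, h, w, k = 1, 32, 64, 64, 3
f_ref = torch.randn(n, c, h, w)                       # reference decoding-layer features
a_ref, a_t = torch.randn(n, c, h, w), torch.randn(n, c, h, w)
w_g = torch.randn(c, c, k, k)                         # global deformable conv weights


def down(z):
    return F.interpolate(z, scale_factor=0.5, mode="bilinear", align_corners=False)


offset_gen = nn.Conv2d(2 * c, 2, kernel_size=3, padding=1)       # placeholder generator
dp_g = offset_gen(torch.cat([down(a_ref), down(a_t)], dim=1))    # (n, 2, h/2, w/2)
offset = dp_g.repeat(1, k * k, 1, 1)                             # same offset for all 3x3 positions

f_ref_aligned = F.interpolate(
    deform_conv2d(down(f_ref), offset, w_g, padding=1),
    scale_factor=2.0, mode="bilinear", align_corners=False)
print(f_ref_aligned.shape)  # torch.Size([1, 32, 64, 64])
```
The local alignment described next follows the same pattern, except that every kernel position receives its own offset and the operation runs at the original resolution.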
We leverage \(\mathbf{a}^{\prime}_{ref}\) and \(\mathbf{a}_{t}\) to estimate local offsets, where \(\mathbf{a}^{\prime}_{ref}\) is the aligned \(\mathbf{a}_{ref}\) which is downsampled, deformed, and upsampled with the same weights \(\mathbf{w}_{g}\) used to synthesize \(\mathbf{f}^{\prime}_{ref}\). We estimate the offsets for the decoding layer features \(\mathbf{f}\) using the encoding layer features \(\mathbf{a}\) under the assumption that \(\mathbf{a}_{t}\) and \(\mathbf{f}_{t}\) have the same structural information. In video-to-video translation, input and output frames share the same underlying structure. Thus, it is natural for the network to learn to maintain the structural information of an input frame throughout the encoding and decoding process. More details are described in our supplementary materials. **Adaptive Blending and Deformation.** During the local alignment, we also take advantage of the current frame features \(\mathbf{a}_{t}\) from the encoding layer to handle the regions with large motion differences and new objects. To achieve this in a cost-efficient way, we introduce AdaBD, which simultaneously aligns \(\mathbf{f}^{\prime}_{ref}\) and blends it with \(\mathbf{a}_{t}\) in an adaptive manner, as illustrated in Fig. 1(b) AdaBD. First, our local offset/mask generator predicts a blending mask \(\mathbf{m}_{b}\in\mathbb{R}^{N_{\mathbf{p}}\times H\times W}\) in addition to the local offsets \(\Delta\mathbf{p}_{l}\in\mathbb{R}^{2N_{\mathbf{p}}\times H\times W}\). A learnable mask \(\mathbf{m}_{b}\) is composed of modulation scalars ranging from 0 to 1, each of which indicates the blending ratio of the current features \(\mathbf{a}_{t}\) to the aligned reference features. While DCNv2 [43] originally introduces the modulation scalars to control feature amplitudes of a single input, we leverage the scalars to adaptively blend the features from two adjacent frames considering their redundant areas. We apply deformable convolution by adding local offsets \(\Delta\mathbf{p}_{l}\) to sampling positions of the coarsely-aligned reference features \(\mathbf{f}^{\prime}_{ref}\), while the current frame features \(\mathbf{a}_{t}\) are fed into standard convolutional operations. Concurrently, blending mask \(\mathbf{m}_{b}\) adaptively combines the two feature maps. In detail, an output point \(\mathbf{p_{o}}\) of \(\mathbf{\hat{f}}_{t}\) is calculated as follows: \[\mathbf{\hat{f}}_{t}(\mathbf{p_{o}})=\sum_{k=1}^{N_{\mathbf{p}}} \mathbf{w}_{l}(\mathbf{p}_{k})\cdot\{\mathbf{a}_{t}(\mathbf{p_{o}}+\mathbf{p }_{k})\cdot\mathbf{m}_{b}(\mathbf{p}_{k})+\] \[\mathbf{f}^{\prime}_{ref}(\mathbf{p_{o}}+\mathbf{p}_{k}+\Delta \mathbf{p}_{l}(\mathbf{p}_{k}))\cdot(1-\mathbf{m}_{b}(\mathbf{p}_{k}))\}, \tag{5}\] where \(\mathbf{w}_{l}\) indicates weights of local deformable convolution. Intuitively, the higher values of \(\mathbf{m}_{b}\) indicate the regions where current frame features are more required. In other words, Eq. 5 can be rewritten as a summation of standard convolution and deformable convolution: \[\mathbf{\hat{f}}_{t}= f_{dc}(\mathbf{w}_{l},\mathbf{a}_{t},\mathbf{0},\mathbf{m}_{b}) +f_{dc}(\mathbf{w}_{l},\mathbf{f}_{ref}^{\prime},\Delta\mathbf{p}_{l},1- \mathbf{m}_{b}), \tag{6}\] In this equation, the convolutional weights \(\mathbf{w}_{l}\) are shared between \(\mathbf{a}_{t}\) and \(\mathbf{f}_{ref}^{\prime}\). \(f_{dc}\) with \(\mathbf{0}\) indicates DCN with zero offsets, illustrating standard convolutional operation. 
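Eq. (6) can be read as two calls to the same convolution kernel with shared weights, one gated by \(\mathbf{m}_{b}\) and one by \(1-\mathbf{m}_{b}\). The sketch below uses random tensors for the offsets and mask; in the actual block they come from the local offset/mask generator, and the channel dimension is reduced beforehand as described next:
```
# Sketch of the AdaBD blending rule in Eq. (6) using shared weights w_l.
import torch
from torchvision.ops import deform_conv2d

n, c, h, w, k = 1, 32, 64, 64, 3
a_t = torch.randn(n, c, h, w)            # current encoding-layer features
f_ref_coarse = torch.randn(n, c, h, w)   # globally aligned reference features f'_ref
w_l = torch.randn(c, c, k, k)            # shared local deformable conv weights

dp_l = torch.randn(n, 2 * k * k, h, w)               # local offsets Delta p_l
m_b = torch.sigmoid(torch.randn(n, k * k, h, w))     # blending mask in [0, 1]
zero_offset = torch.zeros_like(dp_l)

# Standard convolution on a_t (zero offsets) gated by m_b, plus deformable
# convolution on f'_ref gated by 1 - m_b, both with the same weights w_l.
f_t_hat = (deform_conv2d(a_t, zero_offset, w_l, padding=1, mask=m_b)
           + deform_conv2d(f_ref_coarse, dp_l, w_l, padding=1, mask=1.0 - m_b))
print(f_t_hat.shape)  # torch.Size([1, 32, 64, 64])
```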
To save computational costs, we decrease the channel dimension of all input features and reconstruct the original channel size before injecting the output features into the remaining layers of the teacher network. ### Training Objectives To train Shortcut-V2V, we mainly leverage alignment loss, distillation loss, and GAN losses widely used in image/video translation networks [38, 37, 42]. First, we adopt alignment loss \(L_{align}\) to train the deformation layers in Shortcut-V2V. Since Shortcut-V2V aims to align the reference frame features \(\mathbf{f}_{ref}\) with the current frame, \(L_{align}\) computes L1 loss between the aligned feature \(\mathbf{f}_{ref}^{*}\) and the current frame features \(\mathbf{f}_{t}\) extracted from the teacher model. To obtain \(\mathbf{f}_{ref}^{*}\), we align \(\mathbf{f}_{ref}\) in a coarse-to-fine manner without an intervention of \(\mathbf{m}_{b}\) or blending with the current features. The alignment loss \(L_{align}\) is formulated as follows: \[\mathbf{f}_{ref}^{*}=f_{dc}(\mathbf{w}_{l},\mathbf{f}_{ref}^{\prime},\Delta \mathbf{p}_{l},1), \tag{7}\] \[L_{align}=\left\|\mathbf{f}_{t}-\mathbf{f}_{ref}^{*}\right\|_{1}. \tag{8}\] Additionally, we employ knowledge distillation losses at the feature and output levels. A feature-level distillation loss \(L_{feat}\) is applied between the estimated feature \(\mathbf{\hat{f}}_{t}\) and the ground truth feature \(\mathbf{f}_{t}\), while an output-level distillation loss \(L_{out}\) compares the approximated output \(\mathbf{\hat{O}}_{t}\) to the output \(\mathbf{O}_{t}\) generated by the teacher network. The perceptual loss [41]\(L_{perc}\) is also incorporated to distill the high-frequency information of the outputs. Lastly, we utilize a typical GAN loss \(L_{GAN}\) and a temporal GAN loss \(L_{T-GAN}\), following the existing video-based frameworks [37, 3]. Temporal GAN loss \(L_{T-GAN}\) encourages both temporal consistency and realisticity of the output frames. For the GAN losses, we consider the outputs of the teacher network as real images. The overall objective function \(L_{total}\) is as follows: \[L_{total}= \lambda_{align}L_{align}+\lambda_{feat}L_{feat}+\lambda_{out}L_ {out}\] \[+\lambda_{perc}L_{perc}+\lambda_{GAN}L_{GAN}+\lambda_{T-GAN}L_{T- GAN}, \tag{9}\] where \(\lambda_{align}\), \(\lambda_{feat}\), \(\lambda_{out}\), \(\lambda_{perc}\), \(\lambda_{GAN}\), and \(\lambda_{T-GAN}\) are hyperparameters to control relative significance among the losses. More details on training objectives are described in the supplementary materials. Cityscapes is a dataset composed of 2,975 training videos and 500 validation videos. The input images are resized into 256\(\times\)512 for the experiments. We also tackle the translation of the videos from the Viper dataset to their corresponding segmentation label maps, Viper\(\rightarrow\)Label (V2L), and vice versa, Label\(\rightarrow\)Viper (L2V). We resize the images and the labels into 256\(\times\)256 following the previous work [1, 35]. vid2vid is a widely-used pix2pixHD-based video-to-video translation network that serves as a base architecture for various recent video-to-video translation models [36, 20]. Following vid2vid, we evaluate Shortcut-V2V on Edge\(\rightarrow\)Face (E2F) and Label\(\rightarrow\)Cityscapes (L2C). E2F translates edge maps into facial videos from the FaceForensics [26], containing 704 videos for training and 150 videos for validation with various lengths. The images are cropped and resized to 512\(\times\)512 for the experiments. 
The edge maps are extracted using the estimated facial landmarks and Canny edge detector. Also, L2C synthesizes videos of driving scenes from segmentation label maps using the Cityscapes. We generate segmentation maps using pretrained networks following the previous studies [37, 44]. The images and the labels are resized into 256\(\times\)512. **Evaluation Metrics.** Primarily, we adopt the Frechet video distance (FVD) score [34] to evaluate the performance of Shortcut-V2V quantitatively. The FVD score measures the Frechet Inception distance between the distribution of video-level features extracted from generated and real videos. The lower the FVD score is, the better visual quality and temporal consistency the generated video frames have. For V2L and L2V, we follow the measurement of the evaluation metrics in the teacher model, Unsup [35]. We measure the segmentation scores, mean intersection over union (mIoU), mean pixel accuracy (MP), and average class accuracy (AC), to validate the performance of V2L. L2V is evaluated using FCN-score [19]. For the evaluation, we first estimate label maps from the generated videos using the FCN model pretrained with the Viper dataset. Then, we measure how accurately the estimated label maps are mapped to the ground truth segmentation labels. Higher FCN-scores refer to better output quality. **Implementation Details.** We attach our Shortcut block to the fixed teacher networks implemented based on the official codes and pretrained by the authors, except for Unsup V2C. The teacher network of Unsup V2C is trained from scratch in the same way the original paper described [35]. Also, standard convolutional kernels in the Shortcut block are replaced with HetConv [29] to further enhance computational efficiency without a performance drop. For the convenience of implementation and training stability, we intend \(\mathbf{a}\) and \(\mathbf{f}\) to have the same spatial size. Furthermore, we set the max interval \(\alpha\) for each dataset considering the factors that reflect motion differences between frames, such as the frame per second (FPS) of training videos. Lower FPS usually results in larger motion differences between the adjacent frames, requiring a shorter max interval \(\alpha\) and vice versa. Specifically, we set \(\alpha\) as 3 on V2C, V2L, L2V, and L2C, since the FPS of Viper [25] and Cityscapes [5] is 15 and 17, respectively. Also, \(\alpha\) of E2F is set as 6, where the FPS of FaceForensics [26] is 30. Additional details are included in the supplementary materials. Figure 2: Qualitative comparison. The upper rows represent the results of Unsup V2C, L2V, and V2L. The bottom rows show the results of vid2vid E2F and L2C. ### Comparison to Baselines To demonstrate the effectiveness of our framework, we conduct qualitative and quantitative evaluations compared to the original model and other baselines. Since this is the first work that tackles a generally applicable compression framework for video-to-video translation, we compare Shortcut-V2V to the existing compression methods for image-to-image translation, CAT [13] and OMGD [24], regarding video frames as individual images. In the case of vid2vid, we additionally conduct a comparison to Fast-Vid2vid [44], which is the compression method designed specifically for vid2vid. For a fair comparison, we compress the student networks of the baselines to have similar or higher MACs compared to our model. **Qualitative Evaluation.** According to Fig. 
2, our method presents outputs of comparable visual quality to the original model with far fewer computations. In contrast, CAT on Unsup V2C generates undesirable buildings in the sky, and OMGD on Unsup V2C struggles with noticeable artifacts. For Unsup L2V, CAT and OMGD generate unrealistic textures for the terrain in the middle or the vegetation on the right. Moreover, in V2L, CAT estimates inappropriate labels on the sky and OMGD on the vegetation. For vid2vid E2F, CAT shows unwanted artifacts on the mouth, and OMGD presents inconsistent outputs. For vid2vid L2C, the outputs of CAT are blurry, especially for the trees in the background, and OMGD generates artifacts at the bottom of the images. Although Fast-Vid2vid shows reasonable image quality, the output frames are inaccurately aligned with the input (e.g., a head pose of a person) and suffer from ghost effects (e.g., the black car on the left) due to motion compensation using interpolation. **Quantitative Evaluation.** As shown in Table 1 and Table 2, our framework successfully improves the computational efficiency of the original network without a significant performance drop. In the case of Unsup, Shortcut-V2V reduces the MACs by 4.7\(\times\) and the number of parameters by 24.5\(\times\). In addition, our approach saves vid2vid's MACs by 3.2 and 5.7\(\times\) and the number of parameters by 7.8 and 44\(\times\) on each task. Fig. 3 visualizes the performance-efficiency trade-off of our framework and other baselines. According to Table 1, we also outperform other compression methods for image-to-image translation, CAT [13] and OMGD [24], even with fewer computations. Image-based approaches cannot consider temporal coherence among the frames during the compression, leading to a loss of quality. Meanwhile, we effectively preserve the original performance by exploiting rich information in the previous features during inference. Shortcut-V2V even shows superiority over Fast-Vid2vid [44], which is a compression method specifically designed for vid2vid. Table 2 also demonstrates that ours on Unsup surpasses the existing compression models by a large margin. ### Ablation Study We conduct an ablation study to evaluate the effect of each component of Shortcut-V2V. As described in Table 3, we compare the FVD scores of 5 different configurations, (a) w/o reference features, (b) w/o current features, (c) single-stage alignment, (d) w/o adaptive blending, and (e) ours, on Unsup V2C and vid2vid E2F. First, while our model originally exploits both the reference features \(\mathbf{f}_{ref}\) and the current frame features \(\mathbf{a}_{t}\) to synthesize \(\mathbf{\hat{f}}_{t}\), (a) and (b) are designed to leverage only either of them. To be specific, (a) synthesizes \(\mathbf{\hat{f}}_{t}\) by processing standard convolutions on \(\mathbf{a}_{t}\) without integrating the deformed \(\mathbf{f}_{ref}\). Meanwhile, (b) estimates \(\mathbf{\hat{f}}_{t}\) using only \(\mathbf{f}_{ref}\) aligned in a coarse-to-fine manner without blending \(\mathbf{a}_{t}\). The results demonstrate that the absence of either \(\mathbf{f}_{ref}\) or \(\mathbf{a}_{t}\) leads to performance degradation. Next, (c) performs a single-stage alignment on \(\mathbf{f}_{ref}\) to show the necessity of a coarse-to-fine alignment. Specifically, (c) leverages a single offset/mask generator to predict \(\Delta\mathbf{p}\) and \(\mathbf{m}_{b}\), which are then used in AdaBD for blending and deformation.
The result indicates that coarse-to-fine alignment effectively assists the estimation of \(\mathbf{\hat{f}}_{t}\) by minimizing misalignment between the features from adjacent frames. This makes Shortcut-V2V suitable for Figure 3: Performance-efficiency trade-off of the original model, Shortcut-V2V, and the existing compression methods including OMGD [24], CAT [13], and Fast-Vid2vid [44]. We measure the FVD score and MACs, where the lower FVD score indicates better quality. Red points and stars denote ours with various model sizes. \begin{table} \begin{tabular}{l|c|c} Configurations & Unsup V2C & vid2vid E2F \\ \hline (a) w/o reference features & 1.256 & 0.398 \\ (b) w/o current features & 1.249 & 0.218 \\ (c) Single-stage alignment & 1.195 & 0.213 \\ (d) w/o adaptive blending & 1.208 & 0.244 \\ (e) Ours & **1.180** & **0.209** \\ \hline \end{tabular} \end{table} Table 3: An ablation study on Unsup V2C and vid2vid E2F. We measure the FVD scores. generating videos with diverse motion differences. Lastly, (d) blends \(\mathbf{f}_{ref}^{*}\) and \(\mathbf{a}_{t}\) simply by element-wise addition instead of adaptive blending with \(\mathbf{m}_{b}\) in AdaBD. The result demonstrates that \(\mathbf{m}_{b}\) also encourages better estimation of \(\mathbf{\hat{f}}_{t}\) by selectively exploiting features from temporally adjacent features. ### Offset/Mask Visualization We provide a qualitative analysis of the generated global/local offsets and blending masks. Fig. 4 visualizes the sampling positions (red points) for each output position \(\mathbf{p}_{o}\) of the current frame (green points), where the predicted offsets \(\Delta\mathbf{p}_{g}\) and \(\Delta\mathbf{p}_{l}\) are added to the original sampling positions. According to the result, global offsets \(\Delta\mathbf{p}_{g}\) effectively reflect the global movement of the objects such as the bridge. Also, the summation of the global and local offsets indicates the sampling points refined by local offsets enabling fine alignment. This shows that the estimated offsets effectively support the utilization of common features in the reference frame. For the blending masks \(\mathbf{m}_{b}\), Fig. 4 presents that the regions with significant motion differences (_e.g_., trees) have large mask values compared to regions with little change (_e.g_., road, sky). That is, our model relies more on the current features rather than the reference features when the deformation is challenging, which aligns with our intention and leads to robust performance. ### Performance-Efficiency Trade-off We present a performance-efficiency trade-off of our framework with respect to varying model sizes of Shortcut block and teacher model dependence. **Shortcut Block Size.** We construct our models of different sizes by reducing the number of the output channel of 1\(\times\)1 channel reduction layer by half (Ours-1/2) and a quarter (Ours-1/4). As shown in Fig. 3, despite the performance-efficiency trade-off depending on channel dimension, Ours-1/2 and Ours-1/4 still achieve comparable FVD to the baselines with less computation costs. On a single RTX 3090 GPU, our models with three different sizes on Unsup achieve 99.67, 109.69, and 111.80 FPS with an 11.6-21.2% reduction in inference time compared to the original model's speed of 88.10 FPS. Additionally, Ours, Ours-1/2, and Ours-1/4 on vid2vid contribute to a significant improvement in speed from 5.63 FPS to 12.86, 12.93, and 13.16 FPS, resulting in a 56.2-57.2% reduction in inference time. 
Our method is capable of real-time inference, unlike the previous work [44] which requires future frames for the current frame inference due to motion compensation. More details on channel manipulation are described in the supplementary materials. **Teacher Model Dependence.** Since Shortcut-V2V leverages a subset of the teacher network during inference, the computational costs and memory usage may vary depending on the amount of the teacher network we use. In this regard, we present an analysis of the computational efficiency and performance of our model with respect to various levels of teacher model dependence. We categorize the dependence on the teacher model into three levels, low, medium, and high, where high dependence indicates using more teacher network layers. According to Table 4, our model of higher dependence achieves better performance at the cost of larger MACs and a larger number of parameters. The results also demonstrate that Shortcut-V2V can leverage temporal redundancy in features extracted from various layers of the teacher network. Further details for teacher model dependence and experiments for temporal redundancy are included in the supplementary materials. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \multirow{2}{*}{Dependence} & \multicolumn{3}{c|}{Unsup V2C} & \multicolumn{3}{c}{vid2vid E2F} \\ \cline{2-7} & FVD & MACs (G) & Param. (M) & FVD & MACs (G) & Param. (M) \\ \hline Low & 1.221 & 13.79 & 0.14 & 0.277 & 243.97 & 2.11 \\ Medium & 1.180 & 18.19 & 0.32 & 0.209 & 359.99 & 8.29 \\ High & 1.166 & 27.97 & 1.38 & 0.193 & 475.99 & 32.98 \\ \hline \end{tabular} \end{table} Table 4: Shortcut-V2V performance with different levels of teacher model dependence. High dependence denotes using more teacher network layers during the inference. Figure 4: Visualization of global/local offsets and blending masks. The green points denote output points of the current frame, and the red points around them are each output point's sampling positions modified by global offsets \(\Delta\mathbf{p}_{g}\) and local offsets \(\Delta\mathbf{p}_{l}\). The values in a blending mask \(\mathbf{m}_{b}\) are averaged by the kernel size for visualization. The brighter area indicates higher mask values. ## 5 Discussion and Conclusion Although Shortcut-V2V significantly improves the test-time efficiency of video-to-video translation, it still poses several limitations. First, a constant max interval may induce unsatisfactory outputs when the degree of temporal redundancy varies largely between frames. Following recent studies [27, 22, 21], applying an adaptive interval based on frame selection algorithms or a learnable policy network could be promising future research. In addition, we need to manually configure various hyperparameters such as channel dimension, max interval, and teacher model dependence, which can be automated by NAS [15]. Lastly, our method still has the potential for further improvement of computational efficiency by compressing the teacher model using other methods before applying our framework. Despite the limitations, Shortcut-V2V presents a significant improvement in test-time efficiency of video-to-video translation networks based on temporal redundancy reduction, while preserving the original model performance. Extensive experiments with widely-used video-to-video translation models successfully demonstrate the general applicability of our framework. To the best of our knowledge, this is the first work for general model compression in the domain of video-to-video translation.
We hope our work facilitates research on video-to-video translation and extends the range of its application. **Acknowledgments.** This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation Hub) and the Ministry of Culture, Sports and Tourism and Korea Creative Content Agency (Project Number: R2021040097, Contribution Rate: 50).
2310.04461
AI-based automated active learning for discovery of hidden dynamic processes: A use case in light microscopy
In the biomedical environment, experiments assessing dynamic processes are primarily performed by a human acquisition supervisor. Contemporary implementations of such experiments frequently aim to acquire a maximum number of relevant events from sometimes several hundred parallel, non-synchronous processes. Since in some high-throughput experiments only one or a few instances of a given process can be observed simultaneously, a strategy for planning and executing an efficient acquisition paradigm is essential. To address this problem, we present two new methods in this paper. The first method, Encoded Dynamic Process (EDP), is Artificial Intelligence (AI)-based and represents dynamic processes so as to allow prediction of pseudo-time values from single still images. Second, with Experiment Automation Pipeline for Dynamic Processes (EAPDP), we present a Machine Learning Operations (MLOps)-based pipeline that uses the extracted knowledge from EDP to efficiently schedule acquisition in biomedical experiments for dynamic processes in practice. In a first experiment, we show that the pre-trained State-Of-The-Art (SOTA) object segmentation method Contour Proposal Networks (CPN) works reliably as a module of EAPDP to extract the relevant object for EDP from the acquired three-dimensional image stack.
Nils Friederich, Angelo Yamachui Sitcheu, Oliver Neumann, Süheyla Eroğlu-Kayıkçı, Roshan Prizak, Lennart Hilbert, Ralf Mikut
2023-10-05T08:17:26Z
http://arxiv.org/abs/2310.04461v1
# Al-based automated active learning for discovery of hidden dynamic processes: ###### Abstract In the biomedical environment, experiments assessing dynamic processes are primarily performed by a human acquisition supervisor. Contemporary implementations of such experiments frequently aim to acquire a maximum number of relevant events from sometimes several hundred parallel, non-synchronous processes. Since in some high-throughput experiments, only one or a few instances of a given process can be observed simultaneously, a strategy for planning and executing an efficient acquisition paradigm is essential. To address this problem, we present two new methods in this paper. The first method, Encoded Dynamic Process (EDP), is Artificial Intelligence (AI)-based and represents dynamic processes so as to allow prediction of pseudo-time values from single still images. Second, with Experiment Automation Pipeline for Dynamic Processes (EAPDP), we present a Machine Learning Operations (MLOps)-based pipeline that uses the extracted knowledge from EDP to efficiently schedule acquisition in biomedical experiments for dynamic processes in practice. In a first experiment, we show that the pre-trained State-Of-The-Art (SOTA) object segmentation method Contour Proposal Networks (CPN) works reliably as a module of EAPDP to extract the relevant object for EDP from the acquired three-dimensional image stack. ## 1 Introduction For the imaging-based assessment of dynamic processes in biomedical settings, objects of interest must be identified and relevant events that characterize the dynamic process must be recorded during their time of occurrence. Commonly, a human operator controls the imaging instrument to examine a biomedical sample using a microscope and relevant objects are found in the sample by inspection. Alternatively, the operator estimates for each object of interest the time at which an event of interest is expected to occur based on previous experience and triggers the recording of the event at that time. Nevertheless, many contemporary experiments provide several hundred relevant objects that can, in principle, be imaged in parallel. Events of interest, however, are non-synchronous and the estimation of future event times requires extensive human effort, is prone to error, and not necessarily time-efficient. These obstacles can result in unnecessarily large amounts of irrelevant data, unnecessary experimental repeats, or experimental biases inflicted by additional light exposure of the sample [18]. To address these obstacles, we present two new methods for the automated, real-time planning and execution of such experiments. Edp.The traditional method of capturing all data or relying on human experience to predict future events is outdated and inefficient. Instead of relying on human experience, an accurate and comprehensive model of the dynamic process should be created. This model should be capable of uniquely identifying a relevant object state through a process known as fingerprinting, similar to how humans do it. In biomedicine, it is crucial that this fingerprint remains consistent despite contextual changes such as noise, brightness changes, or affine transformations. By modeling the relationship between the fingerprints, the dynamic process can be represented orderly. This representation suits as an approximation of relative progress within the dynamic process. This relative progress can also be interpreted as a relative time, known as pseudo-time. 
By adopting this method, we can achieve efficiency in predicting future events and a deeper understanding of the dynamic process. EapdpTo unlock the full potential of the EDP, a well-designed pipeline is an absolute must. Such a pipeline should be able to recognize specific states in the real world, identify relevant objects, and then calculate a pseudo-time for those states using the EDP. Armed with this knowledge, the pipeline can automatically plan and execute a new state capture for any significant event that occurs. Due to the uncertain nature of the EDP predictions, the pipeline must be able to respond to unsuccessful recordings and learn from them. This is where MLOps [1] comes in. By retraining an existing production model in accordance with the live context, MLOps ensures that the pipeline always uses a current and accurate model, resulting in a potentially better outcome in real-world experiments. ## 2 Related Work Object Extraction.Basically, there are different possibilities in Computer Vision (CV) like object detection and object segmentation, to identify individual objects in an image [10] and thus extract them. In the biomedical context, many methods mostly focus on segmentation [42] with SOTA methods like StarDist [38] and CPN [46]. Pseudo-time predictions.A first approach for pseudo-time predictions with classical, non Deep Learning (DL) methods was presented in [14]. For extract ing relevant objects from the acquired image, thresholding is used as a classical CV segmentation method. Then the object's fingerprint is generated linearly with a Principal Component Analysis (PCA) [19, 20]. However, biological processes are usually not linear [5]. Therefore, recently, non-linear encodings of the dynamic processes using DL methods have become popular [17, 9, 21, 32]. For example, [21] and [9] present DL approaches to encode cell cycles and derive predefined cell phases. However, this classification-based approach does not allow for deriving continuous relations like the pseudo-time to each other directly. This continuous relation was modeled with DeepCycle in [32]. The training of DeepCycle is performed supervised. For this purpose, virtual labels are calculated based on the fluorescence intensity in specifically labeled channels of the cells. These classes can then be used as anchor points during training to determine a (relative) cell state as a pseudotime. It is important to note that the assumption that a correlation of fluorescence intensity to cell phase can be used is not always true in the biological context. A DL method that follows a comparable pseudo-time approach to the given constraints, in this paper, was presented in [17]. In [17], an AutoEncoder (AE) approach for pseudo-time approximation is used as a Self-Supervised Learning (SSL) approach. A DL model is used as the Variational AutoEncoder (VAE) [23] encoding, from whose Hierarchical Agglomerative Clustering (HAC) and Minimum Spanning Tree (MST) code the pseudo-time is then determined. However, this approach also has a few limitations. First, a recording necessarily contains exactly one relevant object in one acquired image. Second, the entire dataset was acquired under comparable acquisition conditions, which also only contain identical positioned and oriented objects and are not able to learn affine transformations [3] between objects. Both constraints are generally not satisfied for microscopic images, such as in [27, 36]. 
Furthermore, this pseudo-time method was not designed as an End-to-end (E2E) model, which deprives the DL model of the ability to internally bind affine transformed objects. AutoEncoder.Autoencoders are SSL methods to learn a representation from a given suitability, such as an image [49, 47]. For example, the autoencoder can be represented by a Conventional AutoEncoder (CAE) [49] and/or a VAE [23]. Especially recently, Masked AutoEncoders (MAEs) [15] have become more popular than CAE because of their ability for a better visual representation learning [49], either using a transformer-based approach [15] or the Convolutional Neural Network (CNN)-based approach [47]. However, since in [47] the higher efficiency of ConvNeXt V2 is shown, this model is chosen for this work. In addition to the pure learning of a visual representation in the form of a fingerprint, relations between the fingerprints can also be learned, e.g. with VAE [17]. SslTo train the DL model in a supervised manner, labeled data are generally rare in the biomedical domain [13]. There are DataBases (DBs) like BioMed-Image.io [30] or several challenges with own datasets [26, 2, 28]. However, especially for biological datasets with sometimes hundreds of relevant objects in an image, the datasets are often limited to the 2D case. Furthermore, in the context of this work, labeling the relevant events with pseudo-time stamps is only approximate, demanding, error-prone and time-consuming. Therefore, unsupervised or SSL methods are often used in the biomedical context [43]. Thereby, Active Learning (AL) [41] is used to selectively integrate expert knowledge into the learning process. Since AL aims to keep the number of interactions to a minimum [7, 33], data-efficient learning is preferable. For example, existing datasets from a related context can be identified and leveraged to train more robust models in transfer learning [11, 29, 48]. In order to be able to use possibly directly existing pre-trained models from similar contexts, a new concept was developed in [48]. ## 3 Methodology ### Edp For the modeling of the dynamic process, a new concept is introduced with EDP. The new concept of EDP is based on an AE and is visualized in Figure 1. The basic idea of the new concept is to separate the generation of the fingerprint from the learning of the relation as a representation between all states. The fingerprint generation is done using a MAE as an evolution of CAE. Specifically, the SOTA MAE-based method ConvNeXt V2 is chosen. During the learning process, a maximum recovery of the encoded image is aimed in accordance with SSL. According to the challenge posed by biological objects, a context-independent representation is required. For this purpose, the images can be modified during the learning process through Data Augmentation [16] techniques like Rotations, reflections, contrast adjustments or noise additions. In addition to the fingerprint, the relation must also be learned as the actual modeling of the dynamic process. For this purpose, a VAE-like modeling is used by learning the uncertainty \(v\) in addition to the circle angle \(\alpha\). The assumption is that objects succeeded each other in the dynamic process with the relative distance \(t\) corresponding to this relative distance and differ in the same ratio in the circle representation. Such an exemplary circle representation is shown in Figure 2 using a cell division process of the zebrafish embryo. 
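Before walking through Figure 2, the following minimal sketch illustrates how a pseudo-time in \([0,1)\) and relative distances can be read off such a circle representation; the normalization of the learned angle \(\alpha\) to \([0,1)\) is an illustrative assumption, and the learned uncertainty \(v\) is omitted.

```python
import numpy as np

def pseudo_time(x: float, y: float) -> float:
    """Read a pseudo-time in [0, 1) from a 2D code lying near the circle."""
    alpha = np.arctan2(y, x)            # circle angle in (-pi, pi]
    return (alpha / (2 * np.pi)) % 1.0  # one revolution = one pass through the process

def relative_distance(t_a: float, t_b: float) -> float:
    """Relative progress from state a to state b along the cyclic process."""
    return (t_b - t_a) % 1.0
```

With this convention, codes placed at the 00 o'clock and 11 o'clock positions of a clock face are separated by a relative distance of \(11/12\approx 0.92\), which matches the reading of Figure 2 described next.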
The state of the cell after cell division is visualized at 00 o'clock and up to the state of the cell just before cell division at 11 o'clock. This corresponds to a relative distance of \(\sim\)0.92 (normalized between [0,1)). This temporal difference must also be valid in reality for the temporal distance according to the model statement. ### Eapdp The new EDP module is integrated as a module into the new MLOps-based pipeline EAPDP. The pipeline concept is visualized in Figure 3 and contains nine other modules besides the EDP module. Each of these ten modules is Figure 1: Visualization of the EDP model. A 3D object is transformed by an encoder into a fingerprint (like MAE) and into a relation representation between the fingerprints (like VAE). The example object is a recording of the DNA channel of a nucleus from an internal zebrafish embryo dataset. A scale bar of \(2\mu m\) is indicated at the bottom of the input/output image. briefly described below. The explanation of the modules and their relationships to each other is based on the pipeline visualization of Figure 3. Microscope setup.In the EAPDP, the microscope is used as an actuator to the real-world environment represented by a biomedical sample. For this purpose, all microscope components relevant to image acquisition and the microscope accessories, such as lasers in the case of a laser scanning microscope, must be controllable via appropriate interfaces of the specific microscope setup. In addition, the microscope must be able to react on given com Figure 2: Example of a 2D feature space representation for an encoded dynamic process. Each point represents an encoded image. The circle serves as an estimation for the positioning of points in its vicinity. It’s worth noting that due to the presence of uncertainty, the points may not always be precisely on the circle, but rather in its proximity. For the seven red dots, example images of cell nuclei from zebrafish embryos at various stages of cell division are shown. A scale bar of 2 \(\mu m\) is indicated at the bottom of each example image. The images are from an internal dataset. mands like image acquisition or requested meta-information like the objective position in a standardized way. Image Pre-processing.To optimize the analysis of dynamic processes in the biomedical environment, the raw images acquired through experiments must be pre-processed according to the microscope setup and the context of the targeted event. This may involve methods such as cropping, contrast adjustment, or denoising. Various libraries, such as Albumentations [4], offer pre-processing Figure 3: MLOps pipeline with the new EDP module. AI-based MLOps modules are marked with a green background, non AI-based ones with a blue background. Additionally, all modules marked with a red border must be newly developed or only partially adapted from existing methods. methods that can be used to improve the quality of the images and optimize their analysis by other modules in the machine learning operation pipeline. Object Extraction.During an acquisition, the relevant object and the surrounding context are captured. In order to better analyze the object, it is necessary to separate it from the surrounding context. The extraction from the whole image is done via pre-trained segmentation methods. To find a suitable method, we compare the SOTA cell segmentation algorithms StarDist and CPN using a microscopic dataset in a first experiment. 
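Anticipating the slice-wise evaluation described below, the following is a minimal sketch of how a pre-trained 2D segmentation model can be applied to a three-dimensional stack; the segmentation model is abstracted as a generic callable, and no specific CPN or StarDist API is assumed.

```python
import numpy as np

def segment_stack_slicewise(stack_3d: np.ndarray, segment_2d) -> np.ndarray:
    """Run a pre-trained 2D instance segmentation model on every z-slice.

    stack_3d:   acquired image stack of shape (Z, H, W)
    segment_2d: callable mapping an (H, W) image to an integer label mask
                (0 = background, 1..n = object instances), e.g. a thin wrapper
                around a pre-trained CPN or StarDist model
    returns:    label volume of shape (Z, H, W); labels are assigned per slice
                and still have to be matched across slices to recover 3D objects
    """
    return np.stack([segment_2d(stack_3d[z]) for z in range(stack_3d.shape[0])])
```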
Importantly, the actual pseudo-time determination cannot be performed if both methods' segmentation is insufficient. Therefore, this submodule is of particular importance. Because (well) labeled data are generally scarce in the biological context, this work evaluates generalization performance during inference with already pre-trained models on new, unknown images. Since there are only pre-trained weights for 2D segmentation for both methods, the dataset was split into 2D images along the z-axis. Edp.The EDP module gets the extracted object and should pass the pseudo-time to the experiment planner. In order to do this, the module is equipped by the Experiment planner before with the appropriate experiment setup. With the setup, the EDP model can then query according to its existing knowledge like pre-trained models in the context of data-efficient learning. If no weights are available, training can also be done with/without AL as specified by the expert. After successful training, the model is passed to the DB with the appropriate required metadata for possible further use. Then, when the inference with the original extracted relevant object has been determined, the results are passed to the expert planner accordingly. The recorded inference image is also sent to the Data-efficient Learning module and stored in its DB. Experiment planner.The Experiment planner is the central module of the experiment automatization. As input it gets the pseudo-times for the recorded objects. Based on the experiment context, including interesting events, the Experiment planner can plan future experiments with utmost precision. Once the plan is set, the Experiment planner gives the microscope the command to ensure that the image captures the object's state at the right time, leaving no room for errors. Additionally, it can query the state of the microscope to ensure that there was no hardware drift, such as when moving to the object position. All the information about the experiment's state is then passed on to the User Interface (UI), ensuring that all aspects of the experiment are under control. Ui.The UI is the interface between the expert and the MLOps pipeline. On the one hand, simple interactions can be provided, such as displaying meta-information or adapting the experimental context, e.g. the cell classes that occur. On the other hand, much more complex interactions such as result justifications of DL models can be represented through Explainable Artificial Intelligence (XAI) [34] or expert knowledge can be brought into the pipeline within the context of AL. With XAI, the expert should be able to understand better the processes in the DL models used and why decisions were made, e.g. for event detection. This helps the expert to eliminate potential errors like unfavorable experiment settings at an early stage. Such XAI methods can be realized using a library like PyTorch Captum [25]. For AL, only if the expert can capture the actual state in the best possible way, the expert can transfer his domain knowledge to the method in the best possible way and support the method. For example, points, boxes, or entire segmented regions can be passed to the method as hints. For this purpose, a custom segmentation module can be developed based on exiting AL labeling platforms like ObiWanMicrobi [40] or Karlsruhe Image Data Annotation (KaIDA) [37]. Expert (Domain) Knowledge.The domain knowledge contributed by the expert to the MLOps pipeline can take several forms. 
For example, the context of the experiment with a specific cell class can improve a more efficient event detection module. Furthermore, knowledge can be injected, e.g. by labeling in the context of AL. For this, the expert must ensure the quality of the injected domain knowledge with maximum correctness. Incorrect information can affect the learning processes in the network. Data-efficient Learning.To minimize AL interactions with the human expert, as much existing knowledge as possible is reused. To this end, building on [48], a new AE-based fingerprinting approach for datasets and Machine Learning (ML) models is being implemented to reuse as much knowledge as possible. For this, from a DB, context-given requirements can query existing knowledge. If no data is available, it can be created synthetically e.g. with biophysical simulations like Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [44]. Microscope control.In order for the planned experiment to be automated and performed in real-time, a corresponding software library is needed to control the microscope. Since the first release in 2010, \(\mu\)Manager has been used for this purpose as one of the SOTA open-source solutions [6, 22, 31, 45]. Therefore, this is also used in this work. ### Exemplary Use Cases Example use cases for the presented EAPDP with the EDP are presented using record extracts in Figure 4 below. A first use case is shown in Figure 4a and represents the temporal sorting of RiboNucleic Acid (RNA) Polymerase II (Pol II) clusters that occur in the nuclei of pluripotent zebrafish embryos. A method for this use case has already been presented in [14]. A comparison of the pipeline based on classical ML methods with our DL-based EDP method allows a direct statement about limitations or improvements of our approach. The second use-case in Figure 4b is the recording of cell divisions in pluripotent zebrafish embryos, where the time of reaching a new division stage and thus the regions of an event of interest need to be extrapolated. A final biological application from the field of microbiology is presented in Figure 4c. In this example, one interesting event could be the state at which \(n\) microbes reach the recording region. For this purpose, a modeling of the cell division process with EDP can be used to plan the experiment accordingly and automatically record the event of interest at a time \(t\). The modeling of the cell division process with EDP can be used for this purpose. In addition to these biological use cases, other use cases are also possible, e.g. in medicine. For example, by modeling a tumor accordingly, a prediction can be made about the relative stage. Consequently, a therapy concept such as surgery or medication can be tailored to the patient. ## 4 Experiments The comparison of segmentation algorithms was performed on Helmholtz AI COmpute REssources (HAICORE) resources equipped with Intel Xeon Platinum 8368 Central Processing Units (CPUs) and an Nvidia A100-40 Graphics Processing Unit (GPU) [24]. The operating system utilized was Red Hat Enterprise Linux (RHEL) version 8.6. ### Dataset The internal microscope dataset from Figure 3(b) is used to compare the segmentation algorithms. This dataset was chosen over the other two example datasets from Figures 3(a) and 3(c) because of the challenging, frayed structure of the nuclei as the relevant image object. 
This is because the fibrillar structure of the nuclei sometimes deviates strongly from their typical ellipsoidal shape as in Figure 3(a) due to individually advanced cytokinesis. This poses a challenge because contiguous pixel regions are not trivially identifiable and correct Figure 4: Three exemplary images from biological datasets for dynamic processes. Figure 3(a) shows cell nuclei of zebrafish embryos with marked Pol II Ser5P clusters. Figure 3(b) shows the DeoxyriboNucleic Acid (DNA) of zebrafish embryos nuclei. In these first two images, a scale bar of \(20\mu m\) is shown in the lower left. The last Figure 3(c) shows a microbial cell division state. boundary segmentation is a challenge. With microbeSEG [35], a working SOTA solution for microbes like in Figure 4c also already exists. For the dataset in Figure 4b, zebrafish embryo DNA was imaged. DNA was stained with 1:10000 5'-TMR Hoechst in TDE or glycerol. Confocal z-sections were obtained using a commercial instant SIM microscope (iSIM, VisiTech). A Nikon 100x oil immersion objective (NA 1.49, SR HO Apo TIRF 100xAC Oil) and a Hamamatsu ORCA-Quest camera were used for image acquisition. In accordance with a common problem in biology, no labels exist for this dataset. According to the desired 2D segmentation, the 3D images are split into 2D images along the z-axis. ### Object extraction These 2D images were then segmented using each of the two methods. In the following, the results are evaluated qualitatively because of the non-existent labels. Therefore, the results are shown in Figure 5. The comparison of the original image in Figure 5a with the StarDist prediction in Figure 5b, shows that StarDist cannot well segment semantically related objects as the nucleus in the upper area. For the method designs with center prediction, StarDist focused primarily on segmenting ellipsoidal objects from [12] and was trained only on these. The cell detection was designed to be more flexible and additionally trained on a more heterogeneous set of non-elliptical cells such as _MCF7_ from the dataset [8]. This leads to better generalization and results in qualitatively evaluated good initial segmentation performance on this most challenging of our datasets from Figure 4. Thus, we could show that CPN is a good pre-trained SOTA approach for extracting the relevant objects from the 2D decomposition. The 2D segmentations can be reassembled back to 3D segmentations in post-processing, e.g. using Nearest Neighbor. Based on this, the further submodules of EDP can be developed in future work and the presented MLOps pipeline can be built upon it. ## 5 Conclusion and Further Work In this work, we motivated that due to the large number of parallel non-synchronous dynamic processes, a novel concept for automated planning and execution of two novel DL-based approaches is essential. First, the EDP was introduced to model dynamic processes and derivate a pseudo-time for a given object state. The pseudo-time prediction can then be used with the EAPDP for real-time experiment automation. We explained the EDP realized within the MLOps pipeline by an AE and trained using SSL with AL. At the same time, the key advantage of higher execution speed and lower human cost while minimizing user interactions with data-efficient learning was highlighted. Finally, as a first Proof of Concept (PoC), we showed the necessary pre-processing step for the EDP to extract the relevant objects based on good inference results of CPN. 
However, the lack of pre-trained weights for 3D segmentation was a drawback of the segmentation experiments. However, since the fragmented objects are partially reconnected along the z-axis, this could simplify the problem and improve accuracy. This will be done as soon as appropriate weights are available. In addition, a suitable affine-invariant 3D AE needs to be developed for use within the EDP method. In this context, further research is needed to investigate whether the ConvNeXt V2 is suitable for 3D segmentation, also Figure 5: Comparison of segmentation predictions for the two SOTA methods StarDist [38] and CPN [46]. Figure 4(a) represents the original image, duplicated from Figure 4(b). In Figure 4(b) and Figure 4(c), the predictions of pre-trained StarDist or CPN are then shown. The predictions are highlighted differently for better visual differentiation depending on the method used. from an efficiency perspective. Of course, the modules of the MLOps pipeline must be implemented accordingly. ## Acknowledgments This project is funded by the Helmholtz Association under the Program Natural, Artificial and Cognitive Information Processing (NACIP) and the Helmholtz Association's Initiative & Networking Fund through Helmholtz AI. All experiments were performed on the HAICORE. We sincerely thank all of them for supporting our research. The authors have accepted responsibility for the entire content of this manuscript and approved its submission. We describe the individual contributions of N. Friederich (NF), A. Yamachui Sitcheu (AYS), O. Neumann (ON), S. Eroglu-Kayikci (SEK), R. Prizak (RP), L. Hilbert (LH), R. Mikut (RM): Conceptualization: NF, LH, RM; Methodology: NF, LH, RM; Software: NF; Investigation: NF; Resources: SEK, RP, LH; Writing - Original Draft: NF; Writing - Review & Editing: NF, AYS, ON, SEK, RP, LH, RM; Supervision: LH, RM; Project administration: RM; Funding Acquisition: LH, RM
2304.00798
Topology of the space of conormal distributions
Given a closed manifold $M$ and a closed regular submanifold $L$, consider the corresponding locally convex space $I=I(M,L)$ of conormal distributions, with its natural topology, and the strong dual $I'=I'(M,L)=I(M,L;\Omega)'$ of the space of conormal densities. It is shown that $I$ is a barreled, ultrabornological, webbed, Montel, acyclic LF-space, and $I'$ is a complete Montel space, which is a projective limit of bornological barreled spaces. In the case of codimension one, similar properties and additional descriptions are proved for the subspace $K\subset I$ of conormal distributions supported in $L$ and for its strong dual $K'$. We construct a locally convex Hausdorff space $J$ and a continuous linear map $I\to J$ such that the sequence $0\to K\to I\to J\to 0$ as well as the transpose sequence $0\to J'\to I'\to K'\to 0$ are short exact sequences in the category of continuous linear maps between locally convex spaces. Finally, it is shown that $I\cap I'=C^\infty(M)$ in the space of distributions. In another publication, these results are applied to prove a Lefschetz trace formula for a simple foliated flow $\phi=\{\phi^t\}$ on a compact foliated manifold $(M,F)$. It describes a Lefschetz distribution $L_{\mathrm{dis}}(\phi)$ defined by the induced action $\phi^*=\{\phi^{t\,*}\}$ on the reduced cohomologies $\bar H^\bullet I(F)$ and $\bar H^\bullet I'(F)$ of the complexes of leafwise currents that are conormal and dual-conormal at the leaves preserved by $\phi$.
Jesús A. Álvarez López, Yuri A. Kordyukov, Eric Leichtnam
2023-04-03T08:39:19Z
http://arxiv.org/abs/2304.00798v3
# Topology of the space of conormal distributions ###### Abstract. Given a closed manifold \(M\) and a closed regular submanifold \(L\), consider the corresponding locally convex space \(I=I(M,L)\) of conormal distributions, with its natural topology, and the strong dual \(I^{\prime}=I^{\prime}(M,L)=I(M,L;\Omega)^{\prime}\) of the space of conormal densities. It is shown that \(I\) is a barreled, ultrabornological, webbed, Montel, acyclic LF-space, and \(I^{\prime}\) is a complete Montel space, which is a projective limit of bornological barreled spaces. In the case of codimension one, similar properties and additional descriptions are proved for the subspace \(K\subset I\) of conormal distributions supported in \(L\) and for its strong dual \(K^{\prime}\). We construct a locally convex Hausdorff space \(J\) and a continuous linear map \(I\to J\) such that the sequence \(0\to K\to I\to J\to 0\) as well as the transpose sequence \(0\to J^{\prime}\to I^{\prime}\to K^{\prime}\to 0\) are short exact sequences in the category of continuous linear maps between locally convex spaces. Finally, it is shown that \(I\cap I^{\prime}=C^{\infty}(M)\) in the space of distributions. These results will be applied to prove a trace formula for foliated flows, involving the reduced cohomologies of the complexes of leafwise currents that are conormal and dual-conormal at the leaves preserved by the flow. Key words and phrases: Conormal distributions, dual-conormal distributions, barreled, ultrabornological, webbed, acyclic, Montel space, complete, boundedly retractive, reflexive 2020 Mathematics Subject Classification: 46F05, 46A13, 46M40 The authors are partially supported by the grants MTM2017-89686-P and PID2020-114474GB-I00 (AEI/FEDER, UE) and ED431C 2019/10 (Xunta de Galicia, FEDER). ## 1. Introduction Given a closed manifold \(M\) and a closed regular submanifold \(L\subset M\), this paper studies the natural topology of the space \(I=I(M,L)\) of conormal distributions and of the strong dual \(I^{\prime}=I^{\prime}(M,L)\) of the space of conormal densities. It is shown that \(I\) is a barreled, ultrabornological, webbed, Montel, acyclic LF-space, and that \(I^{\prime}\) is a complete Montel space and a projective limit of bornological barreled spaces; see [15, 25] for the boundary case. Most of these properties are first established in the boundary case (Section 6). Besides the extensions for distributional sections of vector bundles, some results are extended to non-compact manifolds. We also analyze the action of differential operators on these spaces, as well as the pull-back and push-forward homomorphisms induced by maps on these spaces (Sections 4, 5, 7 and 8). Via the Schwartz kernel theorem, the spaces of pseudodifferential and differential operators can be described as \(\Psi(M)\equiv I(M^{2},\Delta)\) and \(\operatorname{Diff}(M)\equiv K(M^{2},\Delta)\), where \(\Delta\) is the diagonal of \(M^{2}\). Thus \(\Psi(M)\) and \(\operatorname{Diff}(M)\) become examples of locally convex spaces satisfying the above properties. The wave front set of any \(u\in I(M,L)\) satisfies \(\operatorname{WF}(u)\subset N^{*}L\setminus 0_{L}\) (considering \(N^{*}L\subset T^{*}M\)) [14, Chapter VIII], [15, Chapter XVIII]; this is the reason for the term "conormal distribution."
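As an illustration of this containment in the simplest flat model (the standard textbook example, not specific to this paper), take the hypersurface \(L=\{x_{n}=0\}\subset\mathbb{R}^{n}\) and \(u=\delta(x_{n})\in C^{-\infty}(\mathbb{R}^{n})\); then \[\operatorname{WF}(u)=\{(x^{\prime},0;0,\xi_{n})\;:\;x^{\prime}\in\mathbb{R}^{n-1},\ \xi_{n}\neq 0\}=N^{*}L\setminus 0_{L}\;,\] so the wave front set fills exactly the conormal bundle of \(L\) with its zero section removed.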
The larger space of all distributions whose wave front set is contained in any prescribed closed cone of \(T^{*}M\setminus 0_{M}\), like \(N^{*}L\setminus 0_{L}\), also has a natural topology which was studied in [6]. Our results for codimension one can be clearly extended to arbitrary codimension. We only consider codimension one for simplicity reasons. It is also clear that there are further extensions to manifolds with corners, stratified spaces, etc. The case of codimension one is also enough for our application in a trace formula for simple foliated flows [4]. These are simple flows \(\phi=\{\phi^{t}\}\) that preserve the leaves of a foliation \(\mathcal{F}\) on \(M\). C. Deninger conjectured the existence of a "Leftchetz distribution" \(L_{\operatorname{dis}}(\phi)\) on \(\mathbb{R}\) for the induced pull-back action \(\phi^{*}=\{\phi^{t\,*}\}\) on the leafwise reduced cohomology \(\bar{H}^{\bullet}(\mathcal{F})\), and predicted a formula for \(L_{\operatorname{dis}}(\phi)\) involving data from the closed orbits and fixed points [8]. Here, \(\bar{H}^{\bullet}(\mathcal{F})\) is the maximal Hausdorff quotient of the leafwise cohomology \(H^{\bullet}(\mathcal{F})\), defined by the de Rham derivative of the leaves acting on leafwise differential forms smooth on \(M\), equipped with the \(C^{\infty}\) topology. But can not use leafwise forms smooth on \(M\) if there are leaves preserved by \(\phi\); they do not work well. Instead, we consider the spaces \(I(\mathcal{F})\) and \(I^{\prime}(\mathcal{F})\) of distributional leafwise currents that are conormal and dual-conormal at the preserved leaves, giving rise to reduced cohomologies, \(\bar{H}^{\bullet}I(\mathcal{F})\) and \(\bar{H}^{\bullet}I^{\prime}(\mathcal{F})\), with actions \(\phi^{*}\). The spaces \(K(\mathcal{F})\), \(J(\mathcal{F})\), \(K^{\prime}(\mathcal{F})\) and \(J^{\prime}(\mathcal{F})\) are similarly defined, obtaining short exact sequences, \(0\to\bar{H}^{\bullet}K(\mathcal{F})\to\bar{H}(\mathcal{F})^{\bullet}I\to\bar{ H}^{\bullet}J(\mathcal{F})\to 0\) and \(0\leftarrow\bar{H}^{\bullet}K^{\prime}(\mathcal{F})\leftarrow\bar{H}^{\bullet }I^{\prime}(\mathcal{F})\leftarrow\bar{H}^{\bullet}J^{\prime}(\mathcal{F})\gets 0\). In this way, the definition of \(L_{\operatorname{dis}}(\phi)\) for both \(\bar{H}^{\bullet}I(\mathcal{F})\) and \(\bar{H}^{\bullet}I^{\prime}(\mathcal{F})\) together can be reduced to the cases of \(\bar{H}^{\bullet}K(\mathcal{F})\), \(\bar{H}^{\bullet}J(\mathcal{F})\), \(\bar{H}^{\bullet}K^{\prime}(\mathcal{F})\) and \(\bar{H}^{\bullet}J^{\prime}(\mathcal{F})\). This can be done by using the descriptions of \(K(\mathcal{F})\), \(J(\mathcal{F})\), \(K^{\prime}(\mathcal{F})\) and \(J^{\prime}(\mathcal{F})\), and some additional ingredients. In this way we can define \(L_{\operatorname{dis}}(\phi)\), which has the desired expression plus a zeta invariant produced by the use of the b-trace of R. Melrose [24]. However the ingredients can be chosen so that the zeta invariant vanishes [3], and the predicted formula becomes correct. ## 2. Preliminaries ### Topological vector spaces The field of coefficients is \(\mathbb{K}=\mathbb{R},\mathbb{C}\). For the general theory of topological vector spaces (TVSs), we follow the references [9, 16, 21, 31, 28], assuming the following conventions. We always consider locally convex spaces (LCSs), which are not assumed to be Hausdorff (contrary to the definition of [31]); the abbreviation LCHS is used in the Hausdorff case. 
Local convexity is preserved by all operations we use. For any inductive/projective system (or spectrum) of continuous linear maps between LCSs, we have its (locally convex) inductive/projective limit; in particular, when the inductive/projective spectrum consists of a sequence of continuous inclusions, their union/intersection is endowed with the inductive/projective limit topology. This applies to the locally convex direct sum and the topological product of LCSs. LF-spaces are not assumed to be strict. The (continuous) dual \(X^{\prime}\) is always endowed with the strong topology; i.e., we write \(X^{\prime}=X^{\prime}_{\beta}\) with the usual notation. Some homological theory of LCSs will be used (see [39] and references therein) For instance, for an inductive spectrum of LCSs of the form \((X_{k})=(X_{0}\subset X_{1}\subset\cdots)\), the condition of being _acyclic_ can be described as follows [39, Theorem 6.1]: for all \(k\), there is some \(k^{\prime}\geq k\) such that, for all \(k^{\prime\prime}\geq k^{\prime}\), the topologies of \(X_{k^{\prime}}\) and \(X_{k^{\prime\prime}}\) coincide on some \(0\)-neighborhood of \(X_{k}\). In this case, \(X:=\bigcup_{k}X_{k}\) is Hausdorff if and only if all \(X_{k}\) are Hausdorff [39, Proposition 6.3]. It is said that \((X_{k})\) is _regular_ if any bounded \(B\subset X\) is contained and bounded in some step \(X_{k}\). If moreover the topologies of \(X\) and \(X_{k}\) coincide on \(B\), then \((X_{k})\) is said to be _boundedly retractive_. The conditions of being _compactly retractive_ or _sequentially retractive_ are similarly defined, using compact sets or convergent sequences. If the steps \(X_{k}\) are Frechet spaces, the above properties of \((X_{k})\) depend only on the LF-space \(X\)[39, Chapter 6, p. 111]; thus it may be said that they are properties of \(X\). In this case, \(X\) is acyclic if and only if it is boundedly/compactly/sequentially retractive [39, Proposition 6.4]. As a consequence, acyclic LF-spaces are complete and regular [39, Corollary 6.5]. A topological vector subspace \(Y\subset X\) is called a _limit subspace_ if \(Y\equiv\bigcup_{k}Y_{k}\), where \(Y_{k}=X\cap Y_{k}\). This condition is satisfied if and only if the spectrum consisting of the spaces \(X_{k}/Y_{k}\) is acyclic [39, Chapter 6, p. 110]. Assume the steps \(X_{k}\) are LCHSs. It is said that \((X_{k})\) is _compact_ if the inclusion maps are compact operators. In this case, \((X_{k})\) is clearly acyclic, and so \(X\) is Hausdorff. Moreover \(X\) is a complete bornological DF Montel space [18, Theorem 6']. The above concepts and properties also apply to an inductive/projective spectrum consisting of continuous inclusions \(X_{r}\subset X_{r^{\prime}}\) for \(r<r^{\prime}\) in \(\mathbb{R}\) because \(\bigcap_{r}X_{r}=\bigcap_{k}X_{r_{k}}\) and \(\bigcup_{r}X_{r}=\bigcup_{k}X_{s_{k}}\) for sequences \(r_{k}\downarrow-\infty\) and \(s_{k}\uparrow\infty\). In the category of continuous linear maps between LCSs, the exactness of a sequence \(0\to X\to Y\to Z\to 0\) means that it is exact as a sequence of linear maps and consists of topological homomorphisms [39, Sections 2.1 and 2.2]. 
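For later reference, the acyclicity criterion recalled above admits the following symbolic restatement (keeping the quantifier order of the quoted description): the spectrum \((X_{k})\) is acyclic if and only if \[\forall k\ \ \exists k^{\prime}\geq k\ \ \forall k^{\prime\prime}\geq k^{\prime}\ \ \exists U\in\mathcal{U}_{0}(X_{k}):\qquad\tau_{k^{\prime}}|_{U}=\tau_{k^{\prime\prime}}|_{U}\;,\] where \(\mathcal{U}_{0}(X_{k})\) denotes the filter of \(0\)-neighborhoods of \(X_{k}\) and \(\tau_{k}\) the topology of \(X_{k}\). In particular, a strict spectrum, where every \(X_{k}\) carries the topology induced from \(X_{k+1}\), satisfies this condition trivially with \(k^{\prime}=k\) and \(U=X_{k}\).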
### Smooth functions on open subsets of \(\mathbb{R}^{n}\) For an open \(U\subset\mathbb{R}^{n}\) (\(n\in\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\)), we will use the Frechet space \(C^{\infty}(U)\) of smooth (\(\mathbb{K}\)-valued) functions on \(U\) with the topology of uniform approximation of all partial derivatives on compact subsets, which is described by the semi-norms \[\|u\|_{K,C^{k}}=\sup_{x\in K,\ |\alpha|\leq k}|\partial^{\alpha}u(x)|\;, \tag{2.1}\] for any compact \(K\subset U\), \(k\in\mathbb{N}_{0}\) and \(\alpha\in\mathbb{N}_{0}^{n}\), using standard multi-index notation. (Sometimes the notation \(C^{\infty}_{\mathrm{loc}}(U)\) is used for this space, and \(C^{\infty}(U)\) is used for the uniform space denoted by \(C^{\infty}_{\mathrm{ub}}(U)\) in this paper.) For any \(S\subset M\), the notation \(C^{\infty}_{S}(U)\) is used for the subspace of smooth functions supported in \(S\) (with the subspace topology). (The common notation \(C^{\infty}(S)=C^{\infty}_{S}(U)\) would be confusing when extended to other function spaces.) Recall also the strict LF-space of compactly supported functions, \[C^{\infty}_{\rm c}(U)=\bigcup_{K}C^{\infty}_{K}(U)\;, \tag{2.2}\] for compact subsets \(K\subset M\) (an exhausting increasing sequence of compact subsets is enough). The above definitions have straightforward generalizations to the case of functions with values in \(\mathbb{K}^{l}\), obtaining \[C^{\infty}_{\cdot/{\rm c}}(U,\mathbb{K}^{l})\equiv C^{\infty}_{\cdot/{\rm c}}(U )\otimes\mathbb{K}^{l}\;. \tag{2.3}\] (The notation \(C^{\infty}_{\cdot/c}\) or \(C^{\infty}_{c/\cdot}\) refers to both \(C^{\infty}\) and \(C^{\infty}_{\rm c}\).) ### Vector bundles The notation \(M\) will be used for a smooth manifold of dimension \(n\), and \(E\) for a (\(\mathbb{K}\)-) vector bundle over \(M\). The fibers of \(E\) are denoted by \(E_{x}\) (\(x\in M\)), the zero in every \(E_{x}\) by \(0_{x}\), and the image of the zero section by \(0_{M}\). Let \(\Omega^{a}E\) (\(a\in\mathbb{R}\)) denote the line bundle of \(a\)-densities of \(E\), let \(\Omega E=\Omega^{1}E\), and let \(o(E)\) be the flat line bundle of orientations of \(E\). We may use the notation \(E_{L}=E|_{L}\) for the restriction of \(E\) to a submanifold \(L\subset M\). As particular cases, we have the tangent and cotangent \(\mathbb{R}\)-vector bundles, \(TM\) and \(T^{*}M\), and the associated \(\mathbb{K}\)-vector bundles \(o(M)=o(TM)\), \(\Omega^{a}M=\Omega^{a}TM\) and \(\Omega M=\Omega TM\). ### Smooth and distributional sections Our notation for spaces of distributional sections mainly follows [25], with some minor changes to fit our application in [4]. Some notation from [14, 15] is also used. Generalizing \(C^{\infty}(U,\mathbb{K}^{l})\), we have the Frechet space \(C^{\infty}(M;E)\) of smooth sections of \(E\), whose topology is described by semi-norms \(\|\cdot\|_{K,C^{k}}\) defined as in (2.1) via charts \((U,x)\) of \(M\) and diffeomorphisms of triviality \(E_{U}\equiv U\times\mathbb{K}^{l}\), with \(K\subset U\). This procedure is standard and will be used again with other section spaces. Redundant notation will be removed as usual. For instance, we write \(C^{\infty}(M)\) (respectively, \(C^{\infty}(M,\mathbb{K}^{l})\)) in the case of the trivial vector bundle of rank 1 (respectively, \(l\)). We also write \(C^{\infty}(L,E)=C^{\infty}(L,E_{L})\) and \(C^{\infty}(M;\Omega^{a})=C^{\infty}(M;\Omega^{a}M)\). We may write \(C^{\infty}(E)=C^{\infty}(M;E)\) if \(M\) is fixed, but this may also mean the space of smooth functions on \(E\). 
In particular, \(\mathfrak{X}(M)=C^{\infty}(M;TM)\) is the Lie algebra of vector fields. The subspace \(C^{\infty}_{\mathcal{S}}(M;E)\) is defined like in Section 2.2. Similar notation will be used with any LCHS and \(C^{\infty}(M)\)-module continuously included in \(C^{\infty}(M;E)\), or in the space \(C^{-\infty}(M;E)\) defined below. The notation \(C^{\infty}(M;E)\), or \(C^{\infty}(E)\), is also used with any smooth fiber bundle \(E\), obtaining a completely metrizable topological space with the weak \(C^{\infty}\) topology. The strict LF-space \(C^{\infty}_{\rm c}(M;E)\) of compactly supported smooth sections is defined like in (2.2), using compact subsets \(K\subset M\). There is a continuous inclusion \(C^{\infty}_{\rm c}(M;E)\subset C^{\infty}(M;E)\). If \(M\) is a fiber bundle, the LCHS \(C^{\infty}_{\rm c}(M;E)\) of smooth sections with compact support in the vertical direction is similarly defined using (2.1) and (2.2) with closed subsets \(K\subset M\) whose intersection with the fibers is compact (now an exhaustive increasing sequence of such subsets \(K\) is not enough). The space of distributional sections with arbitrary/compact support is \[C^{-\infty}_{\cdot/{\rm c}}(M;E)=C^{\infty}_{c/\cdot}(M;E^{*}\otimes\Omega)^ {\prime}\;. \tag{2.4}\] (In [14], these dual spaces are endowed with the weak topology, contrary to our convention.) Integration of smooth densities on \(M\) and the canonical pairing of and \(E^{*}\) define a dense continuous inclusion \(C^{\infty}_{\cdot/\mathrm{c}}(M;E)\subset C^{-\infty}_{\cdot/\mathrm{c}}(M;E)\). If \(U\subset M\) is open, the extension by zero defines a TVS-embedding \(C^{\pm\infty}_{\mathrm{c}}(U;E)\subset C^{\pm\infty}_{\mathrm{c}}(M;E)\). The above spaces of distributional sections are locally finitely \(C^{\infty}(M)\)-generated, and can be also described in terms of the corresponding spaces of distributions as the algebraic tensor product as \(C^{\infty}(M)\)-modules \[C^{-\infty}_{\cdot/\mathrm{c}}(M;E)\equiv C^{-\infty}_{\cdot/\mathrm{c}}(M) \otimes_{C^{\infty}(M)}C^{\infty}(M;E)\;. \tag{2.5}\] To show this identity, \(E\) can be realized as a vector subbundle of a trivial vector bundle \(F=M\times\mathbb{K}^{l^{\prime}}\)[12, Theorem 4.3.1]. Then, like in (2.3), \[C^{-\infty}_{\cdot/\mathrm{c}}(M;F) \equiv C^{-\infty}_{\cdot/\mathrm{c}}(M)\otimes\mathbb{K}^{l^{ \prime}}\equiv C^{-\infty}_{\cdot/\mathrm{c}}(M)\otimes_{C^{\infty}(M)}C^{ \infty}(M)\otimes\mathbb{K}^{l^{\prime}}\] \[\equiv C^{-\infty}_{\cdot/\mathrm{c}}(M)\otimes_{C^{\infty}(M)}C^{ \infty}(M;F)\;,\] and the spaces of (2.5) clearly correspond by these identities. Expressions like (2.5) hold for most of the spaces of distributional sections we will consider. Thus, from now on, we will mostly define and study spaces for the trivial line bundle or density bundles, and then the notation for arbitrary vector bundles will be used without further comment, and the properties have straightforward extensions. Consider also the Frechet space \(C^{k}(M)\) (\(k\in\mathbb{N}_{0}\)) of \(C^{k}\) functions, with the seminorms \(\left\|\cdot\right\|_{K,C^{k}}\) given by (2.1), the LF-space \(C^{k}_{\mathrm{c}}(M)\) of \(C^{k}\) functions with compact support, defined like in (2.2), and the space \(C^{\prime\,-k}_{\cdot/\mathrm{c}}(M)\) of distributions of order \(k\) with arbitrary/compact support, defined like in (2.4). (A prime is added to this notation to distinguish \(C^{\prime\,0}_{\cdot/\mathrm{c}}(M)\) from \(C^{0}_{\cdot/\mathrm{c}}(M)\).) 
There are dense continuous inclusions \[C^{k^{\prime}}_{\cdot/\mathrm{c}}(M)\subset C^{k}_{\cdot/\mathrm{c}}(M)\;,\quad C ^{\prime\,-k^{\prime}}_{\mathrm{c}/\cdot}(M)\supset C^{\prime\,-k}_{\mathrm{ c}/\cdot}(M)\quad(k<k^{\prime})\;, \tag{2.6}\] with \(\bigcap_{k}C^{k}_{\cdot/\mathrm{c}}(M)=C^{\infty}_{\cdot/\mathrm{c}}(M)\) and \(\bigcup_{k}C^{\prime\,-k}_{\mathrm{c}}(M)=C^{-\infty}_{\mathrm{c}}(M)\)[28, Exercise 12.108]. The space \(\bigcup_{k}C^{\prime\,-k}(M)\) consists of the distributions with some order; it is \(C^{-\infty}(M)\) just when \(M\) is compact. Let us recall some properties of the spaces we have seen. In addition of the fact that \(C^{\infty}(M)\) and \(C^{k}(M)\) are Frechet spaces [16, Example 2.9.3], \(C^{\infty}_{\mathrm{c}}(M)\) and \(C^{k}_{\mathrm{c}}(M)\) are complete and Hausdorff [16, Examples 2.12.6 and 2.12.8]. \(C^{\infty}_{\cdot/\mathrm{c}}(M)\) and \(C^{k}_{\cdot/\mathrm{c}}(M)\) are ultrabornological because this property is satisfied by Frechet spaces and preserved by inductive limits [28, Example 13.2.8 (d) and Theorem 13.2.11], and therefore they are barreled [29, Observation 6.1.2 (b)]. \(C^{\pm\infty}_{\cdot/\mathrm{c}}(M)\) is a Montel space (in particular, barreled) [16, Examples 3.9.3, 3.9.4 and 3.9.6 and Proposition 3.9.9], [9, Section 8.4.7, Theorem 8.4.11 and Application 8.4.12], [31, the paragraph before IV.5.9], and therefore reflexive [9, Section 8.4.7], [21, 6.27.2 (1)], [31, IV.5.8]. \(C^{\infty}_{\cdot/\mathrm{c}}(M)\) is a Schwartz space [16, Examples 3.15.2 and 3.15.3], and therefore \(C^{-\infty}_{\cdot/\mathrm{c}}(M)\) is ultrabornological [16, Exercise 3.15.9 (c)]. \(C^{\infty}(M)\) is distinguished [16, Examples 3.16.1]. \(C^{\pm\infty}_{\cdot/\mathrm{c}}(M)\) is webbed because this property is satisfied by LF-spaces and strong duals of strict inductive limits of sequences of metrizable LCSs [7, Proposition IV.4.6], [22, 7.35.1 (4) and 7.35.4 (8)], [28, Theorem 14.6.5]. ### Linear operators on section spaces Let \(E\) and \(F\) be vector bundles over \(M\), and let \(A:C^{\infty}_{\mathrm{c}}(M;E)\to C^{\infty}(M;F)\) be a continuous linear map. Recall that the _transpose_ of \(A\) is the continuous linear map \[A^{t}:C_{\rm c}^{-\infty}(M;F^{*}\otimes\Omega)\to C^{-\infty}(M;E^{*} \otimes\Omega)\;,\] \[\langle A^{t}v,u\rangle=\langle v,Au\rangle\;,\quad u\in C_{\rm c} ^{\infty}(M;E)\;,\quad v\in C_{\rm c}^{-\infty}(M;F^{*}\otimes\Omega)\;.\] For instance, the transpose of \(C_{\rm c}^{\infty}(M;E^{*}\otimes\Omega)\subset C^{\infty}(M;E^{*}\otimes\Omega)\) is a continuous dense injection \(C_{\rm c}^{-\infty}(M;E)\subset C^{-\infty}(M;E)\). If \(A^{t}\) restricts to a continuous linear map \(C_{\rm c}^{\infty}(M;F^{*}\otimes\Omega)\to C^{\infty}(M;E^{*}\otimes\Omega)\), then \(A^{tt}:C_{\rm c}^{-\infty}(M;E)\to C^{-\infty}(M;F)\) is a continuous extension of \(A\), also denoted by \(A\). There are versions of the construction of \(A^{t}\) and \(A^{tt}\) when both the domain and codomain of \(A\) have compact support, or no support restriction. For example, for any open \(U\subset M\), the transpose of the extension by zero \(C_{\rm c}^{\infty}(U;E^{*}\otimes\Omega)\subset C_{\rm c}^{\infty}(M;E^{*} \otimes\Omega)\) is the restriction map \(C^{-\infty}(M;E)\to C^{-\infty}(U,E)\), \(u\mapsto u|_{U}\), and the transpose of the restriction map \(C^{\infty}(M;E^{*}\otimes\Omega)\to C^{\infty}(U,E^{*}\otimes\Omega)\) is the extension by zero \(C_{\rm c}^{-\infty}(U;E)\subset C_{\rm c}^{-\infty}(M;E)\). 
In the whole paper, inclusion maps may be denoted by \(\iota\) and restriction maps by \(R\), without further comment. Other related concepts and results, like singular support, Schwartz kernel and the Schwartz kernel theorem, can be seen e.g. in [24]. ### Pull-back and push-forward of distributional sections Recall that any smooth map \(\phi:M^{\prime}\to M\) induces the continuous linear pull-back map \[\phi^{*}:C^{\infty}(M;E)\to C^{\infty}(M^{\prime};\phi^{*}E)\;. \tag{2.7}\] Suppose that moreover \(\phi\) is a submersion. Then it also induces the continuous linear push-forward map \[\phi_{*}:C_{\rm c}^{\infty}(M^{\prime};\phi^{*}E\otimes\Omega_{\rm fiber}) \to C_{\rm c}^{\infty}(M;E)\;, \tag{2.8}\] where \(\Omega_{\rm fiber}=\Omega_{\rm fiber}M^{\prime}=\Omega\mathcal{V}\) for the vertical subbundle \(\mathcal{V}=\ker\phi_{*}\subset TM^{\prime}\). Since \(\phi^{*}\Omega M\equiv\Omega(TM/\mathcal{V})\equiv\Omega_{\rm fiber}^{-1} \otimes\Omega M^{\prime}\), the transposes of the versions of (2.7) and (2.8) with \(E^{*}\otimes\Omega M\) are continuous extensions of (2.8) and (2.7) [14, Theorem 6.1.2], \[\phi_{*}:C_{\rm c}^{-\infty}(M^{\prime};\phi^{*}E\otimes\Omega_{ \rm fiber})\to C_{\rm c}^{-\infty}(M;E)\;, \tag{2.10}\] \[\phi^{*}:C^{-\infty}(M;E)\to C^{-\infty}(M^{\prime};\phi^{*}E)\;, \tag{2.9}\] also called push-forward and pull-back linear maps. The term integration along the fibers is also used for \(\phi_{*}\). If \(\phi:M^{\prime}\to M\) is a proper local diffeomorphism, then we can omit \(\Omega_{\rm fiber}\) and the compact support condition in (2.8) and (2.9), and therefore the compositions \(\phi_{*}\phi^{*}\) and \(\phi^{*}\phi_{*}\) are defined on smooth/distributional sections. The space \(C^{\infty}(M^{\prime};\phi^{*}E)\) becomes a \(C^{\infty}(M)\)-module via the algebra homomorphism \(\phi^{*}:C^{\infty}(M)\to C^{\infty}(M^{\prime})\), and we have \[C_{\cdot/{\rm c}}^{\pm\infty}(M^{\prime};\phi^{*}E)=C_{\cdot/{\rm c}}^{\pm \infty}(M^{\prime})\otimes_{C^{\infty}(M)}C^{\infty}(M;E)\;. \tag{2.11}\] Using (2.5) and (2.11), we can describe (2.7)-(2.10) as the \(C^{\infty}(M)\)-tensor products of their trivial-line-bundle versions with the identity map on \(C^{\infty}(M;E)\). Thus, from now on, only pull-back and push-forward of distributions will be considered. ### Differential operators Let \(\operatorname{Diff}(M)\) be the filtered algebra and \(C^{\infty}(M)\)-module of differential operators, filtered by the order. Every \(\operatorname{Diff}^{m}(M)\) (\(m\in\mathbb{N}_{0}\)) is spanned as \(C^{\infty}(M)\)-module by all compositions of up to \(m\) elements of \(\mathfrak{X}(M)\), considered as the Lie algebra of derivations of \(C^{\infty}_{\cdot/\mathrm{c}}(M)\). In particular, \(\operatorname{Diff}^{0}(M)\equiv C^{\infty}(M)\). For vector bundles \(E\) and \(F\) over \(M\), the above concepts can be extended by taking the \(C^{\infty}(M)\)-tensor product with \(C^{\infty}(M;F\otimes E^{*})\), obtaining \(\operatorname{Diff}^{m}(M;E,F)\) (\(\operatorname{Diff}^{m}(M;E)\) being obtained if \(E=F\)); here, redundant notation is simplified like in the case of \(C^{\pm\infty}(M;E)\) (Section 2.4). 
If \(E\) is a line bundle, then \[\operatorname{Diff}^{m}(M;E) \equiv\operatorname{Diff}^{m}(M)\otimes_{C^{\infty}(M)}C^{\infty} (M;E\otimes E^{*}) \tag{2.12}\] \[\equiv\operatorname{Diff}^{m}(M)\otimes_{C^{\infty}(M)}C^{\infty} (M)\equiv\operatorname{Diff}^{m}(M)\;.\] Any \(A\in\operatorname{Diff}^{m}(M;E)\) defines a continuous linear endomorphism \(A\) of \(C^{\infty}_{\cdot/\mathrm{c}}(M;E)\). We get \(A^{t}\in\operatorname{Diff}^{m}(M;E^{*}\otimes\Omega)\) using integration by parts. So \(A\) has continuous extensions to a continuous endomorphism \(A\) of \(C^{-\infty}_{\cdot/\mathrm{c}}(M;E)\) (Section 2.5). A similar map is defined when \(A\in\operatorname{Diff}^{m}(M;E,F)\). Other related concepts like symbols and ellipticity can be seen e.g. in [24]. ### \(L^{2}\) sections Recall that the Hilbert space \(L^{2}(M;\Omega^{1/2})\) of square-integrable half-densities is the completion of \(C^{\infty}_{\mathrm{c}}(M;\Omega^{1/2})\) with the scalar product \(\langle u,v\rangle=\int_{M}u\bar{v}\). The induced norm is denoted by \(\|\cdot\|\). If \(M\) is compact, the space \(L^{2}(M;E)\) of square-integrable sections of \(E\) can be described as the \(C^{\infty}(M)\)-tensor product of \(L^{2}(M;\Omega^{1/2})\) and \(C^{\infty}(M;\Omega^{-1/2}\otimes E)\). It becomes a Hilbert space with the scalar product \(\langle u,v\rangle=\int_{M}(u,v)\,\omega\) determined by the choice of a Euclidean/Hermitian structure \((\cdot,\cdot)\) on \(E\) and a non-vanishing \(\omega\in C^{\infty}(M;\Omega)\). The equivalence class of its norm \(\|\cdot\|\) is independent of those choices; in this sense, \(L^{2}(M;E)\) is called a _Hilbertian space_ if no norm is distinguished. When \(M\) is not assumed to be compact, any choice of \((\cdot,\cdot)\) and \(\omega\) can be used to define \(L^{2}(M;E)\) and \(\langle\cdot,\cdot\rangle\). Now \(L^{2}(M;E)\) and the equivalence class of \(\|\cdot\|\) depends on the choices involved. The independence still holds for sections supported in any compact \(K\subset M\), obtaining the Hilbertian space \(L^{2}_{K}(M;E)\). Then we get the strict LF-space \(L^{2}_{\mathrm{c}}(M;E)\) is defined like in (2.2). On the other hand, let \[L^{2}_{\mathrm{loc}}(M;E)=\{\,u\in C^{-\infty}(M;E)\mid C^{\infty}_{\mathrm{c }}(M)\,u\subset L^{2}_{\mathrm{c}}(M;E)\,\}\;, \tag{2.13}\] which is a Frechet space with the semi-norms \(u\mapsto\|f_{k}u\|\), for a countable partition of unity \(\{f_{k}\}\subset C^{\infty}_{\mathrm{c}}(M)\). If \(M\) is compact, then \(L^{2}_{\mathrm{loc/c}}(M;E)\equiv L^{2}(M;E)\) as TVSs. The spaces \(L^{2}_{\mathrm{loc/c}}(M;E)\) satisfy the obvious version of (2.4). Any \(A\in\operatorname{Diff}^{m}(M;E)\) can be considered as a densely defined operator in \(L^{2}(M;E)\). Integration by parts shows that the adjoint \(A^{*}\) is defined by an element \(A^{*}\in\operatorname{Diff}^{m}(M;E)\) (the _formal adjoint_ of \(A\)). ### \(L^{\infty}\) sections A Euclidean/Hermitian structure can be also used to define the Banach space \(L^{\infty}(M;E)\) of its essentially bounded sections, with the norm \(\|u\|_{L^{\infty}}=\operatorname{ess\,sup}_{x\in M}|u(x)|\). There is a continuous injection \(L^{\infty}(M;E)\subset L^{2}_{\mathrm{loc}}(M;E)\). If \(M\) is compact, then the equivalence class of \(\|\cdot\|_{L^{\infty}}\) is independent of \((\cdot,\cdot)\). ### Sobolev spaces #### 2.10.1. 
Local and compactly supported versions Recall that the Fourier transform, \(f\mapsto\hat{f}\), defines a TVS-automorphism of the Schwartz space \(\mathcal{S}(\mathbb{R}^{n})\), which extends to a TVS-automorphism of the space \(\mathcal{S}(\mathbb{R}^{n})^{\prime}\) of tempered distributions [14, Section 7.1]. In turn, for every \(s\in\mathbb{R}\), this automorphism of \(\mathcal{S}(\mathbb{R}^{n})^{\prime}\) restricts to a unitary isomorphism \[H^{s}(\mathbb{R}^{n})\stackrel{{\cong}}{{\to}}L^{2}(\mathbb{R}^{n},(1+|\xi|^{2})^{s}\,d\xi)\;,\quad f\mapsto\hat{f}\;, \tag{2.14}\] for some Hilbert space \(H^{s}(\mathbb{R}^{n})\), called the Sobolev space of order \(s\) of \(\mathbb{R}^{n}\). There is a canonical continuous inclusion \(H^{s}(\mathbb{R}^{n})\subset C^{-\infty}(\mathbb{R}^{n})\). For any compact \(K\subset\mathbb{R}^{n}\), we have the Hilbert subspace \(H^{s}_{K}(\mathbb{R}^{n})\subset H^{s}(\mathbb{R}^{n})\) of elements supported in \(K\). Then the LCHSs \(H^{s}_{\text{c/loc}}(U)\) are defined like in (2.4) and (2.13), using the spaces \(H^{s}_{K}(\mathbb{R}^{n})\) for compact subsets \(K\subset U\). They are continuously included in \(C^{-\infty}_{\mathrm{c}/\cdot}(U)\). For a manifold \(M\), the definition of the LCHSs \(H^{s}_{\text{c/loc}}(M)\) can be extended in a standard way, by using a locally finite atlas and a partition of unity consisting of compactly supported smooth functions. These are the compactly supported and local versions of the Sobolev space of order \(s\) of \(M\). They are continuously included in \(C^{-\infty}_{\mathrm{c}/\cdot}(M)\). #### 2.10.2. Case of compact manifolds Suppose for a while that \(M\) is compact. Then \(H^{s}(M):=H^{s}_{\text{loc}}(M)=H^{s}_{\text{c}}(M)\) is a Hilbertian space called the _Sobolev space_ of order \(s\) of \(M\). We have \[H^{-s}(M)=H^{s}(M;\Omega)^{\prime}\;, \tag{2.15}\] given by (2.4). Moreover there are dense continuous inclusions, \[H^{s}(M)\subset H^{s^{\prime}}(M)\;, \tag{2.16}\] for \(s^{\prime}<s\), and \[H^{s}(M)\subset C^{k}(M)\subset H^{k}(M)\;, \tag{2.17}\] \[H^{-s}(M)\supset C^{\prime\,-k}(M)\supset H^{-k}(M)\;, \tag{2.18}\] for \(s>k+n/2\). The first inclusion of (2.17) is the Sobolev embedding theorem, and (2.18) is the transpose of the version of (2.17) with \(\Omega M\). Moreover the inclusions (2.16) are compact (Rellich theorem). So the spaces \(H^{s}_{\text{loc/c}}(M)\) form a compact spectrum with \[C^{\infty}(M)=\bigcap_{s}H^{s}(M)\;,\quad C^{-\infty}(M)=\bigcup_{s}H^{s}(M)\;. \tag{2.19}\] Any \(A\in\text{Diff}^{m}(M;E)\) defines a bounded operator \(A:H^{s+m}(M;E)\to H^{s}(M;E)\). It can be considered as a densely defined operator in \(H^{s}(M;E)\), which is closable because, after fixing a scalar product in \(H^{s}(M;E)\), the adjoint of \(A\) in \(H^{s}(M;E)\) is densely defined since it is induced by \(\bar{A}^{t}\in\text{Diff}^{m}(M;\bar{E}^{*}\otimes\Omega)\) via the identity \(H^{s}(M;E)\equiv H^{s}(M;\bar{E})^{\prime}=H^{-s}(M;\bar{E}^{*}\otimes\Omega)\), where the bar stands for the complex conjugate. In the case \(s=0\), the adjoint of \(A\) is induced by the formal adjoint \(A^{*}\in\text{Diff}^{m}(M;E)\). By the elliptic estimate, a scalar product on \(H^{s}(M)\) can be defined by \(\langle u,v\rangle_{s}=\langle(1+P)^{s}u,v\rangle\), for any choice of a nonnegative symmetric elliptic \(P\in\text{Diff}^{2}(M)\), where \(\langle\cdot,\cdot\rangle\) is defined like in Section 2.8 and \((1+P)^{s}\) is given by the spectral theorem for all \(s\in\mathbb{R}\).
The corresponding norm \(\|\cdot\|_{s}\) is independent of the choice of \(P\). For a vector bundle \(E\), a precise scalar product on \(H^{s}(M;E)\) can be defined as above, using any choice of a Euclidean/Hermitian structure \((\cdot,\cdot)\) on \(E\) and a non-vanishing \(\omega\in C^{\infty}(M;\Omega)\) (Section 2.8), besides a nonnegative symmetric elliptic \(P\in\operatorname{Diff}^{2}(M;E)\). If \(E=\Omega^{1/2}M\), then \(\langle\cdot,\cdot\rangle_{s}\) can be defined independently of \((\cdot,\cdot)\) and \(\omega\) (Section 2.8). If \(s\in\mathbb{N}_{0}\), we can also describe \[H^{s}(M) =\{\,u\in C^{-\infty}(M)\mid\operatorname{Diff}^{s}(M)\,u\subset L^{2}(M)\,\}\;, \tag{2.20}\] \[H^{-s}(M) =\operatorname{Diff}^{s}(M)\,L^{2}(M)\;, \tag{2.21}\] with the respective projective and injective topologies given by the maps \(A:H^{s}(M)\to L^{2}(M)\) and \(A:L^{2}(M)\to H^{-s}(M)\) (\(A\in\operatorname{Diff}^{s}(M)\)). #### 2.10.3. Extension to non-compact manifolds If \(M\) is not assumed to be compact, then \(H^{s}(M;E)\) can be defined as the completion of \(C^{\infty}_{\mathrm{c}}(M;E)\) with respect to the scalar product \(\langle\cdot,\cdot\rangle_{s}\) defined by the above choices of \((\cdot,\cdot)\), \(\omega\) and \(P\); in this case, \(H^{s}(M;E)\) and the equivalence class of \(\|\cdot\|_{s}\) depend on the choices involved. For instance, in (2.14), \(H^{s}(\mathbb{R}^{n})\) can be also described with the Laplacian of \(\mathbb{R}^{n}\) and the standard density and Euclidean/Hermitian structure. The version of (2.15) with \(E\) can be used to define \(H^{-s}(M;E)\). With this generality, the versions of (2.20), (2.21), the right-hand side inclusions of (2.17) and (2.18), and the inclusions "\(\subset\)" of (2.19) are wrong, but the versions of (2.16), the left-hand side continuous inclusions of (2.17) and (2.18), and the continuous inclusion "\(\supset\)" of (2.19) are true. Thus the intersection and union of (2.19) define new LCHSs \(H^{\pm\infty}(M)\), which are continuously included in \(C^{\pm\infty}(M)\). Any \(A\in\operatorname{Diff}^{m}(M;E)\) defines continuous linear maps \(A:H^{s}_{\mathrm{c/loc}}(M;E)\to H^{s-m}_{\mathrm{c/loc}}(M;E)\). ### Weighted spaces Assume first that \(M\) is compact. Take any \(h\in C^{\infty}(M)\) which is positive almost everywhere; for instance, \(\{h=0\}\) could be any countable union of submanifolds of positive codimension. Then the _weighted Sobolev space_ \(hH^{s}(M;E)\) is a Hilbertian space; a scalar product \(\langle\cdot,\cdot\rangle_{hH^{s}}\) is given by \(\langle u,v\rangle_{hH^{s}}=\langle h^{-1}u,h^{-1}v\rangle_{s}\), depending on the choice of a scalar product \(\langle\cdot,\cdot\rangle_{s}\) on \(H^{s}(M;E)\) like in Section 2.10. The corresponding norm is denoted by \(\|\cdot\|_{hH^{s}}\). In particular, we get the _weighted \(L^{2}\) space_ \(hL^{2}(M;E)\). We have \(h>0\) just when \(hH^{m}(M;E)=H^{m}(M;E)\); in this case, \(\langle\cdot,\cdot\rangle_{hH^{s}}\) can be described like \(\langle\cdot,\cdot\rangle_{s}\) using \(h^{-2}\omega\) instead of \(\omega\). Thus the notation \(hH^{m}(M;E)\) for \(h>0\) is used when changing the density; e.g., if it is different from a distinguished choice, say a Riemannian volume. If \(M\) is not compact, \(hH^{s}(M;E)\) and \(\langle u,v\rangle_{hH^{s}}\) depend on \(h\) and the chosen definitions of \(H^{s}(M;E)\) and \(\langle u,v\rangle_{s}\) (Section 2.10).
We also get the weighted spaces \(hH^{s}_{\mathrm{c/loc}}(M;E)\), and the weighted Banach space \(hL^{\infty}(M;E)\) with the norm \(\|u\|_{hL^{\infty}}=\|h^{-1}u\|_{L^{\infty}}\). There is a continuous injection \(hL^{\infty}(M;E)\subset hL^{2}_{\mathrm{loc}}(M;E)\). ### Bounded geometry Concerning this topic, we follow [10, 30, 35, 32, 33]; see also [2] for the way we present it and examples. #### 2.12.1. Manifolds and vector bundles of bounded geometry The concepts recalled here become relevant when \(M\) is not compact. Equip \(M\) with a Riemannian metric \(g\), and let \(\nabla\) denote its Levi-Civita connection, \(R\) its curvature and \(\operatorname{inj}_{M}\geq 0\) its injectivity radius (the infimum of the injectivity radius at all points). If \(M\) is connected, we have an induced distance function \(d\). If \(M\) is not connected, we can also define \(d\) taking \(d(p,q)=\infty\) if \(p\) and \(q\) belong to different connected components. Observe that \(M\) is complete if \(\operatorname{inj}_{M}>0\). For \(r>0\) and \(p\in M\), let \(B(p,r)\) and \(\overline{B}(p,r)\) denote the open and closed \(r\)-balls centered at \(p\). Recall that \(M\) is said to be of _bounded geometry_ if \(\operatorname{inj}_{M}>0\) and \(\sup|\nabla^{m}R|<\infty\) for every \(m\in\mathbb{N}_{0}\). This concept has the following chart description. **Theorem 2.1** (Eichhorn [10]; see also [30, 32, 33]).: \(M\) _is of bounded geometry if and only if, for some open ball \(B\subset\mathbb{R}^{n}\) centered at \(0\), there are normal coordinates at every \(p\in M\) defining a diffeomorphism \(y_{p}:V_{p}\to B\) such that the corresponding Christoffel symbols \(\Gamma^{i}_{jk}\), as a family of functions on \(B\) parametrized by \(i\), \(j\), \(k\) and \(p\), lie in a bounded set of the Frechet space \(C^{\infty}(B)\). This equivalence holds as well replacing the Cristoffel symbols with the metric coefficients \(g_{ij}\)._ From now on in this subsection, assume \(M\) is of bounded geometry and consider the charts \(y_{p}:V_{p}\to B\) given by Theorem 2.1. The radius of \(B\) is denoted by \(r_{0}\). **Proposition 2.2** (Schick [32, Theorem A.22], [33, Proposition 3.3]).: _For every \(\alpha\in\mathbb{N}_{0}^{n}\), the function \(|\partial^{\alpha}(y_{q}y_{p}^{-1})|\) is bounded on \(y_{p}(V_{p}\cap V_{q})\), uniformly on \(p,q\in M\)._ **Proposition 2.3** (Shubin [35, Appendix A1.1, Lemma 1.2]).: _For any \(0<2r\leq r_{0}\), there is a subset \(\{p_{k}\}\subset M\) and some \(N\in\mathbb{N}\) such that the balls \(B(p_{k},r)\) cover \(M\), and every intersection of \(N+1\) sets \(B(p_{k},2r)\) is empty._ A vector bundle \(E\) of rank \(l\) over \(M\) is said to be of _bounded geometry_ when it is equipped with a family of local trivializations over the charts \((V_{p},y_{p})\), for small enough \(r_{0}\), with corresponding defining cocycle \(a_{pq}:V_{p}\cap V_{q}\to\operatorname{GL}(l,\mathbb{K})\subset\mathbb{K}^{l \times l}\), such that, for all \(\alpha\in\mathbb{N}_{0}^{n}\), the function \(|\partial^{\alpha}(a_{pq}y_{p}^{-1})|\) is bounded on \(y_{p}(V_{p}\cap V_{q})\), uniformly on \(p,q\in M\). When referring to local trivializations of a vector bundle of bounded geometry, we always mean that they satisfy this condition. If the corresponding defining cocycle is valued in the orthogonal/unitary group, then \(E\) is said to be of _bounded geometry_ as a Euclidean/Hermitian vector bundle. #### 2.12.2. 
Uniform spaces For every \(m\in\mathbb{N}_{0}\), a function \(u\in C^{m}(M)\) is said to be \(C^{m}\)_-uniformy bounded_ if there is some \(C_{m}\geq 0\) with \(|\nabla^{m^{\prime}}u|\leq C_{m}\) on \(M\) for all \(m^{\prime}\leq m\). These functions form the _uniform \(C^{m}\) space_\(C^{m}_{\mathrm{ub}}(M)\), which is a Banach space with the norm \(\|\cdot\|_{C^{m}_{\mathrm{ub}}}\) defined by the best constant \(C_{m}\). Equivalently, we may take the norm \(\|\cdot\|^{\prime}_{C^{m}_{\mathrm{ub}}}\) defined by the best constant \(C^{\prime}_{m}\geq 0\) such that \(|\partial^{\alpha}(uy_{p}^{-1})|\leq C^{\prime}_{m}\) on \(B\) for all \(p\in M\) and \(|\alpha|\leq m\); in fact, it is enough to consider any subset of points \(p\) so that \(\{V_{p}\}\) covers \(M\)[32, Theorem A.22], [33, Proposition 3.3]. The _uniform \(C^{\infty}\) space_ is \(C^{\infty}_{\mathrm{ub}}(M)=\bigcap_{m}C^{m}_{\mathrm{ub}}(M)\). This is a Frechet space with the semi-norms \(\|\cdot\|_{C^{m}_{\mathrm{ub}}}\) or \(\|\cdot\|^{\prime}_{C^{m}_{\mathrm{ub}}}\). It consists of the functions \(u\in C^{\infty}(M)\) such that all functions \(uy_{p}^{-1}\) lie in a bounded set of \(C^{\infty}(B)\), which are said to be \(C^{\infty}\)_-uniformy bounded._ The same definitions apply to functions with values in \(\mathbb{C}^{l}\). Moreover the definition of uniform spaces with covariant derivative can be also considered for non-complete Riemannian manifolds. **Proposition 2.4** (Shubin [35, Appendix A1.1, Lemma 1.3]; see also [33, Proposition 3.2]).: _Given \(r\), \(\{p_{k}\}\) and \(N\) like in Proposition 2.3, there is a partition of unity \(\{f_{k}\}\) subordinated to the open covering \(\{B(p_{k},r)\}\), which is bounded in the Frechet space \(C^{\infty}_{\mathrm{ub}}(M)\)._ For a Euclidean/Hermitian vector bundle \(E\) of bounded geometry over \(M\), the _uniform \(C^{m}\) space_\(C^{m}_{\mathrm{ub}}(M;E)\), of \(C^{m}\)_-uniformly bounded_ sections, can be defined by introducing \(\|\cdot\|^{\prime}_{C^{m}_{\mathrm{ub}}}\) like the case of functions, using local trivializations of \(E\) to consider every \(uy_{p}^{-1}\) in \(C^{m}(B,\mathbb{C}^{l})\) for all \(u\in C^{m}(M;E)\). Then, as above, we get the _uniform \(C^{\infty}\) space_\(C^{\infty}_{\mathrm{ub}}(M;E)\) of \(C^{\infty}\)_-uniformly bounded_ sections, which are the sections \(u\in C^{\infty}(M;E)\) such that all functions \(uy_{p}^{-1}\) define a bounded set of \(C^{\infty}_{\mathrm{ub}}(B;\mathbb{C}^{l})\). In particular, \(\mathfrak{X}_{\mathrm{ub}}(M):=C^{\infty}_{\mathrm{ub}}(M;TM)\) is a \(C^{\infty}_{\mathrm{ub}}(M)\)-submodule and Lie subalgebra of \(\mathfrak{X}(M)\). Observe that \[C^{m}_{\mathrm{ub}}(M)=\{\,u\in C^{m}(M)\mid\mathfrak{X}_{\mathrm{ub}}(M) \stackrel{{(m^{\prime})}}{{\cdots}}\mathfrak{X}_{\mathrm{ub}}(M )\,u\subset L^{\infty}(M)\ \forall m^{\prime}\leq m\,\}\;. \tag{2.22}\] #### 2.12.3. Differential operators of bounded geometry Like in Section 2.7, by using \(\mathfrak{X}_{\mathrm{ub}}(M)\) and \(C^{\infty}_{\mathrm{ub}}(M)\) instead of \(\mathfrak{X}(M)\) and \(C^{\infty}(M)\), we get the filtered subalgebra and \(C^{\infty}_{\mathrm{ub}}(M)\)-submodule \(\mathrm{Diff}_{\mathrm{ub}}(M)\subset\mathrm{Diff}(M)\) of differential operators of _bounded geometry_. 
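For instance, for \(\mathbb{R}^{n}\) with its Euclidean metric, \(\mathfrak{X}_{\mathrm{ub}}(\mathbb{R}^{n})\) consists of the vector fields \(\sum_{i}a^{i}\partial_{i}\) with \(a^{i}\in C^{\infty}_{\mathrm{ub}}(\mathbb{R}^{n})\), and therefore \[\mathrm{Diff}^{m}_{\mathrm{ub}}(\mathbb{R}^{n})=\{\,\textstyle\sum_{|\alpha|\le m}a_{\alpha}\,\partial^{\alpha}\mid a_{\alpha}\in C^{\infty}_{\mathrm{ub}}(\mathbb{R}^{n})\,\}\;.\]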
Moreover this concept can be extended to vector bundles of bounded geometry \(E\) and \(F\) over \(M\) by taking the \(C^{\infty}_{\mathrm{ub}}(M)\)-tensor product of \(\mathrm{Diff}_{\mathrm{ub}}(M)\) and \(C^{\infty}_{\mathrm{ub}}(M;F\otimes E^{*})\), obtaining the filtered \(C^{\infty}_{\mathrm{ub}}(M)\)-submodule \(\mathrm{Diff}_{\mathrm{ub}}(M;E,F)\subset\mathrm{Diff}(M;E,F)\) (or \(\mathrm{Diff}_{\mathrm{ub}}(M;E)\) if \(E=F\)). Bounded geometry of differential operators is preserved by compositions and by taking transposes, and by taking formal adjoints in the case of Hermitian vector bundles of bounded geometry; in particular, \(\mathrm{Diff}_{\mathrm{ub}}(M;E)\) is a filtered subalgebra of \(\mathrm{Diff}(M;E)\). Like in (2.12), we have \[\mathrm{Diff}^{m}_{\mathrm{ub}}(M;\Omega^{\frac{1}{2}})\equiv\mathrm{Diff}^{m}_ {\mathrm{ub}}(M)\;. \tag{2.23}\] Every \(A\in\mathrm{Diff}^{m}_{\mathrm{ub}}(M;E)\) defines continuous linear maps \(A:C^{m+k}_{\mathrm{ub}}(M;E)\to C^{k}_{\mathrm{ub}}(M;E)\) (\(k\in\mathbb{N}_{0}\)), which induce a continuous endomorphism \(A\) of \(C^{\infty}_{\mathrm{ub}}(M;E)\). It is said that \(A\) is _uniformly elliptic_ if there is some \(C\geq 1\) such that, for all \(p\in M\) and \(\xi\in T^{*}_{p}M\), its leading symbol \(\sigma_{m}(A)\) satisfies \[C^{-1}|\xi|^{m}\leq|\sigma_{m}(A)(p,\xi)|\leq C|\xi|^{m}\;.\] This condition is independent of the choice of the Hermitian metric of bounded geometry on \(E\). Any \(A\in\mathrm{Diff}^{m}_{\mathrm{ub}}(M;E,F)\) satisfies the second inequality. The case where \(A\in\mathrm{Diff}^{m}_{\mathrm{ub}}(M;E,F)\) is similar. #### 2.12.4. Sobolev spaces of manifolds of bounded geometry For any Hermitian vector bundle \(E\) of bounded geometry over \(M\), any nonnegative symmetric uniformly elliptic \(P\in\mathrm{Diff}^{2}_{\mathrm{ub}}(M;E)\) can be used to define the Sobolev space \(H^{s}(M;E)\) (\(s\in\mathbb{R}\)) with a scalar product \(\langle\cdot,\cdot\rangle_{s}\) (Section 2.10). Any choice of \(P\) defines the same Hilbertian space \(H^{s}(M;E)\), which is a \(C^{\infty}_{\mathrm{ub}}(M)\)-module. In particular, \(L^{2}(M;E)\) is the \(C^{\infty}_{\mathrm{ub}}(M)\)-tensor product of \(L^{2}(M;\Omega^{1/2})\) and \(C^{\infty}_{\mathrm{ub}}(M;E\otimes\Omega^{1/2})\), and \(H^{s}(M;E)\) is the \(C^{\infty}_{\mathrm{ub}}(M)\)-tensor product of \(H^{s}(M)\) and \(C^{\infty}_{\mathrm{ub}}(M;E)\). For instance, we may take \(P=\nabla^{*}\nabla\) for any unitary connection \(\nabla\) of bounded geometry on \(E\). For \(s\in\mathbb{N}_{0}\), the Sobolev space \(H^{s}(M)\) can be also described with the scalar product \[\langle u,v\rangle^{\prime}_{s}=\sum_{k}\sum_{|\alpha|\leq s}\int_{B}f_{k}^{2}( x)\cdot\partial^{\alpha}(uy_{p_{k}}^{-1})(x)\cdot\overline{\partial^{\alpha}(vy_{p_{k}}^{- 1})(x)}\,dx\;,\] using the partition of unity \(\{f_{k}\}\) given by Proposition 2.4 [32, Theorem A.22], [33, Propositions 3.2 and 3.3], [35, Appendices A1.2 and A1.3]. A similar scalar product \(\langle\cdot,\cdot\rangle^{\prime}_{s}\) can be defined for \(H^{s}(M;E)\) with the help of local trivializations defining the bounded geometry of \(E\). Every \(A\in\mathrm{Diff}^{m}_{\mathrm{ub}}(M;E)\) defines bounded operators \(A:H^{m+s}(M;E)\to H^{s}(M;E)\) (\(s\in\mathbb{R}\)), which induce a continuous endomorphism \(A\) of \(H^{\pm\infty}(M;E)\). For any almost everywhere positive \(h\in C^{\infty}(M)\), we have \(hH^{m}(M;E)=H^{m}(M;E)\) if and only if \(h>0\) and \(h^{\pm 1}\in C^{\infty}_{\mathrm{ub}}(M)\). 
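For instance, on \(\mathbb{R}^{n}\) with its Euclidean metric, the weight \(h(x)=e^{-\langle x\rangle}\), where \(\langle x\rangle=(1+|x|^{2})^{1/2}\), satisfies \(h>0\) and \(h\in C^{\infty}_{\mathrm{ub}}(\mathbb{R}^{n})\), whereas \(h^{-1}=e^{\langle x\rangle}\notin C^{\infty}_{\mathrm{ub}}(\mathbb{R}^{n})\); accordingly, \(hH^{m}(\mathbb{R}^{n})\) is a proper subspace of \(H^{m}(\mathbb{R}^{n})\); e.g. \(\langle x\rangle^{-N}\in H^{0}(\mathbb{R}^{n})\setminus hH^{0}(\mathbb{R}^{n})\) for \(N>n/2\).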
**Proposition 2.5** (Roe [30, Proposition 2.8]).: _If \(m^{\prime}>m+n/2\), then \(H^{m^{\prime}}(M;E)\subset C^{m}_{\mathrm{ub}}(M;E)\), continuously. Thus \(H^{\infty}(M;E)\subset C^{\infty}_{\mathrm{ub}}(M;E)\), continuously._ ## 3. Symbols The canonical coordinates of \(\mathbb{R}^{n}\times\mathbb{R}^{l}\) (\(n,l\in\mathbb{N}_{0}\)) are denoted by \((x,\xi)=(x^{1},\dots,x^{n},\xi^{1},\dots,\xi^{l})\), and let \(dx=dx^{1}\wedge\dots\wedge dx^{n}\) and \(d\xi=d\xi^{1}\wedge\dots\wedge d\xi^{l}\). Recall that a _symbol_ of _order_ at most \(m\in\mathbb{R}\) on \(U\times\mathbb{R}^{l}\), or simply on \(U\), is a function \(a\in C^{\infty}(U\times\mathbb{R}^{l})\) such that, for any compact \(K\subset U\), and multi-indices \(\alpha\in\mathbb{N}_{0}^{n}\) and \(\beta\in\mathbb{N}_{0}^{l}\), \[\|a\|_{K,\alpha,\beta,m}:=\sup_{x\in K,\;\xi\in\mathbb{R}^{l}}\frac{|\partial_{ x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)|}{(1+|\xi|)^{m-|\beta|}}<\infty\;. \tag{3.1}\] The set of symbols of order at most \(m\), \(S^{m}(U\times\mathbb{R}^{l})\), becomes a Frechet space with the semi-norms \(\|\cdot\|_{K,\alpha,\beta,m}\) given by (3.1). There are continuous inclusions \[S^{m}(U\times\mathbb{R}^{l})\subset S^{m^{\prime}}(U\times\mathbb{R}^{l})\quad (m<m^{\prime})\;, \tag{3.2}\] giving rise to the LCSs \[S^{\infty}(U\times\mathbb{R}^{l})=\bigcup_{m}S^{m}(U\times\mathbb{R}^{l})\;, \quad S^{-\infty}(U\times\mathbb{R}^{l})=\bigcap_{m}S^{m}(U\times\mathbb{R}^{ l})\;.\] The LF-space \(S^{\infty}(U\times\mathbb{R}^{l})\) is a filtered algebra and \(C^{\infty}(U)\)-module with the pointwise multiplication. The Frechet space \(S^{-\infty}(U\times\mathbb{R}^{l})\) is a filtered ideal and \(C^{\infty}(U)\)-submodule of \(S^{\infty}(U\times\mathbb{R}^{l})\). The homogeneous components of the corresponding graded algebra are \[S^{(m)}(U\times\mathbb{R}^{l})=S^{m}(U\times\mathbb{R}^{l})/S^{m-1}(U\times \mathbb{R}^{l})\;.\] The notation \(S^{m}(\mathbb{R}^{l})\), \(S^{\pm\infty}(\mathbb{R}^{l})\), etc. is used when \(U=\mathbb{R}^{0}=\{0\}\); in this case, the subscripts \(K\) and \(\alpha\) are omitted from the notation of the semi-norms in (3.1). Since \(S^{\infty}(U\times\mathbb{R}^{l})\) is an LF-space, we get the following (see Section 2.4). **Proposition 3.1**.: \(S^{\infty}(U\times\mathbb{R}^{l})\) _is barreled, ultrabornological and webbed._ There are continuous inclusions (see Section 2.4 for the definition of \(C^{\infty}_{\mathrm{cv}}(U\times\mathbb{R}^{l})\)) \[C^{\infty}_{\mathrm{cv}}(U\times\mathbb{R}^{l})\subset S^{-\infty}(U\times \mathbb{R}^{l})\;,\quad S^{\infty}(U\times\mathbb{R}^{l})\subset C^{\infty}(U \times\mathbb{R}^{l})\;; \tag{3.3}\] in particular, \(S^{\infty}(U\times\mathbb{R}^{l})\) is Hausdorff. According to (2.1) and (3.3), we get continuous semi-norms \(\|\cdot\|_{Q,C^{k}}\) on \(S^{\infty}(U\times\mathbb{R}^{l})\), for any compact \(Q\subset U\times\mathbb{R}^{l}\) and \(k\in\mathbb{N}_{0}\), given by \[\|a\|_{Q,C^{k}}=\sup_{(x,\xi)\in Q,\;|\alpha|+|\beta|\leq k}|\partial_{x}^{ \alpha}\partial_{\xi}^{\beta}a(x,\xi)|\;. \tag{3.4}\] With the notation of (3.1), consider also the continuous semi-norms \(\|\cdot\|^{\prime}_{K,\alpha,\beta,m}\) on \(S^{m}(U\times\mathbb{R}^{l})\) given by \[\|a\|^{\prime}_{K,\alpha,\beta,m}=\sup_{x\in K}\limsup_{|\xi|\to\infty}\frac{ \left|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)\right|}{|\xi|^{m-| \beta|}}\;. 
\tag{3.5}\] In the case of \(S^{m}(\mathbb{R}^{l})\), the subscripts \(K\) and \(\alpha\) are omitted from the notation of the semi-norms (3.5). **Proposition 3.2**.: _The semi-norms (3.4) and (3.5) together describe the topology of \(S^{m}(U\times\mathbb{R}^{l})\)._ Proof.: Let \(S^{\prime\,m}(U\times\mathbb{R}^{l})\) denote the LCHS defined by endowing the vector space \(S^{m}(U\times\mathbb{R}^{l})\) with the topology induced by the semi-norms (3.4) and (3.5) together; in fact, countably many semi-norms of these types are enough to describe its topology (taking exhausting increasing sequences of compact sets), and therefore \(S^{\prime\,m}(U\times\mathbb{R}^{l})\) is metrizable. Let \(\widehat{S}^{\prime\,m}(U\times\mathbb{R}^{l})\) denote its completion, where the stated semi-norms have continuous extensions. There is a continuous inclusion \(S^{\prime\,m}(U\times\mathbb{R}^{l})\subset C^{\infty}(U\times\mathbb{R}^{l})\), which can be extended to a continuous map \(\phi:\widehat{S}^{\prime\,m}(U\times\mathbb{R}^{l})\to C^{\infty}(U\times \mathbb{R}^{l})\) because \(C^{\infty}(U\times\mathbb{R}^{l})\) is complete. For any \(a\in\widehat{S}^{\prime\,m}(U\times\mathbb{R}^{l})\), and \(K\), \(\alpha\) and \(\beta\) like in (3.5), since \(\|\phi(a)\|^{\prime}_{K,\alpha,\beta,m}=\|a\|^{\prime}_{K,\alpha,\beta,m}<\infty\), there are \(C,R>0\) so that, if \(x\in K\) and \(|\xi|\geq R\), then \[\frac{|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}\phi(a)(x,\xi)|}{(1+|\xi|)^{ m-|\beta|}}\leq C\;.\] Let \(B_{R}\subset\mathbb{R}^{l}\) denote the open ball of center \(0\) and radius \(R\). For \(Q=K\times\overline{B_{R}}\subset U\times\mathbb{R}^{l}\) and \(k=|\alpha|+|\beta|\), since \(\|\phi(a)\|_{Q,C^{k}}=\|a\|_{Q,C^{k}}<\infty\), there is some \(C^{\prime}>0\) such that \(|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}\phi(a)(x,\xi)|<C^{\prime}\) for \((x,\xi)\in Q\), yielding \[\frac{|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}\phi(a)(x,\xi)|}{(1+|\xi|)^{ m-|\beta|}}\leq\begin{cases}C^{\prime}&\text{if }|\beta|\leq m\\ C^{\prime}(1+R)^{|\beta|-m}&\text{if }|\beta|\geq m\;.\end{cases}\] This shows that \(\|\phi(a)\|_{K,\alpha,\beta,m}<\infty\), obtaining that \(a\equiv\phi(a)\in S^{m}(U\times\mathbb{R}^{l})\). Hence \(S^{\prime\,m}(U\times\mathbb{R}^{l})\) is complete, and therefore it is a Frechet space. Thus the identity map \(S^{m}(U\times\mathbb{R}^{l})\to S^{\prime\,m}(U\times\mathbb{R}^{l})\) is a continuous linear isomorphism between Frechet spaces, obtaining that it is indeed a homeomorphism by a version of the open mapping theorem [21, Section 15.12], [31, Theorem II.2.1], [28, Theorem 14.4.6]. **Proposition 3.3**.: _For \(m,m^{\prime}\in\mathbb{N}_{0}\), \(\alpha\in\mathbb{N}_{0}^{n}\), \(\beta\in\mathbb{N}_{0}^{l}\) and any compact \(K\subset U\), if \(m<m^{\prime}\), then \(\|\cdot\|^{\prime}_{K,\alpha,\beta,m^{\prime}}=0\) on \(S^{m}(U\times\mathbb{R}^{l})\)._ Proof.: According to (3.5), for all \(a\in S^{m}(U\times\mathbb{R}^{l})\), \[\|a\|^{\prime}_{K,\alpha,\beta,m^{\prime}}=\|a\|^{\prime}_{K,\alpha,\beta,m}\lim _{|\xi|\to\infty}|\xi|^{m-m^{\prime}}=0\;.\qed\] **Corollary 3.4**.: _For \(m<m^{\prime}\), the topologies of \(S^{m^{\prime}}(U\times\mathbb{R}^{l})\) and \(C^{\infty}(U\times\mathbb{R}^{l})\) coincide on \(S^{m}(U\times\mathbb{R}^{l})\). Therefore the topologies of \(S^{\infty}(U\times\mathbb{R}^{l})\) and \(C^{\infty}(U\times\mathbb{R}^{l})\) coincide on \(S^{m}(U\times\mathbb{R}^{l})\)._ Proof.: The first assertion is a consequence of Propositions 3.2 and 3.3. 
To prove the second assertion, by (3.3), it is enough to show that the topology of \(S^{\infty}(U\times\mathbb{R}^{l})\) is finer or equal than the topology of \(C^{\infty}(U\times\mathbb{R}^{l})\) on \(S^{m}(U\times\mathbb{R}^{l})\). For every open \(O\subset S^{\infty}(U\times\mathbb{R}^{l})\) and \(m^{\prime}>m\), since \(O\cap S^{m^{\prime}}(U\times\mathbb{R}^{l})\) is open in \(S^{m^{\prime}}(U\times\mathbb{R}^{l})\), it follows from the first assertion that there is some open \(P\subset C^{\infty}(U\times\mathbb{R}^{l})\) such that \(O\cap S^{m}(U\times\mathbb{R}^{l})=P\cap S^{m}(U\times\mathbb{R}^{l})\). **Corollary 3.5**.: _For \(m<m^{\prime}\), \(C^{\infty}_{\rm c}(U\times\mathbb{R}^{l})\) is dense in \(S^{m}(U\times\mathbb{R}^{l})\) with the topology of \(S^{m^{\prime}}(U\times\mathbb{R}^{l})\). Therefore \(C^{\infty}_{\rm c}(U\times\mathbb{R}^{l})\) is dense in \(S^{\infty}(U\times\mathbb{R}^{l})\)._ Proof.: The first assertion is given by Corollary 3.4 and the density of \(C_{\rm c}^{\infty}(U\times\mathbb{R}^{l})\) in \(C^{\infty}(U\times\mathbb{R}^{l})\). To prove the second assertion, take any open \(O\neq\emptyset\) in \(S^{\infty}(U\times\mathbb{R}^{l})\). We have \(O\cap S^{m}(U\times\mathbb{R}^{l})\neq\emptyset\) for some \(m\). This intersection is open in \(S^{m}(U\times\mathbb{R}^{l})\) with the topology of any \(S^{m^{\prime}}(U\times\mathbb{R}^{l})\) for all \(m^{\prime}\geq m\). So \(O\cap C_{\rm c}^{\infty}(U\times\mathbb{R}^{l})\neq\emptyset\) by the first assertion. **Corollary 3.6**.: \(S^{\infty}(U\times\mathbb{R}^{l})\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ Proof.: Corollary 3.4 gives the property of being acyclic, and therefore complete and boundedly retractive (Section 2.1). Since \(S^{\infty}(U\times\mathbb{R}^{l})\) is barreled (Proposition 3.1) and every Montel space is reflexive [21, 6.27.2 (1)], [9, Section 8.4.7], [31, after the examples of IV.5.8], it only remains to prove that \(S^{\infty}(U\times\mathbb{R}^{l})\) is semi-Montel. Take any closed bounded subset \(B\subset S^{\infty}(U\times\mathbb{R}^{l})\); in particular, \(B\) is complete because \(S^{\infty}(U\times\mathbb{R}^{l})\) is complete. Since \(S^{\infty}(U\times\mathbb{R}^{l})\) is boundedly retractive, \(B\) is contained and bounded in some \(S^{m}(U\times\mathbb{R}^{l})\), and the topologies of \(S^{\infty}(U\times\mathbb{R}^{l})\) and \(S^{m}(U\times\mathbb{R}^{l})\) coincide on \(B\). By Corollary 3.4, it follows that \(B\) is a complete bounded subspace of \(C^{\infty}(U\times\mathbb{R}^{l})\), and therefore closed because \(C^{\infty}(U\times\mathbb{R}^{l})\) is complete. So \(B\) is compact because \(C^{\infty}(U\times\mathbb{R}^{l})\) is a Montel space. _Remark 3.7_.: Another proof of Corollary 3.5 could be given like in Proposition 6.8. _Remark 3.8_.: Despite of Corollary 3.4, the following argument shows that the second inclusion of (3.3) is not a TVS-embedding. Let \(a_{m}\in S^{\infty}(U\times\mathbb{R}^{l})\) (\(m\in\mathbb{N}_{0}\)) such that \(a_{m}(x,\xi)=0\) if \(|\xi^{1}|\leq m\), and \(a_{m}(x,\xi)=(\xi^{1}-m)^{m}\) if \(|\xi^{1}|\geq m+1\). Then \(a_{m}\in S^{m}(U\times\mathbb{R}^{l})\setminus S^{m-1}(U\times\mathbb{R}^{l})\) and \(a_{m}\to 0\) in \(C^{\infty}(U\times\mathbb{R}^{l})\) as \(m\uparrow\infty\). 
However \(a_{m}\not\to 0\) in \(S^{\infty}(U\times\mathbb{R}^{l})\); otherwise, since \(S^{\infty}(U\times\mathbb{R}^{l})\) is sequentially retractive (Corollary 3.6), all \(a_{m}\) would lie in some step \(S^{m_{0}}(U\times\mathbb{R}^{l})\), a contradiction. With more generality, a symbol of order \(m\) on a vector bundle \(E\) over \(M\) is a smooth function on \(E\) satisfying (3.1) via charts of \(M\) and local trivializations of \(E\), with \(K\) contained in the domains of charts where \(E\) is trivial. As above, they form a Frechet space \(S^{m}(E)\) with the topology described by the semi-norms given by this version of (3.1). The version of (3.2) in this setting is true, obtaining the corresponding inductive and projective limits \(S^{\pm\infty}(E)\), and quotient spaces \(S^{(m)}(E)\). We can similarly define the norms (3.4) and (3.5) on \(S^{m}(E)\), and Propositions 3.2 and 3.3 and Corollaries 3.4 to 3.6 can be directly extended to this setting. Given another vector bundle \(F\) over \(M\), we can further take the \(C^{\infty}(M)\)-tensor product of these spaces with \(C^{\infty}(M;F)\), obtaining the spaces \(S^{m}(E;F)\), \(S^{\pm\infty}(E;F)\) and \(S^{(m)}(E;F)\), satisfying analogous properties and results. Now (3.3) becomes \[C^{\infty}_{\rm cv}(E;\pi^{*}F)\subset S^{-\infty}(E;F)\;,\quad S^{\infty}(E; F)\subset C^{\infty}(E;\pi^{*}F)\;,\] where \(\pi:E\to M\) is the vector bundle projection. ## 4. Conormal distributions ### Differential operators tangent to a submanifold Let \(L\) is a regular submanifold of \(M\) of codimension \(n^{\prime}\) and dimension \(n^{\prime\prime}\), which is a closed subset. Let \(\mathfrak{X}(M,L)\subset\mathfrak{X}(M)\) be the Lie subalgebra and \(C^{\infty}(M)\)-submodule of vector fields tangent to \(L\). Using \(\mathfrak{X}(M,L)\) instead of \(\mathfrak{X}(M)\), we can define the filtered subalgebra and \(C^{\infty}(M)\)-submodule \(\operatorname{Diff}(M,L)\subset\operatorname{Diff}(M)\) like in Section 2.7. We have \[A\in\operatorname{Diff}(M,L)\Rightarrow A^{t}\in\operatorname{Diff}(M,L;\Omega)\;. \tag{4.1}\] By the conditions on \(L\), every \(\operatorname{Diff}^{m}(M,L)\) (\(m\in\mathbb{N}_{0}\)) is locally finitely \(C^{\infty}(M)\)-generated, and therefore \(\operatorname{Diff}(M,L)\) is countably \(C^{\infty}(M)\)-generated. The surjective restriction map \(\mathfrak{X}(M,L)\to\mathfrak{X}(L)\), \(X\mapsto X|_{L}\), induces a surjective linear restriction map of filtered algebras and \(C^{\infty}(M)\)-modules, \[\operatorname{Diff}(M,L)\to\operatorname{Diff}(L)\;,\quad A\mapsto A|_{L}\;. \tag{4.2}\] Let \((U,x)\) be a chart of \(M\) adapted to \(L\); i.e., for open subsets \(U^{\prime}\subset\mathbb{R}^{n^{\prime}}\) and \(U^{\prime\prime}\subset\mathbb{R}^{n^{\prime\prime}}\), \[x=(x^{1},\dots,x^{n})\equiv(x^{\prime},x^{\prime\prime}):U\to U^{ \prime}\times U^{\prime\prime}\;,\] \[x^{\prime}=(x^{\prime 1},\dots,x^{\prime n^{\prime}})\;,\quad x^{ \prime\prime}=(x^{\prime\prime 1},\dots,x^{\prime\prime n^{\prime\prime}})\;,\quad L _{0}:=L\cap U=\{x^{\prime}=0\}\;.\] If \(L\) is of codimension one, then we will use the notation \((x,y)\) instead of \((x^{\prime},x^{\prime\prime})\). 
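For instance, in such an adapted chart, \(\mathfrak{X}(U,L_{0})\) is \(C^{\infty}(U)\)-spanned by the vector fields \(x^{\prime i}\partial_{x^{\prime j}}\) (\(1\le i,j\le n^{\prime}\)) and \(\partial_{x^{\prime\prime k}}\) (\(1\le k\le n^{\prime\prime}\)), because a vector field is tangent to \(L_{0}=\{x^{\prime}=0\}\) exactly when the coefficients of \(\partial_{x^{\prime 1}},\dots,\partial_{x^{\prime n^{\prime}}}\) vanish on \(L_{0}\).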
For every \(m\in\mathbb{N}_{0}\), \(\operatorname{Diff}^{m}(U,L_{0})\) is \(C^{\infty}(U)\)-spanned by the operators \(x^{\prime\alpha}\partial_{x^{\prime}}^{\beta}\partial_{x^{\prime\prime}}^{ \gamma}\) with \(|\beta|+|\gamma|\leq m\) and \(|\alpha|=|\beta|\); we may use the generators \(\partial_{x^{\prime}}^{\beta}\partial_{x^{\prime\prime}}^{\gamma}x^{\prime \alpha}\) as well, with the same conditions on the multi-indices. ### Conormal distributions filtered by Sobolev order #### 4.2.1. Case of compact manifolds Suppose first that \(M\) is compact. Then the space of _conormal distributions_ at \(L\) of _Sobolev order_ at most \(s\in\mathbb{R}\) is the LCS and \(C^{\infty}(M)\)-module \[I^{(s)}(M,L)=\{\,u\in C^{-\infty}(M)\mid\operatorname{Diff}(M,L)\,u\subset H ^{s}(M)\,\}\;, \tag{4.3}\] with the projective topology given by the maps \(P:I^{(s)}(M,L)\to H^{s}(M)\) (\(P\in\operatorname{Diff}(M,L)\)). **Proposition 4.1**.: \(I^{(s)}(M,L)\) _is a totally reflexive Frechet space._ Proof.: For any countable \(C^{\infty}(M)\)-spanning set \(\{P_{j}\mid j\in\mathbb{N}_{0}\}\) of \(\operatorname{Diff}(M,L)\), the space \(I^{(s)}(M,L)\) has the projective topology given by the maps \(P_{j}:I^{(s)}(M,L)\to H^{s}(M)\). Let \[I^{(s)}_{k}(M,L)=\{\,u\in C^{-\infty}(M)\mid P_{j}u\subset H^{s}(M),\ j=0, \dots,k\,\}\;,\] with the projective topology given by the maps \(P_{j}:I^{(s)}(M,L)\to H^{s}(M)\) (\(j=0,\dots,k\)). We can assume \(P_{0}=1\), and therefore \(I^{(s)}_{0}(M,L)=H^{s}(M)\). Every \(I^{(s)}_{k}(M,L)\) is a Hilbert space with the scalar product \[\langle u,v\rangle_{s,k}=\sum_{j=0}^{k}\langle P_{j}u,P_{j}v\rangle_{s}\;,\] there are continuous inclusions \(I^{(s)}_{k^{\prime}}(M,L)\subset I^{(s)}_{k}(M,L)\) (\(k<k^{\prime}\)), and \(I^{(s)}(M,L)=\bigcap_{k}I^{(s)}_{k}(M,L)\). So \(I^{(s)}(M,L)\) is a totally reflexive Frechet space [38, Theorem 4]. We have continuous inclusions \[I^{(s)}(M,L)\subset I^{(s^{\prime})}(M,L)\quad(s^{\prime}<s)\;, \tag{4.4}\] and consider the LCSs and \(C^{\infty}(M)\)-modules \[I(M,L)=\bigcup_{s}I^{(s)}(M,L)\;,\quad I^{(\infty)}(M,L)=\bigcap_{s}I^{(s)}(M, L)\;.\] \(I(M,L)\) is an LF-space, and \(I^{(\infty)}(M,L)\) is a Frechet space and submodule of \(I(M,L)\). The elements of \(I(M,L)\) are called _conormal distributions_ of \(M\) at \(L\) (or of \((M,L)\)). The spaces \(I^{(s)}(M,L)\) form the _Sobolev order filtration_ of \(I(M,L)\). From (4.3), it follows that there are canonical continuous inclusions, \[C^{\infty}(M)\subset I^{(\infty)}(M,L)\;,\quad I(M,L)\subset C^{-\infty}(M)\;; \tag{4.5}\] in particular, \(I(M,L)\) is Hausdorff. Since every \(I^{(s)}(M,L)\) is a Frechet space (Proposition 4.1), the following analog of Proposition 3.1 holds true by the same reason. **Corollary 4.2**.: \(I(M,L)\) _is barreled, ultrabornological and webbed._ #### 4.2.2. Extension to non-compact manifolds If \(M\) is not assumed to be compact, we can similarly define the LCHS \(I^{(s)}_{\cdot/\mathrm{c}}(M,L)\) by using \(C^{-\infty}_{\cdot/\mathrm{c}}(M)\) and \(H^{s}_{\mathrm{loc/c}}(M)\). Every \(I^{(s)}(M,L)\) is a Frechet space, as follows like in the proof of Proposition 4.1, using the Frechet spaces \(H^{s}_{\mathrm{loc}}(M)\). We can describe \(I^{(s)}_{\mathrm{c}}(M,L)=\bigcup_{K}I^{(s)}_{K}(M,L)\) like in (2.2), which is a strict LF-space because every \(I^{(s)}_{K}(M,L)\) satisfies an analog of Proposition 4.1. 
Therefore \(I_{\mathrm{c}}(M,L)=\bigcup_{s}I^{(s)}_{\mathrm{c}}(M,L)\) is an LF-space [28, Exercise 12.108]; moreover \(I_{\mathrm{c}}(M,L)=\bigcup_{K}I_{K}(M,L)\). We also have the Frechet space \(I^{(\infty)}_{\mathrm{c}}(M,L)=\bigcap_{s}I^{(s)}_{\mathrm{c}}(M,L)\). All of these spaces are modules over \(C^{\infty}(M)\); \(I_{\mathrm{c}}(M,L)\) is a filtered module and \(I^{(\infty)}_{\mathrm{c}}(M,L)\) a submodule. The extension by zero defines a continuous inclusion \(I_{\mathrm{c}}(U,L\cap U)\subset I_{\mathrm{c}}(M,L)\) for any open \(U\subset M\). We also define the space \(I^{(\infty)}(M,L)\) like in the compact case, as well as the space \(\bigcup_{s}I^{(s)}(M,L)\), which consists of the conormal distributions with a Sobolev order. But now let (cf. [15, Definition 18.2.6]) \[I(M,L)=\{\,u\in C^{-\infty}(M)\mid C^{\infty}_{\mathrm{c}}(M)\,u\subset I_{\mathrm{c}}(M,L)\,\}\;, \tag{4.6}\] which is an LCS with the projective topology given by the (multiplication) maps \(f_{j}:I(M,L)\to I_{\mathrm{c}}(M,L)\), for a countable partition of unity \(\{f_{j}\}\subset C^{\infty}_{\mathrm{c}}(M)\). We have \(I(M,L)=\bigcup_{s}I^{(s)}(M,L)\) if and only if \(L\) is compact; thus the spaces \(I^{(s)}(M,L)\) form a filtration of \(I(M,L)\) just when \(L\) is compact. There is an extension of (4.5) for non-compact \(M\), taking arbitrary/compact support; in particular, \(I_{\cdot/\mathrm{c}}(M,L)\) is Hausdorff. ### Filtration of \(I(M,L)\) by the symbol order #### 4.3.1. Local description of conormal distributions with symbols Consider the notation of Section 4.1 for a chart \((U,x=(x^{\prime},x^{\prime\prime}))\) of \(M\) adapted to \(L\). We use the identity \(U^{\prime\prime}\times\mathbb{R}^{n^{\prime}}\equiv N^{*}U^{\prime\prime}\), and the symbol spaces \(S^{m}(U^{\prime\prime}\times\mathbb{R}^{n^{\prime}})\equiv S^{m}(N^{*}U^{\prime\prime})\) (Section 3). Define \[C^{\infty}_{\mathrm{cv}}(N^{*}U^{\prime\prime})\to C^{\infty}(U)\;, \quad a\mapsto u\;, \tag{4.7}\] \[C^{\infty}_{\mathrm{c}}(U)\to C^{\infty}(N^{*}U^{\prime\prime})\;, \quad u\mapsto a\;, \tag{4.8}\] by the following partial inverse Fourier transform and partial Fourier transform: \[u(x) =(2\pi)^{-n^{\prime}}\int_{\mathbb{R}^{n^{\prime}}}e^{i(x^{\prime},\xi)}a(x^{\prime\prime},\xi)\,d\xi\;,\] \[a(x^{\prime\prime},\xi) =\int_{\mathbb{R}^{n^{\prime}}}e^{-i(x^{\prime},\xi)}u(x^{\prime},x^{\prime\prime})\,dx^{\prime}\;.\] **Proposition 4.3** ([15, Theorem 18.2.8], [25, Proposition 6.1.1], [27, Lemma 9.33]).: _If \(s<-\bar{m}-n^{\prime}/2\), then (4.7) has a continuous extension \(S^{\bar{m}}(N^{*}U^{\prime\prime})\to I^{(s)}(U,L_{0})\). If \(\bar{m}>-s-n^{\prime}/2\), then (4.8) induces a continuous linear map \(I_{\rm c}^{(s)}(U,L_{0})\to S^{\bar{m}}(N^{*}U^{\prime\prime})\)._ _Remark 4.4_.: The continuity of the maps of Proposition 4.3 is not stated in [15, Theorem 18.2.8], [25, Proposition 6.1.1], [27, Lemma 9.33], but it follows easily from their proofs. When applying Proposition 4.3 to \(M\) via \((U,x)\), it will be convenient to use \[a\,|d\xi|\in S^{\bar{m}}(N^{*}U^{\prime\prime};\Omega N^{*}U^{\prime\prime})\equiv S^{\bar{m}}(N^{*}L_{0};\Omega N^{*}L_{0})\;.\] #### 4.3.2. Case of compact manifolds Assume first that \(M\) is compact. Take a finite cover of \(L\) by relatively compact charts \((U_{j},x_{j})\) of \(M\) adapted to \(L\), and write \(L_{j}=L\cap U_{j}\). Let \(\{h,f_{j}\}\) be a \(C^{\infty}\) partition of unity of \(M\) subordinated to the open covering \(\{M\setminus L,U_{j}\}\).
Then \(I(M,L)\) consists of the distributions \(u\in C^{-\infty}(M)\) such that \(hu\in C^{\infty}(M\setminus L)\) and \(f_{j}u\in I_{\rm c}(U_{j},L_{j})\) for all \(j\). Then, according to Proposition 4.3, every \(f_{j}u\) is given by some \(a_{j}\in S^{\infty}(N^{*}L_{j};\Omega N^{*}L_{j})\). For \[\bar{m}=m+n/4-n^{\prime}/2\;, \tag{4.9}\] the condition \(a_{j}\in S^{\bar{m}}(N^{*}L_{j};\Omega N^{*}L_{j})\) describes the elements \(u\) of a \(C^{\infty}(M)\)-submodule \(I^{m}(M,L)\subset I(M,L)\), which is independent of the choices involved [27, Proposition 9.33] (see also [25, Definition 6.2.19] and [36, Definition 4.3.9]). Moreover, applying the versions of semi-norms (2.1) on \(C^{\infty}(M\setminus L)\) to \(hu\) and versions of semi-norms (3.1) on \(S^{\bar{m}}(N^{*}L_{j};\Omega N^{*}L_{j})\) to every \(a_{j}\), we get semi-norms on \(I^{m}(M,L)\), which becomes a Frechet space [25, Sections 6.2 and 6.10]. In other words, the following map is required to be a TVS-embedding: \[I^{m}(M,L)\to C^{\infty}(M\setminus L)\oplus\prod_{j}S^{\bar{m}}(N^{*}L_{j}; \Omega N^{*}L_{j})\;,\quad u\mapsto(hu,(a_{j}))\;. \tag{4.10}\] The version of (3.2) for the spaces \(S^{\bar{m}}(N^{*}L_{j};\Omega N^{*}L_{j})\) gives continuous inclusions \[I^{m}(M,L)\subset I^{m^{\prime}}(M,L)\quad(m<m^{\prime})\;. \tag{4.11}\] The element \(\sigma_{m}(u)\in S^{(\bar{m})}(N^{*}L;\Omega N^{*}L)\) represented by \(\sum_{j}a_{j}\in S^{\bar{m}}(N^{*}L;\Omega N^{*}L)\) is called the _principal symbol_ of \(u\). This defines the exact sequence \[0\to I^{m-1}(M,L)\hookrightarrow I^{m}(M,L)\xrightarrow{\sigma_{m}}S^{(\bar{ m})}(N^{*}L;\Omega N^{*}L)\to 0\;.\] From Proposition 4.3 and (4.9), we also get continuous inclusions \[I^{(-m-n/4+\epsilon)}(M,L)\subset I^{m}(M,L)\subset I^{(-m-n/4-\epsilon)}(M,L )\;, \tag{4.12}\] for all \(m\in\mathbb{R}\) and \(\epsilon>0\) (cf. [25, Eq. (6.2.5)], [27, Eq. (9.35)]). So \[I(M,L)=\bigcup_{m}I^{m}(M,L)\;,\quad I^{(\infty)}(M,L)=I^{-\infty}(M,L):= \bigcap_{m}I^{m}(M,L)\;.\] The spaces \(I^{m}(M,L)\) form the _symbol order filtration_ of \(I(M,L)\). The maps (4.10) induce a TVS-embedding \[I(M,L)\to C^{\infty}(M\setminus L)\oplus\prod_{j}S^{\infty}(N^{*}L_{j}; \Omega N^{*}L_{j})\;. \tag{4.13}\] **Corollary 4.5**.: _For \(m<m^{\prime},m^{\prime\prime}\), the topologies of \(I^{m^{\prime}}(M,L)\) and \(I^{m^{\prime\prime}}(M,L)\) coincide on \(I^{m}(M,L)\)._ Proof.: Use Corollary 3.4 and the TVS-embeddings (4.10). **Corollary 4.6** ([25, Eq. (6.2.12)]).: _For \(m<m^{\prime}\), \(C^{\infty}(M)\) is dense in \(I^{m}(M,L)\) with the topology of \(I^{m^{\prime}}(M,L)\). Therefore \(C^{\infty}(M)\) is dense in \(I(M,L)\)._ Proof.: \(C^{\infty}(M)\) is contained in the stated spaces by (4.5). Let us prove the first density, and the second one follows like in Corollary 3.5. Given \(u\in I^{m}(M,L)\), let \(a_{j}\in S^{\tilde{m}}(N^{*}L_{j};\Omega N^{*}L_{j})\) be the symbol corresponding to \(f_{j}u\) by Proposition 4.3, like in (4.10). By Corollary 3.5, there is a sequence \(b_{j,k}\in C^{\infty}_{\rm c}(N^{*}L_{j};\Omega N^{*}L_{j})\) converging to \(a_{j}\) in \(S^{\tilde{m}^{\prime}}(N^{*}L_{j};\Omega N^{*}L_{j})\) (\(\tilde{m}^{\prime}=m^{\prime}+n/4-n^{\prime}/2\)). Let \(v_{j,k}\) be the sequence in \(C^{\infty}(U)\) that corresponds to \(b_{j,k}\) via (4.7); it converges to \(f_{j}u\) in \(I^{m^{\prime}}(U,L)\) as \(k\to\infty\) by Proposition 4.3. Take functions \(\tilde{f}_{j}\in C^{\infty}_{\rm c}(U_{\tilde{j}})\) with \(\tilde{f}_{j}=1\) on \(\operatorname{supp}f_{j}\). 
Then \(\tilde{f}_{j}v_{j,k}\to f_{j}u\) in \(I^{m^{\prime}}_{\rm c}(U,L)\), and therefore \(hu+\sum_{j}\tilde{f}_{j}v_{j,k}\in C^{\infty}(M)\) is convergent to \(u\) in \(I^{m^{\prime}}(M,L)\). **Corollary 4.7**.: \(I(M,L)\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ Proof.: Like in Corollary 3.6, by Corollaries 4.2 and 4.5, it is enough to prove that \(I(M,L)\) is semi-Montel. The TVS-embedding (4.13) is closed because \(I(M,L)\) is complete. Then \(I(M,L)\) is semi-Montel because \(C^{\infty}(M\setminus L)\) and \(S^{\infty}(N^{*}L_{j};\Omega N^{*}L_{j})\) are Montel spaces (Corollary 3.6), and this property is inherited by closed subspaces and products [16, Propositions 3.9.3 and 3.9.4], [28, Exercise 12.203 (c)]. _Remark 4.8_.: The reflexivity of \(I(M,L)\) is also a consequence of the reflexivity of \(I^{(s)}(M,L)\) (Proposition 4.1) and the regularity of \(I(M,L)\) (Corollary 4.7) [23]. #### 4.3.3. Extension to non-compact manifolds When \(M\) is not assumed to be compact, the definition of \(I^{m}(M,L)\) can be immediately extended assuming \(\{U_{j}\}\) is a locally finite cover of \(L\), obtaining an analog of (4.10). We can similarly define \(I^{m}_{K}(M,L)\) for all compact \(K\subset M\), and take \(I^{m}_{\rm c}(M,L)=\bigcup_{K}I^{m}_{K}(M,L)\) like in (2.2). The space of conormal distributions with a symbol order is \(\bigcup_{m}I^{m}(M,L)\), and let \(I^{-\infty}_{\cdot/{\rm c}}(M,L)=\bigcap_{m}I^{m}_{\cdot/{\rm c}}(M,L)\). There are extensions of (4.10)-(4.13) and Corollaries 4.5 and 4.6, with arbitrary/compact support (using direct sums instead of products in the case of compact support). So \(\bigcup_{m}I^{m}(M,L)=\bigcup_{s}I^{(s)}(M,L)\), \(I_{\rm c}(M,L)=\bigcup_{m}I^{m}_{\rm c}(M,L)\) and \(I^{(\infty)}_{\cdot/{\rm c}}(M,L)=I^{-\infty}_{\cdot/{\rm c}}(M,L)\). Corollary 4.7 can be extended with \(\bigcup_{m}I^{m}(M,L)\) and \(I_{\cdot/{\rm c}}(M,L)\), except acyclicity in the case of \(I(M,L)\). ### Dirac sections at submanifolds Let \(NL\) and \(N^{*}L\) denote the normal and conormal bundles of \(L\). We have identity \(\Omega NL\otimes\Omega L\equiv\Omega_{L}M\). The transpose of the restriction map \(C^{\infty}_{c\cdot}(M;E^{*}\otimes\Omega M)\to C^{\infty}_{c\cdot}(L;E^{*} \otimes\Omega_{L}M)\) is a continuous inclusion \[C^{-\infty}_{\cdot/{\rm c}}(L;E\otimes\Omega^{-1}NL)\subset C^{- \infty}_{\cdot/{\rm c}}(M;E)\;,\] \[u\mapsto\delta^{u}_{L}\;,\quad\langle\delta^{u}_{L},v\rangle= \langle u,v|_{L}\rangle\;,\quad v\in C^{\infty}_{\rm c/\cdot}(M;E^{*}\otimes \Omega)\;. \tag{4.14}\] By restriction of (4.14), we get a continuous inclusion [11, p. 310], \[C^{\infty}_{\cdot/{\rm c}}(L;E\otimes\Omega^{-1}NL)\subset C^{-\infty}_{\cdot/ {\rm c}}(M;E)\;; \tag{4.15}\] in this case, we can write \(\langle\delta^{u}_{L},v\rangle=\int_{L}u\,v|_{L}\). This is the subspace of \(\delta\)_-sections_ or _Dirac sections_ at \(L\). Actually, the following sharpening of (4.15) is true. **Proposition 4.9**.: _The inclusion (4.15) induces a continuous injection_ \[C^{\infty}_{\cdot/{\rm c}}(L;E\otimes\Omega^{-1}NL)\subset H^{s}_{{\rm loc}/{ \rm c}}(M;E)\quad(s<-n^{\prime}/2)\] _with_ \[C^{\infty}_{\cdot/{\rm c}}(L;E\otimes\Omega^{-1}NL)\cap H^{-n^{\prime}/2}_{{ \rm loc}/{\rm c}}(M;E)=0\;.\] Proof.: First, take \(M=\mathbb{R}^{n}\), \(L=\mathbb{R}^{n^{\prime\prime}}\times\{0\}\equiv\mathbb{R}^{n^{\prime\prime}}\) and \(E=M\times\mathbb{C}\) (the trivial line bundle). 
Let \(\delta_{0}\) be the Dirac mass at \(0\) in \(\mathbb{R}^{n^{\prime}}\). For any \(\phi\in\mathcal{S}(\mathbb{R}^{n^{\prime\prime}})\), consider the tensor product distribution \(\phi\otimes\delta_{0}\in\mathcal{S}(\mathbb{R}^{n})^{\prime}\)[14, Section 5.1]. Its Fourier transform is \(\hat{\phi}\otimes\hat{\delta}_{0}=\hat{\phi}\otimes 1\). If \(\phi\neq 0\), then \(\hat{\phi}\otimes 1\in L^{2}(\mathbb{R}^{n},(1+|\xi|^{2})^{s}\,d\xi)\) if and only if \(1\in L^{2}(\mathbb{R}^{n^{\prime}},(1+|\xi|^{2})^{s}\,d\xi)\), which holds just when \(s<-n^{\prime}/2\). Moreover the map \[\mathcal{S}(\mathbb{R}^{n^{\prime\prime}})\to L^{2}(\mathbb{R}^{n},(1+|\xi|^{2 })^{s}\,d\xi)\;,\quad\phi\mapsto\hat{\phi}\otimes 1\;,\] is continuous if \(s<-n^{\prime}/2\). Thus (2.14) yields versions of the stated properties using \(\mathcal{S}(\mathbb{R}^{n^{\prime\prime}})\) and \(H^{s}(R^{n})\). For arbitrary \(M\), \(L\) and \(E\), the result follows from the previous case by using a locally finite atlas, a subordinated partition of unity, and diffeomorphisms of triviality of \(E\). For instance, for any \(p\in M\) and \(u\in E_{p}\otimes\Omega_{p}^{-1}M\), we get \(\delta^{u}_{p}\in H^{s}_{\rm c}(M;E)\) if \(s<-n/2\), with \(\langle\delta^{u}_{p},v\rangle=u\cdot v(p)\) for \(v\in C^{\infty}(M;E^{*}\otimes\Omega)\), obtaining a continuous map \[M\times C^{\infty}(M;E\otimes\Omega^{-1})\to H^{s}_{\rm c}(M;E)\;,\quad(p,u) \mapsto\delta^{u(p)}_{p}\;.\] As a particular case, the Dirac mass at any \(p\in\mathbb{R}^{n}\) is \(\delta_{p}=\delta^{1\otimes|dx|^{-1}}_{p}\in H^{s}_{\rm c}(\mathbb{R}^{n})\). ### Differential operators on conormal distributional sections Any \(A\in\operatorname{Diff}^{k}(M;E)\) induces continuous linear maps [25, Lemma 6.1.1] \[A:I^{(s)}_{\cdot/{\rm c}}(M,L;E)\to I^{(s-k)}_{\cdot/{\rm c}}(M,L;E)\;, \tag{4.16}\] which induce a continuous endomorphism \(A\) of \(I_{\cdot/{\rm c}}(M,L;E)\). If \(A\in\operatorname{Diff}(M,L;E)\), then it clearly induces a continuous endomorphism \(A\) of every \(I^{(s)}_{\cdot/{\rm c}}(M,L;E)\). According to (4.14), for \(A\in\operatorname{Diff}(M,L;E)\) and \(u\in C^{\infty}_{\cdot/{\rm c}}(L;E\otimes\Omega^{-1}NL)\), \[A\delta^{u}_{L}=\delta^{A^{\prime}u}_{L}\;,\quad A^{\prime}=((A^{t})|_{L})^{t }\in\operatorname{Diff}(L;E\otimes\Omega^{-1}NL)\;, \tag{4.17}\] where \(A^{t}\in\operatorname{Diff}(M,L;E^{*}\otimes\Omega)\) and \((A^{t})|_{L}\in\operatorname{Diff}(L,E^{*}\otimes\Omega_{L}M)\) using the vector bundle versions of (4.1) and (4.2). In fact, for \(v\in C^{\infty}_{c/\cdot}(M;E^{*}\otimes\Omega)\), \[\langle A\delta^{u}_{L},v\rangle=\langle\delta^{u}_{L},A^{t}v\rangle=\langle u,(A^{t}v)|_{L}\rangle=\langle u,(A^{t})|_{L}\langle v|_{L}\rangle=\langle A^{ \prime}u,v|_{L}\rangle=\langle\delta^{A^{\prime}u}_{L},v\rangle\;.\] By (4.17), \(\operatorname{Diff}(M,L;E)\) preserves the subspace of Dirac sections given by (4.15). Thus the continuous inclusion of Proposition 4.9 induces a continuous inclusion \[C^{\infty}_{\cdot/\mathrm{c}}(L;E\otimes\Omega^{-1}NL)\subset I^{(s)}_{\cdot/ \mathrm{c}}(M,L;E)\quad(s<-n^{\prime}/2)\;. 
\tag{4.18}\] ### Pull-back of conormal distributions If a smooth map \(\phi:M^{\prime}\to M\) is transverse to a regular submanifold \(L\subset M\), then \(L^{\prime}:=\phi^{-1}(L)\subset M^{\prime}\) is a regular submanifold and (the trivial-line-bundle version of) (2.7) has continuous extensions \[\phi^{*}:I^{m}(M,L)\to I^{m+k/4}(M^{\prime},L^{\prime})\quad(m\in\mathbb{R})\;, \tag{4.19}\] where \(k=\dim M-\dim M^{\prime}\)[36, Theorem 5.3.8], [25, Proposition 6.6.1]. Taking inductive limits and using (4.12), we get a continuous linear map \[\phi^{*}:I(M,L)\to I(M^{\prime},L^{\prime})\;. \tag{4.20}\] If \(\phi\) is a submersion, this is a restriction of (2.10). If \(\phi\) is a local diffeomorphism, then (2.7) also has continuous extensions \[\phi^{*}:I^{(s)}(M,L)\to I^{(s)}(M^{\prime},L^{\prime})\quad(s\in\mathbb{R})\;. \tag{4.21}\] A more general pull-back of distributional sections can be defined under conditions on the wave front set [14, Theorem 8.2.4], but we will not use it. ### Push-forward of conormal distributional sections Now let \(\phi:M^{\prime}\to M\) be a smooth submersion, and let \(L\subset M\) and \(L^{\prime}\subset M^{\prime}\) be regular submanifolds such that \(\phi(L^{\prime})\subset L\) and the restriction \(\phi:L^{\prime}\to L\) is also a smooth submersion. Then (2.8) has continuous extensions \[\phi_{*}:I^{m}_{\mathrm{c}}(M^{\prime},L^{\prime};\Omega_{\mathrm{fiber}}) \to I^{m+l/2-k/4}_{\mathrm{c}}(M,L)\quad(m\in\mathbb{R})\;, \tag{4.22}\] where \(k=\dim M^{\prime}-\dim M\) and \(l=\dim L^{\prime}-\dim L\)[36, Theorem 5.3.6], [25, Proposition 6.7.2]. Taking inductive limits, we get a continuous linear map \[\phi_{*}:I_{\mathrm{c}}(M^{\prime},L^{\prime};\Omega_{\mathrm{fiber}})\to I_{ \mathrm{c}}(M,L)\;, \tag{4.23}\] which is a restriction of (2.9). If \(\phi\) is a local diffeomorphism, then (2.8) also has continuous extensions \[\phi_{*}:I^{(s)}_{\mathrm{c}}(M^{\prime},L^{\prime})\to I^{(s)}_{\mathrm{c}}( M,L)\quad(s\in\mathbb{R})\;. \tag{4.24}\] ### Pseudodifferential operators This type of operators is the main application of conormal distributions (see e.g. [37, 15, 26, 36]). #### 4.8.1. Case of compact manifolds Suppose first that \(M\) is compact. The filtered algebra and \(C^{\infty}(M^{2})\)-module of pseudodifferential operators, \(\Psi(M)\), consists of the continuous endomorphisms \(A\) of \(C^{\infty}(M)\) with Schwartz kernel \(K_{A}\in I(M^{2},\Delta)\), where \(\Delta\) is the diagonal. In fact, by the Schwartz kernel theorem, we may consider \(\Psi(M)\equiv I(M^{2},\Delta)\). It is filtered by the symbol order, \(\Psi^{m}(M)\equiv I^{m}(M^{2},\Delta)\) (\(m\in\mathbb{R}\)), and \(\Psi^{-\infty}(M)\equiv I^{-\infty}(M^{2},\Delta)\) consists of the smoothing operators. The analogs of (2.20) and (2.21) hold true using \(\Psi^{s}(M)\) instead of \(\operatorname{Diff}^{s}(M)\) for any \(s\in\mathbb{R}\). In this way, \(\Psi(M)\) also becomes a LCHS satisfying the properties indicated in Sections 4.2.1 and 4.3.2. Taking the \(C^{\infty}(M^{2})\)-tensor product of \(\Psi(M)\) with \(C^{\infty}(M;F\boxtimes E^{*})\), we get \(\Psi(M;E,F)\) (or \(\Psi(M;E)\) if \(E=F\)) as in Section 2.7, satisfying the analog of (2.12). \(\Psi(M;E)\) is preserved by taking transposes, and therefore any \(A\in\Psi(M;E)\) defines a continuous endomorphism \(A\) of \(C^{-\infty}(M;E)\) (Section 2.5), and \(\operatorname{sing\,supp}Au\subset\operatorname{sing\,supp}u\) for all \(u\in C^{-\infty}(M;E)\) (_pseudolocality_). 
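For instance, \(\operatorname{Diff}^{m}(M;E)\subset\Psi^{m}(M;E)\): on an open set of \(\mathbb{R}^{n}\), the Schwartz kernel of \(A=\sum_{|\alpha|\le m}a_{\alpha}\,\partial^{\alpha}\) is \[K_{A}(x,y)=\sum_{|\alpha|\le m}a_{\alpha}(x)\,(\partial^{\alpha}\delta)(x-y)\;,\] in the sense that \(Au(x)=\int K_{A}(x,y)\,u(y)\,dy\); this is a combination of derivatives of Dirac sections at \(\Delta\) (Section 4.4), and therefore conormal at \(\Delta\) by (4.16) and (4.18).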
The concepts of symbols and ellipticity can be extended to pseudodifferential operators via the Schwartz kernels (see e.g. [24]), and the symbol map is multiplicative. If \(A\in\Psi^{m}(M;E)\), it defines a bounded operator \(A:H^{s+m}(M;E)\to H^{s}(M;E)\). This can be considered as a closable densely defined operator in \(H^{s}(M;E)\), like in the case of differential operators (Section 2.10). In the case \(s=0\), the adjoint of \(A\) is induced by the formal adjoint \(A^{*}\in\Psi^{m}(M;E)\). #### 4.8.2. Extension to non-compact manifolds If \(M\) is not assumed to be compact, \(\Psi(M)\) is similarly defined with the change that any \(A\in\Psi^{m}(M)\) defines continuous linear maps \(A:C^{\pm\infty}_{\mathrm{c}}(M)\to C^{\pm\infty}(M)\) and \(A:H^{s+m}_{\mathrm{c}}(M)\to H^{s}_{\mathrm{loc}}(M)\). Thus \(\Psi(M)\) is not an algebra in this case. However, if \(A\in\Psi^{m}(M)\) is properly supported (both factor projections \(M^{2}\to M\) restrict to proper maps \(\operatorname{supp}K_{A}\to M\)), then it defines a continuous endomorphism \(A\) of \(C^{-\infty}_{\mathrm{c}}(M)\); in this sense, properly supported pseudodifferential operators can be composed. Pseudodifferential operators are properly supported modulo \(\Psi^{-\infty}(M)\). Like in the compact case, \(\Psi(M)\equiv I(M^{2},\Delta)\) becomes a filtered \(C^{\infty}(M^{2})\)-module and LCHS satisfying the properties indicated in Sections 4.2.2 and 4.3.3. In the setting of bounded geometry (Section 2.12.3), properly supported pseudodifferential operators with uniformly bounded symbols, and their uniform ellipticity, were studied in [19, 20]. ## 5. Dual-conormal distributions ### Dual-conormal distributions Consider the notation of Sections 4.2 and 4.3. #### 5.1.1. Case of compact manifolds Assume first that \(M\) is compact. The space of _dual-conormal distributions_ of \(M\) at \(L\) (or of \((M,L)\)) is the strong dual space [25, Chapter 6] \[I^{\prime}(M,L)=I(M,L;\Omega)^{\prime}\;. \tag{5.1}\] **Corollary 5.1**.: \(I^{\prime}(M,L)\) _is a complete Montel space._ Proof.: Since \(I(M,L;\Omega)\) is bornological (the version of Corollary 4.2 with \(\Omega M\)), \(I^{\prime}(M,L)\) is complete [31, IV.6.1], [29, Corollary 6.1.18], [28, Theorem 13.2.13]. Since \(I(M,L;\Omega)\) is a Montel space (the version of Corollary 4.7 with \(\Omega M\)), \(I^{\prime}(M,L)\) is a Montel space [16, Proposition 3.9.9], [21, 6.27.2 (2)], [31, IV.5.9]. Let also \[I^{\prime\,(s)}(M,L)=I^{(-s)}(M,L;\Omega)^{\prime}\;,\quad I^{\prime\,m}(M,L)=I^{-m}(M,L;\Omega)^{\prime}\;. \tag{5.2}\] **Corollary 5.2**.: \(I^{\prime\,(s)}(M,L)\) _is bornological and barreled._ Proof.: Since \(I^{(-s)}(M,L;\Omega)\) is a reflexive Frechet space (the version of Proposition 4.1 with \(\Omega M\)), \(I^{\prime\,(s)}(M,L)\) is bornological [31, Corollary 1 of IV.6.6], and therefore barreled [31, IV.6.6].
Transposing the versions of (4.4) and (4.11) with \(\Omega M\), we get continuous restrictions, for \(s^{\prime}<s\) and \(m<m^{\prime}\), \[I^{\prime\,(s)}(M,L)\to I^{\prime\,(s^{\prime})}(M,L)\;,\quad I^{\prime\,m}(M,L)\to I^{\prime\,m^{\prime}}(M,L)\;.\] These maps form projective systems, giving rise to \(\varprojlim I^{\prime\,(s)}(M,L)\) as \(s\uparrow\infty\) and \(\varprojlim I^{\prime\,m}(M,L)\) as \(m\downarrow-\infty\). Transposing the versions of (4.5) and (4.12) with \(\Omega M\), we get continuous inclusions \[C^{-\infty}(M)\supset I^{\prime}(M,L)\supset C^{\infty}(M)\;, \tag{5.3}\] and, for all \(m\in\mathbb{R}\) and \(\epsilon>0\), continuous restrictions \[I^{\prime\,(-m+n/4-\epsilon)}(M,L)\gets I^{\prime\,m}(M,L)\gets I^{\prime\,(-m+n/4+\epsilon)}(M,L)\;. \tag{5.4}\] Thus \[\varprojlim I^{\prime\,(s)}(M,L)\equiv\varprojlim I^{\prime\,m}(M,L)\;. \tag{5.5}\] **Corollary 5.3**.: \(I^{\prime}(M,L)\equiv\varprojlim I^{\prime\,(s)}(M,L)\)_._ Proof.: This holds because \(I(M,L)\) is regular (Corollary 4.7) [23, Lemma 1]. Alternatively, the following argument can be used. \(I^{\prime}(M,L)\) is a Montel space (Corollary 5.1); in particular, it is barreled, and therefore a Mackey space [31, IV.3.4]. On the other hand, every \(I^{\prime\,(s)}(M,L)\) is bornological (Corollary 5.2), and therefore a Mackey space [16, Proposition 3.7.2], [31, IV.3.4], [28, Theorem 13.2.10]. So the result follows applying [31, Remark of IV.4.5]. #### 5.1.2. Extension to non-compact manifolds If \(M\) is not supposed to be compact, we can similarly define the space \(I^{\prime}_{K}(M,L)\) of dual-conormal distributions supported in any compact \(K\subset M\). Then define the LCHSs \(I^{\prime}_{\rm c}(M,L)=\bigcup_{K}I^{\prime}_{K}(M,L)\) like in (2.2), and \(I^{\prime}(M,L)\) like in (4.6) using \(I^{\prime}_{\rm c}(M,L)\) instead of \(I_{\rm c}(M,L)\). These spaces satisfy a version of (5.1), interchanging arbitrary/compact support like in (2.4). Given a smooth partition of unity \(\{f_{j}\}\) so that every \(K_{j}:=\operatorname{supp}f_{j}\) is compact, the multiplication by the functions \(f_{j}\) defines closed TVS-embeddings \[I^{\prime}(M,L)\to\prod_{j}I^{\prime}_{K_{j}}(M,L)\;,\quad I^{\prime}_{\rm c}(M,L)\to\bigoplus_{j}I^{\prime}_{K_{j}}(M,L)\;. \tag{5.6}\] Then, by the extension of Corollary 4.7 for \(I_{\rm c}(M,L;\Omega)\), the obvious extension of Corollary 5.2 for every \(I^{\prime}_{K_{j}}(M,L)\), (5.1) and (5.6), we get an extension of Corollary 5.1. Similarly, we can define the spaces \(I^{\prime\,(s)}_{\cdot/{\rm c}}(M,L)\) and \(I^{\prime\,m}_{\cdot/{\rm c}}(M,L)\), which satisfy (5.2) interchanging the support condition. Since \(I_{\rm c}(M,L)\) is an acyclic Montel space (Section 4.3.3), there are extensions of Corollaries 5.1 and 5.3 for \(I^{\prime}(M,L)\). ### Differential operators on dual-conormal distributional sections For any \(A\in\operatorname{Diff}(M;E)\), consider \(A^{t}\in\operatorname{Diff}(M;E^{*}\otimes\Omega)\). The transpose of \(A^{t}\) on \(I_{{\rm c}/\cdot}(M,L;E^{*}\otimes\Omega)\) is a continuous endomorphism \(A\) of \(I^{\prime}_{\cdot/{\rm c}}(M,L;E)\), which is a restriction of the map \(A\) on \(C^{-\infty}(M;E)\) (Section 2.7).
By (4.16), if \(A\in\operatorname{Diff}^{m}(M;E)\), we get induced continuous linear maps \[A:I^{\prime\,(s)}_{\cdot/{\rm c}}(M,L;E)\to I^{\prime\,(s-m)}_{\cdot/{\rm c}}(M,L;E)\;. \tag{5.7}\] If \(A\in\operatorname{Diff}(M,L;E)\), the transpose of \(A^{t}\) on \(I^{(-s)}_{\cdot/{\rm c}}(M,L;E^{*}\otimes\Omega)\) is a continuous endomorphism \(A\) of \(I^{\prime\,(s)}_{\cdot/{\rm c}}(M,L;E)\). ### Pull-back of dual-conormal distributions If the conditions of Section 4.7 hold, transposing the versions of (4.22) and (4.23) with \(E^{*}\otimes\Omega M\) and \(-m\), we get continuous linear pull-back maps \[\phi^{*}:I^{\prime\,m}(M,L)\to I^{\prime\,m+l/2-k/4}(M^{\prime},L^{\prime})\quad(m\in\mathbb{R})\;, \tag{5.8}\] \[\phi^{*}:I^{\prime}(M,L)\to I^{\prime}(M^{\prime},L^{\prime})\;. \tag{5.9}\] The map (5.9) is an extension of (2.7), a restriction of (2.10) and the projective limit of the maps (5.8). If \(\phi\) is a local diffeomorphism, we similarly get from (4.24) continuous linear pull-back maps \[\phi^{*}:I^{\prime\,(s)}(M,L)\to I^{\prime\,(s)}(M^{\prime},L^{\prime})\quad(s\in\mathbb{R})\;. \tag{5.10}\] ### Push-forward of dual-conormal distributions With the notation of Section 4.6, if \(\phi\) is a submersion, transposing the versions of (4.19) and (4.20) with \(\Omega M\) and \(-m\), we get continuous linear push-forward maps \[\phi_{*}:I^{\prime\,m}_{\rm c}(M^{\prime},L^{\prime};\Omega_{\rm fiber})\to I^{\prime\,m-k/4}_{\rm c}(M,L)\quad(m\in\mathbb{R})\;, \tag{5.11}\] \[\phi_{*}:I^{\prime}_{\rm c}(M^{\prime},L^{\prime};\Omega_{\rm fiber})\to I^{\prime}_{\rm c}(M,L)\;. \tag{5.12}\] The map (5.12) is an extension of (2.8), a restriction of (2.9) and the projective limit of the maps (5.11). If \(\phi\) is a local diffeomorphism, we similarly get from (4.21) continuous linear push-forward maps \[\phi_{*}:I^{\prime\,(s)}(M^{\prime},L^{\prime})\to I^{\prime\,(s)}(M,L)\quad(s\in\mathbb{R})\;. \tag{5.13}\] ## 6. Conormal distributions at the boundary For the sake of simplicity, in this section and in Sections 7 and 8, we only consider the case of compact manifolds unless otherwise stated. But the concepts, notation and some of the results can be extended to the non-compact case like in Sections 4.2.2, 4.3.3 and 5.1.2, using arbitrary/compact support conditions. Such extensions to non-compact manifolds may be used without further comment. ### Some notions of b-geometry R. Melrose introduced b-calculus, a way to extend calculus to manifolds with boundary [24, 25]. We will only use a part of it called small b-calculus. Let \(M\) be a compact (smooth) \(n\)-manifold with boundary; its interior is denoted by \(\mathring{M}\). There exists a function \(x\in C^{\infty}(M)\) so that \(x\geq 0\), \(\partial M=\{x=0\}\) (i.e., \(x^{-1}(0)\)) and \(dx\neq 0\) on \(\partial M\), which is called a _boundary defining function_. Let \({}_{+}N\partial M\subset N\partial M\) be the inward-pointing subbundle of the normal bundle to the boundary. There is a unique trivialization \(\nu\in C^{\infty}(\partial M;{}_{+}N\partial M)\) of \({}_{+}N\partial M\) so that \(dx(\nu)=1\). Take a collar neighborhood \(T\equiv[0,\epsilon_{0})_{x}\times\partial M\) of \(\partial M\), whose projection \(\varpi:T\to\partial M\) is the second factor projection. (In a product expression, every factor projection may be indicated as subscript of the corresponding factor.) 
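For instance (a standard example), for the closed unit ball \(M=\overline{\mathbb{B}}{}^{n}\subset\mathbb{R}^{n}\), the function \[x(p)=1-|p|^{2}\] is a boundary defining function: \(x\geq 0\), \(\{x=0\}=\mathbb{S}^{n-1}=\partial M\) and \(dx=-2\sum_{i}p^{i}\,dp^{i}\neq 0\) on \(\partial M\); polar coordinates give a collar \(T\equiv[0,\epsilon_{0})_{x}\times\mathbb{S}^{n-1}\). Boundary defining functions are far from unique: if \(x\) is one and \(0<a\in C^{\infty}(M)\), then \(ax\) is another one, since \(d(ax)=a\,dx\) on \(\partial M\).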
Given coordinates \(y=(y^{1},\dots,y^{n-1})\) on some open \(V\subset\partial M\), we get via \(\varpi\) coordinates \((x,y)=(x,y^{1},\dots,y^{n-1})\) adapted (to \(\partial M\)) on the open subset \(U\equiv[0,\epsilon_{0})\times V\subset M\). There are vector bundles over \(M\), \({}^{\rm b}TM\) and \({}^{\rm b}T^{*}M\), called _b-tangent_ and _b-cotangent_ bundles, which have the same restrictions as \(TM\) and \(T^{*}M\) to \(\mathring{M}\), and such that \(x\partial_{x},\partial_{y^{1}},\dots,\partial_{y^{n-1}}\) and \(x^{-1}dx,dy^{1},\dots,dy^{n-1}\) extend to smooth local frames around boundary points. This gives rise to versions of induced vector bundles, like \({}^{\rm b}\Omega^{s}M:=\Omega^{s}({}^{\rm b}TM)\) (\(s\in\mathbb{R}\)) and \({}^{\rm b}\Omega M:={}^{\rm b}\Omega^{1}M\). Clearly, \[C^{\infty}(M;\Omega^{s})\equiv x^{s}C^{\infty}(M;{}^{\rm b}\Omega^{s})\;. \tag{6.1}\] Thus the integration operator \(\int_{M}\) is defined on \(xC^{\infty}(M;{}^{\rm b}\Omega)\), and induces a pairing between \(C^{\infty}(M)\) and \(xC^{\infty}(M;{}^{\rm b}\Omega)\). At the points of \(\partial M\), the local section \(x\partial_{x}\) is independent of the choice of adapted local coordinates, spanning a trivial line subbundle \({}^{\rm b}N\partial M\subset{}^{\rm b}T_{\partial M}M\) with \(T\partial M={}^{\rm b}T_{\partial M}M/{}^{\rm b}N\partial M\). So \({}^{\rm b}\Omega^{s}_{\partial M}M\equiv\Omega^{s}\partial M\otimes\Omega^{s}({}^{\rm b}N\partial M)\), and a restriction map \(C^{\infty}(M;{}^{\rm b}\Omega^{s})\to C^{\infty}(\partial M;\Omega^{s})\) is locally given by \[u=a(x,y)\,\Big{|}\frac{dx}{x}dy\Big{|}^{s}\mapsto u|_{\partial M}=a(0,y)\,|dy|^{s}\;.\] A Euclidean structure \(g\) on \({}^{\rm b}TM\) is called a _b-metric_. Locally, \[g=a_{0}\Big{(}\frac{dx}{x}\Big{)}^{2}+2\sum_{j=1}^{n-1}a_{0j}\,\frac{dx}{x}\,dy^{j}+\sum_{j,k=1}^{n-1}a_{jk}\,dy^{j}\,dy^{k}\;,\] where \(a_{0}\), \(a_{0j}\) and \(a_{jk}\) are \(C^{\infty}\) functions, provided that \(g\) is positive definite. If moreover \(a_{0}=1+O(x^{2})\) and \(a_{0j}=O(x)\) as \(x\downarrow 0\), then \(g\) is called _exact_. In this case, the restriction of \(g\) to \(\mathring{T}\equiv(0,\epsilon_{0})\times\partial M\) is asymptotically cylindrical, and therefore \(g|_{\mathring{M}}\) is complete. This restriction is of bounded geometry if it is cylindrical around the boundary; i.e., \(g=(\frac{dx}{x})^{2}+h\) on \(\mathring{T}\) for (the pull-back via \(\varpi\) of) some Riemannian metric \(h\) on \(\partial M\), taking \(\epsilon_{0}\) small enough; i.e., \(a_{0}=1\) and \(a_{0j}=0\) using adapted local coordinates. ### Supported and extendible functions Let \(\breve{M}\) be any closed manifold containing \(M\) as a submanifold of dimension \(n\) (for instance, \(\breve{M}\) could be the double of \(M\)), and let \(M^{\prime}=\breve{M}\setminus\mathring{M}\), which is another compact \(n\)-submanifold with boundary of \(\breve{M}\), with \(\partial M^{\prime}=M\cap M^{\prime}=\partial M\). The concepts, notation and conventions of Section 2.4 have straightforward extensions to manifolds with boundary, like the Frechet space \(C^{\infty}(M)\). Its elements are called _extendible functions_ because the continuous linear restriction map \[R:C^{\infty}(\breve{M})\to C^{\infty}(M) \tag{6.2}\] is surjective; in fact, there is a continuous linear extension map \(E:C^{\infty}(M)\to C^{\infty}(\breve{M})\) [34]. 
Since \(C^{\infty}(\breve{M})\) and \(C^{\infty}(M)\) are Frechet spaces, the map (6.2) is open by the open mapping theorem, and therefore it is a surjective topological homomorphism. Its null space is \(C^{\infty}_{M^{\prime}}(\breve{M})\). The Frechet space of _supported_ functions is the closed subspace of the smooth functions on \(M\) that vanish to all orders at the points of \(\partial M\), \[\dot{C}^{\infty}(M)=\bigcap_{m\geq 0}x^{m}C^{\infty}(M)\subset C^{\infty}(M)\;. \tag{6.3}\] The extension by zero realizes \(\dot{C}^{\infty}(M)\) as the closed subspace of functions on \(\breve{M}\) supported in \(M\), \[\dot{C}^{\infty}(M)\equiv C^{\infty}_{M}(\breve{M})\subset C^{\infty}(\breve{M})\;. \tag{6.4}\] By (6.3), \[x^{m}\dot{C}^{\infty}(M)=\dot{C}^{\infty}(M)\quad(m\in\mathbb{R})\;, \tag{6.5}\] and therefore, by (6.1), \[\dot{C}^{\infty}(M;{}^{\mathrm{b}}\Omega^{s})\equiv\dot{C}^{\infty}(M;\Omega^{s})\quad(s\in\mathbb{R})\;. \tag{6.6}\] We can similarly define Banach spaces \(C^{k}(M)\) and \(\dot{C}^{k}(M)\) (\(k\in\mathbb{N}_{0}\)) satisfying the analogs of (6.2)-(6.4), which in turn yield analogs of the first inclusions of (2.6), obtaining \(C^{\infty}(M)=\bigcap_{k}C^{k}(M)\) and \(\dot{C}^{\infty}(M)=\bigcap_{k}\dot{C}^{k}(M)\). ### Supported and extendible distributions The spaces of _supported_ and _extendible_ distributions on \(M\) are \[\dot{C}^{-\infty}(M)=C^{\infty}(M;\Omega)^{\prime}\;,\quad C^{-\infty}(M)=\dot{C}^{\infty}(M;\Omega)^{\prime}\;.\] Transposing the version of (6.2) with \(\Omega M\), we get [25, Proposition 3.2.1] \[\dot{C}^{-\infty}(M)\equiv C_{M}^{-\infty}(\breve{M})\subset C^{-\infty}(\breve{M})\;. \tag{6.7}\] Similarly, (6.4) and (6.3) give rise to continuous linear restriction maps \[R:C^{-\infty}(\breve{M})\to C^{-\infty}(M)\;, \tag{6.8}\] \[R:\dot{C}^{-\infty}(M)\to C^{-\infty}(M)\;, \tag{6.9}\] which are surjective by the Hahn-Banach theorem. Their null spaces are \(C_{M^{\prime}}^{-\infty}(\breve{M})=\dot{C}^{-\infty}(M^{\prime})\) and \(\dot{C}_{\partial M}^{-\infty}(M)\) [25, Proposition 3.3.1], respectively. According to (6.7), the map (6.9) is a restriction of (6.8). As a consequence of (6.7), there are dense continuous inclusions [25, Lemma 3.2.1] \[C_{c}^{\infty}(\breve{M})\subset\dot{C}^{\infty}(M)\subset C^{\infty}(M)\subset\dot{C}^{-\infty}(M)\;, \tag{6.10}\] the last one given by the integration pairing between \(C^{\infty}(M)\) and \(C^{\infty}(M;\Omega)\). The restriction of this pairing to \(\dot{C}^{\infty}(M;\Omega)\) induces a dense continuous inclusion \[C^{\infty}(M)\subset C^{-\infty}(M)\;. \tag{6.11}\] Moreover (6.9) is the identity map on \(C^{\infty}(M)\). As before, from (6.5) and (6.6), we get \[x^{m}C^{-\infty}(M)=C^{-\infty}(M)\quad(m\in\mathbb{R})\;, \tag{6.12}\] \[C^{-\infty}(M;{}^{\mathrm{b}}\Omega^{s})\equiv C^{-\infty}(M;\Omega^{s})\quad(s\in\mathbb{R})\;. \tag{6.13}\] The Banach spaces \(C^{\prime-k}(M)\) and \(\dot{C}^{\prime-k}(M)\) (\(k\in\mathbb{N}_{0}\)) are similarly defined and satisfy the analogs of (6.7)-(6.13). These spaces satisfy the analogs of the second inclusions of (2.6), obtaining \(\bigcup_{k}C^{\prime-k}(M)=C^{-\infty}(M)\) and \(\bigcup_{k}\dot{C}^{\prime-k}(M)=\dot{C}^{-\infty}(M)\). ### Supported and extendible Sobolev spaces The _supported_ Sobolev space of order \(s\in\mathbb{R}\) is the closed subspace of the elements supported in \(M\), \[\dot{H}^{s}(M)=H^{s}_{M}(\breve{M})\subset H^{s}(\breve{M})\;. 
\tag{6.14}\] On the other hand, using the map (6.8), the _extendible_ Sobolev space of order \(s\) is \(H^{s}(M)=R(H^{s}(\breve{M}))\) with the inductive topology given by \[R:H^{s}(\breve{M})\to H^{s}(M)\;; \tag{6.15}\] i.e., this is a surjective topological homomorphism. Its null space is \(H^{s}_{M^{\prime}}(\breve{M})\). The analogs of (2.16)-(2.19) hold true in this setting using \(\dot{C}^{\pm\infty}(M)\) and \(C^{\pm\infty}(M)\). Furthermore, the analogs of (2.16) are also compact operators because (6.14) is a closed embedding and (6.15) a surjective topological homomorphism. The following properties are satisfied [25, Proposition 3.5.1]. \(C^{\infty}(M)\) is dense in \(H^{s}(M)\); we have \[\dot{H}^{s}(M)\equiv H^{-s}(M;\Omega)^{\prime}\;,\quad H^{s}(M)\equiv\dot{H}^{-s}(M;\Omega)^{\prime}\;, \tag{6.16}\] and the map (6.9) has a continuous restriction \[R:\dot{H}^{s}(M)\to H^{s}(M)\;, \tag{6.17}\] which is surjective if \(s\leq 1/2\), and injective if \(s\geq-1/2\). In particular, \(\dot{H}^{0}(M)\equiv H^{0}(M)\equiv L^{2}(M)\). The null space of (6.17) is \(\dot{H}^{s}_{\partial M}(M)\). Since \(\dot{H}^{s}(M)\) and \(H^{s}(M)\) form compact spectra of Hilbertian spaces, we get the following result. **Proposition 6.1**.: \(\dot{C}^{-\infty}(M)\) _and \(C^{-\infty}(M)\) are barreled, ultrabornological, webbed, acyclic DF Montel spaces, and therefore complete, boundedly retractive and reflexive._ **Proposition 6.2**.: _The maps (6.8) and (6.9) are surjective topological homomorphisms._ Proof.: We already know that these maps are linear, continuous and surjective. Since \(C^{-\infty}(\breve{M})\) is webbed, and \(\dot{C}^{-\infty}(M)\) and \(C^{-\infty}(M)\) are webbed and ultrabornological (Proposition 6.1), the stated maps are also open by the open mapping theorem [22, 7.35.3 (1)], [28, Exercise 14.202 (a)], [7, Section IV.5], [5]. ### The space \(\dot{C}^{-\infty}_{\partial M}(M)\) Proposition 6.2 means that the following are short exact sequences in the category of continuous linear maps between LCSs: \[0\to\dot{C}^{-\infty}(M^{\prime})\xrightarrow{\iota}C^{-\infty}(\breve{M})\xrightarrow{R}C^{-\infty}(M)\to 0\;,\] \[0\to\dot{C}^{-\infty}_{\partial M}(M)\xrightarrow{\iota}\dot{C}^{-\infty}(M)\xrightarrow{R}C^{-\infty}(M)\to 0\;. \tag{6.18}\] From (6.7), we get \[\dot{C}^{-\infty}_{\partial M}(M)\equiv C^{-\infty}_{\partial M}(\breve{M})\subset C^{-\infty}(\breve{M})\;. \tag{6.19}\] The analogs of the second inclusion of (2.6), (2.16) and (2.18) for the spaces \(\dot{C}^{\prime\,-k}(M)\) and \(\dot{H}^{s}(M)\) yield corresponding analogs for the spaces \(\dot{C}^{\prime\,-k}_{\partial M}(M)\) and \(\dot{H}^{s}_{\partial M}(M)\). Thus the spaces \(\dot{C}^{\prime\,-k}_{\partial M}(M)\) and \(\dot{H}^{s}_{\partial M}(M)\) form spectra with the same union; the spectrum of spaces \(\dot{H}^{s}_{\partial M}(M)\) is compact. **Proposition 6.3**.: \(\dot{C}^{-\infty}_{\partial M}(M)\) _is a limit subspace of the LF-space \(\dot{C}^{-\infty}(M)\)._ Proof.: By Propositions 6.1 and 6.2, \(\dot{C}^{-\infty}(M)/\dot{C}^{-\infty}_{\partial M}(M)\equiv C^{-\infty}(M)\) is acyclic, which is equivalent to the statement. The following analog of Proposition 6.1 holds true with the same arguments, applying Proposition 6.3 and using that the Hilbertian spaces \(\dot{H}^{s}_{\partial M}(M)\) form a compact spectrum. 
**Corollary 6.4**.: \(\dot{C}^{-\infty}_{\partial M}(M)\) _is a barreled, ultrabornological, webbed, acyclic DF Montel space, and therefore complete, boundedly retractive and reflexive._ A description of \(\dot{C}^{-\infty}_{\partial M}(M)\) will be indicated in Remark 7.7. ### Differential operators acting on \(C^{-\infty}(M)\) and \(\dot{C}^{-\infty}(M)\) The notions of Section 2.7 also have straightforward extensions to manifolds with boundary. The action of any \(A\in\operatorname{Diff}(M)\) on \(C^{\infty}(M)\) preserves \(\dot{C}^{\infty}(M)\). Taking the version of this property with \(\Omega M\), we get that \(A^{t}\) acts on \(\dot{C}^{\infty}(M;\Omega)\) and \(C^{\infty}(M;\Omega)\). Using the transpose again, we get extended continuous actions of \(A\) on \(C^{-\infty}(M)\) and \(\dot{C}^{-\infty}(M)\). They fit into commutative diagrams \[\begin{CD}\dot{C}^{-\infty}(M)@>{A}>>\dot{C}^{-\infty}(M)\\ @V{R}VV@VV{R}V\\ C^{-\infty}(M)@>{A}>>C^{-\infty}(M)\end{CD}\qquad\begin{CD}C^{-\infty}(M)@>{A}>>C^{-\infty}(M)\\ @A{\iota}AA@AA{\iota}A\\ C^{\infty}(M)@>{A}>>C^{\infty}(M)\end{CD} \tag{6.20}\] However the analogous diagram \[\begin{CD}\dot{C}^{-\infty}(M)@>{A}>>\dot{C}^{-\infty}(M)\\ @A{\iota}AA@AA{\iota}A\\ C^{\infty}(M)@>{A}>>C^{\infty}(M)\end{CD} \tag{6.21}\] may not be commutative. Let us use the notation \(u\mapsto u_{\mathrm{c}}\) for the injection \(C^{\infty}(M)\subset\dot{C}^{-\infty}(M)\) (see (6.10)). (Following Melrose, the subscript "\(\mathrm{c}\)" stands for "cutoff at the boundary.") We have \(A(u_{\mathrm{c}})-(Au)_{\mathrm{c}}\in\dot{C}^{-\infty}_{\partial M}(M)\) for all \(u\in C^{\infty}(M)\) [25, Eq. (3.4.8)]. For instance, if \(M=[x_{0},x_{1}]\), where \(x_{0}<x_{1}\) in \(\mathbb{R}\), and \(A=\partial_{x}\), integration by parts gives \[\partial_{x}(u_{\mathrm{c}})-(\partial_{x}u)_{\mathrm{c}}=u(x_{1})\,\delta_{x_{1}}-u(x_{0})\,\delta_{x_{0}}\] for all \(u\in C^{\infty}([x_{0},x_{1}])\), using the Dirac mass at \(x_{j}\) (\(j=0,1\)). Using (6.2) and its version for vector fields, we get a surjective restriction map \[\operatorname{Diff}(\breve{M})\to\operatorname{Diff}(M)\;,\quad\breve{A}\mapsto\breve{A}|_{M}\;. \tag{6.22}\] For any \(\breve{A}\in\operatorname{Diff}(\breve{M})\) with \(\breve{A}|_{M}=A\), we have the commutative diagrams \[\begin{CD}C^{-\infty}(\breve{M})@>{\breve{A}}>>C^{-\infty}(\breve{M})\\ @V{R}VV@VV{R}V\\ C^{-\infty}(M)@>{A}>>C^{-\infty}(M)\end{CD}\qquad\begin{CD}C^{-\infty}(\breve{M})@>{\breve{A}}>>C^{-\infty}(\breve{M})\\ @A{\iota}AA@AA{\iota}A\\ \dot{C}^{-\infty}(M)@>{A}>>\dot{C}^{-\infty}(M)\end{CD} \tag{6.23}\] where the left-hand side square extends the left-hand side square of (6.20). If \(A\in\operatorname{Diff}^{m}(M)\) (\(m\in\mathbb{N}_{0}\)), its actions on \(\dot{C}^{-\infty}(M)\) and \(C^{-\infty}(M)\) define continuous linear maps, \[A:\dot{H}^{s}(M)\to\dot{H}^{s-m}(M)\;,\quad A:H^{s}(M)\to H^{s-m}(M)\;. \tag{6.24}\] The maps (6.17) and (6.24) fit into a commutative diagram given by the left-hand side square of (6.20). ### Differential operators tangent to the boundary The concepts of Section 4 can be generalized to the case with boundary when \(L=\partial M\) [25, Chapter 6] (see also [24, Section 4.9]), giving rise to the Lie subalgebra and \(C^{\infty}(M)\)-submodule \(\mathfrak{X}_{\mathrm{b}}(M)\subset\mathfrak{X}(M)\) of vector fields tangent to \(\partial M\), called _b-vector fields_. There is a canonical identity \(\mathfrak{X}_{\mathrm{b}}(M)\equiv C^{\infty}(M;{}^{\mathrm{b}}TM)\). 
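In adapted local coordinates \((x,y)\) (Section 6.1), this identity can be checked directly (a standard verification): a local vector field \[X=a_{0}\,\partial_{x}+\sum_{j=1}^{n-1}a_{j}\,\partial_{y^{j}}\] is tangent to \(\partial M=\{x=0\}\) exactly when \(a_{0}\) vanishes on \(\{x=0\}\), i.e., \(a_{0}\in xC^{\infty}\); hence \(\mathfrak{X}_{\mathrm{b}}(M)\) is locally spanned over \(C^{\infty}\) by \(x\partial_{x},\partial_{y^{1}},\dots,\partial_{y^{n-1}}\), which is precisely the local frame of \({}^{\mathrm{b}}TM\) described in Section 6.1.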
Using \(\mathfrak{X}_{\mathrm{b}}(M)\) like in Section 2.7, we get the filtered \(C^{\infty}(M)\)-submodule and filtered subalgebra \(\operatorname{Diff}_{\mathrm{b}}(M)\subset\operatorname{Diff}(M)\) of _b-differential operators_. It consists of the operators \(A\in\operatorname{Diff}(M)\) such that (6.21) is commutative [25, Exercise 3.4.20]. The extension of \(\operatorname{Diff}_{\mathrm{b}}(M)\) to arbitrary vector bundles is closed under taking transposes and formal adjoints. Clearly, the restriction map (6.22) satisfies \[\operatorname{Diff}(\breve{M},\partial M)|_{M}=\operatorname{Diff}_{\mathrm{b}}(M)\;. \tag{6.25}\] For all \(a\in\mathbb{R}\) and \(k\in\mathbb{Z}\), we have [25, Eqs. (4.2.7) and (4.2.8)] \[\operatorname{Diff}_{\mathrm{b}}^{k}(M)\,x^{a}=x^{a}\operatorname{Diff}_{\mathrm{b}}^{k}(M)\;. \tag{6.26}\] Since \(\operatorname{Diff}(M)\) is spanned by \(\partial_{x}\) and \(\operatorname{Diff}_{\mathrm{b}}(M)\) as an algebra, it follows that \[\operatorname{Diff}^{k}(M)\,x^{a}\subset x^{a-k}\operatorname{Diff}^{k}(M)\;. \tag{6.27}\] ### Conormal distributions at the boundary The spaces of _supported_ and _extendible_ conormal distributions at the boundary of Sobolev order \(s\in\mathbb{R}\) are the \(C^{\infty}(M)\)-modules and LCSs \[\dot{\mathcal{A}}^{(s)}(M)=\{\,u\in\dot{C}^{-\infty}(M)\mid\operatorname{Diff}_{\mathrm{b}}(M)\,u\subset\dot{H}^{s}(M)\,\}\;,\] \[\mathcal{A}^{(s)}(M)=\{\,u\in C^{-\infty}(M)\mid\operatorname{Diff}_{\mathrm{b}}(M)\,u\subset H^{s}(M)\,\}\;,\] with the projective topologies given by the maps \(P:\dot{\mathcal{A}}^{(s)}(M)\to\dot{H}^{s}(M)\) and \(P:\mathcal{A}^{(s)}(M)\to H^{s}(M)\) (\(P\in\operatorname{Diff}_{\mathrm{b}}(M)\)). They satisfy the analogs of the continuous inclusions (4.4), giving rise to the filtered \(C^{\infty}(M)\)-modules and LF-spaces of _supported_ and _extendible_ conormal distributions at the boundary, \[\dot{\mathcal{A}}(M)=\bigcup_{s}\dot{\mathcal{A}}^{(s)}(M)\;,\quad\mathcal{A}(M)=\bigcup_{s}\mathcal{A}^{(s)}(M)\;. \tag{6.28}\] By definition, there are continuous inclusions \[\dot{\mathcal{A}}(M)\subset\dot{C}^{-\infty}(M)\;,\quad\mathcal{A}(M)\subset C^{-\infty}(M)\;. \tag{6.29}\] Thus \(\dot{\mathcal{A}}(M)\) and \(\mathcal{A}(M)\) are Hausdorff. The following analogs of Propositions 3.1 and 4.1 hold true with formally the same proofs. **Proposition 6.5**.: \(\dot{\mathcal{A}}^{(s)}(M)\) _and \(\mathcal{A}^{(s)}(M)\) are totally reflexive Frechet spaces._ **Corollary 6.6**.: \(\dot{\mathcal{A}}(M)\) _and \(\mathcal{A}(M)\) are barreled, ultrabornological and webbed._ We have \[\bigcap_{s}\dot{\mathcal{A}}^{(s)}(M)=\dot{C}^{\infty}(M)\;,\quad\bigcap_{s}\mathcal{A}^{(s)}(M)=C^{\infty}(M)\;, \tag{6.30}\] obtaining dense continuous inclusions [25, Lemma 4.6.1 and the proof of Proposition 4.1.1] \[\dot{C}^{\infty}(M)\subset\dot{\mathcal{A}}(M)\;,\quad C^{\infty}(M)\subset\mathcal{A}(M)\;. \tag{6.31}\] By elliptic regularity, we also get continuous inclusions [25, Eq. (4.1.4)] \[\dot{\mathcal{A}}(M)|_{\mathring{M}},\mathcal{A}(M)\subset C^{\infty}(\mathring{M})\;. \tag{6.32}\] Using (6.9), (6.15) and the commutativity of the left-hand side square of (6.20), we get the continuous linear restriction maps \[R:\dot{\mathcal{A}}^{(s)}(M)\to\mathcal{A}^{(s)}(M)\;, \tag{6.33}\] which are surjective for \(s\leq 1/2\) and injective for \(s\geq-1/2\) because this is true for the maps (6.17). In particular, there is a dense continuous inclusion \[C^{\infty}(M)\subset\dot{\mathcal{A}}^{(0)}(M)\;. 
\tag{6.34}\] The maps (6.33) induce a continuous linear restriction map \[R:\dot{\mathcal{A}}(M)\to\mathcal{A}(M)\;. \tag{6.35}\] This map is surjective [25, Proposition 4.1.1], which is a consequence of the existence of corresponding partial extension maps [25, Section 4.4]; the precise statement is recalled in Proposition 6.26 for later use in this work. The maps (6.33) and (6.35) are restrictions of (6.9). The following analog of Proposition 6.2 holds true with formally the same proof, using that \(\dot{\mathcal{A}}(M)\) is webbed and \(\mathcal{A}(M)\) ultrabornological (Corollary 6.6). **Proposition 6.7**.: _The map (6.35) is a surjective topological homomorphism._ ### The spaces \(x^{m}L^{\infty}(M)\) For \(m\in\mathbb{R}\), consider the weighted space \(x^{m}L^{\infty}(M)\) (Section 2.11). From (6.12) and since \(L^{\infty}(M)\subset C^{-\infty}(M)\), it follows that there is a continuous inclusion \[x^{m}L^{\infty}(M)\subset C^{-\infty}(M)\;.\] Moreover, for \(m^{\prime}<m\), from \(x^{m-m^{\prime}}\in L^{\infty}(M)\), we easily get a continuous inclusion \[x^{m}L^{\infty}(M)\subset x^{m^{\prime}}L^{\infty}(M)\;, \tag{6.36}\] **Proposition 6.8**.: _For \(m^{\prime}<m\), \(C^{\infty}_{\mathrm{c}}(\mathring{M})\) is dense in \(x^{m}L^{\infty}(M)\) with the topology of \(x^{m^{\prime}}L^{\infty}(M)\)._ Proof.: Given \(u\in x^{m}L^{\infty}(M)\) and \(\epsilon>0\), let \(B\) be the ball in \(x^{m^{\prime}}L^{\infty}(M)\) of center \(u\) and radius \(\epsilon\). Let \(S=\sup_{M}x^{m-m^{\prime}}>0\). Since \(C^{\infty}(M)\) is dense in \(L^{\infty}(M)\), there is some \(f\in C^{\infty}(M)\) so that \(|f-x^{-m}u|<\min\{\epsilon/2,\epsilon/S\}\) (Lebesgue-) almost everywhere. (Recall that the sets of Lebesgue measure zero are well-defined in any \(C^{1}\) manifold [12, Lemma 3.1.1].) There is some \(0<\delta<1\) so that \(\delta x^{-m}|u|<\epsilon/4\) almost everywhere. Take some \(\lambda\in C^{\infty}(\mathbb{R})\) so that \(\lambda\geq 0\), \(\lambda(r)\leq r^{m}\) if \(r>0\), \(\lambda(r)=0\) if \(r^{m-m^{\prime}}\leq\delta/2\), and \(\lambda(r)=r^{m}\) if \(r^{m-m^{\prime}}\geq\delta\). Let \(h=\lambda(x)f\in C^{\infty}_{\mathrm{c}}(\mathring{M})\). If \(x^{m-m^{\prime}}\leq\delta\), then, almost everywhere, \[x^{-m^{\prime}}|h-u| \leq\delta x^{-m}(|h|+|u|)\leq\delta(|f|+x^{-m}|u|)\] \[\leq\delta(|f-x^{-m}u|+2x^{-m}|u|)<\delta\Big{(}\frac{\epsilon}{ 2}+\frac{\epsilon}{2\delta}\Big{)}<\epsilon\;.\] If \(x^{m-m^{\prime}}\geq\delta\), then, almost everywhere, \[x^{-m^{\prime}}|h-u|=x^{m-m^{\prime}}|x^{-m}\lambda(x)f-x^{-m}u|\leq S|f-x^{- m}u|<\epsilon\;.\] Thus \(h\in B\cap C^{\infty}_{\mathrm{c}}(\mathring{M})\). ### Filtration of \(\mathcal{A}(M)\) by bounds For every \(m\in\mathbb{R}\), let \[\mathcal{A}^{m}(M)=\{\,u\in C^{-\infty}(M)\mid\operatorname{Diff}_{\mathrm{b} }(M)\,u\subset x^{m}L^{\infty}(M)\,\}\;.\] This is another \(C^{\infty}(M)\)-module and LCS, with the projective topology given by the maps \(P:\mathcal{A}^{m}(M)\to x^{m}L^{\infty}(M)\) (\(P\in\operatorname{Diff}_{\mathrm{b}}(M)\)). **Example 6.9** ([25, Exercises 4.2.23 and 4.2.24]).: Via the injection of \(\mathbb{R}^{l}\) into its stereographic compactification \(\mathbb{S}^{l}_{+}=\{\,x\in\mathbb{S}^{l}\mid\,x^{l+1}\geq 0\,\}\), the space \(\mathcal{A}^{-m}(\mathbb{S}^{l}_{+})\) corresponds to the symbol space \(S^{m}(\mathbb{R}^{l})\) (Section 3). Note that (6.36) yields a continuous inclusion \[\mathcal{A}^{m}(M)\subset\mathcal{A}^{m^{\prime}}(M)\quad(m^{\prime}<m)\;. 
\tag{6.37}\] Moreover there are continuous inclusions [25, Proof of Proposition 4.2.1] \[\mathcal{A}^{(s)}(M)\subset\mathcal{A}^{m}(M)\subset\mathcal{A}^{(\min\{m,0\})}(M)\quad(m<s-n/2-1)\;. \tag{6.38}\] Hence \[\mathcal{A}(M)=\bigcup_{m}\mathcal{A}^{m}(M)\;. \tag{6.39}\] Despite defining the same LF-space, the filtrations of \(\mathcal{A}(M)\) defined by the spaces \(\mathcal{A}^{(s)}(M)\) and \(\mathcal{A}^{m}(M)\) are not equivalent because, in contrast with (6.30), \[\dot{C}^{\infty}(M)=\bigcap_{m}\mathcal{A}^{m}(M)\;.\] Let \(\{\,P_{j}\mid j\in\mathbb{N}_{0}\,\}\) be a countable \(C^{\infty}(M)\)-spanning set of \(\mathrm{Diff}_{\mathrm{b}}(M)\). The topology of \(\mathcal{A}^{m}(M)\) can be described by the semi-norms \(\|\cdot\|_{k,m}\) (\(k\in\mathbb{N}_{0}\)) given by \[\|u\|_{k,m}=\|P_{k}u\|_{x^{m}L^{\infty}}=\operatorname*{ess\,sup}_{M}\big{|}x^{-m}P_{k}u\big{|}=\sup_{\mathring{M}}\big{|}x^{-m}P_{k}u\big{|}\;, \tag{6.40}\] using (6.32) in the last expression. From (2.1) and (6.32), we also get the continuous semi-norms \(\|\cdot\|_{K,k,m}\) (for any compact \(K\subset\mathring{M}\) and \(k\in\mathbb{N}_{0}\)) on \(\mathcal{A}^{m}(M)\) given by \[\|u\|_{K,k,m}=\sup_{K}|P_{k}u|\;. \tag{6.41}\] Other continuous semi-norms \(\|\cdot\|_{k,m}^{\prime}\) (\(k\in\mathbb{N}_{0}\)) on \(\mathcal{A}^{m}(M)\) are defined by \[\|u\|_{k,m}^{\prime}=\lim_{\epsilon\downarrow 0}\sup_{\{0<x<\epsilon\}}\big{|}x^{-m}P_{k}u\big{|}\;. \tag{6.42}\] The proofs of the following results are similar to the proofs of Propositions 3.2 and 3.3 and Corollaries 3.4 to 3.6, using (6.32). **Proposition 6.10**.: _The semi-norms (6.41) and (6.42) together describe the topology of \(\mathcal{A}^{m}(M)\)._ **Proposition 6.11**.: _For \(m,m^{\prime},k\in\mathbb{N}_{0}\), if \(m^{\prime}<m\), then \(\|\cdot\|_{k,m^{\prime}}^{\prime}=0\) on \(\mathcal{A}^{m}(M)\)._ **Corollary 6.12**.: _If \(m^{\prime}<m\), then the topologies of \(\mathcal{A}^{m^{\prime}}(M)\) and \(C^{\infty}(\mathring{M})\) coincide on \(\mathcal{A}^{m}(M)\). Therefore the topologies of \(\mathcal{A}(M)\) and \(C^{\infty}(\mathring{M})\) coincide on \(\mathcal{A}^{m}(M)\)._ **Corollary 6.13**.: _For \(m^{\prime}<m\), \(C^{\infty}_{\mathrm{c}}(\mathring{M})\) is dense in \(\mathcal{A}^{m}(M)\) with the topology of \(\mathcal{A}^{m^{\prime}}(M)\). Therefore \(C^{\infty}_{\mathrm{c}}(\mathring{M})\) is dense in \(\mathcal{A}(M)\)._ **Corollary 6.14**.: \(\mathcal{A}(M)\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ _Remark 6.15_.: Proposition 6.8 provides an alternative direct proof of Corollary 6.13. Actually, it will be shown that \(C^{\infty}_{\mathrm{c}}(\mathring{M})\) is dense in every \(\mathcal{A}^{m}(M)\) with its own topology (Corollary 6.36 and Remark 6.38). The obvious analog of Remark 3.8 makes sense for (6.32) and Corollary 6.12. ### \(\dot{\mathcal{A}}(M)\) and \(\mathcal{A}(M)\) vs \(I(\breve{M},\partial M)\) Using (6.8), (6.15), (6.25) and the commutativity of the left-hand side square of (6.23), we get continuous linear restriction maps \[R:I^{(s)}(\breve{M},\partial M)\to\mathcal{A}^{(s)}(M)\;,\] which induce a continuous linear restriction map \[R:I(\breve{M},\partial M)\to\mathcal{A}(M)\;. 
\tag{6.43}\] By (6.14), (6.25) and the commutativity of the right-hand side square of (6.23), we get the TVS-identities \[\dot{\mathcal{A}}^{(s)}(M)\equiv I^{(s)}_{M}(\breve{M},\partial M)\;, \tag{6.44}\] inducing a continuous linear isomorphism \[\dot{\mathcal{A}}(M)\xrightarrow{\cong}I_{M}(\breve{M},\partial M)\;. \tag{6.45}\] By (6.45) and Proposition 6.7, the map (6.43) is also surjective. Then the following analog of Proposition 6.2 follows with formally the same proof, using that \(I(\breve{M},\partial M)\) is webbed (Corollary 4.2) and \(\mathcal{A}(M)\) ultrabornological (Corollary 6.6). **Proposition 6.16**.: _The map (6.43) is a surjective topological homomorphism._ The null space of (6.43) is \(I_{M^{\prime}}(\breve{M},\partial M)\). The following analog of Proposition 6.3 follows with formally the same proof, using Proposition 6.16 and Corollary 6.14. **Proposition 6.17**.: \(I_{M}(\breve{M},\partial M)\) _is a limit subspace of the LF-space \(I(\breve{M},\partial M)\)._ **Corollary 6.18**.: _The map (6.45) is a TVS-isomorphism._ Proof.: Apply (6.28), (6.44) and Proposition 6.17. ### Filtration of \(\dot{\mathcal{A}}(M)\) by the symbol order Inspired by (6.44), let \[\dot{\mathcal{A}}^{m}(M)=I^{m}_{M}(\breve{M},\partial M)\subset I^{m}(\breve{M},\partial M)\quad(m\in\mathbb{R})\;, \tag{6.46}\] which are closed subspaces satisfying the analogs of (4.11) and (4.12). Thus \[\dot{\mathcal{A}}(M)=\bigcup_{m}\dot{\mathcal{A}}^{m}(M)\;,\quad\dot{C}^{\infty}(M)=\bigcap_{m}\dot{\mathcal{A}}^{m}(M)\;,\] and the TVS-isomorphism (6.45) is also compatible with the symbol filtration. The following is a consequence of Corollary 4.5 applied to \((\breve{M},\partial M)\). **Corollary 6.19**.: _For \(m<m^{\prime},m^{\prime\prime}\), the topologies of \(\dot{\mathcal{A}}^{m^{\prime}}(M)\) and \(\dot{\mathcal{A}}^{m^{\prime\prime}}(M)\) coincide on \(\dot{\mathcal{A}}^{m}(M)\)._ The following result follows like Corollary 3.6, applying Corollary 6.19 and using that \(\dot{\mathcal{A}}(M)\) is barreled (Corollary 6.6) and a closed subspace of the Montel space \(I(\breve{M},\partial M)\) (Corollary 4.7). **Corollary 6.20**.: \(\dot{\mathcal{A}}(M)\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ ### The space \(\mathcal{K}(M)\) Using the condition of being supported in \(\partial M\), define the LCHSs and \(C^{\infty}(M)\)-modules \[\mathcal{K}^{(s)}(M)=\dot{\mathcal{A}}^{(s)}_{\partial M}(M)\;,\quad\mathcal{K}^{m}(M)=\dot{\mathcal{A}}^{m}_{\partial M}(M)\;,\quad\mathcal{K}(M)=\dot{\mathcal{A}}_{\partial M}(M)\;.\] These are closed subspaces of \(\dot{\mathcal{A}}^{(s)}(M)\), \(\dot{\mathcal{A}}^{m}(M)\) and \(\dot{\mathcal{A}}(M)\), respectively; more precisely, they are the null spaces of the corresponding restrictions of the map (6.35). They satisfy the analogs of (4.4), (4.11) and (4.12). So \[\bigcup_{s}\mathcal{K}^{(s)}(M)=\bigcup_{m}\mathcal{K}^{m}(M)\;.\] Proposition 6.7 means that the following sequence is exact in the category of continuous linear maps between LCSs: \[0\to\mathcal{K}(M)\xrightarrow{\iota}\dot{\mathcal{A}}(M)\xrightarrow{R}\mathcal{A}(M)\to 0\;. \tag{6.47}\] The following analog of Proposition 6.3 holds true with formally the same proof, using Proposition 6.7 and Corollary 6.14. 
**Proposition 6.21**.: \(\mathcal{K}(M)\) _is a limit subspace of the LF-space \(\dot{\mathcal{A}}(M)\)._ From the definition of \(\dot{\mathcal{A}}^{(s)}(M)\) (Section 6.8), we get \[\mathcal{K}^{(s)}(M)=\{\,u\in\dot{C}^{-\infty}_{\partial M}(M)\mid\operatorname{Diff}_{\mathrm{b}}(M)\,u\subset\dot{H}^{s}_{\partial M}(M)\,\}\;,\] with the projective topology given by the maps \(P:\mathcal{K}^{(s)}(M)\to\dot{H}^{s}_{\partial M}(M)\) (\(P\in\operatorname{Diff}_{\mathrm{b}}(M)\)). Hence the following analogs of Propositions 3.1 and 4.1 hold true with formally the same proofs. **Proposition 6.22**.: \(\mathcal{K}^{(s)}(M)\) _is a totally reflexive Frechet space._ **Corollary 6.23**.: \(\mathcal{K}(M)\) _is barreled, ultrabornological and webbed._ Now the following analogs of Corollaries 6.19 and 6.20 hold true with formally the same proofs, using Corollaries 6.19, 6.20 and 6.23. **Corollary 6.24**.: _For \(m<m^{\prime},m^{\prime\prime}\), the topologies of \(\mathcal{K}^{m^{\prime}}(M)\) and \(\mathcal{K}^{m^{\prime\prime}}(M)\) coincide on \(\mathcal{K}^{m}(M)\)._ **Corollary 6.25**.: \(\mathcal{K}(M)\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ By Corollary 6.18, \[\mathcal{K}(M)\equiv I_{\partial M}(\breve{M},\partial M)\;, \tag{6.48}\] which restricts to identities between the spaces defining the Sobolev and symbol order filtrations, according to (6.44) and (6.46). A description of \(\mathcal{K}^{(s)}(M)\) and \(\mathcal{K}(M)\) will be indicated in Remark 7.26. ### Action of \(\operatorname{Diff}(M)\) on \(\dot{\mathcal{A}}(M)\), \(\mathcal{A}(M)\) and \(\mathcal{K}(M)\) According to Section 4.5, and using (6.25), (6.44), Proposition 6.16 and locality, any \(A\in\operatorname{Diff}(M)\) defines continuous endomorphisms \(A\) of \(\dot{\mathcal{A}}(M)\), \(\mathcal{A}(M)\) and \(\mathcal{K}(M)\). If \(A\in\operatorname{Diff}^{k}(M)\), these maps also satisfy the analogs of (4.16). If \(A\in\operatorname{Diff}_{\mathrm{b}}(M)\), it clearly defines continuous endomorphisms of \(\dot{\mathcal{A}}^{(s)}(M)\), \(\mathcal{A}^{(s)}(M)\), \(\mathcal{A}^{m}(M)\) and \(\mathcal{K}^{(s)}(M)\). According to Section 6.6, (6.29), (6.31) and (6.32), the maps of this subsection are restrictions of the endomorphisms \(A\) of \(\dot{C}^{-\infty}(M)\), \(C^{-\infty}(M)\) and \(C^{\infty}(\mathring{M})\), and extensions of the endomorphisms \(A\) of \(\dot{C}^{\infty}(M)\) and \(C^{\infty}(M)\). ### Partial extension maps **Proposition 6.26** (Cf. [25, Section 4.4]).: _For all \(m\in\mathbb{R}\), there is a continuous linear partial extension map \(E_{m}:\mathcal{A}^{m}(M)\to\dot{\mathcal{A}}^{(s)}(M)\), where \(s=0\) if \(m\geq 0\), and \(m>s\in\mathbb{Z}^{-}\) if \(m<0\). For \(m\geq 0\), \(E_{m}:\mathcal{A}^{m}(M)\to\dot{\mathcal{A}}^{(0)}(M)\) is a continuous inclusion map._ _Remark 6.27_.: By (6.38) and Proposition 6.26, for any \(A\in\operatorname{Diff}^{k}(M)\), the endomorphism \(A\) of \(\mathcal{A}(M)\) is induced by the continuous linear compositions \[\mathcal{A}^{m}(M)\xrightarrow{E_{m}}\dot{\mathcal{A}}^{(s)}(M)\xrightarrow{A}\dot{\mathcal{A}}^{(s-k)}(M)\xrightarrow{R}\mathcal{A}^{(s-k)}(M)\subset\mathcal{A}^{m^{\prime}-k}(M)\;,\] where \(m^{\prime}=s-n/2-1\) for \(m\) and \(s\) satisfying the conditions of Proposition 6.26. 
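To illustrate these actions on the filtration by bounds, consider the standard model \(u=x^{a}\) (\(a\in\mathbb{R}\)), where \(x\) is a boundary defining function; note that \(x^{a}\in C^{-\infty}(M)\) for every \(a\), since it pairs with densities vanishing to all orders at \(\partial M\). Every \(X\in\mathfrak{X}_{\mathrm{b}}(M)\) satisfies \(Xx\in xC^{\infty}(M)\), so \[X(x^{a})=a\,\frac{Xx}{x}\,x^{a}\in x^{a}C^{\infty}(M)\;,\] and, iterating, \(\operatorname{Diff}_{\mathrm{b}}(M)\,x^{a}\subset x^{a}C^{\infty}(M)\subset x^{a}L^{\infty}(M)\); thus \(x^{a}\in\mathcal{A}^{a}(M)\). On the other hand, on the collar \(T\), \[\partial_{x}\circ x^{a}=x^{a-1}(x\partial_{x}+a)\;,\qquad\partial_{x}(x^{a})=a\,x^{a-1}\;,\] so differentiating transversally to the boundary lowers the weight by one, as in (6.27).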
### \(L^{2}\) half-b-densities By (6.1), \[C^{\infty}(M;\Omega^{-\frac{1}{2}}\otimes{}^{\mathrm{b}}\Omega^ {\frac{1}{2}}) \equiv C^{\infty}(M;\Omega^{-\frac{1}{2}})\otimes_{C^{\infty}(M)}C ^{\infty}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\] \[\equiv C^{\infty}(M;\Omega^{-\frac{1}{2}})\otimes_{C^{\infty}(M)} x^{-\frac{1}{2}}C^{\infty}(M;\Omega^{\frac{1}{2}})\] \[\equiv x^{-\frac{1}{2}}C^{\infty}(M;\Omega^{-\frac{1}{2}}\otimes \Omega^{\frac{1}{2}})\equiv x^{-\frac{1}{2}}C^{\infty}(M)\;.\] So \[L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}}) \equiv L^{2}(M;\Omega^{\frac{1}{2}})\otimes_{C^{\infty}(M)}C^{ \infty}(M;\Omega^{-\frac{1}{2}}\otimes{}^{\mathrm{b}}\Omega^{\frac{1}{2}}) \tag{6.49}\] \[\equiv L^{2}(M;\Omega^{\frac{1}{2}})\otimes_{C^{\infty}(M)}x^{- \frac{1}{2}}C^{\infty}(M)\equiv x^{-\frac{1}{2}}L^{2}(M;\Omega^{\frac{1}{2}})\;.\] This is an identity of Hilbert spaces, using the weighted \(L^{2}\) space structure of \(x^{-1/2}L^{2}(M;\Omega^{1/2})\) (Section 2.11) and the Hilbert space structure on \(L^{2}(M;{}^{\mathrm{b}}\Omega^{1/2})\) induced by the canonical identity \[L^{2}(\mathring{M};\Omega^{\frac{1}{2}})\equiv L^{2}(M;{}^{\mathrm{b}}\Omega^ {\frac{1}{2}})\;. \tag{6.50}\] ### \(L^{\infty}\) half-b-densities Like in (6.49), we get \[L^{\infty}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\equiv x^{-\frac{1}{2}}L^{ \infty}(M;\Omega^{\frac{1}{2}})\;, \tag{6.51}\] as LCSs endowed with a family of equivalent Banach space norms. Equip \(M\) with a b-metric \(g\) (Section 6.1), and endow \(\mathring{M}\) with the restriction of \(g\), also denoted by \(g\). With the corresponding Euclidean/Hermitean structures on \(\Omega^{1/2}\mathring{M}\) and \({}^{\mathrm{b}}\Omega^{1/2}M\), we have the identity of Banach spaces \[L^{\infty}(\mathring{M};\Omega^{\frac{1}{2}})\equiv L^{\infty}(M;{}^{ \mathrm{b}}\Omega^{\frac{1}{2}})\;. \tag{6.52}\] ### b-Sobolev spaces For \(m\in\mathbb{N}_{0}\), the _b-Sobolev spaces_ of _order_\(\pm m\) are defined by the following analogs of (2.15), (2.20) and (2.21): \[H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})=\{\,u \in L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\mid\operatorname{Diff}^{m}_{ \mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\,u\subset L^{2}(M;{}^{ \mathrm{b}}\Omega^{\frac{1}{2}})\,\}\;,\] \[\operatorname{Diff}^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{ \frac{1}{2}})\,L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})=H^{-m}_{\mathrm{b }}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})=H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b} }\Omega^{\frac{1}{2}})^{\prime}\;.\] These are \(C^{\infty}(M)\)-modules and Hilbertian spaces with no canonical choice of a scalar product in general; we can use any finite set of \(C^{\infty}(M)\)-generators of \(\operatorname{Diff}^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) to define a scalar product on \(H^{\pm m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\). The intersections and unions of the b-Sobolev spaces are denoted by \(H^{\pm\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\). In particular, \(H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})=\mathcal{A}^{(0)}(M;{} ^{\mathrm{b}}\Omega^{1/2})\). 
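For instance, if \(\mu\in C^{\infty}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\) is nowhere vanishing and \(a\in\mathbb{R}\), then \(x^{a}\mu\in L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\) if and only if \(a>0\): computing in a chart \(U\equiv[0,\epsilon_{0})\times V\) of the collar, where \(|\mu|^{2}\) is comparable to \(\frac{dx}{x}\,dy\), the relevant integral is \[\int_{0}^{\epsilon_{0}}x^{2a}\,\frac{dx}{x}=\int_{0}^{\epsilon_{0}}x^{2a-1}\,dx<\infty\iff a>0\;.\] This standard computation is consistent with the shift by \(x^{-1/2}\) in (6.49) and (6.51), and it motivates the weighted spaces introduced next.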
### Weighted b-Sobolev spaces We will also use the _weighted b-Sobolev space_\(x^{a}H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) (\(a\in\mathbb{R}\)), which is another Hilbertian space with no canonical choice of a scalar product; given a scalar product on \(H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) with norm \(\|\cdot\|_{H^{m}_{\mathrm{b}}}\), we get a scalar product on \(x^{a}H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) with norm \(\|\cdot\|_{x^{a}H^{m}_{\mathrm{b}}}\), like in Section 2.11. Observe that \[\bigcap_{a,m}x^{a}H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})= \mathring{C}^{\infty}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;. \tag{6.53}\] ### Action of \(\mathrm{Diff}^{m}_{\mathrm{b}}(M)\) on weighted b-Sobolev spaces Like in (2.12), \[\mathrm{Diff}^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\equiv \mathrm{Diff}^{m}_{\mathrm{b}}(M)\equiv\mathrm{Diff}^{m}_{\mathrm{b}}(M;\Omega^ {\frac{1}{2}})\;. \tag{6.54}\] By (6.26), for all \(k\in\mathbb{N}_{0}\), \(m\in\mathbb{Z}\) and \(a\in\mathbb{R}\), any \(A\in\mathrm{Diff}^{k}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) defines a continuous linear map [24, Lemma 5.14] \[A:x^{a}H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\to x^{a}H^{m- k}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;.\] Thus it induces a continuous endomorphism \(A\) of \(x^{a}H^{\pm\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\). ### A description of \(\mathcal{A}(M)\) Now, unless the contrary is indicated, assume the following properties: 1. \(\mathring{M}\) is of bounded geometry with \(g\). 2. The collar neighborhood \(T\) of \(\partial M\) can be chosen so that: 1. every \(A\in\mathfrak{X}(\partial M)\) has an extension \(A^{\prime}\in\mathfrak{X}_{\mathrm{b}}(T)\) such that \(A^{\prime}\) is \(\varpi\)-projectable to \(A\), and \(A^{\prime}|_{\mathring{T}}\) is orthogonal to the \(\varpi\)-fibers; and 2. \(\mathfrak{X}_{\mathrm{ub}}(\mathring{M})|_{\mathring{T}}\) is \(C^{\infty}_{\mathrm{ub}}(\mathring{M})|_{\mathring{T}}\)-generated by \(x\partial_{x}\) and the restrictions \(A^{\prime}|_{\mathring{T}}\) of the vector fields \(A^{\prime}\) of (a), for \(A\in\mathfrak{X}(\partial M)\). For instance, these properties hold if \(\mathring{T}\) is cylindrical with \(g\) (Section 6.1). **Lemma 6.28**.: \(\mathfrak{X}_{\mathrm{b}}(M)|_{T}\) _is \(C^{\infty}(M)|_{T}\)-generated by \(x\partial_{x}\) and the vector fields \(A^{\prime}\) of_ (B)_, for \(A\in\mathfrak{X}(\partial M)\)._ Proof.: For every \(A\in\mathfrak{X}(\partial M)\), there is a unique \(A^{\prime\prime}\in\mathfrak{X}(T)\) such that \(A^{\prime\prime}\) is \(\varpi\)-projectable to \(A\) and \(dx(A^{\prime\prime})=0\). Since \(A^{\prime}-A^{\prime\prime}\) is tangent to the \(\varpi\)-fibers and vanishes on \(\partial M\), we have \(A^{\prime}-A^{\prime\prime}=fx\partial_{x}\) for some \(f\in C^{\infty}(T)\). Then the result follows because \(\mathfrak{X}_{\mathrm{b}}(M)|_{T}\) is \(C^{\infty}(M)|_{T}\)-spanned by \(x\partial_{x}\) and the vector fields \(A^{\prime\prime}\). Consider the notation of Sections 2.12.2 to 2.12.4 for \(\mathring{M}\) with \(g\). **Corollary 6.29**.: _The restriction to \(\mathring{M}\) defines a continuous injection \(C^{\infty}(M)\subset C^{\infty}_{\mathrm{ub}}(\mathring{M})\); in particular, \(C^{\infty}_{\mathrm{ub}}(\mathring{M})\) becomes a \(C^{\infty}(M)\)-module._ Proof.: It is enough to work on a collar neighborhood \(T\) of the boundary satisfying (A) and (B). 
But, by (2.22), (b) and Lemma 6.28, the restriction to \(\mathring{T}\) defines an injection of \(C^{\infty}(M)|_{T}\) into \(C^{\infty}_{\mathrm{ub}}(\mathring{M})|_{\mathring{T}}\). **Proposition 6.30**.: _There is a canonical identity of \(C^{\infty}_{\mathrm{ub}}(\mathring{M})\)-modules,_ \[\mathrm{Diff}^{m}_{\mathrm{ub}}(\mathring{M})\equiv\mathrm{Diff}^{m}_{\mathrm{b} }(M)\otimes_{C^{\infty}(M)}C^{\infty}_{\mathrm{ub}}(\mathring{M})\;.\] Proof.: We have to prove that \(\mathrm{Diff}^{m}_{\mathrm{ub}}(\mathring{M})\) is \(C^{\infty}_{\mathrm{ub}}(\mathring{M})\)-spanned by \(\mathrm{Diff}^{m}_{\mathrm{b}}(M)\). It is enough to consider the case \(m=1\) because the filtered algebra \(\mathrm{Diff}_{\mathrm{ub}}(\mathring{M})\) (respectively, \(\mathrm{Diff}_{\mathrm{b}}(M)\)) is spanned by \(\mathrm{Diff}^{1}_{\mathrm{ub}}(\mathring{M})\) (respectively, \(\mathrm{Diff}^{1}_{\mathrm{b}}(M)\)). Moreover it is clearly enough to work on a collar neighborhood \(T\) of the boundary satisfying (A) and (B). By (b), Lemma 6.28 and Corollary 6.29, the restriction to \(\mathring{T}\) defines an injection of \(\operatorname{Diff}^{1}_{\mathrm{b}}(M)|_{T}\) as a \(C^{\infty}_{\mathrm{ub}}(\mathring{M})|_{\mathring{T}}\)-spanning subset of \(\operatorname{Diff}^{1}_{\mathrm{ub}}(\mathring{M})|_{\mathring{T}}\). **Corollary 6.31**.: _There is a canonical identity of \(C^{\infty}_{\mathrm{ub}}(\mathring{M})\)-modules,_ \[\operatorname{Diff}^{m}_{\mathrm{ub}}(\mathring{M};\Omega^{\frac{1}{2}})\equiv \operatorname{Diff}^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}}) \otimes_{C^{\infty}(M)}C^{\infty}_{\mathrm{ub}}(\mathring{M})\;.\] Proof.: This follows from (2.23) for \(\mathring{M}\), (6.54) and Proposition 6.30. **Corollary 6.32**.: \(H^{m}(\mathring{M};\Omega^{1/2})\equiv H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}} \Omega^{1/2})\)__\((m\in\mathbb{Z})\) _as \(C^{\infty}(M)\)-modules and Hilbertian spaces, and therefore \(H^{\pm\infty}(\mathring{M};\Omega^{1/2})\equiv H^{\pm\infty}_{\mathrm{b}}(M;{} ^{\mathrm{b}}\Omega^{1/2})\)._ Proof.: We show the case where \(m\geq 0\), and the case where \(m<0\) follows by taking dual spaces. For any \(m\in\mathbb{N}_{0}\), let \(\{P_{k}\}\) be a finite \(C^{\infty}(M)\)-spanning set of \(\operatorname{Diff}^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\), which is also a \(C^{\infty}_{\mathrm{ub}}(\mathring{M})\)-spanning set of \(\operatorname{Diff}^{m}_{\mathrm{ub}}(\mathring{M};\Omega^{1/2})\) by Corollary 6.31. Then, by (6.50), \[H^{m}(\mathring{M};\Omega^{\frac{1}{2}}) =\{\,u\in L^{2}(\mathring{M};\Omega^{\frac{1}{2}})\mid P_{k}u\in L ^{2}(\mathring{M};\Omega^{\frac{1}{2}})\ \forall k\,\}\] \[\equiv\{\,u\in L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\mid P_ {k}u\in L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\ \forall k\,\}=H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;.\] Moreover \(\{P_{k}\}\) can be used to define scalar products on both \(H^{m}(\mathring{M};\Omega^{1/2})\) and \(H^{m}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\), obtaining that the above identity is unitary. 
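For instance, if \(g\) is cylindrical around the boundary (Section 6.1), i.e., \(g=(\frac{dx}{x})^{2}+h\) on \(\mathring{T}\), then the change of variable \(t=-\log x\) identifies \(\mathring{T}=(0,\epsilon_{0})\times\partial M\) with the half-cylinder \((-\log\epsilon_{0},\infty)_{t}\times\partial M\), and \[\Big{(}\frac{dx}{x}\Big{)}^{2}+h=dt^{2}+h\;,\qquad x\partial_{x}=-\partial_{t}\;,\qquad\Big{|}\frac{dx}{x}\,dy\Big{|}=|dt\,dy|\;.\] So \(g|_{\mathring{M}}\) is complete and of bounded geometry, and, over the collar, the identification of Corollary 6.32 becomes the usual description of Sobolev spaces on a cylindrical end; this is only a coordinate illustration of assumptions (A) and (B) in the cylindrical case.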
**Proposition 6.33**.: \(\mathcal{A}^{m}(M;\Omega^{1/2})\equiv x^{m+1/2}H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) _(\(m\in\mathbb{R}\))._ Proof.: By (6.51), (6.52), Proposition 2.5 and Corollary 6.32, we get the following identities and continuous inclusions: \[H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\equiv H^{\infty}(\mathring{M};\Omega^{\frac{1}{2}})\subset C^{\infty}_{\mathrm{ub}}(\mathring{M};\Omega^{\frac{1}{2}})\] \[\subset L^{\infty}(\mathring{M};\Omega^{\frac{1}{2}})\equiv L^{\infty}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\equiv x^{-\frac{1}{2}}L^{\infty}(M;\Omega^{\frac{1}{2}})\;.\] So, according to Section 6.20, every \(A\in\operatorname{Diff}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) induces a continuous linear map \[x^{m+\frac{1}{2}}H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\xrightarrow{A}x^{m+\frac{1}{2}}H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\subset x^{m}L^{\infty}(M;\Omega^{\frac{1}{2}})\;.\] Hence there is a continuous inclusion \[x^{m+\frac{1}{2}}H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\subset\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\;.\] On the other hand, by (6.13) and the version of (6.12) with \(\Omega^{1/2}M\), for all \(a\in\mathbb{R}\), \[x^{a}\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\subset x^{a}C^{-\infty}(M;\Omega^{\frac{1}{2}})=C^{-\infty}(M;\Omega^{\frac{1}{2}})\equiv C^{-\infty}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;.\] Moreover, by (6.49) and (6.54), every \(A\in\operatorname{Diff}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{1/2})\) induces a continuous linear map \[\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\xrightarrow{A}x^{m}L^{\infty}(M;\Omega^{\frac{1}{2}})\subset x^{m}L^{2}(M;\Omega^{\frac{1}{2}})\equiv x^{m+\frac{1}{2}}L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;.\] Hence, by (6.26) and (6.54), \(A\) induces a continuous linear map \[A:x^{-m-\frac{1}{2}}\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\to L^{2}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;.\] It follows that there is a continuous inclusion \[x^{-m-\frac{1}{2}}\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\subset H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;,\] or, equivalently, there is a continuous inclusion \[\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\subset x^{m+\frac{1}{2}}H^{\infty}_{\mathrm{b}}(M;{}^{\mathrm{b}}\Omega^{\frac{1}{2}})\;.\qed\] **Corollary 6.34**.: \(H^{m}(\mathring{M})=x^{-1/2}H^{m}_{\rm b}(M)\) _(\(m\in\mathbb{Z}\)) as \(C^{\infty}(M)\)-modules and Hilbertian spaces, and therefore \(H^{\pm\infty}(\mathring{M})=x^{-1/2}H^{\pm\infty}_{\rm b}(M)\)._ Proof.: By (6.1) and Corollary 6.32, \[H^{m}(\mathring{M})\equiv H^{m}(\mathring{M};\Omega^{\frac{1}{2}})\otimes_{C^{\infty}_{\rm ub}(\mathring{M})}C^{\infty}_{\rm ub}(\mathring{M};\Omega^{-\frac{1}{2}})\] \[\equiv H^{m}(\mathring{M};\Omega^{\frac{1}{2}})\otimes_{C^{\infty}_{\rm ub}(\mathring{M})}\left(C^{\infty}(M;\Omega^{-\frac{1}{2}})\otimes_{C^{\infty}(M)}C^{\infty}_{\rm ub}(\mathring{M})\right)\] \[\equiv H^{m}(\mathring{M};\Omega^{\frac{1}{2}})\otimes_{C^{\infty}(M)}C^{\infty}(M;\Omega^{-\frac{1}{2}})\] \[\equiv H^{m}_{\rm b}(M;{}^{\rm b}\Omega^{\frac{1}{2}})\otimes_{C^{\infty}(M)}x^{-\frac{1}{2}}C^{\infty}(M;{}^{\rm b}\Omega^{-\frac{1}{2}})\equiv x^{-\frac{1}{2}}H^{m}_{\rm b}(M)\;.\qed\] **Corollary 6.35**.: \(\mathcal{A}^{m}(M)\equiv x^{m}H^{\infty}_{\rm b}(M)\equiv x^{m+1/2}H^{\infty}(\mathring{M})\) _(\(m\in\mathbb{R}\))._ Proof.: The second identity is given by Corollary 6.34. 
By Proposition 6.33 and (6.1), \[x^{m}H^{\infty}_{\rm b}(M) \equiv x^{m+\frac{1}{2}}H^{\infty}_{\rm b}(M;{}^{\rm b}\Omega^{ \frac{1}{2}})\otimes_{C^{\infty}(M)}x^{-\frac{1}{2}}C^{\infty}(M;{}^{\rm b} \Omega^{-\frac{1}{2}})\] \[\equiv\mathcal{A}^{m}(M;\Omega^{\frac{1}{2}})\otimes_{C^{\infty}( M)}C^{\infty}(M;\Omega^{-\frac{1}{2}})\equiv\mathcal{A}^{m}(M)\;.\qed\] By (6.38) and (6.39), we get the following consequences of Corollary 6.35. **Corollary 6.36**.: \(C^{\infty}_{\rm c}(\mathring{M})\) _is dense in every \(\mathcal{A}^{m}(M)\) and \(\mathcal{A}^{(s)}(M)\)._ **Corollary 6.37**.: \(\mathcal{A}(M)\equiv\bigcup_{m}x^{m}H^{\infty}_{\rm b}(M)=\bigcup_{m}x^{m}H^{ \infty}(\mathring{M})\)_._ _Remark 6.38_.: Corollary 6.36 and the first identities of Corollaries 6.35 and 6.37 are independent of \(g\). So they hold true without the assumptions (A) and (B).Observe that Corollary 6.36 is stronger than Corollary 6.13. ### Dual-conormal distributions at the boundary Consider the LCHSs [15, Section 18.3], [25, Chapter 4] \[\mathcal{K}^{\prime}(M)=\mathcal{K}(M;\Omega)^{\prime}\;,\quad\mathcal{A}^{ \prime}(M)=\dot{\mathcal{A}}(M;\Omega)^{\prime}\;,\quad\dot{\mathcal{A}}^{ \prime}(M)=\mathcal{A}(M;\Omega)^{\prime}\;.\] The elements of \(\mathcal{A}^{\prime}(M)\) (respectively, \(\dot{\mathcal{A}}^{\prime}(M)\)) will be called _extendible_ (respectively, _supported_) _dual-conormal distributions_ at the boundary. The following analog of Corollary 5.1 holds true with formally the same proof, using the versions with \(\Omega M\) of Corollaries 6.6, 6.14, 6.20, 6.23 and 6.25 **Proposition 6.39**.: \(\mathcal{K}^{\prime}(M)\)_, \(\mathcal{A}^{\prime}(M)\) and \(\dot{\mathcal{A}}^{\prime}(M)\) are complete Montel spaces._ We also define the LCHSs \[\mathcal{K}^{\prime}{}^{(s)}(M) =\mathcal{K}^{(-s)}(M;\Omega)^{\prime}\;,\quad\mathcal{K}^{\prime \,m}(M)=\mathcal{K}^{-m}(M;\Omega)^{\prime}\;,\] \[\mathcal{A}^{\prime}{}^{(s)}(M) =\dot{\mathcal{A}}^{(-s)}(M;\Omega)^{\prime}\;,\quad\mathcal{A}^{ \prime\,m}(M)=\dot{\mathcal{A}}^{-m}(M;\Omega)^{\prime}\;,\] \[\dot{\mathcal{A}}^{\prime}{}^{(s)}(M) =\mathcal{A}^{(-s)}(M;\Omega)^{\prime}\;,\quad\dot{\mathcal{A}}^{ \prime\,m}(M)=\mathcal{A}^{-m}(M;\Omega)^{\prime}\;.\] Transposing the analogs of (4.4) and (4.11) for the spaces \(\mathcal{K}^{(s)}(M;\Omega)\), \(\mathcal{K}^{m}(M;\Omega)\), \(\dot{\mathcal{A}}^{(s)}(M;\Omega)\) and \(\dot{\mathcal{A}}^{m}(M;\Omega)\), we get continuous linear restriction maps \[\mathcal{K}^{\prime}{}^{(s^{\prime})}(M) \to\mathcal{K}^{\prime}{}^{(s)}(M)\;,\quad\mathcal{K}^{\prime\,m}(M) \to\mathcal{K}^{\prime\,m^{\prime}}(M)\;,\] \[\mathcal{A}^{\prime}{}^{(s^{\prime})}(M) \to\mathcal{A}^{\prime}{}^{(s)}(M)\;,\quad\mathcal{A}^{\prime\,m}(M) \to\mathcal{A}^{\prime\,m^{\prime}}(M)\;,\] for \(s<s^{\prime}\) and \(m<m^{\prime}\). These maps form projective spectra, giving rise to projective limits. The spaces \(\mathcal{K}^{\prime}{}^{(s)}(M)\), \(\mathcal{K}^{\prime\,m}(M)\), \(\mathcal{A}^{\prime}{}^{(s)}(M)\) and \(\mathcal{A}^{\prime\,m}(M)\) satisfy the analogs of (5.4). So the corresponding projective limits satisfy the analogs of (5.5). 
Similarly, transposing the analog of (4.4) for the spaces \(\mathcal{A}^{(s)}(M;\Omega)\) and the version of (6.37) with \(\Omega M\), by Corollary 6.36 and Remark 6.38, we get continuous inclusions \[\dot{\mathcal{A}}^{\prime\,(s^{\prime})}(M)\subset\dot{\mathcal{A}}^{\prime\,(s )}(M)\;,\quad\dot{\mathcal{A}}^{\prime\,m^{\prime}}(M)\subset\dot{\mathcal{A}}^ {\prime\,m}(M)\;,\] for \(s<s^{\prime}\) and \(m<m^{\prime}\). The version of (6.38) with \(\Omega M\) yields continuous inclusions \[\dot{\mathcal{A}}^{\prime\,(s)}(M)\supset\dot{\mathcal{A}}^{\prime\,m}(M) \supset\dot{\mathcal{A}}^{\prime\,(\max\{m,0\})}(M)\quad(m>s+n/2+1)\;. \tag{6.55}\] Therefore \[\bigcap_{s}\dot{\mathcal{A}}^{\prime\,(s)}(M)=\bigcap_{m}\dot{\mathcal{A}}^{ \prime\,m}(M)\;. \tag{6.56}\] The following analogs of Corollaries 5.2 and 5.3 hold true with formally the same proofs, using the versions with \(\Omega M\) of Propositions 6.5 and 6.22 and Corollaries 6.14, 6.20 and 6.25, or Proposition 6.39 for an alternative proof. **Corollary 6.40**.: \(\mathcal{K}^{\prime\,(s)}(M)\)_, \(\mathcal{A}^{\prime\,(s)}(M)\) and \(\dot{\mathcal{A}}^{\prime\,(s)}(M)\) are bornological and barreled._ **Corollary 6.41**.: _We have_ \[\mathcal{K}^{\prime}(M)\equiv\varprojlim\mathcal{K}^{\prime\,(s)}(M)\;,\quad \mathcal{A}^{\prime}(M)\equiv\varprojlim\mathcal{A}^{\prime\,(s)}(M)\;,\quad \dot{\mathcal{A}}^{\prime}(M)\equiv\bigcap_{s}\dot{\mathcal{A}}^{\prime\,(s)} (M)\;.\] Transposing the versions of (6.3), (6.29), (6.31) and (6.34) with \(\Omega M\), we get continuous inclusions [25, Section 4.6] \[C^{\infty}(M)\subset\mathcal{A}^{\prime}(M)\subset C^{-\infty}(M),\dot{C}^{- \infty}(M)\;, \tag{6.57}\] \[\dot{C}^{\infty}(M)\subset\dot{\mathcal{A}}^{\prime}(M)\subset\dot{C}^{- \infty}(M),C^{-\infty}(M)\;, \tag{6.58}\] and \(R:\dot{C}^{-\infty}(M)\to C^{-\infty}(M)\) restricts to the identity map on \(\mathcal{A}^{\prime}(M)\) and \(\dot{\mathcal{A}}^{\prime}(M)\). ### Dual-conormal sequence at the boundary Transposing maps in the version of (6.47) with \(\Omega M\), we get the sequence \[0\leftarrow\mathcal{K}^{\prime}(M)\stackrel{{\iota^{t}}}{{ \longleftarrow}}\mathcal{A}^{\prime}(M)\stackrel{{ R^{t}}}{{ \longleftarrow}}\dot{\mathcal{A}}^{\prime}(M)\gets 0\;, \tag{6.59}\] which will called the _dual-conormal sequence at the boundary_ of \(M\). **Proposition 6.42**.: _The sequence (6.59) is exact in the category of continuous linear maps between LCSs._ Proof.: By Proposition 6.7 and [39, Lemma 7.6], it is enough to prove that the map (6.35) satisfies the following condition of "topological lifting of bounded sets." _Claim 6.43_.: For all bounded subset \(A\subset\mathcal{A}(M)\), there is some bounded subset \(B\subset\dot{\mathcal{A}}(M)\) such that, for all \(0\)-neighborhood \(U\subset\dot{\mathcal{A}}(M)\), there is a \(0\)-neighborhood \(V\subset\mathcal{A}(M)\) so that \(A\cap V\subset R(B\cap U)\). Since \(\mathcal{A}(M)\) is boundedly retractive (Corollary 6.14), \(A\) is contained and bounded in some step \(\mathcal{A}^{m}(M)\). For any \(m^{\prime}>m\), let \(E_{m^{\prime}}:\mathcal{A}^{m}(M)\to\dot{\mathcal{A}}^{(s)}(M)\) be the partial extension map given by Proposition 6.26. Then \(B:=E_{m^{\prime}}(A)\) is bounded in \(\dot{\mathcal{A}}^{(s)}(M)\), and therefore in \(\dot{\mathcal{A}}(M)\). Moreover, given any \(0\)-neighborhood \(U\subset\dot{\mathcal{A}}(M)\), there is some \(0\)-neighborhood \(W\subset\mathcal{A}^{m^{\prime}}(M)\) so that \(E_{m^{\prime}}(W)\subset U\cap\dot{\mathcal{A}}^{(s)}(M)\). 
By Corollary 6.12, there is some \(0\)-neighborhood \(V\subset\mathcal{A}(M)\) such that \(V\cap\mathcal{A}^{m}(M)=W\cap\mathcal{A}^{m}(M)\). Hence \(E_{m^{\prime}}(V\cap\mathcal{A}^{m}(M))\subset U\cap\dot{\mathcal{A}}^{(s)}(M)\), yielding \[A\cap V=R(E_{m^{\prime}}(A\cap V))\subset R(E_{m^{\prime}}(A)\cap E_{m^{ \prime}}(V\cap\mathcal{A}^{m}(M)))\subset R(B\cap U)\;.\qed\] _Remark 6.44_.: Proposition 6.42 does not agree with [25, Proposition 4.6.2], which seems to be a minor error of that book project. ### \(\dot{\mathcal{A}}(M)\) and \(\mathcal{A}(M)\) vs \(\mathcal{A}^{\prime}(M)\) Using (6.29), (6.31) and (6.57), we have [15, Proposition 18.3.24], [25, Theorem 4.6.1] \[\dot{\mathcal{A}}(M)\cap\mathcal{A}^{\prime}(M)=C^{\infty}(M)\;. \tag{6.60}\] (Actually, the a priori weaker equality \(\mathcal{A}(M)\cap\mathcal{A}^{\prime}(M)=C^{\infty}(M)\) is proved in [25, Theorem 4.6.1], but it is equivalent to (6.60) because \(R=1\) on \(\mathcal{A}^{\prime}(M)\).) ### A description of \(\dot{\mathcal{A}}^{\prime}(M)\) **Corollary 6.45**.: \(\dot{\mathcal{A}}^{\prime\,m}(M)\equiv x^{m}H_{\mathrm{b}}^{-\infty}(M)=x^{m- \frac{1}{2}}H^{-\infty}(\mathring{M})\)__(\(m\in\mathbb{R}\))_._ Proof.: Apply the version of Corollary 6.35 with \(\Omega M\). **Corollary 6.46**.: \(\dot{\mathcal{A}}^{\prime}(M)\equiv\bigcap_{m}x^{m}H_{\mathrm{b}}^{-\infty}( M)=\bigcap_{m}x^{m}H^{-\infty}(\mathring{M})\)_._ Proof.: Apply (6.56) and Corollaries 6.41 and 6.45. **Corollary 6.47**.: \(C_{\mathrm{c}}^{\infty}(\mathring{M})\) _is dense in every \(\dot{\mathcal{A}}^{\prime\,m}(M)\) and in \(\dot{\mathcal{A}}^{\prime}(M)\). Therefore the first inclusion of (6.58) is also dense._ Proof.: Since \(C_{\mathrm{c}}^{\infty}(\mathring{M})\) is dense in \(H^{-\infty}(\mathring{M})\), we get that \(C_{\mathrm{c}}^{\infty}(\mathring{M})=x^{m}C_{\mathrm{c}}^{\infty}(\mathring{ M})\) is dense in every \(x^{m}H^{-\infty}(\mathring{M})\equiv\dot{\mathcal{A}}^{\prime\,m}(M)\) (Corollary 6.45), and therefore in \(\dot{\mathcal{A}}^{\prime}(M)\) (Corollary 6.46). _Remark 6.48_.: Like in Remark 6.38, Corollary 6.47 and the first identities of Corollary 6.46 are independent of \(g\), and hold true without the assumptions (A) and (B). Action of \(\mathrm{Diff}(M)\) on \(\mathcal{A}^{\prime}(M)\), \(\dot{\mathcal{A}}^{\prime}(M)\) and \(\mathcal{K}^{\prime}(M)\) According to Sections 2.5 and 6.14, any \(A\in\mathrm{Diff}(M)\) induces continuous linear endomorphisms \(A\) of \(\mathcal{A}^{\prime}(M)\), \(\dot{\mathcal{A}}^{\prime}(M)\) and \(\mathcal{K}^{\prime}(M)\)[25, Proposition 4.6.1], which are the transposes of \(A^{t}\) on \(\dot{\mathcal{A}}(M;\Omega)\), \(\mathcal{A}(M;\Omega)\) and \(\mathcal{K}(M;\Omega)\). If \(A\in\mathrm{Diff}^{k}(M)\), these maps satisfy the analogs of (5.7). If \(A\in\mathrm{Diff}_{\mathrm{b}}(M)\), it induces continuous endomorphisms of \(\mathcal{A}^{\prime\,(s)}(M)\), \(\mathcal{A}^{\prime\,m}(M)\), \(\dot{\mathcal{A}}^{\prime\,(s)}(M)\) and \(\mathcal{K}^{\prime\,(s)}(M)\). ## 7. Conormal sequence ### Cutting along a submanifold Let \(M\) be a closed connected manifold, and \(L\subset M\) be a regular closed submanifold of codimension one. \(L\) may not be connected, and therefore \(M\setminus L\) may have several connected components. First assume also that \(L\) is transversely oriented. 
Then, like in the boundary case Section 6.1, there is some real-valued smooth function \(x\) on some tubular neighborhood \(T\) of \(L\) in \(M\), with projection \(\varpi:T\to L\), so that \(L=\{x=0\}\) and \(dx\neq 0\) on \(L\). Any function \(x\) satisfying these conditions is called a _defining function_ of \(L\) on \(T\). We can suppose \(T\equiv(-\epsilon,\epsilon)_{x}\times L\), for some \(\epsilon>0\), so that \(\varpi:T\to L\) is the second factor projection. For any atlas \(\{V_{j},y_{j}\}\) of \(L\), we get an atlas of \(T\) of the form \(\{U_{j}\equiv(-\epsilon,\epsilon)_{x}\times V_{j},(x,y)\}\), whose charts are adapted to \(L\). The corresponding local vector fields \(\partial_{x}\in\mathfrak{X}(U_{j})\) can be combined to define a vector field \(\partial_{x}\in\mathfrak{X}(T)\); we can consider \(\partial_{x}\) as the derivative operator on \(C^{\infty}(T)\equiv C^{\infty}((-\epsilon,\epsilon),C^{\infty}(L))\). For every \(k\), \(\mathrm{Diff}(U_{j},L\cap U_{j})\) is spanned by \(x\partial_{x},\partial_{j}^{1},\dots,\partial_{j}^{n-1}\) using the operations of \(C^{\infty}(U_{j})\)-module and algebra, where \(\partial_{j}^{\alpha}=\partial/\partial y_{j}^{\alpha}\). Using \(T\equiv(-\epsilon,\epsilon)_{x}\times L\), any \(A\in\mathrm{Diff}(L)\) induces an operator \(1\otimes A\in\mathrm{Diff}(T,L)\), such that \((1\otimes A)(u(x)v(y))=u(x)\,(Av)(y)\) for \(u\in C^{\infty}(-\epsilon,\epsilon)\) and \(v\in C^{\infty}(L)\). This defines a canonical injection \(\mathrm{Diff}(L)\equiv 1\otimes\mathrm{Diff}(L)\subset\mathrm{Diff}(T,L)\) so that \((1\otimes A)|_{L}=A\). (This also shows the surjectivity of (4.2) in this case.) Moreover \(\mathrm{Diff}(T)\) (respectively, \(\mathrm{Diff}(T,L)\)) is spanned by \(\partial_{x}\) (respectively, \(x\partial_{x}\)) and \(1\otimes\mathrm{Diff}(L)\) using the operations of \(C^{\infty}(T)\)-module and algebra. Clearly, \[[\partial_{x},1\otimes\mathrm{Diff}(L)]=0\;,\quad[\partial_{x},x\partial_{x}]= \partial_{x}\;, \tag{7.1}\] yielding \[[\partial_{x},\mathrm{Diff}^{k}(T,L)]\subset\mathrm{Diff}^{k}(T,L)+\mathrm{ Diff}^{k-1}(T,L)\,\partial_{x}\;. \tag{7.2}\] \(\mathrm{Diff}^{k}(T,L)\) and \(\mathrm{Diff}^{k}(T)\) satisfy the obvious versions of (6.26) and (6.27). For a vector bundle \(E\) over \(M\), there is an identity \(E_{T}\equiv(-\epsilon,\epsilon)\times E_{L}\) over \(T\equiv(-\epsilon,\epsilon)\times L\), which can be used to define \(\partial_{x}\in\mathrm{Diff}^{1}(T;E)\) using the above charts. With this interpretation of \(\partial_{x}\) and using tensor products like in Section 1, the vector bundle versions of the properties and spaces of distributions of this section is straightforward. Let \(\boldsymbol{M}\) be the smooth manifold with boundary defined by "cutting" \(M\) along \(L\); i.e., modifying \(M\) only on the tubular neighborhood \(T\equiv(-\epsilon,\epsilon)\times L\), which is replaced with \(\boldsymbol{T}=((-\epsilon,0]\sqcup[0,\epsilon))\times L\) in the obvious way. (\(\boldsymbol{M}\) is the blowing-up \([M,L]\) of \(M\) along \(L\)[25, Chapter 5].) Thus \(\partial\boldsymbol{M}\equiv L\sqcup L\) because \(L\) is transversely oriented, and \(\hat{\boldsymbol{M}}\equiv M\setminus L\). A canonical projection \(\boldsymbol{\pi}:\boldsymbol{M}\to M\) is defined as the combination of the identity map \(\hat{\boldsymbol{M}}\to M\setminus L\) and the map \(\boldsymbol{T}\to T\) given by the product of the canonical projection \((-\epsilon,0]\sqcup[0,\epsilon)\to(-\epsilon,\epsilon)\) and \(\mathrm{id}_{L}\). 
This projection realizes \(M\) as a quotient space of \(\boldsymbol{M}\) by the equivalence relation defined by the homeomorphism \(h\equiv h_{0}\times\mathrm{id}\) of \(\partial\boldsymbol{M}\equiv\partial\boldsymbol{T}=(\{0\}\sqcup\{0\})\times L\), where \(h_{0}\) switches the two points of \(\{0\}\sqcup\{0\}\). Moreover \(\boldsymbol{\pi}:\boldsymbol{M}\to M\) is a local embedding of a compact manifold with boundary to a closed manifold of the same dimension. Like in Section 2.6, we have the continuous linear pull-back map \[\boldsymbol{\pi}^{*}:C^{\infty}(M)\to C^{\infty}(\boldsymbol{M})\;, \tag{7.3}\] which is clearly injective. Then the transpose of the version of (7.3) with \(\Omega M\) and \(\Omega\boldsymbol{M}\equiv\boldsymbol{\pi}^{*}\Omega M\) is the continuous linear push-forward map \[\boldsymbol{\pi}_{*}:\dot{C}^{-\infty}(\boldsymbol{M})\to C^{-\infty}(M)\;, \tag{7.4}\] which is surjective by a consequence of the Hahn-Banach theorem [31, Theorem II.4.2]. After distinguishing a connected component \(L_{0}\) of \(L\), let \(\widetilde{M}\) and \(\widetilde{L}\) be the quotients of \(\boldsymbol{M}\sqcup\boldsymbol{M}\equiv\boldsymbol{M}\times\mathbb{Z}_{2}\) and \(\partial\boldsymbol{M}\sqcup\partial\boldsymbol{M}\equiv\partial\boldsymbol{M }\times\mathbb{Z}_{2}\) by the equivalence relation generated by \((p,a)\sim(h(p),a)\) if \(\boldsymbol{\pi}(p)\in L\setminus L_{0}\) and \((p,a)\sim(h(p),a+1)\) if \(\boldsymbol{\pi}(p)\in L_{0}\) (\(p\in\boldsymbol{\pi}^{-1}(L)=\partial\boldsymbol{M}\) in both cases). Let us remark that \(\widetilde{M}\) may not be homeomorphic to the double of \(\boldsymbol{M}\), which is the quotient of \(\boldsymbol{M}\times\mathbb{Z}_{2}\) by the equivalence relation generated by \((p,0)\sim(p,1)\), for \(p\in\partial\boldsymbol{M}\). Note that \(\widetilde{M}\) is a closed connected manifold and \(\widetilde{L}\) is a closed regular submanifold. Moreover \(\widetilde{T}:=\boldsymbol{T}\cup_{h}\boldsymbol{T}\) becomes a tubular neighborhood of \(\widetilde{L}\) in \(\widetilde{M}\). The combination \(\boldsymbol{\pi}\sqcup\boldsymbol{\pi}:\boldsymbol{M}\sqcup\boldsymbol{M}\to M\) induces a two-fold covering map \(\tilde{\pi}:\widetilde{M}\to M\), whose restrictions to \(\widetilde{L}\) and \(\widetilde{T}\) are trivial two-fold coverings of \(L\) and \(T\), respectively; i.e., \(\widetilde{L}\equiv L\sqcup L\) and \(\widetilde{T}\equiv T\sqcup T\). The group of deck transformations of \(\tilde{\pi}:\widetilde{M}\to M\) is \(\{\mathrm{id},\sigma\}\), where \(\sigma:\widetilde{M}\to\widetilde{M}\) is induced by the map \(\sigma_{0}:\boldsymbol{M}\times\mathbb{Z}_{2}\to\boldsymbol{M}\times\mathbb{Z}_ {2}\) defined by switching the elements of \(\mathbb{Z}_{2}\). The composition of the injection \(\boldsymbol{M}\to\boldsymbol{M}\times\mathbb{Z}_{2}\), \(p\mapsto(p,0)\), with the quotient map \(\boldsymbol{M}\sqcup\boldsymbol{M}\to\widetilde{M}\) is a smooth embedding \(\boldsymbol{M}\to\widetilde{M}\). This will be considered as an inclusion map of a regular submanifold with boundary, obtaining \(\partial\boldsymbol{M}\equiv\widetilde{L}\). 
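As a simple illustration of this construction (not needed in what follows), take \(M=S^{1}\) and let \(L\) be a single point. Then \(\boldsymbol{M}\) is a compact interval with \(\partial\boldsymbol{M}\equiv L\sqcup L\), \(\boldsymbol{\pi}\) identifies its two boundary points, and \(\widetilde{M}\) is the connected two-fold covering of \(S^{1}\). If instead \(L\) consists of two points and \(L_{0}\) is one of them, then \(\widetilde{M}\) is again a circle doubly covering \(M\), whereas the double of \(\boldsymbol{M}\) (two arcs) is a disjoint union of two circles; this shows that \(\widetilde{M}\) need not be homeomorphic to the double of \(\boldsymbol{M}\).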
Since \(\tilde{\pi}\) is a two-fold covering map, we have continuous linear maps (Section 2.6) \[\tilde{\pi}_{*}:C^{\infty}(\widetilde{M})\to C^{\infty}(M)\;, \tilde{\pi}^{*}:C^{\infty}(M)\to C^{\infty}(\widetilde{M})\;,\] \[\tilde{\pi}^{*}:C^{-\infty}(M)\to C^{-\infty}(\widetilde{M})\;, \tilde{\pi}_{*}:C^{-\infty}(\widetilde{M})\to C^{-\infty}(M)\;, \tag{7.5}\] both pairs of maps satisfying \[\tilde{\pi}_{*}\tilde{\pi}^{*}=2\;,\quad\tilde{\pi}^{*}\tilde{\pi}_{*}=A_{ \sigma}\;, \tag{7.6}\] where \(A_{\sigma}:C^{\pm\infty}(\widetilde{M})\to C^{\pm\infty}(\widetilde{M})\) is given by \(A_{\sigma}u=u+\sigma_{*}u\). Using the continuous linear restriction and inclusion maps given by (6.2) and (6.7), we get the commutative diagrams \[\begin{CD}C^{\infty}(\widetilde{M})@>{R}>{}>C^{\infty}(\boldsymbol{M})@>{ \dot{C}^{-\infty}(\boldsymbol{M})}>{}>C^{-\infty}(\widetilde{M})\\ @V{\tilde{\pi}^{*}}V{}V@V{}V{\pi^{*}}V@V{\pi_{*}}V{}V@V{}V{\tilde{\pi}_{*}}V\\ C^{\infty}(M)@>{C^{\infty}(M)}>{}>C^{-\infty}(M)@>{C^{-\infty}(M)}>{}>C^{- \infty}(M)\;,\end{CD} \tag{7.7}\] the second one being the transpose of the density-bundles version of the first one. ### Lift of differential operators from \(M\) to \(\widetilde{M}\) For any \(A\in\operatorname{Diff}(M)\), let \(\widetilde{A}\in\operatorname{Diff}(\widetilde{M})\) denote its lift via the covering map \(\tilde{\pi}:\widetilde{M}\to M\). The action of \(\widetilde{A}\) on \(C^{\pm\infty}(\widetilde{M})\) corresponds to the action of \(A\) on \(C^{\pm\infty}(M)\) via \(\tilde{\pi}^{*}:C^{\pm\infty}(M)\to C^{\pm\infty}(\widetilde{M})\) and \(\tilde{\pi}_{*}:C^{\pm\infty}(\widetilde{M})\to C^{\pm\infty}(M)\). According to (6.22), \(\widetilde{A}|_{\boldsymbol{M}}\in\operatorname{Diff}(\boldsymbol{M})\) is the lift of \(A\) via the local embedding \(\boldsymbol{\pi}:\boldsymbol{M}\to M\), also denoted by \(\widetilde{A}\). The action of \(\widetilde{A}\) on \(C^{\infty}(\boldsymbol{M})\) (respectively, \(C^{-\infty}(\boldsymbol{M})\)) corresponds to the action of \(A\) on \(C^{\infty}(M)\) (respectively, \(C^{-\infty}(M)\)) via \(\boldsymbol{\pi}^{*}:C^{\infty}(M)\to C^{\infty}(\boldsymbol{M})\) (respectively, \(\boldsymbol{\pi}_{*}:C^{-\infty}(\boldsymbol{M})\to C^{-\infty}(M)\)). If \(A\in\operatorname{Diff}(M,L)\), then \(\widetilde{A}\in\operatorname{Diff}(\widetilde{M},\widetilde{L})\) and \(\widetilde{A}|_{\boldsymbol{M}}\in\operatorname{Diff}_{\mathrm{b}}( \boldsymbol{M})\) by (6.25). ### The spaces \(C^{\pm\infty}(M,L)\) Consider the closed subspaces, \[C^{\infty}(M,L)\subset C^{\infty}(M)\;,\quad C^{k}(M,L)\subset C^{k}(M)\quad( k\in\mathbb{N}_{0})\;, \tag{7.8}\] consisting of functions that vanish to all orders at the points of \(L\) in the first case, and that vanish up to order \(k\) at the points of \(L\) in the second case. Then let \[C^{-\infty}(M,L)=C^{\infty}(M,L;\Omega)^{\prime}\;,\quad C^{\prime\,-k}(M,L)= C^{k}(M,L;\Omega)^{\prime}\;.\] Note that (7.3) restricts to a TVS-isomorphisms \[\boldsymbol{\pi}^{*}:C^{\infty}(M,L)\xrightarrow{\cong}\dot{C}^{\infty}( \boldsymbol{M})\;,\quad\boldsymbol{\pi}^{*}:C^{k}(M,L)\xrightarrow{\cong}\dot{ C}^{k}(\boldsymbol{M})\;. \tag{7.9}\] Taking the transposes of its versions with density bundles, it follows that (7.4) restricts to a TVS-isomorphisms \[\boldsymbol{\pi}_{*}:C^{-\infty}(\boldsymbol{M})\xrightarrow{\cong}C^{- \infty}(M,L)\;,\quad\boldsymbol{\pi}_{*}:C^{\prime\,-k}(\boldsymbol{M}) \xrightarrow{\cong}C^{\prime\,-k}(M,L)\;. 
\tag{7.10}\] So the spaces \(C^{\infty}(M,L)\), \(C^{k}(M,L)\), \(C^{-\infty}(M,L)\) and \(C^{\prime\,-k}(M,L)\) satisfy the analogs of (2.6), and \[C^{\infty}(M,L)=\bigcap_{k}C^{k}(M,L)\;,\quad C^{-\infty}(M,L)=\bigcup_{k}C^{ \prime\,-k}(M,L)\;.\] On the other hand, there are Hilbertian spaces \(H^{r}(M,L)\) (\(r>n/2\)) and \(H^{\prime\,s}(M,L)\) (\(s\in\mathbb{R}\)), continuously included in \(C^{0}(M,L)\) and \(C^{-\infty}(M,L)\), respectively, such that the second map of (7.9) for \(k=0\) and the first map of (7.10) restrict to a TVS-isomorphisms \[\boldsymbol{\pi}^{*}:H^{r}(M,L)\xrightarrow{\cong}\dot{H}^{r}(\boldsymbol{M} )\;,\quad\boldsymbol{\pi}_{*}:H^{s}(\boldsymbol{M})\xrightarrow{\cong}H^{ \prime\,s}(M,L)\;. \tag{7.11}\] By (6.16), \[H^{\prime\,-r}(M,L)\equiv H^{r}(M,L;\Omega)^{\prime}\;,\quad H^{r}(M,L)\equiv H ^{\prime\,-r}(M,L;\Omega)^{\prime}\;. \tag{7.12}\] Now, the second identity of (7.12) can be used to extend the definition of \(H^{r}(M,L)\) for all \(r\in\mathbb{R}\). Alternatively, we may also use trace theorems [1, Theorem 7.53 and 7.58] to define \(H^{m}(M,L)\) for \(m\in\mathbb{Z}^{+}\), and then use the first identity of (7.12) to define \(H^{\prime\,-m}(M,L)\). From (7.3), (7.4), (7.11) and the analogs of (2.17)-(2.19) mentioned in Section 6.4, we get \[C^{\infty}(M,L)=\bigcap_{r}H^{r}(M,L)\;,\quad C^{-\infty}(M,L)=\bigcup_{s}H^{ \prime\,s}(M,L)\;, \tag{7.13}\] as well as a continuous inclusion and a continuous linear surjection, \[C^{\infty}(M)\subset\bigcap_{s}H^{\prime\,s}(M,L)\;,\qquad C^{-\infty}(M) \leftarrow\bigcup_{r}H^{r}(M,L)\;. \tag{7.14}\] By (7.12) and (7.13), \[C^{\infty}(M,L)=C^{-\infty}(M,L;\Omega)^{\prime}\;. \tag{7.15}\] Proposition 6.1 and (7.10) have the following consequence. **Corollary 7.1**.: \(C^{-\infty}(M,L)\) _is a barreled, ultrabornological, webbed, acyclic DF Montel space, and therefore complete, boundedly retractive and reflexive._ The transpose of the version of the first inclusion of (7.8) with \(\Omega M\) is a continuous linear restriction map \[R:C^{-\infty}(M)\to C^{-\infty}(M,L)\;, \tag{7.16}\] whose restriction to \(C^{\infty}(M)\) is the identity. This map can be also described as the composition \[C^{-\infty}(M)\xrightarrow{\cong^{*}}C^{-\infty}(\widetilde{M})\xrightarrow {R}C^{-\infty}(\boldsymbol{M})\xrightarrow{\pi_{*}}C^{-\infty}(M,L)\;.\] The canonical pairing between \(C^{\infty}(M)\) and \(C^{\infty}(M,L;\Omega)\) defines a continuous inclusion \[C^{\infty}(M)\subset C^{-\infty}(M,L) \tag{7.17}\] such that (7.16) is the identity on \(C^{\infty}(M)\). We also get commutative diagrams \[\begin{CD}C^{\infty}(\boldsymbol{M})@<{\iota}<{}<\dot{C}^{\infty}( \boldsymbol{M})@<{\dot{C}^{-\infty}(\boldsymbol{M})}>{}>&C^{-\infty}( \boldsymbol{M})\\ @V{\boldsymbol{\pi}^{*}}V{}V@V{\cong}V{\boldsymbol{\pi}^{*}}V@V{\pi_{*}}V{}V@V{ \cong}V{}V\\ C^{\infty}(M)@<{\iota}<{}<C^{\infty}(M,L)\;,\quad C^{-\infty}(M)@>{R}>{}>C^{- \infty}(M,L)\;.\end{CD} \tag{7.18}\] the second one being the transpose of the density-bundles version of the first one. ### The space \(C^{-\infty}_{L}(M)\) The closed subspaces of elements supported in \(L\), \[C^{-\infty}_{L}(M)\subset C^{-\infty}(M)\;,\quad C^{\prime\,-k}_{L}(M)\subset C^ {\prime\,-k}(M)\;,\quad H^{s}_{L}(M)\subset H^{s}(M)\;,\] are the null space of restrictions of (7.16). These spaces satisfy continuous inclusions analogous to (2.6), (2.16) and (2.18). 
According to (6.19) and Section 7.1, \[\dot{C}^{-\infty}_{\partial\boldsymbol{M}}(\boldsymbol{M}) \equiv C^{-\infty}_{\widetilde{L}}(\widetilde{M})\equiv C^{- \infty}_{\widetilde{L}}(\widetilde{T})\equiv C^{-\infty}_{L}(T)\oplus C^{- \infty}_{L}(T) \tag{7.19}\] \[\equiv C^{-\infty}_{L}(M)\oplus C^{-\infty}_{L}(M)\;,\] The maps (7.4) and (7.5) have restrictions \[\boldsymbol{\pi}_{*}=\tilde{\pi}_{*}:\dot{C}^{-\infty}_{\partial\boldsymbol{M }}(\boldsymbol{M})\to C^{-\infty}_{L}(M)\;,\quad\tilde{\pi}^{*}:C^{-\infty}_{ L}(M)\to\dot{C}^{-\infty}_{\partial\boldsymbol{M}}(\boldsymbol{M})\;. \tag{7.20}\] Using (7.19), these maps are given by \(\boldsymbol{\pi}_{*}(u,v)=u+v\) and \(\tilde{\pi}^{*}u=(u,u)\). From (7.19), Proposition 6.3 and Corollary 6.4, we get the following. **Corollary 7.2**.: \(C^{-\infty}_{L}(M)\) _is a limit subspace of the LF-space \(C^{-\infty}(M)\)._ **Corollary 7.3**.: \(C^{-\infty}_{L}(M)\) _is a barreled, ultrabornological, webbed, acyclic DF Montel space, and therefore complete, boundedly retractive and reflexive._ Moreover the right-hand side diagram of (7.18) can be completed to get the commutative diagram \[\begin{CD}0\to\dot{C}^{-\infty}_{\partial\boldsymbol{M}}(\boldsymbol{M})@>{ \iota}>{}>\dot{C}^{-\infty}(\boldsymbol{M})@>{R}>{}>C^{-\infty}(\boldsymbol{M })\to 0\\ @V{\boldsymbol{\pi}_{*}}V{}V@V{\pi_{*}}V{}V@V{\cong}V{}V\pi_{*}\\ 0\to C^{-\infty}_{L}(M)@>{\iota}>{}>C^{-\infty}(M)@>{R}>{}>C^{-\infty}(M,L)\to 0\;.\end{CD} \tag{7.21}\] **Proposition 7.4**.: _The maps (7.4) and (7.16) are surjective topological homomorphisms._ Proof.: In (7.21), the top row is exact in the category of continuous linear maps between LCSs by (6.18), the left-hand side vertical map is onto by (7.6), and the right-hand side vertical map is a TVS-isomorphism. Then, by the commutativity of its right-hand side square, the map (7.16) is surjective, and therefore the bottom row of (7.21) of is exact in the category of linear maps between vector spaces. By the above properties, chasing (7.21), we get that (7.4) is surjective. Since \(\dot{C}^{-\infty}(\boldsymbol{M})\) is webbed (Proposition 6.1) and \(C^{-\infty}(M)\) ultrabornological, by the open mapping theorem, it also follows that (7.4) is a topological homomorphism. To get that (7.16) is another surjective topological homomorphism, apply the commutativity of the right-hand side square of (7.21) and the above properties. By Proposition 7.4, the bottom row of (7.21) is exact in the category of continuous linear maps between LCSs. The surjectivity of (7.16) (Proposition 7.4) and the density of \(C^{\infty}(M)\) in \(C^{-\infty}(M)\) yield the density of (7.17). ### A description of \(C^{-\infty}_{L}(M)\) According to Sections 2.7 and 7.1 and (4.14), we have the subspaces \[\partial^{m}_{x}C^{-\infty}(L;\Omega^{-1}NL)\subset C^{-\infty}_{L}(M) \tag{7.22}\] for \(m\in\mathbb{N}_{0}\), and continuous linear isomorphisms \[\partial_{x}^{m}:C^{-\infty}(L;\Omega^{-1}NL)\xrightarrow{\cong}\partial_{x}^{m} C^{-\infty}(L;\Omega^{-1}NL)\;. \tag{7.23}\] They induce a continuous linear injection \[\bigoplus_{m=0}^{\infty}C_{m}^{0}\to C_{L}^{-\infty}(M)\;, \tag{7.24}\] where \(C_{m}^{0}=C^{-\infty}(L;\Omega^{-1}NL)\) for all \(m\). **Proposition 7.5**.: _The map (7.24) is a TVS-isomorphism, which restricts to TVS-isomorphisms_ \[\bigoplus_{m=0}^{k}C^{m-k}(L;\Omega^{-1}NL)\xrightarrow{\cong}C_{L}^{\prime-k }(M)\quad(k\in\mathbb{N}_{0})\;. 
\tag{7.25}\] Proof.: In the case where \(M=\mathbb{R}^{n}\) and \(L\) is a linear subspace, it is known that (7.25) is a linear isomorphism [14, Theorem 2.3.5 and Example 5.1.2], which is easily seen to be continuous. This can be easily extended to arbitrary \(M\) by using charts of \(M\) adapted to \(L\). Then we get the continuous linear isomorphism (7.24) by taking the locally convex inductive limit of (7.25) as \(k\uparrow\infty\). Since \(\bigoplus_{m}C_{m}^{0}\) is webbed and \(C_{L}^{-\infty}(M)\) ultrabornological (Corollary 7.3), the map (7.24) is a TVS-isomorphism by the open mapping theorem. _Remark 7.6_.: Proposition 7.5 reconfirms Corollary 7.2. _Remark 7.7_ (See [25, Exercise 3.3.18]).: In Section 6.5, for any compact manifold with boundary \(M\), the analog of Proposition 7.5 for \(\dot{C}_{\partial M}^{-\infty}(M)\) follows from the application of Proposition 7.5 to \(C_{\partial M}^{-\infty}(\tilde{M})\). **Corollary 7.8**.: _Every map (7.23) is a TVS-isomorphism._ ### Action of \(\operatorname{Diff}(M)\) on \(C^{-\infty}(M,L)\) and \(C_{L}^{-\infty}(M)\) For every \(A\in\operatorname{Diff}(M)\), \(A^{t}\) preserves \(C^{\infty}(M,L;\Omega)\), and therefore \(A\) induces a continuous linear map \(A=A^{tt}\) on \(C^{-\infty}(M,L)\). By locality, it restricts to a continuous endomorphism \(A\) of \(C_{L}^{-\infty}(M)\). ### The space \(J(m,l)\) According to Sections 6.8 and 7.3, there is a LCHS \(J(M,L)\), continuously included in \(C^{-\infty}(M,L)\), so that (7.10) restricts to a TVS-isomorphism \[\boldsymbol{\pi}_{*}:\mathcal{A}(\boldsymbol{M})\xrightarrow{\cong}J(M,L)\;, \tag{7.26}\] where \(\mathcal{A}(\boldsymbol{M})\) is defined in (6.39). By (6.32), there is a continuous inclusion \[J(M,L)\subset C^{\infty}(M\setminus L)\;.\] We also get spaces \(J^{(s)}(M,L)\) and \(J^{m}(M,L)\) (\(s,m\in\mathbb{R}\)) corresponding to \(\mathcal{A}^{(s)}(\boldsymbol{M})\) and \(\mathcal{A}^{m}(\boldsymbol{M})\) via (7.26). Let \(\boldsymbol{x}\) be an extension of \(|x|\) to \(M\), which is positive and smooth on \(M\setminus L\). Its lift \(\boldsymbol{\pi}^{*}\boldsymbol{x}\) is a boundary defining function of \(\boldsymbol{M}\), also denoted by \(\boldsymbol{x}\). Using the first map of (7.10) and second map of (7.11), and according to Section 7.2, we can also describe \[J^{(s)}(M,L) =\{\,u\in C^{-\infty}(M,L)\mid\operatorname{Diff}(M,L)\,u\subset H ^{\prime\,s}(M,L)\,\}\;,\] \[J^{m}(M,L) =\{\,u\in C^{-\infty}(M,L)\mid\operatorname{Diff}(M,L)\,u\subset \boldsymbol{x}^{m}L^{\infty}(M)\,\}\;, \tag{7.27}\] equipped with topologies like in Sections 6.8 and 6.10. These spaces satisfy the analogs of (4.4), (6.28) and (6.37)-(6.39). By (7.14) and (7.27), there are continuous inclusions, \[C^{\infty}(M)\subset J^{(\infty)}(M,L):=\bigcap_{s}J^{(s)}(M,L)\;,\quad J(M,L) \subset C^{-\infty}(M,L)\;; \tag{7.28}\] in particular, \(J(M,L)\) is Hausdorff. Moreover the following analogs of Proposition 6.5 and Corollaries 6.6 and 6.12 to 6.14 hold true. **Corollary 7.9**.: \(J^{(s)}(M,L)\) _is a totally reflexive Frechet space._ **Corollary 7.10**.: \(J(M,L)\) _is barreled, ultabornological and webbed._ **Corollary 7.11**.: _If \(m^{\prime}<m\), then the topologies of \(J^{m^{\prime}}(M,L)\) and \(C^{\infty}(M\setminus L)\) coincide on \(J^{m}(M,L)\). Therefore the topologies of \(J(M,L)\) and \(C^{\infty}(M\setminus L)\) coincide on \(J^{m}(M,L)\)._ **Corollary 7.12**.: _For \(m^{\prime}<m\), \(C^{\infty}_{c}(M\setminus L)\) is dense in \(J^{m}(M,L)\) with the topology of \(J^{m^{\prime}}(M,L)\). 
Therefore \(C^{\infty}_{c}(M\setminus L)\) is dense in \(J(M,L)\)._ **Corollary 7.13**.: \(J(M,L)\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ The analog of Remark 6.15 makes sense for \(J(M,L)\). ### A description of \(J(m,l)\) Take a b-metric \(\boldsymbol{g}\) on \(\boldsymbol{M}\) satisfying (A) and (B), and consider its restriction to \(\hat{\boldsymbol{M}}\). Consider also the boundary defining function \(\boldsymbol{x}\) of \(\boldsymbol{M}\) (Section 7.7). Corollaries 6.35 to 6.37 and (7.26) have the following direct consequences. **Corollary 7.14**.: \(J^{m}(M,L)\cong\boldsymbol{x}^{m}H^{\infty}_{\mathrm{b}}(\boldsymbol{M}) \equiv\boldsymbol{x}^{m+1/2}H^{m}(\hat{\boldsymbol{M}})\)__(\(m\in\mathbb{R}\)).__ **Corollary 7.15**.: \(C^{\infty}_{\mathrm{c}}(M\setminus L)\) _is dense in every \(J^{m}(M,L)\) and \(J^{(s)}(M,L)\)._ **Corollary 7.16**.: \(J(M,L)\cong\bigcup_{m}\boldsymbol{x}^{m}H^{\infty}_{\mathrm{b}}(\boldsymbol{ M})=\bigcup_{m}\boldsymbol{x}^{m}H^{\infty}(\hat{\boldsymbol{M}})\)_._ The analog of Remark 6.38 makes sense for \(J(M,L)\). ### \(I(m,l)\) vs \(\hat{\mathcal{A}}(\boldsymbol{M})\) and \(J(m,l)\) According to Sections 4.6 and 4.7, we have the continuous linear maps \[\tilde{\pi}^{*}:I(M,L)\to I(\widetilde{M},\widetilde{L})\;,\quad\tilde{\pi}_ {*}:I(\widetilde{M},\widetilde{L})\to I(M,L)\;, \tag{7.29}\] which are restrictions of the maps (7.5), and therefore they satisfy (7.6). According to Sections 4.6 and 4.7, these maps are compatible with the symbol and Sobolev filtrations because \(\tilde{\pi}:\widetilde{M}\to M\) is a covering map (Sections 4.6 and 4.7). Using the TVS-embedding \(\hat{\mathcal{A}}(\boldsymbol{M})\subset I(\widetilde{M},\widetilde{L})\) (Corollary 6.18), which preserves the Sobolev and symbol order filtrations, the map \(\tilde{\pi}_{*}\) of (7.29) has the restriction \[\boldsymbol{\pi}_{*}:\hat{\mathcal{A}}(\boldsymbol{M})\to I(M,L)\;. \tag{7.30}\] On the other hand, the map (7.16) restricts to a continuous linear map \[R:I(M,L)\to J(M,L)\;, \tag{7.31}\] which can be also described as the composition \[I(M,L)\stackrel{{\tilde{\pi}^{*}}}{{\longrightarrow}}I( \widetilde{M},\widetilde{L})\stackrel{{ R}}{{\longrightarrow}} \mathcal{A}(\boldsymbol{M})\stackrel{{\pi_{*}}}{{\longrightarrow}} J(M,L)\;.\] According to (4.5) and (7.28), the map (7.31) is the identity on \(C^{\infty}(M)\). ### The space \(K(m,l)\) Like in Section 6.13, the condition of being supported in \(L\) defines the LCHSs and \(C^{\infty}(M)\)-modules \[K^{(s)}(M,L)=I^{(s)}_{L}(M,L)\;,\quad K^{m}(M,L)=I^{m}_{L}(M,L)\;,\quad K(M,L)=I _{L}(M,L)\;.\] These are closed subspaces of \(I^{(s)}(M,L)\), \(I^{m}_{L}(M,L)\) and \(I(M,L)\), respectively; more precisely, they are the null spaces of the corresponding restrictions of the map (7.31). According to Corollary 6.18, the identity (7.19) restricts to a TVS-identity \[\mathcal{K}(\boldsymbol{M})\equiv K(M,L)\oplus K(M,L)\;. \tag{7.32}\] Furthermore the maps (7.20) induce continuous linear maps \[\boldsymbol{\pi}_{*}:\mathcal{K}(\boldsymbol{M})\to K(M,L)\;,\quad\tilde{\pi }^{*}:K(M,L)\to\mathcal{K}(\boldsymbol{M})\;. \tag{7.33}\] Using (7.32), these maps are given by \(\boldsymbol{\pi}_{*}(u,v)=u+v\) and \(\tilde{\pi}^{*}u=(u,u)\). By (6.44) and (6.46), \(K^{(s)}(M,L)\) and \(K^{m}(M,L)\) satisfy analogs of (7.32), using \(\mathcal{K}^{(s)}(\boldsymbol{M})\) and \(\mathcal{K}^{m}(\boldsymbol{M})\). 
Thus we get the following consequences of Propositions 6.21 and 6.22 and Corollaries 6.23 to 6.25. **Corollary 7.17**.: \(K(M,L)\) _is a limit subspace of the LF-space \(I(M,L)\)._ **Corollary 7.18**.: \(K^{(s)}(M,L)\) _is a totally reflexive Frechet space._ **Corollary 7.19**.: \(K^{(s)}(M,L)\) _is barreled, ultrabornological and webbed, and therefore so is \(K(M,L)\)._ **Corollary 7.20**.: _For \(m<m^{\prime},m^{\prime\prime}\), the topologies of \(K^{m^{\prime}}(M,L)\) and \(K^{m^{\prime\prime}}(M,L)\) coincide on \(K^{m}(M,L)\)._ **Corollary 7.21**.: \(K(M,L)\) _is an acyclic Montel space, and therefore complete, boundedly retractive and reflexive._ **Example 7.22**.: With the notation of Section 4.8, \(\operatorname{Diff}(M)\equiv K(M^{2},\Delta)\) becomes a filtered \(C^{\infty}(M^{2})\)-submodule of \(\Psi(M)\), with the order filtration corresponding to the symbol filtration. This is the reason of the involved choice of \(\bar{m}\) in (4.9). In this way, \(\operatorname{Diff}(M)\) also becomes a LCHS satisfying the above properties. If \(M\) is compact, it is also a filtered subalgebra of \(\Psi(M)\). ### A description of \(K(m,l)\) By (4.18) and (4.16), \[\partial_{x}^{m}C^{\infty}(L;\Omega^{-1}NL)\subset K^{(s-m)}(M,L)\quad(s<-1/2 )\;, \tag{7.34}\] and (7.23) restricts to a continuous linear isomorphism \[\partial_{x}^{m}:C^{\infty}(L;\Omega^{-1}NL)\stackrel{{\cong}}{{ \longrightarrow}}\partial_{x}^{m}C^{\infty}(L;\Omega^{-1}NL)\;. \tag{7.35}\] **Lemma 7.23**.: _For all \(m\in\mathbb{N}_{0}\),_ \[\partial_{x}^{m}C^{\infty}(L;\Omega^{-1}NL)\cap K^{(-\frac{1}{2}-m)}(M,L)=0\;.\] Proof.: We proceed by induction on \(m\). The case \(m=0\) is given by Proposition 4.9. Now assume \(m\geq 1\), and let \(v\in C^{\infty}(L;\Omega^{-1}NL)\) with \(u=\partial_{x}^{m}\delta_{L}^{v}\in K^{(-\frac{1}{2}-m)}(M,L)\). Take any \(A\in\operatorname{Diff}^{2}(L;\Omega NL)\) such that \(-\partial_{x}^{2}+B\in\operatorname{Diff}^{2}(T)\) is elliptic, where \(B=(1\otimes A^{t})^{t}\in\operatorname{Diff}^{2}(T,L)\); for instance, given a Riemannian metric on \(M\), \(A\) can be the Laplacian of the flat line bundle \(\Omega NL\). By (7.34), \(u_{0}:=\partial_{x}^{m-1}\delta_{L}^{v}\in K^{(\frac{1}{2}-m-\epsilon)}(M,L)\) for \(0<\epsilon<1\). By (7.2), given any \(B_{0}\in\operatorname{Diff}(M,L)\), there is some \(B_{1},B_{2},B_{3}\in\operatorname{Diff}(M,L)\) such that \([\partial_{x}^{2},B_{0}]=B_{1}+B_{2}\partial_{x}+B_{3}\partial_{x}^{2}\). So, according to Section 4.5, (4.17) and (7.2), for all \(B_{0}\in\operatorname{Diff}(M,L)\), \[(-\partial_{x}^{2}+B)B_{0}u_{0} =-B_{0}\partial_{x}u-B_{1}u_{0}-B_{2}u-B_{3}\partial_{x}u+\partial _{x}^{m-1}\delta_{L}^{B_{0}^{\prime}Av}+[B,B_{0}]u_{0}\] \[\in K^{(-\frac{3}{2}-m)}(M,L)+K^{(\frac{1}{2}-m-\epsilon)}(M,L)= K^{(-\frac{3}{2}-m)}(M,L)\;.\] Hence \(B_{0}u_{0}\in H^{\frac{1}{2}-m}(M)\) by elliptic regularity. Since \(B_{0}\) is arbitrary, we get \(u_{0}\in K^{(\frac{1}{2}-m)}(M,L)\). So \(u_{0}=0\) by the induction hypothesis, yielding \(u=\partial_{x}u_{0}=0\). By Proposition 7.5, the TVS-isomorphism (7.24) restricts to a linear injection \[\bigoplus_{m=0}^{\infty}C_{m}^{1}\to K(M,L)\;, \tag{7.36}\] where \(C_{m}^{1}=C^{\infty}(L;\Omega^{-1}NL)\) for all \(m\in\mathbb{N}_{0}\), which is easily seen to be continuous. **Proposition 7.24**.: _The map (7.36) is a TVS-isomorphism, which induces TVS-isomorphisms_ \[\bigoplus_{m<-s-\frac{1}{2}}C_{m}^{1}\xrightarrow{\cong}K^{(s)}(M,L)\quad(s< -1/2)\;. 
\tag{7.37}\] Proof.: To prove that (7.36) is surjective, take any \(u\in K(M,L)\). By Proposition 7.5, we can assume \(u\in\partial_{x}^{m}C^{-\infty}(L;\Omega NL)\) for some \(m\). For any \(A\in\operatorname{Diff}(L;\Omega NL)\), let \(B=(1\otimes A^{t})^{t}\in\operatorname{Diff}(T,L)\). Since \(u\in K(M,L)\) and \(B\) is local, it follows from the definition of \(I(M,L)\) and (2.18) that \(Bu\in H^{-k}_{L}(T)\subset C^{\prime-k}_{L}(T)\) for some \(k\geq m\). On the other hand, \(u=\partial_{x}^{m}\delta_{L}^{v}\) for some \(v\in C^{-\infty}(L;\Omega NL)\). Then (4.17) and (7.1) yield \[Bu=B\partial_{x}^{m}\delta_{L}^{v}=\partial_{x}^{m}B\delta_{L}^{v}=\partial_{ x}^{m}\delta_{L}^{B^{\prime}v}=\partial_{x}^{m}\delta_{L}^{Av}\;.\] Therefore, by Proposition 7.5, \[Bu\in C^{\prime-k}_{L}(M)\cap\partial_{x}^{m}C^{-\infty}(L;\Omega NL)= \partial_{x}^{m}C^{\prime\,m-k}(L;\Omega NL)\;.\] This means that \(Av\in C^{\prime\,m-k}(L;\Omega NL)\). So \(v\in C^{\infty}(L;\Omega NL)\) because \(A\) is arbitrary, and therefore \(u\in\partial_{x}^{m}C^{\infty}(L;\Omega NL)\). The surjectivity of (7.37) follows from Lemma 7.23 and the surjectivity of (7.36). Finally, (7.36) is open like in Proposition 7.5, using that \(C^{\infty}(L;\Omega^{-1}NL)\) is webbed and \(K(M,L)\) ultrabornological (Corollary 7.19). So (7.37) is also open. _Remark 7.25_.: Proposition 7.24 reconfirms Corollary 7.17 _Remark 7.26_.: In Section 6.13, for any compact manifold with boundary \(M\), the analog of Proposition 7.5 for \(\mathcal{K}(M)\) follows from (6.48) and the application of Proposition 7.5 to \(K(\check{M},\partial M)\). ### The conormal sequence The diagram (7.21) has the restriction \[\begin{CD}0\to\mathcal{K}(\boldsymbol{M})@>{\iota}>{}>\check{\mathcal{A}}( \boldsymbol{M})@>{R}>{}>\mathcal{A}(\boldsymbol{M})\to 0\\ @V{\boldsymbol{\pi}_{*}}V{}V@V{\boldsymbol{\pi}_{*}}V{}V@V{\cong}V{}V\\ 0\to K(M,L)@>{\iota}>{}>I(M,L)@>{R}>{}>J(M,L)\to 0\;.\end{CD} \tag{7.38}\] The following analog of Proposition 7.4 holds true with formally the same proof, using (7.38). **Proposition 7.27**.: _The maps (7.30) and (7.31) are surjective topological homomorphisms._ By Proposition 7.27, the bottom row of (7.38) is exact in the category of continuous linear maps between LCSs; it will be called the _conormal sequence_ of \(M\) at \(L\) (or of \((M,L)\)). The surjectivity of (7.31) can be realized with partial extension maps, as stated in the following analog of Proposition 6.26. **Corollary 7.28**.: _For all \(m\in\mathbb{R}\), there is a continuous linear partial extension map \(E_{m}:J^{m}(M,L)\to I^{(s)}(M,L)\), where \(s=0\) if \(m\geq 0\), and \(m>s\in\mathbb{Z}^{-}\) if \(m<0\)._ Proof.: By the commutativity of (7.38), we can take \(E_{m}\) equal to the composition \[J^{m}(M,L)\xrightarrow{\boldsymbol{\pi}_{-1}^{-1}}\mathcal{A}^{m}(\boldsymbol {M})\xrightarrow{E_{m}}\dot{\mathcal{A}}^{(s)}(\boldsymbol{M})\xrightarrow{ \boldsymbol{\pi}_{*}}I^{(s)}(M,L)\;,\] where this map \(E_{m}\) is given by Proposition 6.26. **Corollary 7.29**.: \(C^{\infty}(M)\) _is dense in \(J(M,L)\)._ Proof.: Apply (7.28), Corollary 4.6 and Proposition 7.27. ### Action of \(\operatorname{Diff}(M)\) on the conormal sequence According to Section 4.5, every \(A\in\operatorname{Diff}(M)\) defines a continuous linear map \(A\) on \(I(M,L)\), which preserves \(K(M,L)\) because \(A\) is local. Therefore it induces a linear map \(A\) on \(J(M,L)\), which is continuous by Proposition 7.27. This map satisfies the analog of (4.16). 
The map \(A\) on \(J(M,L)\) can be also described as a restriction of \(A\) on \(C^{-\infty}(M,L)\) (Section 7.6). On the other hand, according to Section 6.14, the lift \(\widetilde{A}\in\operatorname{Diff}(\boldsymbol{M})\) defines continuous linear maps on the top spaces of (7.38) which correspond to the operators defined by \(A\) on the bottom spaces via the maps \(\boldsymbol{\pi}_{*}\). If \(A\in\operatorname{Diff}(M,L)\), then it defines continuous endomorphisms \(A\) of \(J^{(s)}(M,L)\) and \(J^{m}(M,L)\). ### Pull-back of elements of the conormal sequence Consider the notation and conditions of Section 4.6. By locality, the map (4.20) has a restriction \(\phi^{*}:K(M,L)\to K(M^{\prime},L^{\prime})\). So it also induces a linear map \(\phi^{*}:J(M,L)\to J(M^{\prime},L^{\prime})\), which is continuous by Proposition 7.27. These maps satisfy the analogs of (4.19) and (4.21). ### Push-forward of elements of the conormal sequence Consider the notation and conditions of Section 4.7. As above, the map (4.23) has a restriction \(\phi_{*}:K_{\mathrm{c}}(M^{\prime},L^{\prime};\Omega_{\mathrm{fiber}})\to K_{ \mathrm{c}}(M,L)\). Thus it induces a linear map \(\phi_{*}:J_{\mathrm{c}}(M^{\prime},L^{\prime};\Omega_{\mathrm{fiber}})\to J_{ \mathrm{c}}(M,L)\), which is continuous by Proposition 7.27. These maps satisfy the analogs of (4.22) and (4.24). ### Case where \(L\) is not transversely orientable If \(L\) is not transversely orientable, we still have a tubular neighborhood \(T\) of \(L\) in \(M\), but there is no defining function \(x\) of \(L\) in \(T\) trivializing the projection \(\varpi:T\to L\). We can cut \(M\) along \(L\) as well to produce a bounded compact manifold, \(\boldsymbol{M}\), with a projection \(\boldsymbol{\pi}:\boldsymbol{M}\to M\) and a boundary collar \(\boldsymbol{T}\) over \(T\). By using a boundary defining function \(\boldsymbol{x}\) of \(\boldsymbol{M}\), we get the same definitions, properties and descriptions of \(C^{\pm\infty}(M,L)\) and \(J(M,L)\) (Sections 7.3, 7.7 and 7.8). \(C^{-\infty}_{L}(M)\) and \(K(M,L)\) also have the same definitions (Sections 7.4 and 7.10). However (7.19) and (7.32) are not true because the covering map \(\boldsymbol{\pi}:\partial\boldsymbol{M}\to L\) is not trivial, and the descriptions given in Propositions 7.5 and 7.24 need a slight modification. This problem can be solved as follows. Let \(\check{\pi}:\check{L}\to L\) denote the two-fold covering of transverse orientations of \(L\), and let \(\check{\sigma}\) denote its deck transformation different from the identity. Since the lift of \(NL\) to \(\check{L}\) is trivial, \(\check{\pi}\) on \(\check{L}\equiv\{0\}\times\check{L}\) can be extended to a two-fold covering \(\check{\pi}:\check{T}:=(-\epsilon,\epsilon)_{x}\times\check{L}\to T\), for some \(\epsilon>0\). Its deck transformation different from the identity is an extension of \(\check{\sigma}\) on \(\check{L}\equiv\{0\}\times\check{L}\), also denoted by \(\check{\sigma}\). Then \(\check{L}\) is transversely oriented in \(\check{T}\); i.e., its normal bundle \(N\check{L}\) is trivial. Thus \(C^{-\infty}_{\check{L}}(\check{T})\) and \(K(\check{T},\check{L})\) satisfy (7.19), (7.32) and Propositions 7.5 and 7.24. Since \(N\check{L}\equiv\check{\pi}^{*}NL\), the map \(\check{\sigma}\) lifts to a homomorphism of \(N\check{L}\), which induces a homomorphism of \(\Omega^{-1}NL\) also denoted by \(\check{\sigma}\). 
Let \(L_{-1}\) be the union of non-transversely oriented connected components of \(L\), and \(L_{1}\) the union of its transversely oriented components. Correspondingly, let \(\check{L}_{\pm 1}=\check{\pi}^{-1}(L_{\pm 1})\) and \(\check{T}_{\pm 1}=(-\epsilon,\epsilon)\times\check{L}_{\pm 1}\). Since \(\check{\sigma}^{*}x=\pm x\) on \(T_{\pm 1}\), Propositions 7.5 and 7.24 become true in this case by replacing \(C^{r}(L;\Omega^{-1}NL)\) (\(r\in\mathbb{Z}\cup\{\pm\infty\}\)) with the direct sum of the spaces \[\{\,u\in C^{r}(L_{\pm 1};\Omega^{-1}NL_{\pm 1})\mid\check{\sigma}^{*}u=\pm u \,\}\;.\] Now the other results about \(C^{-\infty}_{L}(M)\) and \(K(M,L)\), indicated in Sections 7.4, 7.5, 7.10 and 7.11, can be obtained by using these extensions of Propositions 7.5 and 7.24 instead of (7.19) and (7.32). Sections 7.12 to 7.15 also have strightforward extensions. ## 8. Dual-conormal sequence ### Dual-conormal distributions Consider the notation of Section 7 assuming that \(L\) is transversely oriented; the extension to the non-transversely orientable case can be made like in Section 7.16. Like in Sections 6.22 and 8.1, let \[K^{\prime}(M,L)=K(M,L;\Omega)^{\prime}\;,\quad J^{\prime}(M,L)=J(M,L;\Omega)^{ \prime}\;.\] By (7.26) and (7.32), \[\mathcal{K}^{\prime}(\boldsymbol{M})\equiv K^{\prime}(M,L)\oplus K^{\prime}(M,L)\;,\quad\dot{\mathcal{A}}^{\prime}(\boldsymbol{M})\equiv J^{\prime}(M,L)\;. \tag{8.1}\] Let also \[\begin{cases}K^{\prime\,(s)}(M,L)=K^{(-s)}(M,L;\Omega)^{\prime}\;,\quad K^{ \prime\,m}(M,L)=K^{-m}(M,L;\Omega)^{\prime}\;,\\ J^{\prime\,(s)}(M,L)=J^{(-s)}(M,L;\Omega)^{\prime}\;,\quad J^{\prime\,m}(M,L)= J^{-m}(M,L;\Omega)^{\prime}\;,\end{cases} \tag{8.2}\] which satisfy the analogs of (8.1). Like in Section 6.22, for \(s<s^{\prime}\) and \(m<m^{\prime}\), we get continuous linear restriction maps \[K^{\prime\,(s^{\prime})}(M,L)\to K^{\prime\,(s)}(M,L)\;,\quad K^{\prime\,m}( M,L)\to K^{\prime\,m^{\prime}}(M,L)\;,\] and continuous injections \[J^{\prime\,(s^{\prime})}(M,L)\subset J^{\prime\,(s)}(M,L)\;,\quad J^{\prime\, m^{\prime}}(M,L)\subset J^{\prime\,m}(M,L)\;,\] forming projective spectra. By (8.1), its analogs for the spaces (8.2) and according to Section 6.22, the spaces \(K^{\prime\,(s)}(M,L)\) and \(K^{\prime\,m}(M,L)\) satisfy the analogs of (5.4) and (5.5), and the spaces \(J^{\prime\,(s)}(M,L)\) and \(J^{\prime\,m}(M,L)\) satisfy the analogs of (6.55) and (6.56). Using (8.1), we get the following consequences of Proposition 6.39 and Corollaries 6.40 and 6.41. **Corollary 8.1**.: \(K^{\prime}(M,L)\) _and \(J^{\prime}(M,L)\) are complete Montel spaces._ **Corollary 8.2**.: \(K^{\prime\,(s)}(M,L)\) _and \(J^{\prime\,(s)}(M,L)\) are bornological and barreled._ **Corollary 8.3**.: \(K^{\prime}(M,L)\equiv\varprojlim K^{\prime\,(s)}(M,L)\) _and \(J^{\prime}(M,L)\equiv\bigcap_{\mathrm{s}}J^{\prime\,(s)}(M,L)\)._ Like in Section 6.22, the versions of (7.15), (7.28) and Corollary 7.12 with \(\Omega M\) induce continuous inclusions \[C^{-\infty}(M)\supset J^{\prime}(M,L)\supset C^{\infty}(M,L)\;. \tag{8.3}\] ### A description of \(J^{\prime}(M,L)\) With the notation of Section 7.8, the identity (8.1) and Corollaries 6.45 to 6.47 have the following consequences. 
**Corollary 8.4**.: \(J^{\prime\,m}(M,L)\cong\boldsymbol{x}^{m}H_{\mathrm{b}}^{-\infty}(\boldsymbol{M})=\boldsymbol{x}^{m-\frac{1}{2}}H^{-\infty}(\mathring{\boldsymbol{M}})\) _(\(m\in\mathbb{R}\))._

**Corollary 8.5**.: \(J^{\prime}(M,L)\cong\bigcap_{m}\boldsymbol{x}^{m}H_{\mathrm{b}}^{-\infty}(\boldsymbol{M})=\bigcap_{m}\boldsymbol{x}^{m}H^{-\infty}(\mathring{\boldsymbol{M}})\)_._

**Corollary 8.6**.: \(C^{\infty}_{\mathrm{c}}(M\setminus L)\) _is dense in every \(J^{\prime\,m}(M,L)\) and in \(J^{\prime}(M,L)\). Therefore the first inclusion of (8.3) is also dense._

The analog of Remark 6.48 makes sense for \(J^{\prime}(M,L)\).

### Description of \(K^{\prime}(M,L)\)

The version of Proposition 7.24 with \(\Omega M\) has the following direct consequence, where we set \[C_{m}^{2}=C^{\infty}(L;\Omega^{-1}NL\otimes\Omega M)^{\prime}=C^{\infty}(L;\Omega)^{\prime}=C^{-\infty}(L)\] for every \(m\in\mathbb{N}_{0}\).

**Corollary 8.7**.: _The transposes of (7.36) and (7.37) are TVS-isomorphisms,_ \[K^{\prime}(M,L)\xrightarrow{\cong}\prod_{m=0}^{\infty}C_{m}^{2}\;,\quad K^{\prime\,(s)}(M,L)\xrightarrow{\cong}\prod_{m<s-1/2}C_{m}^{2}\quad(s>1/2)\;.\]

### Dual-conormal sequence

The transpose of the density-bundles version of (7.38) is the commutative diagram \[\begin{CD}0\leftarrow\mathcal{K}^{\prime}(\boldsymbol{M})@<{\iota^{t}}<{}<\mathcal{A}^{\prime}(\boldsymbol{M})@<{R^{t}}<{}<\dot{\mathcal{A}}^{\prime}(\boldsymbol{M})\gets 0\\ @A{\boldsymbol{\pi}^{*}}A{}A@A{\boldsymbol{\pi}^{*}}A{}A@A{\boldsymbol{\pi}^{*}}A{\cong}A\\ 0\leftarrow K^{\prime}(M,L)@<{\iota^{t}}<{}<I^{\prime}(M,L)@<{R^{t}}<{}<J^{\prime}(M,L)\gets 0\;,\end{CD} \tag{8.4}\] where the right-hand side vertical map is the TVS-isomorphism of (8.1). The bottom row of (8.4) will be called the _dual-conormal sequence_ of \((M,L)\).

### Push-forward of elements of the dual-conormal sequence

With the notations and conditions of Section 5.4, besides (5.12), we get continuous linear push-forward maps \(\phi_{*}:K^{\prime}_{\mathrm{c}}(M^{\prime},L^{\prime};\Omega_{\mathrm{fiber}})\to K^{\prime}_{\mathrm{c}}(M,L)\) and \(\phi_{*}:J^{\prime}_{\mathrm{c}}(M^{\prime},L^{\prime};\Omega_{\mathrm{fiber}})\to J^{\prime}_{\mathrm{c}}(M,L)\). The analogs of (5.11) and (5.13) in this setting are also true.

### \(I(M,L)\) vs \(I^{\prime}(M,L)\)

**Lemma 8.9**.: _For all \(m\in\mathbb{N}_{0}\), \(\boldsymbol{\pi}^{*}\big{(}H^{-m}(M)\cap I^{\prime}(M,L)\big{)}\subset\dot{H}^{-m}(\boldsymbol{M})\cap\mathcal{A}^{\prime}(\boldsymbol{M})\)._

Proof.: Using a volume form on \(M\) and its lift to \(\boldsymbol{M}\) to define a scalar product of \(L^{2}(M)\) and \(L^{2}(\boldsymbol{M})\), it follows that \(\boldsymbol{\pi}^{*}:C^{\infty}(M)\to C^{\infty}(\boldsymbol{M})\) induces a unitary isomorphism \(\boldsymbol{\pi}^{*}:L^{2}(M)\to L^{2}(\boldsymbol{M})\). Hence the statement is true for \(m=0\) because \(L^{2}(\boldsymbol{M})\equiv\dot{H}^{0}(\boldsymbol{M})\) (Section 6.4).
Then, for arbitrary \(m\), by (2.21) and (6.24), and according to Section 7.1, \[\boldsymbol{\pi}^{*}\big{(}H^{-m}(M)\cap I^{\prime}(M,L)\big{)} =\boldsymbol{\pi}^{*}\big{(}\operatorname{Diff}^{m}(M)\,L^{2}(M)\cap I^{\prime}(M,L)\big{)}\subset\operatorname{Diff}^{m}(\boldsymbol{M})\,\boldsymbol{\pi}^{*}L^{2}(M)\cap\mathcal{A}^{\prime}(\boldsymbol{M})\] \[=\operatorname{Diff}^{m}(\boldsymbol{M})\,\dot{H}^{0}(\boldsymbol{M})\cap\mathcal{A}^{\prime}(\boldsymbol{M})\subset\dot{H}^{-m}(\boldsymbol{M})\cap\mathcal{A}^{\prime}(\boldsymbol{M})\;.\qed\]

**Lemma 8.10**.: \(\boldsymbol{\pi}^{*}\big{(}I(M,L)\cap I^{\prime}(M,L)\big{)}\subset C^{\infty}(\boldsymbol{M})\)_._

Proof.: For every \(u\in I(M,L)\cap I^{\prime}(M,L)\), there is some \(m\in\mathbb{N}_{0}\) such that \(u\in I^{(-m)}(M,L)\). Then, by Lemma 8.9, for any \(B\in\operatorname{Diff}(M,L)\), \[\widetilde{B}\boldsymbol{\pi}^{*}u=\boldsymbol{\pi}^{*}Bu\in\boldsymbol{\pi}^{*}\big{(}H^{-m}(M)\cap I^{\prime}(M,L)\big{)}\subset\dot{H}^{-m}(\boldsymbol{M})\cap\mathcal{A}^{\prime}(\boldsymbol{M})\;.\] Since the operators \(\widetilde{B}\) \((B\in\operatorname{Diff}(M,L))\) generate \(\operatorname{Diff}(\widetilde{M},\widetilde{L})\) as \(C^{\infty}(\widetilde{M})\)-module, it follows that \(u\in\dot{\mathcal{A}}(\boldsymbol{M})\cap\mathcal{A}^{\prime}(\boldsymbol{M})=C^{\infty}(\boldsymbol{M})\) by (6.60).

**Theorem 8.11**.: \(I(M,L)\cap I^{\prime}(M,L)=C^{\infty}(M)\)_._

Proof.: Suppose there is some non-smooth \(u\in I(M,L)\cap I^{\prime}(M,L)\). By Lemma 8.10, \(\boldsymbol{\pi}^{*}u\in C^{\infty}(\boldsymbol{M})\). Then there is a chart \((V,y)\) of \(L\) such that, for the induced chart \((U\equiv(-\epsilon,\epsilon)\times V,(x,y))\) of \(M\), the function \(u\) is smooth on \(((-\epsilon,0)\cup(0,\epsilon))\times V\), and has smooth extensions to \((-\epsilon,0]\times V\) and \([0,\epsilon)\times V\), but \(\partial_{x}^{m}u(0^{-},y_{0})\neq\partial_{x}^{m}u(0^{+},y_{0})\) for some \(m\in\mathbb{N}_{0}\) and \(y_{0}\in V\). After multiplying \(u\) by a smooth function supported in \(U\) whose value at \(y_{0}\) is nonzero, we can assume \(u\) is supported in \((-\epsilon/2,\epsilon/2)\times V\). Then there is some \(v\in C^{\infty}(L;\Omega)\) such that \(\operatorname{supp}v\subset V\) and \[\int_{y\in V}(u(0^{-},y)-u(0^{+},y))\,v(y)\neq 0\;. \tag{8.5}\] On the other hand, there is a sequence \(\phi_{k}\in C^{\infty}_{\mathrm{c}}(-\epsilon,\epsilon)\) so that the restrictions of the \(m\)th derivatives \(\phi_{k}^{(m)}\) to \((-\epsilon/2,\epsilon/2)\) are compactly supported and converge to \(\delta_{0}\) in \(C^{-\infty}(-\epsilon/2,\epsilon/2)\) as \(k\to\infty\). For instance, we may take \[\phi_{k}(t)=h(t)\int_{0}^{t}\int_{0}^{t_{m-1}}\cdots\int_{0}^{t_{1}}f_{k}(t_{0})\,dt_{0}\cdots dt_{m-1}\;,\] where \(h,f_{k}\in C^{\infty}_{\mathrm{c}}(-\epsilon,\epsilon)\) with \(h=1\) on \((-\epsilon/2,\epsilon/2)\), \(\operatorname{supp}f_{k}\subset(-\epsilon/2,\epsilon/2)\), \(f_{k}\) is even, and \(f_{k}\to\delta_{0}\) in \(C^{-\infty}_{\mathrm{c}}(-\epsilon/2,\epsilon/2)\) and \(f_{k}(0)\to\infty\) as \(k\to\infty\). Thus \[\phi_{k}^{(m)}(0)=f_{k}(0)\to\infty\;, \tag{8.6}\] \[\int_{-\infty}^{0}a(t)\phi_{k}^{(m)}(t)\,dt\to\frac{a(0)}{2}\;,\quad\int_{0}^{\infty}b(t)\phi_{k}^{(m)}(t)\,dt\to\frac{b(0)}{2}\;, \tag{8.7}\] for all \(a\in C_{\mathrm{c}}^{\infty}(-\infty,0]\) and \(b\in C_{\mathrm{c}}^{\infty}[0,\infty)\).
The sequence \(w_{k}\equiv\phi_{k}(x)\,v(y)\otimes|dx|\in C_{\mathrm{c}}^{\infty}(T;\Omega)\subset C^{\infty}(M;\Omega)\) satisfies \[\partial_{x}^{m}w_{k}\equiv\phi_{k}^{(m)}(x)\,v(y)\otimes|dx|\to\delta_{0}(x)\,v(y)\otimes|dx|\equiv\delta_{L}^{v}\] in \(I(M,L;\Omega)\) as \(k\to\infty\). Since \(u\in I^{\prime}(M,L)\) and \(\partial_{x}^{m+1}w_{k}\in I(M,L;\Omega)\), it follows that \(\langle u,\partial_{x}^{m+1}w_{k}\rangle\to\langle u,\partial_{x}\delta_{L}^{v}\rangle\) as \(k\to\infty\). But \[\langle u,\partial_{x}^{m+1}w_{k}\rangle =\int_{y\in V}\int_{-\epsilon/2}^{0}u(x,y)\,\phi_{k}^{(m+1)}(x)\,v(y)\,dx\] \[\quad+\int_{y\in V}\int_{0}^{\epsilon/2}u(x,y)\,\phi_{k}^{(m+1)}(x)\,v(y)\,dx\] \[=\phi_{k}^{(m)}(0)\int_{y\in V}\left(u(0^{-},y)-u(0^{+},y)\right)v(y)\] \[\quad-\int_{y\in V}\int_{-\epsilon/2}^{0}\partial_{x}u(x,y)\,\phi_{k}^{(m)}(x)\,v(y)\,dx\] \[\quad-\int_{y\in V}\int_{0}^{\epsilon/2}\partial_{x}u(x,y)\,\phi_{k}^{(m)}(x)\,v(y)\,dx\;,\] which is divergent by (8.5)-(8.7).
2306.16783
TacMMs: Tactile Mobile Manipulators for Warehouse Automation
Multi-robot platforms are playing an increasingly important role in warehouse automation for efficient goods transport. This paper proposes a novel customization of a multi-robot system, called Tactile Mobile Manipulators (TacMMs). Each TacMM integrates a soft optical tactile sensor and a mobile robot with a load-lifting mechanism, enabling cooperative transportation in tasks requiring coordinated physical interaction. More specifically, we mount the TacTip (biomimetic optical tactile sensor) on the Distributed Organisation and Transport System (DOTS) mobile robot. The tactile information then helps the mobile robots adjust the relative robot-object pose, thereby increasing the efficiency of load-lifting tasks. This study compares the performance of using two TacMMs with tactile perception with traditional vision-based pose adjustment for load-lifting. The results show that the average success rate of the TacMMs (66%) is improved over a purely visual-based method (34%), with a larger improvement when the mass of the load was non-uniformly distributed. Although this initial study considers two TacMMs, we expect the benefits of tactile perception to extend to multiple mobile robots. Website: https://sites.google.com/view/tacmms
Zhuochao He, Xuyang Zhang, Simon Jones, Sabine Hauert, Dandan Zhang, Nathan F. Lepora
2023-06-29T08:42:01Z
http://arxiv.org/abs/2306.16783v1
# TacMMs: Tactile Mobile Manipulators for Warehouse Automation ###### Abstract Multi-robot platforms are playing an increasingly important role in warehouse automation for efficient goods transport. This paper proposes a novel customization of a multi-robot system, called Tactile Mobile Manipulators (TacMMs). Each TacMM integrates a soft optical tactile sensor and a mobile robot with a load-lifting mechanism, enabling cooperative transportation in tasks requiring coordinated physical interaction. More specifically, we mount the TacTip (biomimetic optical tactile sensor) on the Distributed Organisation and Transport System (DOTS) mobile robot. The tactile information then helps the mobile robots adjust the relative robot-object pose, thereby increasing the efficiency of load-lifting tasks. This study compares the performance of using two TacMMs with tactile perception with traditional vision-based pose adjustment for load-lifting. The results show that the average success rate of the TacMMs (\(66\%\)) is improved over a purely visual-based method (\(34\%\)), with a larger improvement when the mass of the load was non-uniformly distributed. Although this initial study considers two TacMMs, we expect the benefits of tactile perception to extend to multiple mobile robots. Website: [https://sites.google.com/view/tacmms](https://sites.google.com/view/tacmms). Tactile Sensing, Multi-robot system, Warehouse transportation ## I Introduction With the increasing number of online customized orders, there are higher requirements for transportation and warehouse management [1]. Currently, traditional mobile robots, such as Automated Guided Vehicles (AGVs) [2] and forklifts [3], automatically localize products then lift and transport them in the warehouse. Of these, load-lifting is an important step for warehouse automation, for which a robot must: 1) perceive the pose of the load; 2) determine the optimal lifting position on the target based on the perceived pose; and 3) control the contact pressure to achieve robust lifting and avoid damage to the load. However, due to difficult to precisely control object contact for robust lifting, traditional visually-guided mobile robots are limited for transporting goods with a distal non-contact modality. Motivated by these shortcomings, multi-robot cooperative systems with soft end-effectors have been developed for warehouse transportation [4, 5, 6], resulting in more efficient and robust systems, with higher performance and the ability to lift heavier products. The transportation strategies of this kind of robotic system include pushing, grasping and caging [7, 8, 9]. Moreover, grasping or lifting with multiple robots is analogous to using the fingers of a human or robot hand, where it is known that to achieve safe lifting the fingers must pre-adjust to a desired pose and apply a reasonable force. In multi-robot cooperative systems, the robots commonly rely on an external vision system to feedback the relative pose of the target objects. However, as is well known in robot grasping, such vision systems are limited by occlusion, calibration issues and inaccuracy from a camera situated away from the target [10]. In contrast, tactile sensing offers the capability to estimate the relative robot-object pose for the lifting task without the aforementioned issues of vision. Here we use an optical tactile sensor called the TacTip (Tactile fingerTip) [11, 12] which has a 3D-printed soft dome-like structure mounted over an internal camera and lighting. 
This sensor is well-suited for Tactile Mobile Manipulators (TacMMs), being of the right size and shape to mount on the top liftable platform of DOTS (Distributed Organisation and Transport System) mobile robots designed for cooperative automation [13]. Furthermore, both the TacTip and DOTS are open-source and easily fabricated, enabling others to customize and build upon this work.

Fig. 1: TacMM system lifting a box. Top: a box lifted by two DOTS mobile robots, each with a TacTip optical tactile sensor mounted on a liftable platform; tactile images also shown. Bottom: steps to lift a box: (a) approach object; (b) adjust pose after initial contact; (c) establish a second contact with object; (d) adjust pose after the second contact; (e) lift object; (f) lower object.
2301.02751
Hadamard matrices: skew of order 276 and symmetric of order 372
The smallest integer v>0 for which no skew-Hadamard matrix of order 4v is known is v=69. We show how to construct several such matrices. We also construct the first examples of symmetric Hadamard matrices of order 372.
Dragomir Ž. Djoković
2023-01-07T00:11:59Z
http://arxiv.org/abs/2301.02751v2
# Skew-Hadamard matrices of order 276

###### Abstract.

The smallest integer \(v>0\) for which no skew-Hadamard matrix of order \(4v\) is known is \(v=69\). We show how to construct several such matrices.

_In memory of my son Dejan Djokovic (1962-2022)._

## 1. Introduction

According to Table 7.1 of the survey paper of Seberry and Yamada [9], published in 1992, there were only six odd integers \(v<100\) for which no skew-Hadamard matrix of order \(4v\) was known at that time, namely the integers \[47,59,69,81,89,97.\] Subsequently, the skew-Hadamard matrices of order \(4v\) were constructed in 1994 for \(v=81\)[4], in 2004 for \(v=59\)[8], and in 2008 for \(v=47,97\)[5]. In this note we construct 19 skew-Hadamard matrices of order \(276\) (\(=4\cdot 69\)). Let us make two remarks. First, the case \(v=63\) is listed as unknown in the handbook [2, Table 1.51, p. 277] published in 2007. However the existence of a skew-Hadamard matrix of order \(4\cdot 63\) has been known since 1969, as it belongs to an infinite series of such matrices constructed by Szekeres [11]. As this handbook does not list \(v=69\) as unknown, it is probable that this was just a misprint: 63 should be replaced by 69? Second, in the more recent book [10, Table 9.2, pp. 198-200], the cases \(v=39,49,65\) are listed as unknown. However the corresponding skew-Hadamard matrices have been constructed long ago in [3].

## 2. The first skew-Hadamard matrices of order 276

As far as we know, the smallest integer \(v>0\) for which no skew-Hadamard matrix of order \(4v\) is known is \(v=69\) [1, p. 1436]. In this section we construct several such matrices. Our construction uses the Goethals-Seidel array (GS-array) shown below \[\left[\begin{array}{cccc}A_{0}&A_{1}R&A_{2}R&A_{3}R\\ -A_{1}R&A_{0}&-RA_{3}&RA_{2}\\ -A_{2}R&RA_{3}&A_{0}&-RA_{1}\\ -A_{3}R&-RA_{2}&RA_{1}&A_{0}\end{array}\right].\] We shall assume that the \(A_{i}\) are circulants and \(R\) is the back-diagonal identity matrix (i.e. the matrix obtained from the identity matrix by reversing the order of rows). The circulants are obtained from a cyclic difference family \(\{X_{0},X_{1},X_{2},X_{3}\}\) with parameters \[(v=69;k_{0}=34,k_{1}=34,k_{2}=31,k_{3}=27;\lambda=57).\] For instance, for the first row \((a_{0},a_{1},\ldots,a_{v-1})\) of \(A_{0}\) we have \(a_{i}=-1\) if \(i\in X_{0}\) and \(a_{i}=1\) otherwise. Moreover it is required that the block \(X_{0}\) is skew, i.e. \(a_{0}=1\) and \(a_{i}+a_{v-i}=0\) for \(i=1,2,\ldots,34\). A special feature of our difference families is that they break up into two pieces \(\{X_{0},X_{1}\}\) and \(\{X_{2},X_{3}\}\) which are also difference families. First, we need a difference family \(\{X_{0},X_{1}\}\) with parameters \((69;34,34;33)\) with \(X_{0}\) skew. This is provided by the well known family of Szekeres difference sets [11, 10]: \[X_{0} = \{1,2,6,7,9,13,14,16,17,18,21,27,31,34,36,37,39,40,\] \[41,43,44,45,46,47,49,50,54,57,58,59,61,64,65,66\};\] \[X_{1} = \{1,4,5,7,9,10,11,12,15,17,18,19,24,26,27,28,30,39,\] \[41,42,43,45,50,51,52,54,57,58,59,60,62,64,65,68\}.\] Note that \(X_{0}\) is skew and \(X_{1}\) is symmetric. Further, all \(19\) difference families share the same first two blocks, \(X_{0}\) and \(X_{1}\). Second, we need a difference family \(\{X_{2},X_{3}\}\) with parameters \((69;31,27;24)\). This is exactly the parameter set for a D-optimal design of order \(2\cdot 69=138\). In a joint paper with I. Kotsireas [7, Section 4.2], we have constructed \(19\) nonequivalent such difference families.
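The construction described above is straightforward to check by computer. The following is a minimal sketch, assuming Python with NumPy (the function names are illustrative, not part of this note): it builds the \(\pm 1\) circulants from given index sets, assembles the GS-array, and verifies the defining conditions \(HH^{T}=4v\,I\) and \(H+H^{T}=2I\) of a skew-Hadamard matrix.

```python
import numpy as np

def circulant_pm1(v, block):
    # First row: -1 at the positions listed in `block`, +1 elsewhere;
    # row i is the cyclic shift of the first row by i positions.
    first = np.ones(v, dtype=int)
    first[list(block)] = -1
    return np.array([np.roll(first, i) for i in range(v)])

def goethals_seidel(A0, A1, A2, A3):
    # Goethals-Seidel array with circulant blocks A_i and the
    # back-diagonal identity matrix R.
    v = A0.shape[0]
    R = np.fliplr(np.eye(v, dtype=int))
    return np.block([
        [ A0,      A1 @ R,  A2 @ R,  A3 @ R],
        [-A1 @ R,  A0,     -R @ A3,  R @ A2],
        [-A2 @ R,  R @ A3,  A0,     -R @ A1],
        [-A3 @ R, -R @ A2,  R @ A1,  A0    ],
    ])

def is_skew_hadamard(H):
    n = H.shape[0]
    I = np.eye(n, dtype=int)
    return (np.array_equal(H @ H.T, n * I)       # Hadamard: H H^T = n I
            and np.array_equal(H + H.T, 2 * I))  # skew type: H + H^T = 2 I

# Usage: with v = 69, copy the index sets X0, X1, X2, X3 listed in the text,
# then
#   H = goethals_seidel(*(circulant_pm1(69, X) for X in (X0, X1, X2, X3)))
#   assert is_skew_hadamard(H)   # H has order 276
```

Passing the blocks \(X_{0},X_{1}\) above together with any of the \(19\) admissible pairs \(\{X_{2},X_{3}\}\), such as the one chosen below, yields the corresponding matrix of order \(276\).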
Any one of them can be used in our construction. As an example, let us choose the first one: \[X_{2} = \{0,1,3,4,6,9,10,11,13,14,17,18,20,22,26,28,29,32,33,34,39,41,43,45,46,48,51,59,60,62,63\},\] \[X_{3} = \{0,2,3,4,8,9,10,11,12,15,16,17,21,25,26,32,33,35,36,37,39,41,46,51,54,57,59\}.\] By constructing the circulants \(A_{i}\) from the blocks \(X_{i}\) and by plugging the \(A_{i}\) into the GS-array we obtain a skew-Hadamard matrix of order \(276\). Consequently, the smallest positive integer \(v\) for which the existence of a skew-Hadamard matrix of order \(4v\) is still undecided is now \(89\). For the reader's convenience, we note that the first rows of the blocks \(A_{i}\) (for the difference family chosen above) are the \(\pm 1\) sequences of length \(69\) obtained from the blocks \(X_{0},X_{1},X_{2},X_{3}\) by the rule stated above. As far as we know, the odd integers \(v>0\) less than \(200\) for which the existence of skew-Hadamard matrices of order \(4v\) is still undecided are the following: \[89,101,107,119,149,153,167,177,179,191,193.\] After taking into account the papers [5, 6] (and correcting the hypothetical misprint mentioned earlier), this list agrees with [2, Table 1.51, p. 277].
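The construction is straightforward to verify by computer. The sketch below (an illustrative NumPy script written for this text, not code accompanying the paper) builds the circulants from a given difference family, assembles the Goethals-Seidel array, and checks both the Hadamard condition \(HH^{T}=4vI\) and the skew condition \(H+H^{T}=2I\).

```python
import numpy as np

def first_row(block, v):
    # Entry i is -1 if i belongs to the block and +1 otherwise.
    s = set(block)
    return np.array([-1 if i in s else 1 for i in range(v)])

def circulant(row):
    v = len(row)
    return np.array([[row[(j - i) % v] for j in range(v)] for i in range(v)])

def goethals_seidel(X0, X1, X2, X3, v):
    A0, A1, A2, A3 = (circulant(first_row(X, v)) for X in (X0, X1, X2, X3))
    R = np.flipud(np.eye(v, dtype=int))  # back-diagonal identity matrix
    return np.block([
        [ A0,       A1 @ R,   A2 @ R,   A3 @ R],
        [-A1 @ R,   A0,      -R @ A3,   R @ A2],
        [-A2 @ R,   R @ A3,   A0,      -R @ A1],
        [-A3 @ R,  -R @ A2,   R @ A1,   A0    ],
    ])

def is_skew_hadamard(H):
    n = H.shape[0]
    return (np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
            and np.array_equal(H + H.T, 2 * np.eye(n, dtype=int)))

# Usage with the blocks X0, X1, X2, X3 listed above (v = 69):
# H = goethals_seidel(X0, X1, X2, X3, 69)
# assert is_skew_hadamard(H)   # H has order 276
```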
## 3. Acknowledgements This research was enabled in part by support provided by SHARCNET ([http://www.sharcnet.ca](http://www.sharcnet.ca)) and the Digital Research Alliance of Canada (alliancecan.ca).
2303.03236
An upscaling based three parameter elastic anisotropy model
Rock formations often exhibit transversely anisotropic elastic behavior due to their layered structure. Such materials are characterized by five independent elastic constants. In the context of petroleum applications, it is often challenging to accurately measure all these elastic parameters. At the same time, the effect of elastic anisotropy can be noticeable and therefore there is a need to include it in some form. To fill the gap, this study proposes a three parameter elastic anisotropy model. It captures the dominant anisotropic behavior and yet has only three elastic constants that are relatively easy to measure in laboratory. The approach is based on upscaling of a periodically layered material characterized by equal height layers with different Young's moduli and the same Poisson's ratio. The resultant upscaled material is transversely isotropic and is also physically admissible. The developed approach allows to effectively reconstruct or estimate the values of the two remaining parameters needed for the transversely isotropic model. Comparison between the reconstructed results and the measured values are compared for several rock types.
E. V. Dontsov
2023-03-06T15:53:10Z
http://arxiv.org/abs/2303.03236v1
# An upscaling based three parameter elastic anisotropy model ###### Abstract Rock formations often exhibit transversely anisotropic elastic behavior due to their layered structure. Such materials are characterized by five independent elastic constants. In the context of petroleum applications, it is often challenging to accurately measure all these elastic parameters. At the same time, the effect of elastic anisotropy can be noticeable and therefore there is a need to include it in some form. To fill the gap, this study proposes a three parameter elastic anisotropy model. It captures the dominant anisotropic behavior and yet has only three elastic constants that are relatively easy to measure in laboratory. The approach is based on upscaling of a periodically layered material characterized by equal height layers with different Young's moduli and the same Poisson's ratio. The resultant upscaled material is transversely isotropic and is also physically admissible. The developed approach allows to effectively reconstruct or estimate the values of the two remaining parameters needed for the transversely isotropic model. Comparison between the reconstructed results and the measured values are compared for several rock types. **Keywords:** Transversely isotropic material; hydraulic fracturing; modeling; upscaling. ## 1 Introduction Sedimentary and layered nature of rock formations often leads to transversely isotropic elastic behavior. One of the early works includes [1], in which weak anisotropy is investigated and the so-called Thomsen parameters are introduced to describe it. The study [2] reports the values for the anisotropic elastic constants for Devonian-Mississipian shale that are obtained using sound velocity measurements and the variation of the aforementioned elastic parameters versus confining pressure is observed. A comparison between static and dynamic parameters is investigated in [3]. The dynamic elastic constants are measured by utilizing the variation of sound speed with respect to bedding orientation, while the static properties are calculated on the basis of the stress-strain response of core plugs with various orientations. There are many more other authors who investigated anisotropic elastic properties of shales, see e.g. [4, 5, 6, 7, 8, 9, 10, 11]. Some use dynamic velocity measurements and some focus on the static experiments. It is also interesting to mention the study [8], in which correlations between petrophysical and geomechanical properties are obtained. At the same time, authors in [10] developed correlations between anisotropic elastic constants based on the obtained data. The study [9] also reported the anisotropic values of fracture toughness for Marcellus shale, but surprisingly the degree of anisotropy is relatively mild, on the order of 10%. Finally, the study [11] investigated compaction trends for shales and systematically analyzed the anisotropy of various clays that appear in shales. Elastic anisotropy of rock formations is important for hydraulic fracture propagation. The vast majority of hydraulic fracture simulators assume that the rock formation is isotropic, even though it can be layered. One of the first articles devoted to the influence of elastic anisotropy on hydraulic fracture propagation is [12], where the effect of anisotropy is investigated in the context a constant height model. 
It is concluded that for such a geometry the solution is identical to that for an isotropic material, but features an apparent elastic modulus that depends on the anisotropic constants. A plane strain fracture is analyzed in [13]. It is again observed that the solution is the same as for an isotropic material, but the apparent elastic modulus depends on the anisotropic constants as well as on orientation of the fracture relative to the bedding layers. The analysis is extended to a uniformly pressurized elliptical fracture in [14]. The variation of the elliptical fracture aspect ratio versus propagation regime and the degree of anisotropy is investigated in [15]. Further, the study [16] outlines the parametric map for an elliptical hydraulic fracture propagating in a homogeneous anisotropic material. Numerical modeling is employed in [17] and [18] to better understand the influence of elastic anisotropy on the hydraulic fracture propagation near the wellbore and for multiple pseudo-3D fractures. Finally, results of the modeling with a fully planar hydraulic fracture model are presented in [19]. In order to specify the transversely isotropic material for the purpose of hydraulic fracture modeling, five elastic constants should be provided. On the other hand, isotropic material is quantified by only two constants. Therefore, there can be two more intermediate models with three and four constants. An example of the four parameter model can be found in [20], while the first three parameter model is introduced in [21] and is called ANNIE. There are two modifications to ANNIE, presented in [22] and [23], respectively. Also, another three parameter model is effectively used in [15], where the result is obtained by upscaling a periodically layered material in which the layers have different Young's modulus and the same Poisson's ratio. The primary use of the simplified three parameter models is to tackle often occurring data-poor cases for which values of all five elastic constants are not available. As a result, the purpose of this study is to evaluate and to compare different three parameter elastic models in the context of hydraulic fracture modeling. ## 2 Relations between elastic constants for a transversely isotropic elastic material As a starting point, transversely isotropic elastic material needs to be defined. This is a material which is isotropic in the horizontal plane, but the behavior in the vertical direction is different. Hooke's law for such a material can be written as \[\begin{bmatrix}\sigma_{xx}\\ \sigma_{yy}\\ \sigma_{zz}\\ \sigma_{xz}\\ \sigma_{yz}\\ \sigma_{xy}\end{bmatrix}=\begin{bmatrix}C_{11}&C_{12}&C_{13}&0&0&0\\ C_{12}&C_{11}&C_{13}&0&0&0\\ C_{13}&C_{13}&C_{33}&0&0&0\\ 0&0&0&C_{44}&0&0\\ 0&0&0&0&C_{44}&0\\ 0&0&0&0&0&(C_{11}\!-\!C_{12})/2\end{bmatrix}\begin{bmatrix}\varepsilon_{xx}\\ \varepsilon_{yy}\\ \varepsilon_{zz}\\ 2\varepsilon_{xz}\\ 2\varepsilon_{yz}\\ 2\varepsilon_{xy}\end{bmatrix}, \tag{1}\] where \(\sigma_{ij}\) are the components of the stress tensor, \(\varepsilon_{ij}\) are the components of the strain tensor, and \(C_{ij}\) are the elastic or stiffness constants. Here it is assumed that the vertical axis is \(z\), while the horizontal \((x,y)\) plane is the plane of isotropy. Note that there are five independent elastic constants, namely \(C_{11}\), \(C_{12}\), \(C_{13}\), \(C_{33}\), and \(C_{44}\).
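As a small illustration (a NumPy sketch added for this text, not from the paper), the stiffness matrix in (1) can be assembled from the five constants, and physical admissibility can be checked via positive definiteness:

```python
import numpy as np

def stiffness_matrix(C11, C12, C13, C33, C44):
    """6x6 stiffness matrix of a transversely isotropic material, Eq. (1).

    Acts on the strain vector (e_xx, e_yy, e_zz, 2e_xz, 2e_yz, 2e_xy)."""
    C66 = 0.5 * (C11 - C12)
    return np.array([
        [C11, C12, C13, 0.0, 0.0, 0.0],
        [C12, C11, C13, 0.0, 0.0, 0.0],
        [C13, C13, C33, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, C44, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, C44, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, C66],
    ])

def is_admissible(C):
    """Physical admissibility: the stiffness matrix must be positive definite."""
    return bool(np.all(np.linalg.eigvalsh(C) > 0))
```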
It is also useful to write the Hooke's law (1) in terms of compliances as \[\begin{bmatrix}\varepsilon_{xx}\\ \varepsilon_{yy}\\ \varepsilon_{zz}\\ 2\varepsilon_{xz}\\ 2\varepsilon_{yz}\\ 2\varepsilon_{xy}\end{bmatrix}=\begin{bmatrix}S_{11}&S_{12}&S_{13}&0&0&0\\ S_{12}&S_{11}&S_{13}&0&0&0\\ S_{13}&S_{13}&S_{33}&0&0&0\\ 0&0&0&S_{44}&0&0\\ 0&0&0&0&S_{44}&0\\ 0&0&0&0&0&2(S_{11}\!-\!S_{12})\end{bmatrix}\begin{bmatrix}\sigma_{xx}\\ \sigma_{yy}\\ \sigma_{zz}\\ \sigma_{xz}\\ \sigma_{yz}\\ \sigma_{xy}\end{bmatrix}, \tag{2}\] where \(S_{ij}\) are the compliance coefficients. The relationships between the compliance and stiffness constants follow the matrix inversion rules and can be summarized as \[S_{11}=\frac{1}{\Delta}\frac{C_{11}\!-\!C_{13}^{2}/C_{33}}{C_{11}\!-\!C_{12}},\qquad S_{12}=-\frac{1}{\Delta}\frac{C_{12}\!-\!C_{13}^{2}/C_{33}}{C_{11}\!-\!C_{12}},\qquad S_{13}=-\frac{1}{\Delta}\frac{C_{13}}{C_{33}},\] \[S_{33}=\frac{1}{\Delta}\frac{C_{11}\!+\!C_{12}}{C_{33}},\qquad S_{44}=\frac{1}{C_{44}},\qquad\Delta=C_{11}\!+\!C_{12}\!-\!2C_{13}^{2}/C_{33}. \tag{3}\] For future reference, it is also useful to provide alternative definitions of the elastic constants that are commonly used in engineering applications \[S_{11}=\frac{1}{E_{h}},\qquad S_{12}=-\frac{\nu_{h}}{E_{h}},\qquad S_{13}=-\frac{\nu_{v}}{E_{v}},\qquad S_{33}=\frac{1}{E_{v}},\qquad S_{44}=\frac{1}{G_{vh}}. \tag{4}\] Here \(E_{h}\) is the horizontal Young's modulus, \(\nu_{h}\) is the horizontal Poisson's ratio, \(E_{v}\) is the vertical Young's modulus, \(\nu_{v}\) is the vertical Poisson's ratio, and \(G_{vh}\) is the shear modulus defined in either the \((x,z)\) or \((y,z)\) plane. ## 3 Three parameter elastic model To develop the three parameter anisotropic model, the concept of upscaling is applied to a layered material. In particular, as shown in Fig. 1, the problem consists of a periodically layered material, in which the layers have equal height and are characterized by \(E_{1}\), \(\nu\) and \(E_{2}\), \(\nu\). In other words, the layers have different Young's moduli and the same Poisson's ratio. Thus, there are only three independent parameters in the model, namely, \(E_{1}\), \(E_{2}\), and \(\nu\). Note that such a model was considered in [15] for the purpose of analyzing the aspect ratio of a hydraulic fracture. By applying Backus averaging or upscaling [24], the elastic constants \(C_{ij}\) can be calculated as \[C_{11}=\frac{1}{1\!-\!\nu^{2}}\langle E\rangle+\frac{\nu^{2}}{(1\!-\!\nu^{2})(1\!-\!2\nu)}\langle E^{-1}\rangle^{-1},\qquad C_{12}=\frac{\nu}{1\!-\!\nu^{2}}\langle E\rangle+\frac{\nu^{2}}{(1\!-\!\nu^{2})(1\!-\!2\nu)}\langle E^{-1}\rangle^{-1},\] \[C_{13}=\frac{\nu}{(1\!+\!\nu)(1\!-\!2\nu)}\langle E^{-1}\rangle^{-1},\qquad C_{33}=\frac{1\!-\!\nu}{(1\!+\!\nu)(1\!-\!2\nu)}\langle E^{-1}\rangle^{-1},\qquad C_{44}=\frac{1}{2(1\!+\!\nu)}\langle E^{-1}\rangle^{-1}. \tag{5}\] Here \(\langle E\rangle=(E_{1}\!+\!E_{2})/2\) is the arithmetic average of the Young's moduli, while \(\langle E^{-1}\rangle^{-1}=2E_{1}E_{2}/(E_{1}\!+\!E_{2})\) is the harmonic average of the Young's moduli. Thus, equations (5) provide the values for all five elastic constants, but there are only three independent parameters. The primary advantage of using upscaling to define the remaining two parameters is the fact that the resultant anisotropic material is guaranteed to be physically admissible, i.e. the stiffness matrix in (1) is positive definite.
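The upscaling relations (5), together with (3) and (4), can be turned into a few lines of code. The sketch below (illustrative only, assuming the two-layer setup described above) computes the \(C_{ij}\) from \(E_{1}\), \(E_{2}\), \(\nu\) and reports the engineering constants of the upscaled medium:

```python
def upscaled_stiffness(E1, E2, nu):
    """Backus upscaling of equal-height layers (E1, nu) and (E2, nu), Eq. (5)."""
    E_avg = 0.5 * (E1 + E2)                 # arithmetic average <E>
    E_harm = 2.0 * E1 * E2 / (E1 + E2)      # harmonic average <E^-1>^-1
    C11 = E_avg / (1 - nu**2) + nu**2 * E_harm / ((1 - nu**2) * (1 - 2 * nu))
    C12 = nu * E_avg / (1 - nu**2) + nu**2 * E_harm / ((1 - nu**2) * (1 - 2 * nu))
    C13 = nu * E_harm / ((1 + nu) * (1 - 2 * nu))
    C33 = (1 - nu) * E_harm / ((1 + nu) * (1 - 2 * nu))
    C44 = E_harm / (2 * (1 + nu))
    return C11, C12, C13, C33, C44

def engineering_constants(C11, C12, C13, C33, C44):
    """Compliances via Eq. (3), then engineering constants via Eq. (4)."""
    D = C11 + C12 - 2 * C13**2 / C33
    S11 = (C11 - C13**2 / C33) / (D * (C11 - C12))
    S12 = -(C12 - C13**2 / C33) / (D * (C11 - C12))
    S13 = -C13 / (D * C33)
    S33 = (C11 + C12) / (D * C33)
    S44 = 1.0 / C44
    return dict(E_h=1 / S11, nu_h=-S12 / S11, E_v=1 / S33,
                nu_v=-S13 / S33, G_vh=1 / S44)

# Example: a 2:1 modulus contrast with nu = 0.2
# engineering_constants(*upscaled_stiffness(20.0, 10.0, 0.2))
```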
In addition, many of the reservoir rocks are finely layered and individual layers often have very different Young's moduli, while the Poisson's ratio does not change that significantly. This observation provides an additional justification for the model. The answer presented in (5) depends on the properties of individual layers, i.e. on \(E_{1}\), \(E_{2}\), and \(\nu\). For engineering applications, it is instructive to reformulate the result in terms of parameters that can be measured in a laboratory. In particular, it is assumed that the values of \(E_{h}\), \(\nu_{h}\), and \(E_{v}\) are known, and the goal is to express the values of \(\nu_{v}\) and \(G_{vh}\) in terms of the known parameters. The procedure for this is the following. The upscaled values of \(C_{ij}\) (5) need to be substituted into (3) to find \(S_{ij}\) in terms of \(E_{1}\), \(E_{2}\), and \(\nu\). After that, the relations (4) are used to relate the \(S_{ij}\) to the engineering parameters. The result can be summarized as \[\Delta=\frac{1}{1\!-\!\nu}\langle E\rangle,\qquad S_{11}=\frac{1}{E_{h}}=\frac{1}{\langle E\rangle},\qquad S_{12}=S_{13}=-\frac{\nu_{h}}{E_{h}}=-\frac{\nu_{v}}{E_{v}}=-\frac{\nu}{\langle E\rangle},\] \[S_{33}=\frac{1}{E_{v}}=\frac{(1\!+\!\nu)(1\!-\!2\nu)}{(1\!-\!\nu)\langle E^{-1}\rangle^{-1}}+\frac{2\nu^{2}}{(1\!-\!\nu)\langle E\rangle},\qquad S_{44}=\frac{1}{G_{vh}}=\frac{2(1\!+\!\nu)}{\langle E^{-1}\rangle^{-1}}. \tag{6}\] From equations (6) it becomes apparent that \(\langle E\rangle=E_{h}\) and \(\nu=\nu_{h}\), while the vertical Poisson's ratio becomes \[\nu_{v}=\nu_{h}\frac{E_{v}}{E_{h}}. \tag{7}\] The harmonic average of the moduli can be expressed as \[\langle E^{-1}\rangle^{-1}=\frac{(1\!+\!\nu_{h})(1\!-\!2\nu_{h})}{1\!-\!\nu_{h}}\bigg{[}\frac{1}{E_{v}}-\frac{2\nu_{h}^{2}}{(1\!-\!\nu_{h})E_{h}}\bigg{]}^{-1}. \tag{8}\] And finally the expression for the shear modulus is \[G_{vh}=\frac{(1\!-\!2\nu_{h})}{2(1\!-\!\nu_{h})}\bigg{[}\frac{1}{E_{v}}-\frac{2\nu_{h}^{2}}{(1\!-\!\nu_{h})E_{h}}\bigg{]}^{-1}=\frac{E_{h}}{2(1\!+\!\nu_{h})}\bigg{[}1+\frac{1\!-\!\nu_{h}}{(1\!+\!\nu_{h})(1\!-\!2\nu_{h})}\Big{(}\frac{E_{h}}{E_{v}}-1\Big{)}\bigg{]}^{-1}. \tag{9}\] To summarize, given the values of \(E_{h}\), \(E_{v}\), and \(\nu_{h}\), the three parameter anisotropy model provides the values for \(\nu_{v}\) (7) and \(G_{vh}\) (9). Note that \(E_{v}\leqslant E_{h}\) and \(\nu_{h}<0.5\), therefore the value of the shear modulus is always positive. It is instructive to compare the obtained result to the two existing models called ANNIE [21] and MANNIE [22]. In the former model, it is assumed that \(C_{12}=C_{13}=C_{33}-2C_{44}\), see [21]. As a result, the relationships between the \(C_{ij}\) and engineering constants can be summarized as \[C_{11}=\frac{1}{\Delta_{S}}\frac{1-\nu_{v}^{2}E_{h}/E_{v}}{1+\nu_{h}},\qquad C_{11}-C_{12}=\frac{E_{h}}{1+\nu_{h}},\qquad C_{33}=\frac{1}{\Delta_{S}}\frac{1-\nu_{h}}{E_{h}}E_{v},\qquad C_{12}=\frac{\nu_{v}}{\Delta_{S}}, \tag{10}\] where \(\Delta_{S}=(1-\nu_{h})/E_{h}-2\nu_{v}^{2}/E_{v}\). The above equations can be combined to obtain \[\nu_{v}=\frac{(1+\nu_{h})-\sqrt{(1+\nu_{h})^{2}-4\nu_{h}E_{h}/E_{v}}}{2E_{h}/E_{v}},\qquad G_{vh}=\frac{1}{2\Delta_{S}}\bigg{[}(1-\nu_{h})\frac{E_{v}}{E_{h}}-\nu_{v}\bigg{]}. \tag{11}\]
Figure 1: Upscaling of a periodically layered material.
MANNIE model assumes \(C_{13}=K_{2}C_{12}\) and \(C_{11}=K_{1}(C_{11}-C_{12}-2C_{44}+C_{33})\)[22], where \(K_{1}=1.1\) and \(K_{2}=0.8\).
This leads to the very similar relations between the elastic constants \[C_{11}=\frac{1}{\Delta_{S}}\frac{1-\nu_{v}^{2}E_{h}/E_{v}}{1+\nu_{h}},\qquad C _{11}-C_{12}=\frac{E_{h}}{1+\nu_{h}},\qquad C_{33}=\frac{1}{\Delta_{S}}\frac{ 1-\nu_{h}}{E_{h}}E_{v},\qquad K_{2}C_{12}=\frac{\nu_{v}}{\Delta_{S}}, \tag{12}\] where \(\Delta_{S}=(1-\nu_{h})/E_{h}-2\nu_{v}^{2}/E_{v}\). After some algebraic manipulations, the resultant expressions for \(\nu_{v}\) an \(G_{vh}\) are \[\nu_{v}=\frac{(1+\nu_{h})/K_{2}-\sqrt{(1+\nu_{h})^{2}/K_{2}^{2}-4\nu_{h}E_{h}/ E_{v}}}{2E_{h}/E_{v}},\qquad G_{vh}=\frac{1}{2}\big{(}C_{11}(1-K_{1}^{-1})-C_{12}+C_{ 33}\big{)}, \tag{13}\] where the expressions for \(C_{ij}\) from (12) should be used to calculate \(G_{vh}\) from the above equation. The results in (11) and (13) demonstrate that such formulations can potentially lead to complex valued elastic constants, which is unphysical. In particular the maximum allowable ratio between the Young's moduli is \[\max\Bigl{\{}\frac{E_{h}}{E_{v}}\Bigr{\}}=\frac{(1+\nu_{h})^{2}}{4\nu_{h}K_{ 2}^{2}}. \tag{14}\] The result with \(K_{2}=0.8\) applies to MANNIE model, while \(K_{2}=1\) for ANNIE model. For a typical value of \(\nu_{h}=0.2\), the maximum allowable level of Young's moduli anisotropy is approximately 1.8 for ANNIE model and 2.8 for MANNIE model. Such a restriction significantly limits applicability of these three parameter models. To demonstrate predictability of the proposed model, several data points for shales are taken from [12, 13, 9, 2, 7]. All the data points have five elastic constants. It is assumed that \(E_{h}\), \(E_{v}\), and \(\nu_{h}\) are given, while the values for \(\nu_{v}\) and \(G_{hv}\) are predicted by the three parameter models. Fig. 2 plots a comparison between the predicted values of \(\nu_{v}\) and \(G_{hv}\) against their respective true values. Three models are compared: the proposed model (7) and (9), ANNIE model (11), and MANNIE model (13). For the latter two models the data points corresponding to complex valued predictions are ignored. In addition, to cater for hydraulic fracture applications, the apparent moduli \(E_{v}^{\prime}\) and \(E_{h}^{\prime}\) are compared. These are defined as \[E_{h}^{\prime}=\frac{C_{11}^{2}-C_{12}^{2}}{C_{11}},\qquad E_{v}^{\prime}=2 \bigg{(}\frac{C_{33}}{C_{11}C_{33}-C_{13}^{2}}\Big{(}\frac{1}{C_{44}}+\frac{2 }{C_{13}+\sqrt{C_{11}C_{33}}}\Big{)}\bigg{)}^{-1/2}. \tag{15}\] The modulus \(E_{h}^{\prime}\) is the apparent elastic modulus for a horizontally oriented plane strain fracture, while \(E_{v}^{\prime}\) is the corresponding apparent modulus for a vertically oriented plane strain fracture, see e.g. [14, 15, 25]. The values of these parameters determine hydraulic fracture growth and therefore are the relevant moduli that need to be predicted accurately. Results shown in Fig. 2 demonstrate that the prediction of \(\nu_{v}\) is the least accurate and the accuracy of prediction is approximately the same for all the models considered. At the same time, the predictions of \(G_{vh}\), \(E_{v}^{\prime}\), and \(E_{h}^{\prime}\) are more accurate, which again applies for all models. The main conclusion is therefore the following. All three models are able to reasonably approximate the remaining elastic constants for the considered data. At the same time, given the specificity of prescribing \(E_{h}\), \(E_{v}\), and \(\nu_{h}\), both ANNIE and MANNIE models have a very limited applicability range in terms of the anisotropy ratio \(E_{h}/E_{v}\). 
This makes the proposed upscaling based model more suitable as a general three parameter anisotropic model. ## 4 Summary This study proposes a three parameter elastic anisotropy model. The approach is based on upscaling of a periodically layered material with different Young's moduli and the same Poisson's ratio. The input parameters for the model are the horizontal Young's modulus and Poisson's ratio, as well as the vertical Young's modulus. Mathematical expressions for calculating the vertical Poisson's ratio and the shear modulus in the vertical plane are presented. To validate the model, data for a series of measurements for shale samples is gathered from the literature. The predicted values of the vertical Poisson's ratio and the shear modulus are compared to the actual values and show a good level of agreement. In addition, the developed model is compared to the existing three parameter anisotropic models ANNIE and MANNIE. Both of the latter models feature a similar level of accuracy compared to the proposed model. However, these models predict unphysical results when the ratio between the horizontal and vertical Young's modulus exceeds a certain value. Therefore, such models are less suitable for the role of a general three parameter elastically anisotropic model.
2306.06590
Mean-Variance Efficient Collaborative Filtering for Stock Recommendation
The rise of FinTech has transformed financial services onto online platforms, yet stock investment recommender systems have received limited attention compared to other industries. Personalized stock recommendations can significantly impact customer engagement and satisfaction within the industry. However, traditional investment recommendations focus on high-return stocks or highly diversified portfolios based on the modern portfolio theory, often neglecting user preferences. On the other hand, collaborative filtering (CF) methods also may not be directly applicable to stock recommendations, because it is inappropriate to just recommend stocks that users like. The key is to optimally blend users preference with the portfolio theory. However, research on stock recommendations within the recommender system domain remains comparatively limited, and no existing model considers both the preference of users and the risk-return characteristics of stocks. In this regard, we propose a mean-variance efficient collaborative filtering (MVECF) model for stock recommendations that consider both aspects. Our model is specifically designed to improve the pareto optimality (mean-variance efficiency) in a trade-off between the risk (variance of return) and return (mean return) by systemically handling uncertainties in stock prices. Such improvements are incorporated into the MVECF model using regularization, and the model is restructured to fit into the ordinary matrix factorization scheme to boost computational efficiency. Experiments on real-world fund holdings data show that our model can increase the mean-variance efficiency of suggested portfolios while sacrificing just a small amount of mean average precision and recall. Finally, we further show MVECF is easily applicable to the state-of-the-art graph-based ranking models.
Munki Chung, Yongjae Lee, Woo Chang Kim
2023-06-11T04:51:29Z
http://arxiv.org/abs/2306.06590v1
# Mean-Variance Efficient Collaborative Filtering for Stock Recommendation ###### Abstract. The rise of FinTech has transformed financial services onto online platforms, yet stock investment recommender systems have received limited attention compared to other industries. Personalized stock recommendations can significantly impact customer engagement and satisfaction within the industry. However, traditional investment recommendations focus on high-return stocks or highly diversified portfolios based on the modern portfolio theory, often neglecting user preferences. The former would result in unsuccessful investment because accurately predicting stock prices is almost impossible, whereas the latter would not be accepted by investors because most individuals tend to possess only a few stocks that they are interested in. On the other hand, collaborative filtering (CF) methods also may not be directly applicable to stock recommendations, because it is inappropriate to just recommend stocks that users like. The key is to optimally blend user's preference with the portfolio theory. However, research on stock recommendations within the recommender system domain remains comparatively limited, and no existing model considers both the preference of users and the risk-return characteristics of stocks. In this regard, we propose a mean-variance efficient collaborative filtering (MVECF) model for stock recommendations that consider both aspects. Our model is specifically designed to improve the pareto optimality (mean-variance efficiency) in a trade-off between the risk (variance of return) and return (mean return) by systemically handling uncertainties in stock prices. Such improvements are incorporated into the MVECF model using regularization, and the model is restructured to fit into the ordinary matrix factorization scheme to boost computational efficiency. Experiments on real-world fund holdings data show that our model can increase the mean-variance efficiency of suggested portfolios while sacrificing just a small amount of mean average precision and recall. Finally, we further show MVECF is easily applicable to the state-of-the-art graph-based ranking models. recommender systems, stock recommendation, collaborative filtering, modern portfolio theory + Footnote †: Corresponding author + Footnote †: Corresponding author in no doubt since individuals are known to hold under-diversified portfolios [3]. However, recommendations of investment products are not trivial due to the high uncertainty in their price. We list _three key requirements_ for stock recommendation. First, stock recommender systems should _incorporate investor's preference_. One may simply recommend stocks that are predicted to have high returns, or recommend well-diversified portfolios constructed based on established portfolio theories. However, such recommendations would be almost irrelevant to the user's appetite, and this is why individuals still hold under-diversified portfolios even though there are a huge number of mutual funds. This is almost the opposite direction from most recommender systems for online services that have been gaining great success from subtle but user-specific recommendations. Hence, stock recommendations should reflect users' preferences that can be inferred from the users' portfolios. Second, stock recommender systems should _improve portfolio diversification_. 
Direct applications of CF would probably make users' portfolios more concentrated on some risk factors or industry sectors because recommended stocks would be similar to the users' current holdings. More importantly, user preference does not necessarily lead to good investment. Therefore, stock recommendations should bring diversification benefits while satisfying the users' preferences. Third, they should be proven in terms of _ex-post performance_ evaluations. While wrong recommendations in the most of other online services would not severely harm users (e.g., waste of time, need to refund), wrong investment products would directly lead to monetary damages that cannot be simply returned. Hence, the performances of stock recommender systems should be carefully evaluated accounting for high uncertainties in stock prices. In this paper, we develop a novel model for stock recommendation, **mean-variance efficient collaborative filtering (MVECF)**, that has all three key requirements. We utilize the regularization technique to ensure that the recommendation is made based on the user's current portfolio while increasing diversification effect by systemically handling uncertainties of stock returns, and the model is restructured to an ordinary weighted matrix factorization (WMF) form [10] to boost the computational efficiency. The proposed model is specifically designed to improve the pareto optimality in a trade-off between risk and return (i.e., the mean-variance efficiency), which is the essence of the modern portfolio theory (MPT). We further show that MVECF can be easily incorporated into state-of-the-art graph-based ranking models by applying MVECF user-item ratings to the sampling process of the ranking systems. ## 2 Related Works ### Collaborative Filtering CF is the most popular approach in modern recommendation systems because of its efficiency in utilizing the entire user-item preference history [10]. Recently, graph-based ranking models like NCF, NGCF, LightGCN, UltraGCN, and HCCF [8; 23; 7; 13; 26] have become state-of-the-art for implicit feedback data. This work may look similar to a stream of research to increase the diversity and novelty of recommendations. Diversity focuses on recommending dissimilar items, and novelty seeks items that are dissimilar to the items in the user's past experiences. These involve greedy re-ranking [30; 28; 22] and directly optimizing multi-objectives approaches [19; 18; 11; 24; 25]. However, the notion of recommendation diversity and novelty is quite different from portfolio diversification. Portfolio diversification considers the trade-off between minimizing risk and maximizing return of the entire portfolio. ### Stock Recommendations Previous studies on stock recommender systems can be categorized into three. First, purely item-based recommendation. These studies try to find stocks that would have high returns in the future by analyzing item similarities between stocks [27, 21, 29, 4]. This approach has a quite different perspective from conventional recommender systems given that most methods in this category do not utilize user information. In addition, accurately predicting stock returns is almost impossible, because signals are dominated by noises in financial markets. Second, recommendations based on user-item information. These studies suggest ways to measure similarities between stocks or equity funds, and then apply existing CF methods [15, 1, 2]. 
However, simply buying stocks that are held by similar investors may make a portfolio more exposed to a certain type of risk (or a sector). Hence, we incorporate the portfolio theory to ensure proper diversification within the CF framework. Third, recommendations based on user-item information and then diversification. For example, [16] and [20] recommend stocks based on the second approach and determine weights of stocks to reduce portfolio risk. However, adjusting portfolio weights after choosing stocks would have limited effects. On the other hand, [5] and [6] carefully analyze user similarity and recommend a portfolio according to the user's preference. This approach may be able to provide well-diversified portfolios to investors, but this has the same problem with existing mutual funds that investors prefer possessing a few stocks that they are interested in. ## 3 Preliminaries ### Weighted Matrix Factorization Consider \(m\) users and \(n\) stocks (items). Let the binary variable representing whether the user u holds the stock \(i\) is \(y_{ui}\). In matrix form, they can be represented as the user-item interaction matrix \(Y\in\mathbb{R}^{m\times n}\). WMF decomposes \(Y\) into the user embedding matrix \(P\in\mathbb{R}^{m\times l}\) and the item embedding matrix \(Q\in\mathbb{R}^{n\times l}\) with \(l\) number of latent factors as follows. \[\min\sum_{u,l}c_{ui}(y_{ui}-p_{u}^{T}q_{l})^{2}+\lambda\sum_{u}\lVert p_{u} \rVert^{2}+\lambda\sum_{i}\lVert q_{i}\rVert^{2}\qquad(1)\] Here, \(y_{ui}\in\mathbb{R}\) is an element of \(Y\), and \(p_{u}^{T}\in\mathbb{R}^{1\times l}\) and \(q_{i}^{T}\in\mathbb{R}^{1\times l}\) are row vectors of \(P\) and \(Q\), respectively. \(c_{ui}\in\mathbb{R}\) is a hyperparameter indicating the confidence level about observation \(y_{ui}\), which becomes large when \(y_{ui}=1\) and small when \(y_{ui}=0\), and \(\lambda\) is a hyperparameter for \(L_{2}\) regularization. \(\hat{y}_{ui}\), the estimated preference of user u to item \(i\) is \(p_{u}^{T}q_{l}\). [10] proposed an efficient alternating least squares (ALS) algorithm for solving Equation (1). ### Modern Portfolio Theory (MPT) Markowitz [14] was the first to mathematically define and analyze the risk and return of financial investments. The return of a risky asset was regarded as a random variable and the expected return was defined as its mean value and the risk was defined as its standard deviation. Then, \(n\) risky assets can be described by their return vector \(r\in\mathbb{R}^{n}\) with mean \(\mu\in\mathbb{R}^{n}\) and covariance matrix \(\Sigma\in\mathbb{R}^{n\times n}\). A portfolio of n risky assets can be represented as a weight vector \(w\in\mathbb{R}^{n}\), which should sum to one (i.e., \(\sum_{i}w_{i}=1\) ), and it's expected return and risk can be expressed as \(\mu^{T}w\) and \(w^{T}\Sigma w\), respectively. \[\min_{\{w:\,\Sigma w=1,\ w\geq 0\}}\frac{Y}{2}w^{T}\Sigma w-\mu^{T}w \tag{2}\] In [14], a convex quadratic programming problem (Equation (2)) was proposed to find a pareto optimal portfolio between minimizing the risk and maximizing the expected return. It is called the mean-variance (MV) optimization, and the resulting optimal portfolios are called MV efficient portfolios. The term 'efficient' emphasizes the pareto optimality of the solution from the trade-off between risk and return, where \(\gamma\) is the parameter that represents the risk aversion of the investor. [14] is the foundation of the modern portfolio theory (MPT), and it is widely used in practice as well [12]. 
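For reference, the mean-variance problem in (2) is a small convex quadratic program. A minimal sketch (written for this text using CVXPY, not part of the paper's released code) is:

```python
import cvxpy as cp

def mean_variance_portfolio(mu, Sigma, gamma=3.0):
    """Solve Eq. (2): min (gamma/2) w'Sigma w - mu'w  s.t.  sum(w) = 1, w >= 0.

    mu: (n,) expected returns; Sigma: (n, n) positive semidefinite covariance."""
    n = len(mu)
    w = cp.Variable(n)
    objective = cp.Minimize(0.5 * gamma * cp.quad_form(w, Sigma) - mu @ w)
    problem = cp.Problem(objective, [cp.sum(w) == 1, w >= 0])
    problem.solve()
    return w.value  # MV efficient portfolio weights
```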
An evaluation metric for measuring portfolio efficiency is proposed by [17] which is called the Sharpe ratio. It is one of the most widely used performance measures in investment management. It is defined as a ratio between the expected return and the risk (return standard deviation) of the portfolio as \(SR(w)=\frac{\mu^{T}w}{\sqrt{w^{T}\Sigma w}}\). ## 4 Methodology The common goal of all recommendation systems is to recommend items that are likely to be selected by users in the future. Most recommender systems achieve this by recommending the top k items in the predicted ratings \(\hat{\mathcal{V}}\). In addition to that, the goal of this work is to generate \(\hat{\mathcal{V}}\) so that when the users accept the top k recommendations and add those items to their current portfolios \(\mathcal{Y}_{u}\), the resulting portfolios will become more MV efficient (measured in the Sharpe ratio). Furthermore, our model should be able to adjust within two kinds of trade-off: 1) between recommendation performance and MV efficiency, 2) between portfolio risk and expected return. ### Mean-Variance Regularization In this section, we develop the mean-variance efficient collaborative filtering (MVECF) method, which is a novel WMF model with regularization on the MV efficiency. The estimated item ratings (holdings) of user \(u\) is \(\hat{\mathcal{V}}_{u}=Qp_{u}\), and we consider this as the user's final portfolio. The expected return and the risk of the portfolio would be written as \(\mu^{T}\hat{\mathcal{V}}_{u}\) and \(\hat{\mathcal{V}}_{u}^{T}\hat{\mathcal{V}}_{u}\), respectively. We regularize 'risk - return (i.e., \(\frac{\gamma}{2}\hat{\mathcal{V}}_{u}^{T}\hat{\mathcal{V}}_{u}-\mu^{T}\hat{ \mathcal{V}}_{u}\))' of the user's portfolio, which is the objective function of the MV optimization problem in (2), to the loss function of WMF. Then, the proposed model would recommend stocks based on CF while trying to minimize risk and maximize return. The resulting formulation is given in (3). \(\lambda_{\text{MV}}\) is a hyperparameter that controls the trade-off between the traditional recommendation performance and the MV efficiency of recommended portfolios. \[\min_{P,Q}\sum_{u,l}c_{ui}(\mathcal{Y}_{ui}-\hat{\mathcal{V}}_{ui})^{2}+ \lambda\sum_{u}\lVert p_{u}\rVert^{2}+\lambda\sum_{l}\lVert q_{i}\rVert^{2}+ \lambda_{MV}\sum_{u}\left(\frac{\gamma}{2}\hat{\mathcal{V}}_{u}^{T}\hat{ \mathcal{V}}_{u}-\mu^{T}\hat{\mathcal{V}}_{u}\right) \tag{3}\] If we rewrite the last MV regularization term of (3) in an elementwise expression with \(\mu_{i}\) (elements of \(\mu\)), \(\sigma_{i}^{2}\) and \(\sigma_{ij}\) (diagonal and off-diagonal elements of \(\Sigma\)), Equation (3) can be rewritten as Equation (4). \[\min_{P,Q}\sum_{u,l}\left[c_{ui}(\mathcal{Y}_{ui}-\hat{\mathcal{V}}_{ui})^{2}+ \lambda_{MV}\frac{\gamma}{2}\left(\hat{\mathcal{V}}_{ui}^{2}\sigma_{i}^{2}+ \hat{\mathcal{V}}_{ui}\sum_{j:j\neq i}\hat{\mathcal{V}}_{uj}\sigma_{ij}\right) -\lambda_{MV}\hat{\mathcal{V}}_{ui}\mu_{i}\right]+\lambda\sum_{u}\lVert p_{u} \rVert^{2}+\lambda\sum_{i}\lVert q_{i}\rVert^{2} \tag{4}\] We can see from Equation (4) that the MV regularization would lower the rating of items with high variance (from \(\hat{\mathcal{V}}_{ui}^{2}\sigma_{i}^{2}\)), lower the rating of items with high covariance with user holdings (from \(\hat{\mathcal{V}}_{ui}\sum_{j:j\neq i}\hat{\mathcal{V}}_{uj}\sigma_{ij}\)), and raise the rating of items with high expected returns (from \(-\hat{\mathcal{V}}_{ui}\mu_{i}\)). 
Hence, it exactly delivers the desired effects. ### Restructuring MVECF into Ordinary WMF Form Here, we further increase the computational efficiency of MVECF by restructuring it into an ordinary WMF form (as in Equation (1)) so that we can train it using the ALS algorithm developed by [10]. ALS is known to converge much faster than SGD, and also SGD is highly sensitive to the choice of learning rate. The trick is quite simple. In the MV regularization term in Equation (3), we change \(\hat{\mathcal{V}}_{ui}\sum_{j:j\neq i}\hat{\mathcal{V}}_{uj}\sigma_{ij}\) into \(\hat{\mathcal{V}}_{ui}\sum_{j:j\neq i}\mathcal{Y}_{uj}\sigma_{ij}/\lvert y_{u}\rvert\). Then, (3) can be rewritten as \[\min\sum_{u,l}\left[\left(c_{ui}+\frac{\gamma}{2}\lambda_{MV}\sigma_{i}^{2} \right)\hat{\mathcal{V}}_{ui}^{2}-\left(2c_{ui}\mathcal{Y}_{ui}-\frac{\gamma}{ 2}\lambda_{MV}\sum_{j:j\neq i}\frac{\gamma_{uj}\sigma_{ij}}{\lvert y_{u}\rvert }+\lambda_{MV}\mu_{i}\right)\hat{\mathcal{V}}_{ui}+c_{ui}\mathcal{V}_{ui}^{2} \right]+\lambda(\sum_{u}\lVert p_{u}\rVert^{2}+\sum_{i}\lVert q_{i}\rVert^{2}) \tag{5}\] This trick has a nice theoretical property as well. Note that regularizing with \(\hat{y}_{ui}\sum_{j,j\neq i}\hat{y}_{ui}\sigma_{ij}\) would reduce the rating of items that have large covariances with 'predicted' user holdings. On the other hand, regularizing with the modified term \(\hat{y}_{ui}\sum_{j:j\neq i}y_{ui}\sigma_{ij}\) would reduce the rating of items that have large covariances with 'current' user holdings. Therefore, no matter which and how many items the user finally accepts from the recommended list, they would all possess diversification potential. We can see that both the first and the second term in (5) are quadratic functions of \(\hat{y}_{ui}\), The idea of the reconstruction is to combine these two terms into one. If we denote \(c_{ui}+\frac{\gamma}{2}\lambda_{MV}\sigma_{i}^{2}\) as \(\tilde{c}_{ui}\) and define \(\tilde{y}_{ui}\) so that \(2\tilde{c}_{ui}\tilde{y}_{ui}\) can be \(2c_{ui}y_{ui}-\frac{\gamma}{2}\lambda_{MV}\sum_{j:j\neq i}\frac{y_{ui}}{|y_{ui }|}\sigma_{ij}+\lambda_{MV}\mu_{i}\) then Equation (5) becomes \(\min\sum_{u,i}\tilde{c}_{ui}\tilde{y}_{ui}^{2}-2\tilde{c}_{ui}\tilde{y}_{ui} \tilde{y}_{ui}+c_{ui}y_{ui}^{2}+\lambda(\sum_{u}\|p_{u}\|^{2}+\sum_{i}\|q_{i} \|^{2})\) and in perfect square form \(\min\sum_{u,i}\tilde{c}_{ui}(\tilde{y}_{ui}-\hat{y}_{ui})^{2}-\tilde{c}_{ui} \tilde{y}_{ui}^{2}+c_{ui}y_{ui}^{2}+\lambda(\sum_{u}\|p_{u}\|^{2}+\sum_{i}\| q_{i}\|^{2})\). Since \(-\tilde{c}_{ui}\tilde{y}_{ui}^{2}+c_{ui}y_{ui}^{2}\) is independent with P and Q, (5) is equivalent to (6). \[\min\sum_{u,i}\tilde{c}_{ui}(\tilde{y}_{ui}-\hat{y}_{ui})^{2}+\lambda(\sum_{u} \|p_{u}\|^{2}+\sum_{i}\|q_{i}\|^{2}) \tag{6}\] Note that (13) has exactly the same form of ordinary WMF with modified ratings \(\tilde{y}_{ui}\) and their weighting coefficients \(\tilde{c}_{ui}\). To interpret \(\tilde{y}_{ui}\) and \(\tilde{c}_{ui}\), we define two MV related parameters \(c_{ui}^{MV}=\frac{\gamma}{2}\lambda_{MV}\sigma_{i}^{2}\) and \(y_{ui}^{MV}=\Big{(}\frac{\mu_{i}}{\gamma}-\frac{1}{2}\sum_{j:j\neq i}\frac{y_ {ui}}{|y_{ui}|}\sigma_{ij}\Big{)}\Big{/}\sigma_{i}^{2}\). Then the definition \(\tilde{y}_{ui}\) can be rewritten as \(\tilde{y}_{ui}=c_{ui}y_{ui}/\tilde{c}_{ui}+c_{ui}^{MV}y_{ui}^{MV}/\tilde{c}_{ui}\) where \(\tilde{c}_{ui}=c_{ui}+c_{ui}^{MV}\). 
\[\tilde{y}_{ui}=\frac{2c_{ui}y_{ui}+\lambda_{MV}\big{(}\mu_{i}-\frac{\gamma}{2 }\sum_{j\neq i}\frac{y_{ui}}{|y_{ui}|}\sigma_{ij}\big{)}}{2\tilde{c}_{ui}}= \frac{c_{ui}y_{ui}+\frac{Y}{2}\lambda_{MV}\sigma_{i}^{2}\frac{\mu_{i}-\frac{Y }{2}\lambda_{MV}\mu_{i}}{\sigma_{i}^{2}}}{\tilde{c}_{ui}}=\frac{c_{ui}y_{ui}+c _{ui}^{MV}y_{ui}^{MV}}{\tilde{c}_{ui}} \tag{7}\] The MV rating \(y_{ui}^{MV}\) would have a large value when the mean return of item \(i\) (\(\mu_{i}\)) is high and the risk of item \(i\) (\(\sigma_{i}^{2}\)) is low. And the modified target rating \(\tilde{y}_{ui}\) is a weighted sum of the user's current holdings \(y_{ui}\) and the MV rating \(y_{ui}^{MV}\). Therefore, the recommendation \(\hat{y}_{ui}\) would directly reflect the MV rating to favor items with better risk-return profiles. Also, the weighting term \(\tilde{c}_{ui}\) becomes large when \(\sigma_{i}^{2}\) is large, and thus, the model focuses more on matching ratings of risky items compared to safe items. By simply changing the true rating \(y_{ui}\) into a weighted sum of \(y_{ui}\) and \(y_{ui}^{MV}\), we can train a WMF model to make user preferred recommendations, while making the resulting portfolio more efficient in terms of risk-return trade-off. The model can be easily trained by ALS. ## 5 Experiments ### Data & Models Two datasets are used for experiments. One is Survivorship-Bias-Free US Mutual Fund data from Center for Research in Security Prices (CRSP), and the other is Stock Ownership data from Thomson Reuters. Each dataset contains holdings snapshots of users (mutual funds or institutional investors) for every month from 2001 to 2020. Both datasets were retrieved from Wharton Research Data Services (WRDS) database. We split each dataset into yearly sub-datasets. The user-item interaction data of year T is the holdings snapshot reported in December of year T. For MVECF model, stock mean return and covariance matrix are estimated using the returns data in the past 5 years (years T-4 to T). For ex post performance evaluation, we use the next 5 years returns data (years T+1 to T+5). We consider two versions of MVECF. The first one (MVECF\({}_{\text{reg}}\)) is the regularization model and the second one (MVECF\({}_{\text{WMF}}\)) is the ordinary WMF form version. To demonstrate the performance of MVECF, we use the state-of-the-art conventional recommender systems WMF, BPR, LightGCN, UltraGCN, and HCCF as the baseline models. In addition, we consider three existing models for stock recommendations. First, novelty enhancing BPR model (\(\text{BPR}_{\text{nov}}\)) of [25]. For this model, we define the dissimilarity (distance) between two items \(i\) and \(j\) as \(\sqrt{1-\rho_{ij}}\), where \(\rho_{ij}\) is the return correlation between \(i\) and \(j\). Second, the 2-step method [20] introduced in Section 2.1. It filters top-k items using a base recommendation model, and then make the final recommendation by re-ranking the scores of top-k items using the MPT method. We consider two versions of this model with two different base recommendation methods: WMF (\(2\text{Step}_{\text{wmf}}\)) and UltraGCN (\(2\text{Step}_{\text{ugcn}}\)). The data, code, and more detailed experimental settings are given in [https://github.com/author-mvecf/MVECF.git](https://github.com/author-mvecf/MVECF.git) ### Performance Evaluation The performance of stock recommender systems should be evaluated in two aspects: conventional recommendation performance (precision, recall) and MV efficiency. 
To be more specific, top 20 items of each user's test data are recommended and mean average precision at 20 (MAP@20) and recall at 20 (Recall@20) are used for conventional recommendation performance evaluation. For MV efficiency evaluation, we recommend top 20 items of all non-holding items to users. This is because recommending items only in pre-chosen test set is inappropriate for investment setting. We use Sharpe ratios of the initial portfolio (before recommendation) and the recommended portfolio (after adding 20 recommended items). Both portfolios are constructed as equally weighted portfolios because MVECF models recommend just items (stocks), not their weights. Next, we use two measures that are based on the calculated Sharpe ratios. First, \(\Delta\text{SR}=\text{SR}-\text{SR}_{\text{init}}\), the improvement in the Sharpe ratio, where SR and \(\text{SR}_{\text{init}}\) are Sharpe ratios of recommended and initial portfolios, respectively. Second, \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\), the percentage of users whose Sharpe ratio has increased after recommendation. While \(\Delta\text{SR}\) would show the amount of improvement, \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) would show how many users get the improvement. As noted in Introduction, ex-post evaluation is particularly important in investment management. Hence, the same metrics, \(\Delta\text{SR}\) and \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\), are calculated with 5 years performance (realized return and variance) of the recommended portfolio. The reason for using Sharpe ratio as our MV performance measure is that it is undoubtedly the most widely used investment performance measure in both practice and academia. We also analyze the improvement in mean return \(\Delta\text{\mu}\) and risk \(\Delta\text{\sigma}\) of recommended portfolio to see the risk-return tradeoff of MVECF. The hyperparameters are tuned within the validation set (10% of total data). Experiments are repeated with different values of balancing hyperparameter \(\lambda_{\text{MV}}\) in 0.1, 1, 10 and risk-aversion level \(\gamma\) in 1, 3, 5. 
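For concreteness, the quantities that MVECF\({}_{\text{WMF}}\) needs before running standard ALS, namely the modified ratings and confidences of Equations (5)-(7), can be computed in closed form. The following is a minimal NumPy sketch written for this text (not the authors' released code); Y, C, mu, and Sigma follow the notation of Sections 3 and 4.

```python
import numpy as np

def mvecf_targets(Y, C, mu, Sigma, lam_mv=10.0, gamma=3.0):
    """Modified ratings y_tilde and confidences c_tilde of Eqs. (5)-(7).

    Y     : (m, n) binary user-item holdings matrix
    C     : (m, n) confidence weights c_ui of the WMF loss
    mu    : (n,)   estimated mean returns
    Sigma : (n, n) estimated return covariance matrix
    """
    sig2 = np.diag(Sigma)                                  # item variances sigma_i^2
    c_tilde = C + 0.5 * gamma * lam_mv * sig2              # c~_ui = c_ui + (gamma/2) lam sigma_i^2
    n_hold = np.maximum(Y.sum(axis=1, keepdims=True), 1)   # |y_u|, guarded against empty users
    # sum_{j != i} y_uj sigma_ij / |y_u| for every user-item pair
    cov_term = (Y @ Sigma - Y * sig2) / n_hold
    y_tilde = (2 * C * Y + lam_mv * (mu - 0.5 * gamma * cov_term)) / (2 * c_tilde)
    return y_tilde, c_tilde

# The pair (y_tilde, c_tilde) then replaces (y, c) in a standard WMF/ALS solver.
```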
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & Parameters & \multicolumn{6}{c}{Performance metrics} \\ \hline & \(\lambda_{\text{MV}}\) & \(\Delta\text{\mu}\) & \(\Delta\text{\sigma}\) & ASR & \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) & MAP@20 & Recall@20 \\ \hline \multirow{3}{*}{MVECF\({}_{\text{reg}}\)} & 0.1 & -0.0008 & -0.0092 & 0.0163 & 0.7463 & 0.2365 & 0.6375 \\ & 1 & -0.0016 & -0.0123 & 0.0186 & 0.7794 & 0.2315 & 0.6254 \\ & 10 & -0.0029 & -0.0178 & 0.0258 & 0.8745 & 0.2250 & 0.6091 \\ \hline \multirow{3}{*}{MVECF\({}_{\text{WMF}}\)} & 0.1 & -0.0003 & -0.0057 & 0.0115 & 0.7086 & 0.2526 & 0.7008 \\ & 1 & -0.0003 & -0.0092 & 0.0183 & 0.8267 & 0.2530 & 0.6956 \\ & 10 & 0.0021 & -0.0206 & 0.0590 & 0.9893 & 0.2345 & 0.6063 \\ \hline \multirow{3}{*}{MVECF\({}_{\text{reg}}\)} & \(\gamma\) & \(\Delta\text{\mu}\) & \(\Delta\text{\sigma}\) & ASR & \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) & MAP@20 & Recall@20 \\ \cline{2-7} & 1 & -0.0022 & -0.0159 & 0.0247 & 0.8541 & 0.2289 & 0.6193 \\ \cline{1-1} & 3 & -0.0029 & -0.0178 & 0.0258 & 0.8745 & 0.2250 & 0.6091 \\ \cline{1-1} & 5 & -0.0032 & -0.0184 & 0.0261 & 0.8784 & 0.2231 & 0.6064 \\ \hline \multirow{3}{*}{MVECF\({}_{\text{WMF}}\)} & 1 & 0.0144 & -0.0077 & 0.0762 & 0.9875 & 0.2338 & 0.6028 \\ \cline{1-1} & 3 & 0.0021 & -0.0206 & 0.0590 & 0.9893 & 0.2345 & 0.6063 \\ \cline{1-1} & 5 & 0.0000 & -0.0244 & 0.0599 & 0.9903 & 0.2321 & 0.6027 \\ \hline \hline \end{tabular} \end{table} Table 1: Average of evaluation metrics across all datasets for recommended portfolio by MVECF models ### Experiment Results Before we demonstrate the relative performances of MVECF models to baseline models, let us check whether MVECF performs as we expected. Table 1 shows the averages of various evaluation metrics of MVECF models with different values of \(\lambda_{\text{MV}}\) and \(\gamma\). The upper half shows the results of various values of \(\lambda_{\text{MV}}\) when \(\gamma\) is fixed to 3. Both models show better \(\Delta\text{SR}\) and \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) when \(\lambda_{\text{MV}}\) is large, and better MAP(\(\oplus\)20 and Recall(\(\oplus\)20 when \(\lambda_{\text{MV}}\) is small. We can easily see that the tradeoff between MV efficiency and conventional recommendation performance is well controlled by \(\lambda_{\text{MV}}\). The bottom half is the results of varying \(\gamma\) when \(\lambda_{\text{MV}}\) is fixed to 10. As we assume a more risk-averse user by increasing \(\gamma\), it clearly shows MVECF models focus more on reducing risk than increasing return. Hence, \(\gamma\) controls the risk-return tradeoff as desired. Now, we compare the MV efficiency and conventional recommendation performance of MVECF with baseline models. Figure 1 shows the results of conventional recommender systems, existing stock recommendation models, and two versions of MVECF with \(\lambda_{\text{MV}}=10\) and \(\gamma=3\) for a total of 20 sub-datasets. Each marker in the graph represents the average performance of all users in one yearly sub-dataset. Figure 2 represents the ex-post Sharpe ratio performance. From the two figures, it is evident that both \(\text{MVECF}_{\text{reg}}\) and \(\text{MVECF}_{\text{WMF}}\) outperform conventional recommender systems in terms of MV efficiency (i.e., \(\Delta\text{SR}\) and \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\)). 
\(\text{MVECF}_{\text{WMF}}\), in particular, exhibits dominating performance with \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) values near 100% for all datasets, implying that MVECF is beneficial for almost all investors in improving MV efficiency. Conventional recommender systems, however, exhibit \(\Delta\text{SR}\) and \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) values not much different from each other, as they do not address portfolio diversifications. Regarding recommendation performance, MVECF models naturally show inferior performance compared to conventional recommendation systems. However, the decline is not substantial. The average decrease in recommendation performance across 20 sub-datasets compared to the best performing UltraGCN is less than 5% in MAP(\(\oplus\)20 and less than 10% in Recall(\(\oplus\)20 for both MVECF models. This is relatively small compared to the improvement in MV efficiency. As for existing stock recommendation models, Figure 1 indicates that two-step methods (2Step\({}_{\text{wmrf}}\) and 2Step\({}_{\text{ugcn}}\)) achieve similar MV efficiency levels to MVECF models, whereas \(\text{BPR}_{\text{nov}}\) does not. However, a clear difference can be seen from the ex-post performance in Figure 2. While MVECF models outperform all other models in most cases, two-step models even show negative \(\Delta\text{SR}\) and \(\text{P}(\text{SR}>\text{SR}_{\text{init}})\) below 50%. This indicates that the two-step methods would lead to really poor investment performances. ### Incorporating MVECF into Ranking Models Most studies on recommender systems are based on ranking models with implicit feedbacks. Note that MVECF is developed based on WMF, which is a rating-prediction model. However, the key point of MVECF is to modify the true rating \(y_{ui}\) to \(\tilde{y}_{ui}\), which incorporates the MV efficiency of items. Using this, we show that MVECF can be easily incorporated into recent state-of-the-art ranking models. In traditional ranking models, items associated with real user-item interactions are considered as positive samples, while negative samples are chosen from the items without such interactions. We propose an MV efficient sampling scheme that identifies items with \(\tilde{y}_{ui}>\tau\) as positive samples and items with \(\tilde{y}_{ui}<\tau\) as negative samples, where \(\tau\) is a predefined threshold level. As discussed in Section 4.2, \(\tilde{y}_{ui}\) is a weighted sum of the true rating \(y_{ui}\) and the MV rating \(y_{ui}^{MV}\). The MV rating \(y_{ui}^{MV}\) is bad (good) when the item is highly (less) correlated with the user's portfolio and/or has low (high) expected return. Hence, even though a user actually holds item \(i\), if \(y_{ui}^{MV}\) is really bad, it will be classified as a negative sample in MV efficient sampling. Similarly, items with really good \(y_{ui}^{MV}\) will be regarded as positive samples, regardless of actual interactions \(y_{ui}\). Given the sparsity of true positive items, we set the threshold \(\tau\) so that 1% of the original negative samples can be converted to positive samples in MV efficient sampling. Table 2 presents the MV efficiency and recommendation performance of LightGCN and UltraGCN, which exhibited the best recommendation performance in Section 5.4, as well as LightGCN and UltraGCN with the MV efficient sampling. Figure 1: Performance comparison between MVECF and baseline models. 
The figures in upper row are \(\Delta\text{SR},\ \text{P(SR}>\text{SR}_{\text{init}})\), MAP@20 and Recall@20 with CRSP dataset, and the figures in lower row are those with Thomson Reuters dataset. Figure 2: Ex-post SR performance comparison between MVECF and baseline models. The two left figures are \(\Delta\text{SR}\) and \(\text{P(SR}>\text{SR}_{\text{init}})\) with CRSP dataset, and the two right figures are those with Thomson Reuters dataset. The results clearly show that the state-of-the-art graph based ranking models can be easily extended to improve MV efficiency (\(\Delta\)SR and P(SR \(>\) SR\({}_{\text{init}}\))) with simple modification. ## 6 Conclusion In this paper, we proposed the mean-variance efficient collaborative filtering (MVECF) for stock recommendation that can systemically handle the risk-return profile of recommended portfolios while recommending stocks with the consideration of user preferences. Starting from a simple regularization, we were able to derive MVECF as an ordinary WMF form. The performances of portfolios recommended by MVECF outperformed other recommender systems in both in-sample and out-of-sample settings with only minimal reductions in the recommendation performance. Furthermore, we demonstrated that the modified user-item rating of MVECF can be integrated into the positive and negative sampling of ranking models, allowing state-of-the-art graph-based models to offer MV efficient recommendations for users. The importance of this research lies in addressing the unique challenges of stock recommendation within the rapidly evolving fintech industry. As personalized stock recommendations become increasingly relevant for attracting and retaining customers, our approach can significantly enhance customer engagement and satisfaction, providing investment companies and online brokers with a competitive edge in the fintech landscape. This is the first study to identify the key requirements for stock recommender systems and develop a proper CF model for stock recommendations to fully utilize the user-item interactions. We believe that our study can encourage many researchers to develop more advanced stock recommender systems that can properly handle the risk-return characteristics of stocks as well as the preference of users.
2308.01753
Minimal Convex Environmental Contours
We develop a numerical method for the computation of a minimal convex and compact set, $\mathcal{B}\subset\mathbb{R}^N$, in the sense of mean width. This minimisation is constrained by the requirement that $\max_{b\in\mathcal{B}}\langle b , u\rangle\geq C(u)$ for all unit vectors $u\in S^{N-1}$ given some Lipschitz function $C$. This problem arises in the construction of environmental contours under the assumption of convex failure sets. Environmental contours offer descriptions of extreme environmental conditions commonly applied for reliability analysis in the early design phase of marine structures. Usually, they are applied in order to reduce the number of computationally expensive response analyses needed for reliability estimation. We solve this problem by reformulating it as a linear programming problem. Rigorous convergence analysis is performed, both in terms of convergence of mean widths and in the sense of the Hausdorff metric. Additionally, numerical examples are provided to illustrate the presented methods.
Åsmund Hausken Sande, Johan S. Wind
2023-08-03T13:28:08Z
http://arxiv.org/abs/2308.01753v2
# Minimal convex environmental contours ###### Abstract. We develop a numerical method for the computation of a minimal convex and compact set, \(\mathcal{B}\subset\mathbb{R}^{N}\), in the sense of mean width. This minimisation is constrained by the requirement that \(\max_{b\in\mathcal{B}}\langle b,u\rangle\geq C(u)\) for all unit vectors \(u\in S^{N-1}\) given some Lipschitz function \(C\). This problem arises in the construction of environmental contours under the assumption of convex failure sets. Environmental contours offer descriptions of extreme environmental conditions commonly applied for reliability analysis in the early design phase of marine structures. Usually, they are applied in order to reduce the number of computationally expensive response analyses needed for reliability estimation. We solve this problem by reformulating it as a linear programming problem. Rigorous convergence analysis is performed, both in terms of convergence of mean widths and in the sense of the Hausdorff metric. Additionally, numerical examples are provided to illustrate the presented methods. This Version: November 7, 2021 **Keywords**: Environmental Contours, Linear Programming, Structural Reliability **MSC2020:** 65D18, 90B25, 90C05 ## 1. Introduction Environmental contours are mathematical tools applied to analyse the reliability of marine structures, usually used in the early design phase of e.g. ships or oil platforms. They provide a summary statistic of the relevant environmental factors which reduces the number of computationally expensive response analyses needed. Due to this, environmental contours are widely used in reliability analysis of various marine structures [1, 4, 5, 20], and are listed in the _recommended practices - environmental conditions and environmental loads_ document by DNV (Det Norske Veritas) [3]. We consider \(N\) environmental factors governed by a stochastic process \(V=(V_{t})_{t\geq 0}\) in \(\mathbb{R}^{N}\). A marine structure is then assumed to have a failure set \(\mathcal{F}\subset\mathbb{R}^{N}\) of environmental conditions it cannot safely handle. Let \(\tau_{\mathcal{F}}=\inf\{t:V_{t}\in\mathcal{F}\}\) be the first hitting time of the failure set. Generally, an environmental contour is the boundary of a set \(\mathcal{B}\subset\mathbb{R}^{N}\), representing safe environmental conditions the structure should withstand. Since the exact shape of the failure set \(\mathcal{F}\) is often unknown in the early design stages, the environmental contour is chosen to restrain the time to failure, \(\tau_{\mathcal{F}}\), for _any_ failure set \(\mathcal{F}\) not overlapping with the chosen set \(\mathcal{B}\). Usually, this restriction comes in the form of an indirect lower bound on the _return period_ \(\mathbb{E}[\tau_{\mathcal{F}}]\). Note that while this formulation considers failure sets, it is a slight abuse of terminology; the definition of environmental contours is often decoupled from any specific failure set. Existing construction methods typically estimate a directional outreach requirement \(C\) and then extend the resulting contour in all directions in order to ensure validity. However, such methods fail to establish _minimal_ valid contours in a general sense. The contour is intended to represent the most extreme safe conditions for a structure. However, if the contour is made too big it imposes stronger constraints on the class of structures it applies to, thereby shrinking this class. 
As such it is of interest to construct contours which are minimal in some sense, which would allow us to apply the contour, and the resulting restrictions on \(\tau_{\mathcal{F}}\), to as wide a class as possible. Our goal in this article is to present a method for constructing valid contours that are minimal in the sense of _mean width_ (a generalisation of the perimeter length to more than two dimensions). We show how to solve this by casting it as a linear programming problem. We prove bounds on the sub-optimality incurred by discretisation of the continuous problem, and give convergence results to ensure that our method can find solutions with mean width arbitrarily close to the optimum. In Section 2, we provide a simple explanation of our main goal and results for the two-dimensional case. To proceed, we first cover some necessary definitions and results in Section 3, before moving on to Section 4, where we present our main results in a general setting. Specifically, we prove that our method provides arbitrarily near-optimal solutions. In order to illustrate our results, we present numerical examples in Section 5. Next, we present several results that guarantee convergence of our method in the Hausdorff metric in Section 6. As our method can be simplified and improved in the two-dimensional case, we briefly present the improved method in Section 7. Finally, as part of our method, we use quadratures for numerical integration with certain properties. We present some simple constructions for generating such quadratures in Section 8. ## 2. Main results in two dimensions We state our main results in two dimensions here, and wait until the necessary setup has been made before stating the general result in Theorem 4.13. This is only to give a simplified statement; our method and proofs are developed for general dimensions. We are interested in finding the convex, compact shape \(\mathcal{B}\) with the smallest perimeter, which has a given outreach in each direction. Specifically, the data to our problem is an \(L_{C}\)-Lipschitz function from the unit circle to the real numbers, \(C\colon S^{1}\to\mathbb{R}\). We denote the maximum absolute outreach requirement \(\left\|C\right\|_{\infty}\coloneqq\sup_{u\in S^{1}}\left|C(u)\right|\). Formally, the outreach requirement on \(\mathcal{B}\) is this: For all directions \(u\in S^{1}\), there exists some \(p\in\mathcal{B}\) such that \(\left\langle p,u\right\rangle\geq C(u)\). Numerically, we only access \(C\) through a finite number of samples. In two dimensions, we can sample \(m\) evenly spaced directions \(\{u_{i}\}_{i=1}^{m}\). If we restrict \(\mathcal{B}\) to polygons with sides perpendicular to the directions \(\{u_{i}\}_{i=1}^{m}\), and only consider outreach requirements in those directions, we can formulate the resulting problem as a linear programming problem (4.4). In two dimensions there is also a more efficient formulation (7.3). These linear programs can be solved efficiently using standard techniques, giving an optimal solution \(\widetilde{\mathcal{B}}\) to the discretised problem; a small illustrative sketch of this is given below. Our theory shows that \(\widetilde{\mathcal{B}}\) is nearly an optimal solution to the continuous problem. Specifically, if we inflate \(\widetilde{\mathcal{B}}\) (in the sense of Lemma 4.7) by a small amount \(\mathcal{O}\left(\frac{1}{m}(L_{C}+\left\|C\right\|_{\infty})\right)\), it is guaranteed to satisfy the outreach requirements in _all_ directions. 
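To make the discretisation above concrete, the following is a minimal illustrative sketch of how the discretised problem could be set up and solved in two dimensions. It is not part of the paper's implementation; the choice of Python with NumPy/SciPy, the example requirement \(C\), the uniform weights, and all variable names are assumptions made purely for illustration, following the general formulation (4.4) given later.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-D illustration of the discretised problem (4.4):
# sample an example outreach requirement C in m evenly spaced directions.
m = 60
theta = 2 * np.pi * np.arange(m) / m
U = np.column_stack([np.cos(theta), np.sin(theta)])  # unit directions u_i
C = 1.0 + 0.3 * np.cos(2 * theta)                    # assumed Lipschitz requirement C(u_i)
w = np.full(m, 1.0 / m)                              # quadrature weights (sigma is a probability measure)

# Decision variables x = (p_1, ..., p_m, B_1, ..., B_m) with p_i in R^2.
n_var = 3 * m
c_obj = np.concatenate([np.zeros(2 * m), w])         # minimise sum_i w_i B_i

# <p_i, u_i> >= C(u_i), rewritten as -<p_i, u_i> <= -C(u_i)
A1, b1 = np.zeros((m, n_var)), -C
for i in range(m):
    A1[i, 2 * i:2 * i + 2] = -U[i]

# <p_i, u_j> <= B_j for all i, j
A2, b2 = np.zeros((m * m, n_var)), np.zeros(m * m)
for i in range(m):
    for j in range(m):
        A2[i * m + j, 2 * i:2 * i + 2] = U[j]
        A2[i * m + j, 2 * m + j] = -1.0

res = linprog(c_obj, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
              bounds=[(None, None)] * n_var, method="highs")
B_tilde = res.x[2 * m:]                 # optimal outreach values B_i
P_tilde = res.x[:2 * m].reshape(m, 2)   # points p_i defining the discrete contour
print("discretised objective (half the mean width):", w @ B_tilde)
```

From such a solution, the polygon with vertices \(p_{i}\) (respectively the intersection of the half-planes \(\langle u_{i},\cdot\rangle\leq B_{i}\)) gives a contour satisfying the sampled requirements, which can then be inflated as described above.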
Furthermore, the inflated \(\widetilde{\mathcal{B}}\) has a perimeter at most \(\mathcal{O}\left(\frac{1}{m}(L_{C}+\left\|C\right\|_{\infty})\right)\) more than the optimal perimeter over _all_ convex, compact shapes satisfying the outreach requirements. When the number of dimensions is more than two, the perimeter is generalised to the mean width (Definition 3.4), and evenly spaced sample directions are generalised to \(\epsilon\)-accurate quadratures (Definition 3.6) with low dispersion (Definition 3.7). We define these concepts in the next section. ## 3. Basic intro to needed theory In this section we will define the main concepts we will need throughout the article. ### Standard Notation As in Section 2 we denote by \(\left\|\cdot\right\|\) and \(\langle\cdot,\cdot\rangle\) the canonical norm and inner product on \(\mathbb{R}^{N}\). Furthermore, the hypersphere in \(\mathbb{R}^{N}\) is defined by \(S^{N-1}=\{u\in\mathbb{R}^{N}:\left\|u\right\|=1\}\). We will also need the uniform probability measure on \(S^{N-1}\), denoted by \(\sigma\). ### Convexity A key concept when dealing with convex sets is their outreach. **Definition 3.1**.: _For any set \(\mathcal{B}\subset\mathbb{R}^{N}\) we define the outreach function of \(\mathcal{B}\) as_ \[B(\mathcal{B},u)=\sup_{b\in\mathcal{B}}\langle b,u\rangle.\] _This function is also commonly referred to as the support function of \(\mathcal{B}\)._ A closely related concept is the idea of a hyperplane. **Definition 3.2**.: _We define the hyperplane for some \(b\in\mathbb{R}\), \(u\in S^{N-1}\) as_ \[\Pi(u,b)=\{v\in\mathbb{R}^{N}:\langle u,v\rangle=b\}.\] _We further define the half-spaces_ \[\Pi^{-}(u,b)=\{v\in\mathbb{R}^{N}:\langle u,v\rangle\leq b\},\] \[\Pi^{+}(u,b)=\{v\in\mathbb{R}^{N}:\langle u,v\rangle\geq b\}.\] Consider then \(b\in\mathbb{R}\), \(u\in S^{N-1}\), and some non-empty, convex, and compact \(\mathcal{B}\subset\mathbb{R}^{N}\). Since \(\mathcal{B}\) is compact we have that \(B(\mathcal{B},u)\) is finite. We can even guarantee some regularity of \(B(\mathcal{B},\cdot)\) by the following result. **Proposition 3.3**.: _Let \(\mathcal{B}\subset\mathbb{R}^{N}\) be a non-empty, convex, and compact set. Denote the maximum radius \(R\coloneqq\sup_{p\in\mathcal{B}}\left\|p\right\|_{2}\). Then \(B(\mathcal{B},u)=\max_{p\in\mathcal{B}}\langle p,u\rangle\) is \(R\)-Lipschitz as a function of \(u\in S^{N-1}\). Note also \(\left|B(\mathcal{B},u)\right|\leq R\)._ Proof.: For fixed \(p\in\mathcal{B}\), the function \(\langle\cdot,p\rangle\) is \(\left\|p\right\|_{2}\)-Lipschitz, hence \(R\)-Lipschitz. Since \(B(\mathcal{B},u)\) is the supremum of \(R\)-Lipschitz functions, it is itself \(R\)-Lipschitz. Proposition 3.3 guarantees that \(B(\mathcal{B},\cdot)\) is integrable which allows us to define our key measure of size for convex compact sets. **Definition 3.4**.: _The width of a non-empty, convex, and compact set \(\mathcal{B}\subset\mathbb{R}^{N}\) along a vector \(u\in S^{N-1}\) can be written as \(B(\mathcal{B},u)+B(\mathcal{B},-u)\). As such, we define the mean width of \(\mathcal{B}\) by_ \[\int_{S^{N-1}}(B(\mathcal{B},u)+B(\mathcal{B},-u))d\sigma(u).\] _Note that this equals_ \[2\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u).\] For convex shapes in two dimensions, the mean width is equal to the perimeter divided by \(\pi\). Lastly, if \(B(\mathcal{B},u)\leq b\) we must also have \(\mathcal{B}\subset\Pi^{-}(u,b)\). This leads to the following unique representation of compact convex sets. 
This result is a special case of Theorem 18.8 in [15]. **Proposition 3.5**.: _Let \(\mathcal{B}\subset\mathbb{R}^{N}\) be convex and compact; we then have_ \[\mathcal{B}=\bigcap_{u\in S^{N-1}}\Pi^{-}(u,B(\mathcal{B},u)).\] ### Numerical Definitions In order to compute the mean width of our convex sets we will need to employ a numerical integration method providing universal bounds on the integration error. To address this we introduce the following. **Definition 3.6**.: _We say a set of points and weights \(\{(u_{i},w_{i})\}_{i=1}^{m}\subset S^{N-1}\times[0,\infty)\) is an \(\epsilon\)-accurate quadrature, if the following holds for all \(L\)-Lipschitz continuous functions \(f\colon S^{N-1}\to\mathbb{R}\). For all functions \(f\) satisfying \(|f(u)-f(v)|\leq L\left\|u-v\right\|_{2}\) for all \(u,v\in S^{N-1}\), we have_ \[\left|\int_{S^{N-1}}f(u)\,d\sigma(u)-\sum_{i=1}^{m}f(u_{i})w_{i}\right|\leq\epsilon (L+\left\|f\right\|_{\infty}). \tag{3.1}\] In order to control this error we will also need a related concept characterising the spread of some finite \(\mathcal{S}\subset S^{N-1}\). **Definition 3.7**.: _We define the dispersion of a set \(\mathcal{S}\subset S^{N-1}\) by_ \[\mathrm{disp}(\mathcal{S})=\sup_{u\in S^{N-1}}\inf_{v\in\mathcal{S}}\|u-v\|.\] **Remark 3.8**.: _In order to carry out the numerical integration necessary in this paper, we consider a general grid \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\subset S^{N-1}\) and set of weights \(W=\{w_{i}\}_{i=1}^{m}\), with some constraints on \(\mathrm{disp}(\mathcal{S})\) and the accuracy of \(\{(u_{i},w_{i})\}_{i=1}^{m}\). However, we also give specific constructions of \(\epsilon\)-accurate quadratures in Section 8._ ## 4. Minimal Valid Contours In this section, we aim to compute minimal valid contours in the sense of mean width. To set the scene, we will start by defining relevant concepts for our original problem and its discrete approximation, and then present several results guaranteeing control over estimation errors. ### The Continuous Problem The main problem we want to solve can be precisely formulated as follows. minimise \[\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\] (4.1) subject to \[B(\mathcal{B},\cdot)\geq C(\cdot),\] \[\mathcal{B}\text{ convex and compact}.\] To examine this problem, we introduce the following notation. **Definition 4.1**.: _We denote the set of feasible solutions to the continuous problem (4.1) by_ \[\mathcal{C}^{\infty}=\left\{\mathcal{B}\subset\mathbb{R}^{N}\mid\mathcal{B} \text{ convex and compact, }B(\mathcal{B},\cdot)\geq C(\cdot)\right\}.\] _The optimal value of the objective function is denoted by_ \[V^{CP}=\inf\left\{\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u),\,\mathcal{B}\in \mathcal{C}^{\infty}\right\}.\] _Lastly, we define the \(\gamma\)-near optimal solution space by_ \[\mathcal{C}^{\gamma}=\left\{\mathcal{B}\in\mathcal{C}^{\infty}\Big{|}\int_{S ^{N-1}}B(\mathcal{B},u)d\sigma(u)-V^{CP}\leq\gamma\right\}.\] It turns out that this problem has an optimal solution under our running assumption that \(C\) is continuous. This result is proven through consideration of the Hausdorff metric; as such, the proof is relegated to Theorem 6.3 in Section 6. ### The Discrete Problem In what follows, we will consider a way of approximating optimal contours. To achieve this, we will need to employ numerical methods, which necessitate discretisation. Therefore, we also consider valid environmental contours with respect to some (usually finite) sub-collection of unit vectors \(\mathcal{S}\subseteq S^{N-1}\). 
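Before defining validity with respect to such a sub-collection \(\mathcal{S}\), a brief numerical illustration of Definitions 3.6 and 3.7 may be helpful. The following sketch is an assumption made for illustration (Python/NumPy, not code from the paper): it builds the evenly spaced grid on \(S^{1}\) with uniform weights and estimates its dispersion by brute force, which for evenly spaced directions equals \(2\sin(\pi/(2m))\approx\pi/m\).

```python
import numpy as np

# Illustrative sketch: an evenly spaced grid on S^1 with uniform weights,
# and a brute-force estimate of its dispersion (Definition 3.7).
m = 180
theta = 2 * np.pi * np.arange(m) / m
S = np.column_stack([np.cos(theta), np.sin(theta)])  # grid points u_i
W = np.full(m, 1.0 / m)                              # weights w_i

# Dispersion: sup over u in S^1 of the distance to the nearest grid point,
# estimated on a much finer test grid.
phi = 2 * np.pi * np.arange(20 * m) / (20 * m)
T = np.column_stack([np.cos(phi), np.sin(phi)])
dists = np.linalg.norm(T[:, None, :] - S[None, :, :], axis=2)
print(dists.min(axis=1).max(), 2 * np.sin(np.pi / (2 * m)))  # both roughly pi/m
```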
We will say that \(\partial\mathcal{B}\) is _\((\mathcal{S},C)\)-valid_ if \(\mathcal{B}\) is convex, compact, and for all \(u\in\mathcal{S}\) we have \(B(\mathcal{B},u)\geq C(u)\). If we take any \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\) and a \((\mathcal{S},C)\)-valid contour \(\partial\mathcal{B}\), then for every \(u_{i}\in\mathcal{S}\) there must be some \(p_{i}\in\mathcal{B}\) such that \(\langle p_{i},u_{i}\rangle=B(\mathcal{B},u_{i})\), which implies \[\langle p_{i},u_{i}\rangle\geq C(u_{i})\text{ for }i=1,2,\ldots,m. \tag{4.2}\] Furthermore, since for all \(i\) we have \(p_{i}\in\mathcal{B}\), we must also have \[\langle p_{i},u_{j}\rangle\leq B(\mathcal{B},u_{j})\text{ for }i,j=1,2,\ldots,m. \tag{4.3}\] Conversely, assume we have a set of points \(\{p_{i}\}_{i=1}^{m}\) satisfying (4.2) and (4.3) for some convex and compact set \(\mathcal{B}\). By (4.3) we know that \(\{p_{i}\}_{i=1}^{m}\subset\mathcal{B}\), which implies by (4.2) that \(B(\mathcal{B},u_{i})\geq C(u_{i})\) for all \(i\), which means that \(\partial\mathcal{B}\) is \((\mathcal{S},C)\)-valid. Consequently, there is a correspondence between \((\mathcal{S},C)\)-valid contours and sets of points, \(\{p_{i}\}_{i=1}^{m}\). Hence, we consider the following linear programming problem. In order to approximate optimisation in mean width, the weights \(W=(w_{i})_{i=1}^{m}\) are chosen such that \(\{(u_{i},w_{i})\}_{i=1}^{m}\) forms an \(\epsilon\)-accurate quadrature for some \(\epsilon\in\mathbb{R}_{+}\); we will refer to the problem below as the linear program based on \((\mathcal{S},W)\). minimise \[\sum_{i=1}^{m}w_{i}B_{i}\] (4.4) subject to \[\langle p_{i},u_{i}\rangle\geq C(u_{i}) i=1,2,\ldots,m\] \[\langle p_{i},u_{j}\rangle\leq B_{j} i,j=1,2,\ldots,m\] \[p_{i}\in\mathbb{R}^{N} i=1,2,\ldots,m\] \[u_{i}\in S^{N-1} i=1,2,\ldots,m\] \[C(u_{i}),B_{i}\in\mathbb{R} i=1,2,\ldots,m\] We then note two facts about this problem. Firstly, the values \(p_{i}=\|C\|_{\infty}u_{i}\), \(B_{i}=\|C\|_{\infty}\) for all \(i\) with \(\|C\|_{\infty}=\max_{u\in S^{N-1}}|C(u)|\), satisfy the constraints and provide a feasible solution. Secondly, since \(\sum_{i=1}^{m}w_{i}B_{i}\geq\sum_{i=1}^{m}w_{i}C(u_{i})>-\infty\), the objective function is bounded. Combining these facts we know that the problem must have at least one optimal solution, which yields a minimal \((\mathcal{S},C)\)-valid contour by either of the following two constructions. **Proposition 4.2**.: _Consider the linear programming problem (4.4), with an optimal solution \(\left((p_{i}^{*})_{i=1}^{m}\,,\,(B_{i}^{*})_{i=1}^{m}\right)\). If \(\operatorname{convh}(\cdot)\) denotes the convex hull we have that_ \[\mathcal{B}^{*}=\operatorname{convh}\left(\{p_{i}^{*}\}_{i=1}^{m}\right),\] _defines a \((\mathcal{S},C)\)-valid contour with \(B(\mathcal{B}^{*},u_{i})=B_{i}\) for all \(i\)._ Proof.: Firstly, we note that every \(b\in\mathcal{B}^{*}\) is a convex combination of the \(p_{i}^{*}\)s. 
This means \(b\) has the representation \(b=\sum_{i=1}^{m}a_{i}(b)p_{i}\) where \(\sum_{i=1}^{m}a_{i}(b)=1\), \(a_{i}\geq 0\), which implies \[B(\mathcal{B}^{*},u_{i}) =\max_{b\in\mathcal{B}^{*}}\langle b,u_{i}\rangle\] \[=\max_{b\in\mathcal{B}^{*}}\sum_{k=1}^{m}a_{k}(b)\langle p_{k},u_ {i}\rangle\] \[\leq\max_{j}\langle p_{j},u_{i}\rangle\max_{b\in\mathcal{B}^{*}} \sum_{k=1}^{m}a_{k}(b)\] \[=\max_{j}\langle p_{j},u_{i}\rangle.\] Conversely, since \(\{p_{i}^{*}\}_{i=1}^{m}\subseteq\mathcal{B}^{*}\) we have \(\max_{j}\langle p_{j},u_{i}\rangle\leq B(\mathcal{B}^{*},u_{i})\) which implies \(B(\mathcal{B}^{*},u_{i})=\max_{j}\langle p_{j},u_{i}\rangle\). As a consequence we get \[B(\mathcal{B}^{*},u_{i})\geq\langle p_{i},u_{i}\rangle\geq C(u_{i}).\] As the convex hull of a finite number of points, \(\mathcal{B}^{*}\) is compact and convex. These facts make \(\partial\mathcal{B}^{*}\) a \((\mathcal{S},C)\)-valid contour. **Corollary 4.3**.: _Consider the linear programming problem (4.4), with an optimal solution \(\left(p_{i}\right)_{i=1}^{m},\,(B_{i})_{i=1}^{m}\). We then have that_ \[\mathcal{B}^{\prime}=\bigcap_{i=1}^{m}\Pi^{-}(u_{i},B_{i}),\] _defines a \((\mathcal{S},C)\)-valid contour with \(B(\mathcal{B}^{\prime},u_{i})=B_{i}\) for all \(i\)._ Proof.: We first note that since \(\langle p_{i},u_{j}\rangle\leq B_{j}\) for all \(i,j\), we must have \(p_{i}\in\mathcal{B}^{\prime}\) for all \(i\). This implies that when \(\mathcal{B}^{*}\) is as defined in Proposition 4.2, we have \(\mathcal{B}^{*}\subseteq\mathcal{B}^{\prime}\) since \(\mathcal{B}^{\prime}\) is convex. This immediately implies that \(\partial\mathcal{B}^{\prime}\) is \((\mathcal{S},C)\)-valid. Similarly to Definition 4.1, we consider the analogous concepts for this discrete problem, which will aid our comparison between the continuous problem (4.1) and our discrete approximation (4.4). **Definition 4.4**.: _We denote the set of valid solutions to the discrete problem (4.4) based on \((\mathcal{S},W)\) by_ \[\mathcal{D}^{\infty}(\mathcal{S},W)=\left\{\mathcal{B}\subset\mathbb{R}^{N} \mid\mathcal{B}\text{ convex and compact, }B(\mathcal{B},u)\geq C(u)\text{ for all }u\in \mathcal{S}\right\}.\] _The optimal value of the objective function is denoted by_ \[V^{DP}(\mathcal{S},W)=\min\left\{\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i}),\, \mathcal{B}\in\mathcal{D}^{\infty}\right\}.\] _Lastly, we define the \(\gamma\)-near optimal solution space by_ \[\mathcal{D}^{\gamma}(\mathcal{S},W)=\left\{\mathcal{B}\in\mathcal{D}^{\infty} \mid\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i})-V^{DP}\leq\gamma\right\}.\] _We will usually omit the dependence on \((\mathcal{S},W)\) whenever the meaning is clear or otherwise superfluous._ **Remark 4.5**.: _If we consider any \(\mathcal{B}\in\mathcal{D}^{\infty}\) then we know from previous arguments that there exists a feasible solution \(\left(\left(p_{i}\right)_{i=1}^{m},\,(B_{i})_{i=1}^{m}\right)\) with \(\left\{p_{i}\right\}_{i=1}^{m}\subset\mathcal{B}\) and \(B_{i}=B(\mathcal{B},u_{i})\). Consequently, if \(\mathcal{B}^{*}\) and \(\mathcal{B}^{\prime}\) are defined as in Proposition 4.2 and Corollary 4.3, we see that \(\mathcal{B}^{*}\subseteq\mathcal{B}\subseteq\mathcal{B}^{\prime}\), which means that the constructions of \(\mathcal{B}^{*}\) and \(\mathcal{B}^{\prime}\) provide lower and upper bounds on all sets in \(\mathcal{D}^{\infty}\)._ 
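To see the two constructions of Proposition 4.2 and Corollary 4.3, and the ordering \(\mathcal{B}^{*}\subseteq\mathcal{B}\subseteq\mathcal{B}^{\prime}\) noted in Remark 4.5, side by side, here is a small sketch (an illustration under assumed inputs, not code from the paper). For simplicity it uses the unit-circle requirement \(C\equiv 1\), for which \(p_{i}=u_{i}\), \(B_{i}=1\) is an optimal solution of (4.4) (cf. Example 5.1), instead of calling an LP solver.

```python
import numpy as np
from scipy.spatial import ConvexHull

m = 60
theta = 2 * np.pi * np.arange(m) / m
U = np.column_stack([np.cos(theta), np.sin(theta)])
P_star, B_star = U.copy(), np.ones(m)     # assumed optimal solution for C = 1

# B^* (Proposition 4.2): convex hull of the points p_i^*, an inner construction.
hull = ConvexHull(P_star)

def support(points, u):
    # Outreach/support function B(B, u) of a finite point cloud.
    return np.max(points @ u)

def in_B_prime(x):
    # B' (Corollary 4.3): intersection of the half-spaces <u_i, .> <= B_i, an outer construction.
    return bool(np.all(U @ x <= B_star + 1e-12))

u = np.array([np.cos(0.1), np.sin(0.1)])
print(support(P_star[hull.vertices], u))   # <= 1: B^* lies inside the unit circle
print(in_B_prime(np.array([1.0, 0.0])))    # True: the circle point (1, 0) lies in B'
```

Here \(\mathcal{B}^{*}\) is the polygon inscribed in the unit circle, while \(\mathcal{B}^{\prime}\) is the circumscribed polygon whose sides are tangent to it, matching the picture described in Example 5.1.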
### Convergence and Near-Optimality With these definitions established, we can more accurately state the goal of this section. We first aim to show how one can construct \((S^{N-1},C)\)-valid contours from any \(\mathcal{B}\in\mathcal{D}^{\gamma}\) for some \(\gamma\geq 0\). Furthermore, we will prove that the optimal value of the discrete problem, \(V^{DP}\), can arbitrarily well approximate \(V^{CP}\). Using this, we get explicit upper bounds on the near-optimality of the constructed \((S^{N-1},C)\)-valid contours. In order to control the near-optimality of solutions to our discrete problems, we need to consider two issues. The first problem we will tackle is the fact that a \((\mathcal{S},C)\)-valid contour is not necessarily \((S^{N-1},C)\)-valid. To amend this, we will consider a method for inflating contours to ensure their validity. Secondly, we will need to correct for the fact that (4.4) optimises for an approximation of the mean width; this can be handled by explicitly including the numerical error from discrete integration of the mean width. To construct valid contours from our discrete approximation, we will first need a bound on how much a contour \(\partial\mathcal{B}\) with \(\mathcal{B}\in\mathcal{D}^{\infty}\) can violate the constraint of \(B(\mathcal{B},\cdot)\geq C(\cdot)\). **Lemma 4.6**.: _Fix some \(\mathcal{S}\subset S^{N-1}\) with \(\delta=\operatorname{disp}(\mathcal{S})\) and let \(\mathcal{B}\in\mathcal{D}^{\infty}\). Then for all \(u\in S^{N-1}\), we have_ \[C(u)-\delta(L_{C}+R)\leq B(\mathcal{B},u),\] _where \(L_{C}\) is the Lipschitz constant of \(C\), and \(R=\max_{p\in\mathcal{B}}\|p\|\)._ Proof.: Consider any \(u\in S^{N-1}\) and pick some \(v\in\mathcal{S}\) such that \(\|u-v\|\leq\delta\). This immediately yields by the Lipschitz continuity of \(C\) that \(|C(u)-C(v)|\leq\delta L_{C}\). We then choose some \(p_{v}\in\partial\mathcal{B}\) such that \(\langle p_{v},v\rangle=B(\mathcal{B},v)\geq C(v)\) and get \[B(\mathcal{B},u) \geq\langle p_{v},u\rangle\] \[=\langle p_{v},v\rangle+\langle p_{v},u-v\rangle\] \[\geq C(v)-\delta\|p_{v}\|\] \[\geq C(u)-\delta L_{C}-\delta R\] \[=C(u)-\delta(L_{C}+R).\] With this result, we can quantify how far our \((\mathcal{S},C)\)-valid contours are from being \((S^{N-1},C)\)-valid. The main idea in constructing \((S^{N-1},C)\)-valid contours is to use the bound of Lemma 4.6, and then use the following result to inflate the contour. **Lemma 4.7**.: _Assume we have a subset \(\mathcal{S}\subseteq S^{N-1}\) and a convex and compact set \(\mathcal{B}\). If we then define_ \[\mathcal{B}^{e}=\bigcap_{v\in\mathcal{S}}\Pi^{-}\left(v,B(\mathcal{B},v)+e \right),\] _we have that \(B(\mathcal{B}^{e},u)\geq B(\mathcal{B},u)+e\) for all \(u\in S^{N-1}\), and \(B(\mathcal{B}^{e},v)=B(\mathcal{B},v)+e\) for all \(v\in\mathcal{S}\)._ Proof.: For any \(u\in S^{N-1}\) there exists a \(b(u)\in\mathcal{B}\) such that \(\langle b(u),u\rangle=B(\mathcal{B},u)\); we then have immediately that \(\langle b(u)+eu,u\rangle=B(\mathcal{B},u)+e\). Furthermore, since \(b(u)\in\mathcal{B}\), we have for every \(v\in\mathcal{S}\), \(u\in S^{N-1}\) that \(\langle b(u),v\rangle\leq B(\mathcal{B},v)\) which further yields \[\langle b(u)+eu,v\rangle\leq B(\mathcal{B},v)+e\langle u,v\rangle\leq B( \mathcal{B},v)+e.\] As a consequence, we must have \(b(u)+eu\in\mathcal{B}^{e}\), which implies \(B(\mathcal{B}^{e},u)\geq B(\mathcal{B},u)+e\) for all \(u\in S^{N-1}\). 
Lastly, by definition of \(\mathcal{B}^{e}\), we have \(B(\mathcal{B}^{e},v)\leq B(\mathcal{B},v)+e\) for any \(v\in\mathcal{S}\), which coupled with the previous inequality implies that \(B(\mathcal{B}^{e},v)=B(\mathcal{B},v)+e\) for all \(v\in\mathcal{S}\). These results imply that for any \(\gamma\geq 0\), \(\mathcal{B}\in\mathcal{D}^{\gamma}\), we may inflate the contour in order to guarantee that the resulting \(\mathcal{B}^{e}\) provides a \((S^{N-1},C)\)-valid contour. Note that this construction depends on \(\max_{p\in\mathcal{B}}\|p\|\). In order to extend these results and guarantee universal bounds on the necessary inflation, we will need the following results which limit the size of \(\max_{p\in\mathcal{B}}\|p\|\) for all \(\mathcal{B}\) in \(\mathcal{D}^{\gamma}\). However, to achieve this bound we will first need a small technical computation. **Lemma 4.8**.: _For any \(v\in S^{N-1}\), we have_ \[\int_{S^{N-1}}\langle v,u\rangle^{+}d\sigma(u)\geq\frac{1}{3\sqrt{N}},\] _where \((\cdot)^{+}\) denotes \(\max(\cdot,0)\)._ Proof.: We rewrite as an expectation of absolute values as follows \[\int_{S^{N-1}}\langle v,u\rangle^{+}d\sigma(u) =\frac{1}{2}\mathbb{E}_{u\in\operatorname{Unif}(S^{N-1})}\left( \left|\langle v,u\rangle\right|\right)\] \[=\frac{\mathbb{E}_{z\in\mathcal{N}(0,I_{N})}\left\|z\right\|\, \mathbb{E}_{u\in\operatorname{Unif}(S^{N-1})}\left(\left|\langle v,u\rangle \right|\right)}{2\mathbb{E}_{z\in\mathcal{N}(0,I_{N})}\left\|z\right\|}.\] Noting that the multivariate normal distribution has uniformly random direction, we may calculate the numerator as \[\mathbb{E}_{z\in\mathcal{N}(0,I_{N})}\left\|z\right\|\,\mathbb{E} _{u\in\operatorname{Unif}(S^{N-1})}\left(\left|\langle v,u\rangle\right|\right)=\] \[\mathbb{E}_{z\in\mathcal{N}(0,I_{N})}\left(\left|\langle v,z \rangle\right|\right)=\mathbb{E}_{z_{1}\in\mathcal{N}(0,1)}|z_{1}|=\sqrt{ \frac{2}{\pi}}.\] The denominator can be bounded by \(2\mathbb{E}_{z\in\mathcal{N}(0,I_{N})}\left\|z\right\|\leq 2\sqrt{\mathbb{E}_{z \in\mathcal{N}(0,I_{N})}\left\|z\right\|^{2}}=2\sqrt{N}\). In total, we get \[\int_{S^{N-1}}\langle v,u\rangle^{+}d\sigma(u)\geq\frac{\sqrt{\frac{2}{\pi}}}{ 2\sqrt{N}}>\frac{1}{3\sqrt{N}}.\] **Lemma 4.9**.: _Let \(\mathcal{B}\) be a compact set satisfying \(\max_{p,q\in\mathcal{B}}\|p-q\|\geq K\) for some \(K\in\mathbb{R}^{+}\). We then have that the halved mean width of \(\mathcal{B}\) satisfies_ \[\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\geq\frac{K}{3\sqrt{N}}.\] Proof.: We start by picking \(p_{0},p\in\mathcal{B}\) such that \(\|p-p_{0}\|\geq K\) and note that \[B(\mathcal{B},u)\geq\max(\langle p,u\rangle,\langle p_{0},u\rangle)=\langle p -p_{0},u\rangle^{+}+\langle p_{0},u\rangle.\] If we then note that \(\int_{S^{N-1}}\langle p_{0},u\rangle d\sigma(u)=0\), we can define \(v=\frac{p-p_{0}}{|p-p_{0}|}\) to get \[\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\geq\int_{S^{N-1}}\left(\langle p-p_{0 },u\rangle^{+}+\langle p_{0},u\rangle\right)d\sigma(u)=K\int_{S^{N-1}}\langle v,u\rangle^{+}d\sigma(u).\] By Lemma 4.8 this implies the desired result. With these results, we can now universally bound \(\max_{p\in\mathcal{B}}\|p\|\) for all \(\mathcal{B}\) in \(\mathcal{D}^{\gamma}\), which will also provide a universal bound on the amount of inflation needed in Lemma 4.7 to yield a \((S^{N-1},C)\)-valid contour. 
**Lemma 4.10**.: _Assume that \(\{(u_{i},w_{i})\}_{i=1}^{m}\subset\mathcal{S}\times[0,\infty)\) is an \(\epsilon\)-accurate quadrature with \(\delta\coloneqq\operatorname{disp}(\mathcal{S})\) satisfying \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\). Furthermore, let \(\mathcal{B}\in\mathcal{D}^{\gamma}\); then_ \[\max_{p\in\mathcal{B}}\|p\|\leq 12\sqrt{N}(\|C\|_{\infty}+\gamma).\] Proof.: First, we pick some \(p_{max}\in\partial\mathcal{B}\) such that \(\|p_{max}\|=\max_{p\in\mathcal{B}}\|p\|\). If \(\|p_{max}\|=0\) then the desired bound holds trivially. As such we assume, without loss of generality, that \(\|p_{max}\|>0\) and define \(w=p_{max}/\|p_{max}\|\). We then consider some \(v\in\mathcal{S}\) such that \(\|(-w)-v\|\leq\delta\) and note that there must be some \(p_{0}\in\partial\mathcal{B}\) such that \(\langle p_{0},v\rangle=B(\mathcal{B},v)\geq C(v)\geq-\|C\|_{\infty}\). This means that \[\langle p_{0},w\rangle =\langle p_{0},w+v\rangle-\langle p_{0},v\rangle\] \[\leq\delta\|p_{0}\|+\|C\|_{\infty}\] \[\leq\delta\|p_{max}\|+\|C\|_{\infty},\] which further yields \[\|p_{max}-p_{0}\|\geq\langle p_{max}-p_{0},w\rangle\geq\|p_{max}\|(1-\delta)- \|C\|_{\infty}.\] This means that \[\max_{p,q\in\mathcal{B}}\|p-q\|\geq\|p_{max}\|(1-\delta)-\|C\|_{\infty},\] which, by Lemma 4.9, gives \[\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\geq\frac{\|p_{max}\|(1-\delta)-\|C\|_ {\infty}}{3\sqrt{N}}.\] On the other hand, since \(p_{i}=\|C\|_{\infty}u_{i}\), \(B_{i}=\|C\|_{\infty}\) for all \(i\) is a feasible solution of the linear program, we must also have \[\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i})\leq V^{DP}+\gamma\leq\sum_{i=1}^{m}w_{ i}\|C\|_{\infty}+\gamma\leq(1+\epsilon)\|C\|_{\infty}+\gamma.\] Finally, by Proposition 3.3 and the definition of an \(\epsilon\)-accurate quadrature, we have that \[\left|\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)-\sum_{i=1}^{m}w_{i}B(\mathcal{ B},u_{i})\right|\leq 2\epsilon\|p_{max}\|.\] Putting these facts together, we get \[(1+\epsilon)\|C\|_{\infty}+\gamma \geq\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i})\] \[\geq\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)-2\epsilon\|p_{max}\|\] \[\geq\frac{\|p_{max}\|(1-\delta)-\|C\|_{\infty}}{3\sqrt{N}}-2 \epsilon\|p_{max}\|.\] Using the assumptions \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\) and \(N\geq 2\), we get \[\max_{p\in\mathcal{B}}\|p\|\leq\frac{3\sqrt{N}((1+\epsilon)\|C\|_{\infty}+ \gamma)+\|C\|_{\infty}}{1-\delta-6\sqrt{N}\epsilon}\leq 12\sqrt{N}(\|C\|_{ \infty}+\gamma).\] Combining Lemmas 4.6, 4.7 and 4.10, we can construct \((S^{N-1},C)\)-valid contours from our discrete approximation. This allows us to directly compare our discrete solutions from (4.4) to the theoretical ones of (4.1). However, we still need to address the error stemming from the numerical integration of \(B(\mathcal{B},\cdot)\). To handle this, we note that the integration error of an \(\epsilon\)-accurate quadrature involves \(\max_{p\in\mathcal{B}}\|p\|\), which is bounded for \(\mathcal{B}\in\mathcal{D}^{\gamma}\) by Lemma 4.10. To finish our preparations for the main theorem, we prove the analogous result for \(\mathcal{B}\in\mathcal{C}^{\gamma}\). **Corollary 4.11**.: _Assume \(\mathcal{B}\in\mathcal{C}^{\gamma}\) for some \(\gamma\geq 0\); we then have that_ \[\max_{p\in\mathcal{B}}\|p\|\leq 4\sqrt{N}\left(\|C\|_{\infty}+\gamma\right),\] _where \(\|C\|_{\infty}=\max_{u\in S^{N-1}}|C(u)|\)._ Proof.: First, we pick some \(p_{max}\in\mathcal{B}\) such that \(\|p_{max}\|=\max_{p\in\mathcal{B}}\|p\|\). If \(\|p_{max}\|=0\) then the desired bound holds trivially. 
As such we assume, without loss of generality, that \(\|p_{max}\|>0\) and define \(v=p_{max}/\|p_{max}\|\). Since \(\partial\mathcal{B}\) is \((S^{N-1},C)\)-valid, there must also exist some \(p_{0}\in\partial\mathcal{B}\) such that \(\langle p_{0},-v\rangle\geq C(-v)\geq-\|C\|_{\infty}\), which implies \[\|p_{max}-p_{0}\|\geq\langle p_{max}-p_{0},v\rangle=\langle p_{max},v\rangle+ \langle p_{0},-v\rangle\geq\|p_{max}\|-\|C\|_{\infty}.\] As a consequence, we note that since \(\max_{p,q\in\mathcal{B}}\|p-q\|\geq\|p_{max}\|-\|C\|_{\infty}\), Lemma 4.9 implies that \[\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\geq\frac{\|p_{max}\|-\|C\|_{\infty}} {3\sqrt{N}}.\] Furthermore, the closed ball of radius \(\|C\|_{\infty}\) centred at the origin lies in \(\mathcal{C}^{\infty}\), which means that \(V^{CP}\leq\|C\|_{\infty}\) and therefore \(\mathcal{B}\in\mathcal{C}^{\gamma}\) implies \(\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\leq\|C\|_{\infty}+\gamma\). Combining these facts with \(N\geq 1\) yields \[\|p_{max}\|\leq 3\sqrt{N}(\|C\|_{\infty}+\gamma)+\|C\|_{\infty}\leq 4\sqrt{N}( \|C\|_{\infty}+\gamma).\] With these universal bounds established, we can discuss the two main results of this section, which guarantee that our discrete problem (4.4) indeed approximates (4.1). **Theorem 4.12**.: _Fix some \(\epsilon\)-accurate quadrature \(\{(u_{i},w_{i})\}_{i=1}^{m}\) and assume that \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\) with \(\delta=\operatorname{disp}(\mathcal{S})\) satisfies \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\). Then_ \[V^{CP}-V^{DP} \leq 12\sqrt{N}\|C\|_{\infty}(2\epsilon+\delta)+L_{C}\delta,\] \[V^{DP}-V^{CP} \leq 8\sqrt{N}\|C\|_{\infty}\epsilon.\] Proof.: For the first statement, let \(\left((p_{i}^{*})_{i=1}^{m}\,,\,(B_{i}^{*})_{i=1}^{m}\right)\) be an optimal solution of the linear programming problem (4.4) and choose any \(\mathcal{B}^{DP}\in\mathcal{D}^{0}\) with \(B(\mathcal{B}^{DP},u_{i})=B_{i}^{*}\) for \(i=1,2,\ldots,m\). We then denote \(R^{DP}=\max_{p\in\mathcal{B}^{DP}}\|p\|\) and note that Proposition 3.3 along with the definition of an \(\epsilon\)-accurate quadrature imply \[\left|\sum_{i=1}^{m}w_{i}B_{i}^{*}-\int_{S^{N-1}}B(\mathcal{B}^{DP},u)d \sigma(u)\right|\leq 2\epsilon R^{DP}.\] By Lemma 4.6, we know \(C(u)-\delta(L_{C}+R^{DP})\leq B(\mathcal{B}^{DP},u)\), which motivates us to define \(\mathcal{B}^{e}\) as \[\mathcal{B}^{e}=\bigcap_{u\in S^{N-1}}\Pi^{-}\left(u,B(\mathcal{B}^{DP},u)+ \delta(L_{C}+R^{DP})\right).\] By Lemma 4.7 we know that \(B(\mathcal{B}^{e},u)=B(\mathcal{B}^{DP},u)+\delta(L_{C}+R^{DP})\geq C(u)\) for all \(u\in S^{N-1}\). This yields firstly that \[\int_{S^{N-1}}B(\mathcal{B}^{e},u)d\sigma(u)=\int_{S^{N-1}}B(\mathcal{B}^{DP}, u)d\sigma(u)+\delta(L_{C}+R^{DP}).\] Secondly, it implies that \(\mathcal{B}^{e}\) is a feasible solution for the continuous problem, and therefore \[V^{CP} \leq\int_{S^{N-1}}B(\mathcal{B}^{e},u)d\sigma(u)\] \[=\int_{S^{N-1}}B(\mathcal{B}^{DP},u)d\sigma(u)+\delta(L_{C}+R^{DP})\] \[\leq\sum_{i=1}^{m}w_{i}B_{i}^{*}+2\epsilon R^{DP}+\delta(L_{C}+R^ {DP})\] \[=V^{DP}+2\epsilon R^{DP}+\delta(L_{C}+R^{DP}).\] Combining with Lemma 4.10, this implies \(V^{CP}-V^{DP}\leq 12\sqrt{N}\|C\|_{\infty}(2\epsilon+\delta)+\delta L_{C}\). As for the other direction, we consider some optimal solution of the continuous problem \(\mathcal{B}^{CP}\) and denote \(R^{CP}=\max_{p\in\mathcal{B}^{CP}}\|p\|\). We know that for all \(i=1,2,\ldots,m\) there is some \(p_{i}^{CP}\in\mathcal{B}^{CP}\) such that \(\langle p_{i}^{CP},u_{i}\rangle\geq C(u_{i})\). 
This means that \(\left(\left(p_{i}^{CP}\right)_{i=1}^{m},\,\left(B(\mathcal{B}^{CP},u_{i}) \right)_{i=1}^{m}\right)\) is a feasible solution of the linear programming problem. As a consequence, we have \[V^{DP} \leq\sum_{i=1}^{m}w_{i}B(\mathcal{B}^{CP},u_{i})\] \[\leq\int_{S^{N-1}}B(\mathcal{B}^{CP},u)d\sigma(u)+2\epsilon R^{CP}\] \[=V^{CP}+2\epsilon R^{CP},\] which concludes the proof by recalling that Corollary 4.11 says \(R^{CP}\leq 4\sqrt{N}\|C\|_{\infty}\). This means that we can arbitrarily well approximate \(V^{CP}\) by considering the linear programming problem (4.4). As a consequence of this result, we can also guarantee that the \((S^{N-1},C)\)-valid contours constructed using Lemma 4.7 can be made universally near-optimal in terms of the original continuous problem (4.1), by means of the following result. **Theorem 4.13**.: _Fix some \(\epsilon\)-accurate quadrature \(\{(u_{i},w_{i})\}_{i=1}^{m}\) and assume that \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\) with \(\delta=\operatorname{disp}(\mathcal{S})\) satisfies \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\). Let \(e=\delta\big{(}L_{C}+12\sqrt{N}(\|C\|_{\infty}+\gamma)\big{)}\) and define \(\mathcal{B}^{e}=\bigcap_{u\in\mathcal{S}}\Pi^{-}\left(u,B(\mathcal{B},u)+e \right))\) for any \(\mathcal{B}\in\mathcal{D}^{\gamma}\), \(\gamma\geq 0\)._ _We then have_ \[\mathcal{B}^{e}\in\mathcal{C}^{\gamma+\beta},\] _where \(\beta=4e+32\sqrt{N}(\|C\|_{\infty}+\gamma)\,\epsilon\)._ Proof.: We know from Lemmas 4.6, 4.7 and 4.10 that \(\mathcal{B}^{e}\) satisfies \[B(\mathcal{B}^{e},u)\geq B(\mathcal{B},u)+e\geq\big{(}C(u)-e\big{)}+e=C(u),\] for all \(u\in S^{N-1}\), which implies \(\mathcal{B}^{e}\in\mathcal{C}^{\infty}\). Furthermore, we have \[\sum_{i=1}^{m}w_{i}B(\mathcal{B}^{e},u_{i})=\sum_{i=1}^{m}w_{i}\big{(}B( \mathcal{B},u_{i})+e\big{)}\leq V^{DP}+\gamma+(1+\epsilon)e,\] which yields \(\mathcal{B}^{e}\in\mathcal{D}^{\gamma+(1+\epsilon)e}\). This, along with the definition of \(\epsilon\)-accurate quadratures, Proposition 3.3 and Lemma 4.10, implies \[\left|\sum_{i=1}^{m}w_{i}B(\mathcal{B}^{e},u_{i})-\int_{S^{N-1}}B(\mathcal{B} ^{e},u)d\sigma(u)\right|\leq 24\sqrt{N}\Big{(}\|C\|_{\infty}+\gamma+(1+ \epsilon)e\Big{)}\epsilon.\] For ease of notation we denote \(\alpha=24\sqrt{N}\Big{(}\|C\|_{\infty}+\gamma+(1+\epsilon)e\Big{)}\epsilon\). Combining these facts with Theorem 4.12 then yields \[\int_{S^{N-1}}B(\mathcal{B}^{e},u)d\sigma(u) \leq\sum_{i=1}^{m}w_{i}B(\mathcal{B}^{e},u_{i})+\alpha\] \[\leq V^{DP}+\gamma+(1+\epsilon)e+\alpha\] \[\leq V^{CP}+8\sqrt{N}\|C\|_{\infty}\epsilon+\gamma+(1+\epsilon)e+\alpha\] \[\leq V^{CP}+\beta+\gamma,\] which implies \(\mathcal{B}^{e}\in\mathcal{C}^{\gamma+\beta}\). ## 5. Numerical examples **Example 5.1** (Difference between \(\mathcal{B}^{*}\) and \(\mathcal{B}^{\prime}\)).: _Figure 1 highlights the difference between the convex hull \(\mathcal{B}^{*}\) from Proposition 4.2, and the intersection of half-spaces \(\mathcal{B}^{\prime}\) from Corollary 4.3. We use the unit circle outreach requirement \(C=1\) and sample it in nine evenly spaced directions. We see that the half-spaces defining \(\mathcal{B}^{\prime}\) each tangent the unit circle from the outside. On the other hand, the corners defining \(\mathcal{B}^{*}\) lie on the circle, giving a convex hull inside the circle._ _This means that using \(\mathcal{B}^{\prime}\) will be a more conservative estimate, and therefore a safer choice. 
In this particular case we even have \(\mathcal{B}^{\prime}\in\mathcal{C}^{\infty}\), making it a \((C,S^{N-1})\)-valid contour without the need for inflation as per Theorem 4.13._ **Example 5.2** (Multiple optimal shapes).: _We construct a Lipschitz-continuous outreach requirement (Figure 2, left) which requires the shape to reach outreach 1 in each of the four cardinal directions. For this requirement, there are infinitely many shapes with the optimal perimeter. There are two distinct solutions minimising the area: the two diagonal line segments with zero area. However, interpolating the two diagonal extremes are infinitely many 45-degree tilted rectangles, also with optimal perimeter._ **Example 5.3** (Comparing with naive method).: _In [11], they compute \(C\) under the assumption that the environmental loads, \(V\), are modelled as a sequence of i.i.d. random variables \(\{W_{n}\}_{n=0}^{\infty}\). Specifically, they assume that \(V_{t}=W_{\lfloor t/\Delta t\rfloor}\) for a time increment \(\Delta t\in\mathbb{R}_{+}\), where \(\lfloor\cdot\rfloor\) denotes the floor function. They then define \(C(u)\) by the upper \(p_{e}\)-quantile of \(\langle W,u\rangle\), for a target exceedance probability \(p_{e}\). This will guarantee that the mean time to failure for any convex failure set not intersecting with a \((C,S^{N-1})\)-valid contour is at least \(\Delta t/p_{e}\). We select \(p_{e}=1/29200\) and \(\Delta t=3\) hours, implying a 10 year lower bound on the mean time to failure._

Figure 1. We discretise the true outreach requirement (black circle) into only nine directions (blue). This highlights the difference between \(\mathcal{B}^{*}\) from Proposition 4.2 (green) and \(\mathcal{B}^{\prime}\) from Corollary 4.3 (orange). \(\mathcal{B}^{*}\) always lies inside of \(\mathcal{B}^{\prime}\).

Figure 2. Left: Periodic outreach function \(C\) constructed such that Equation (4.1) has several shapes with minimal perimeter. Right: The outreach requirement boils down to requiring the shape to touch the four sides of a square (blue). A specific numerical implementation of our method (7.3) output the green shape. We highlight two other optimal shapes (orange).

_The suggested method presented in the aforementioned article [11], as well as the recommended practices of DNV [3], is the following. Find a model for \(W\), simulate a number of samples, and use the empirical quantiles of \(\langle W,u\rangle\) as an estimate of \(C(u)\) for a finite selection of directions. We choose \(3\times 10^{5}\) samples in \(360\) uniformly spaced directions._ _In [11], \(W\) was modelled as \(W=(H,T)\). Here \(H\) is a 3-parameter Weibull-distributed random variable in \(\mathbb{R}\) representing significant wave height. \(H\) has scale \(2.259\), shape \(1.285,\) and location \(0.701\). Similarly, \(T\) represents the zero-upcrossing wave period and is assumed to follow a conditional log-normal distribution, i.e. \(\ln(T)\) is normally distributed with conditional mean \(1.069+0.898H^{0.243},\) and conditional standard deviation \(0.025+0.263e^{-0.148H}.\)_ _It is further suggested to construct \(\mathcal{B}\) by_ \[\mathcal{B}=\bigcap_{u\in S^{N-1}}\Pi^{-}(u,C(u)). \tag{5.1}\] _We will henceforth refer to (5.1) as the naive method._
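As an aside, the sampling-based estimate of \(C\) described above can be sketched in a few lines. The following is a hypothetical illustration using the distribution parameters quoted in this example; the implementation itself (Python/NumPy, variable names, the random seed) is an assumption and is not taken from [11] or from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_e = 300_000, 1 / 29200

# Simulate the (H, T) model quoted above.
H = 0.701 + 2.259 * rng.weibull(1.285, size=n)            # 3-parameter Weibull wave height
mu = 1.069 + 0.898 * H ** 0.243
sigma = 0.025 + 0.263 * np.exp(-0.148 * H)
T = np.exp(rng.normal(mu, sigma))                          # conditional log-normal wave period
W = np.column_stack([H, T])

# Empirical upper p_e-quantile of <W, u> in 360 uniformly spaced directions.
theta = 2 * np.pi * np.arange(360) / 360
U = np.column_stack([np.cos(theta), np.sin(theta)])
C_hat = np.array([np.quantile(W @ u, 1 - p_e) for u in U])
```

The noise in such an estimate is exactly what makes the naive construction (5.1) fragile, as discussed next.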
_The estimated \(C\) is slightly noisy. As pointed out in e.g. [9, 6], this noise may cause the naive method to fail to satisfy the outreach requirements. For an explanation on why this occurs, we refer to either [9, 6] or the discussion around Figure 5._ _As seen in Figure 3, the naive method outputs an improper contour. Using the naive method with the \(C\) from Example 5.2 would cause it to output the single point \((0,0)\), which does not satisfy the outreach requirements._

Figure 3. The naive method fails to output a convex shape satisfying the requirements, when the outreach function \(C\) contains noise. We highlight a violated outreach constraint in blue. The plots are the same, with the right one being zoomed in.

**Example 5.4** (Three dimensions).: _To demonstrate our method in three dimensions, we use an example from [19]. The construction of \(C\) is similar to Example 5.3. The sequence \(W=(H,T,U)\) is distributed as follows. \(H\) follows a 3-parameter Weibull distribution with scale \(1.798\), shape \(1.214\) and location \(0.856\). Given \(H\), \(\ln(T)\) is normally distributed with mean \(-1.010+2.847H^{0.075}\) and standard deviation \(0.161+0.146e^{-0.683H}\). Finally, given \(H\), \(U\) follows a 2-parameter Weibull distribution with scale \(2.58+0.12H^{1.6}\) and shape \(4.6+2.05H\). We use the empirical quantiles with \(10^{6}\) samples of \(\langle W,u\rangle\) to estimate \(C(u)\), and select \(p_{e}=\frac{1}{29200}\)._ _For discretization, we sample \(C\) according to the cubed hypersphere quadrature with 10 subdivisions (Proposition 8.1). Then we solve Equation (4.4) and compute the convex hull \(\mathcal{B}^{*}\) defined in Proposition 4.2. Figure 4 visualizes the resulting contour._

Figure 4. Shape found by Equation (4.4) for the 3d example described in Example 5.4.

## 6. Connection with Hausdorff Topology While the previous section guarantees the construction of arbitrarily near-optimal \((S^{N-1},C)\)-valid contours, there is still the question of whether all elements of \(\mathcal{C}^{0}\) can be approximated in this way. In order to examine this question, we will need the concept of the Hausdorff metric. The resulting topology turns out to be a natural framework for examining convergence of the discrete approximations of our continuous problem, but it will also allow us to properly prove our earlier claim that the continuous problem indeed has optimal solutions. ### Basic Concepts and Definitions The main tool we will use here is the Hausdorff metric, defined as follows. **Definition 6.1**.: _Let \((X,d)\) be a metric space, and let \(F(X)\) denote the collection of all non-empty, compact subsets of \(X\). For any \(x\in X\) and \(A,\,B\in F(X)\), we can define \(d(x,B)=d(B,x)=\min_{b\in B}d(x,b)\) and_ \[d_{\mathcal{H}}(A,B)=\max\left(\max_{a\in A}d(a,B),\,\max_{b\in B}d(A,b)\right).\] _The set-function \(d_{\mathcal{H}}\) is referred to as the Hausdorff distance._ We have the following basic properties of \(d_{\mathcal{H}}\); for a proof of these properties we refer to [14]. **Proposition 6.2**.: _The space \((F(X),d_{\mathcal{H}})\) is a metric space. Furthermore, if \((X,d)\) is a complete and compact metric space, then \((F(X),d_{\mathcal{H}})\) is complete and compact as well. Lastly, if \(X\) is a Banach space and \(\{A_{n}\}_{n=1}^{\infty}\subset F(X)\) is a sequence of convex sets converging to some set \(A\in F(X)\), then \(A\) is also convex._ In what follows, we will consider \((\mathbb{R}^{N},d)\) where \(d\) is the canonical Euclidean metric on \(\mathbb{R}^{N}\). 
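For finite point sets, Definition 6.1 can be evaluated directly; the following small sketch (an assumed illustration, not code from the paper) computes the Hausdorff distance between two point clouds in \(\mathbb{R}^{N}\).

```python
import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between finite point sets A, B (rows are points), per Definition 6.1.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [1.0, 0.0], [2.0, 0.0]])
print(hausdorff(A, B))  # 1.0: the point (2, 0) of B is at distance 1 from A
```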
This allows us to define the Hausdorff metric on \(F(\mathbb{R}^{N})\), but also allows us to discuss \(F(F(\mathbb{R}^{N}))\), i.e. compact collections (in \(d_{\mathcal{H}}\)) of compact subsets of \(\mathbb{R}^{N}\) and the associated metric, \(d_{\mathcal{H}_{\mathcal{H}}}\). With these definitions we can discuss \(d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{D}^{\gamma})\) which quantifies how well our discrete problem can approximate the _entirety_ of \(\mathcal{C}^{0}\). ### Existence of Solutions to the Continuous Problem Our first point of order is our previous claim of existence of solutions to Equation (4.1). **Theorem 6.3**.: _Define the objective function \(\phi\colon\mathcal{C}^{\infty}\mapsto[V^{CP},\infty)\) by_ \[\phi(\mathcal{B})=\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u).\] _We then have that \(\phi\) is Lipschitz continuous in \((F(\mathbb{R}^{N}),d_{\mathcal{H}})\) with Lipschitz constant \(1\). Furthermore, for any \(\gamma\geq 0\), \(\mathcal{C}^{\gamma}\subset F(\mathbb{R}^{N})\) is non-empty and compact in the resulting Hausdorff topology. As a specific consequence of this, the continuous problem (4.1) has at least one optimal solution._ Proof.: In what follows, we will need the following relation. If \(\mathcal{B}_{1},\,\mathcal{B}_{2}\subset\mathbb{R}^{N}\) are compact then \(|B(\mathcal{B}_{1},u)-B(\mathcal{B}_{2},u)|\leq d_{\mathcal{H}}(\mathcal{B}_{1 },\mathcal{B}_{2})\) for all \(u\in S^{N-1}\). To see this, we pick \(p_{1}\in\mathcal{B}_{1}\) such that \(\langle p_{1},u\rangle=B(\mathcal{B}_{1},u)\). From the definition of the Hausdorff distance, there must be some \(p_{2}\in\mathcal{B}_{2}\) such that \(\|p_{1}-p_{2}\|\leq d_{\mathcal{H}}(\mathcal{B}_{1},\mathcal{B}_{2})\) which yields \[B(\mathcal{B}_{1},u) =\langle p_{1},u\rangle\] \[=\langle p_{1}-p_{2},u\rangle+\langle p_{2},u\rangle\] \[\leq d_{\mathcal{H}}(\mathcal{B}_{1},\mathcal{B}_{2})+B( \mathcal{B}_{2},u).\] This proves that \(B(\mathcal{B}_{1},u)-B(\mathcal{B}_{2},u)\leq d_{\mathcal{H}}(\mathcal{B}_{1}, \mathcal{B}_{2})\), and an identical argument with \(B_{1}\) and \(B_{2}\) interchanged gives \(B(\mathcal{B}_{2},u)-B(\mathcal{B}_{1},u)\leq d_{\mathcal{H}}(\mathcal{B}_{1}, \mathcal{B}_{2})\). This implies that for any \(u\in S^{N-1}\), the function \(\mathcal{B}\mapsto B(\mathcal{B},u)\) is Lipschitz continuous with Lipschitz constant \(1\). Next, we fix some \(\gamma\geq 0\) and aim to prove compactness of \(\mathcal{C}^{\gamma}\), leaving non-emptiness for later. Note that \(\mathcal{C}^{\gamma}\) is either non-empty or trivially compact, as such we can, without loss of generality, assume that \(\mathcal{C}^{\gamma}\) is non-empty. Furthermore, by Corollary 4.11, we note that for every \(\mathcal{B}\in\mathcal{C}^{\gamma}\) we have \(\mathcal{B}\subset K_{\gamma}\) where \[K_{\gamma}=\{x\in\mathbb{R}^{N}:\|x\|\leq 4\sqrt{N}(\|C\|_{\infty}+\gamma)\}.\] If we denote by \(F(K_{\gamma})\) the collection of all compact subsets of \(K_{\gamma}\), we have that \(\mathcal{C}^{\gamma}\subset F(K_{\gamma})\). Furthermore, by Proposition 6.2, we have that \(F(K_{\gamma})\subset F(\mathbb{R}^{N})\) equipped with the Hausdorff metric is a complete and compact metric space. We next aim to prove that \(\mathcal{C}^{\gamma}\) is a closed (and therefore compact) subset of \(F(K_{\gamma})\) in the Hausdorff topology. To see this, consider any convergent sequence \(\{B_{n}\}_{n=1}^{\infty}\subset\mathcal{C}^{\gamma}\subset F(K_{\gamma})\) with \(\mathcal{B}_{n}\to\mathcal{B}\). 
By Proposition 6.2 we have that \(\mathcal{B}\) is convex, and by completeness of \(F(K_{\gamma})\) we have \(\mathcal{B}\) compact. Furthermore, by continuity of \(\mathcal{B}\mapsto B(\mathcal{B},u)\), we have for any \(u\in S^{N-1}\) that \[B(\mathcal{B},u)=\lim_{n\to\infty}B(\mathcal{B}_{n},u)\geq C(u),\] implying \(\mathcal{B}\in\mathcal{C}^{\infty}\). Lastly, by \(B(\mathcal{B}_{n},u)\leq\max_{p\in\mathcal{B}_{n}}\|p\|\leq 4\sqrt{N}(\|C\|_{ \infty}+\gamma)\) for all \(u\in S^{N-1}\), we can apply the dominated convergence theorem to get \[\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u) =\int_{S^{N-1}}\lim_{n\to\infty}B(\mathcal{B}_{n},u)d\sigma(u)\] \[=\lim_{n\to\infty}\int_{S^{N-1}}B(\mathcal{B}_{n},u)d\sigma(u)\] \[\leq V^{CP}+\gamma,\] which implies \(\mathcal{B}\in\mathcal{C}^{\gamma}\). As a consequence, for any \(\gamma\geq 0\), \(\mathcal{C}^{\gamma}\) is a closed subset of the compact space \(F(K_{\gamma})\), and therefore itself compact. As for non-emptiness of \(\mathcal{C}^{\gamma}\) we first remark that \(\mathcal{C}^{\gamma}\) is non-empty for any \(\gamma>0\). To see this, note by the definition of \(V^{CP}\) that there exists either an optimal solution in \(\mathcal{C}^{0}\) with mean width \(V^{CP}\) or a sequence of near-optimal solutions with mean width converging to \(V^{CP}\). Either way, this implies that \(\mathcal{C}^{\gamma}\) is non-empty for any \(\gamma>0\). Next, assume \(\gamma>0\) and note that for any \(\mathcal{B}_{1}\), \(\mathcal{B}_{2}\in\mathcal{C}^{\gamma}\) we have \[|\phi(\mathcal{B}_{1})-\phi(\mathcal{B}_{2})| =\left|\int_{S^{N-1}}(B(\mathcal{B}_{1},u)-B(\mathcal{B}_{2},u))d \sigma(u)\right|\] \[\leq\int_{S^{N-1}}|B(\mathcal{B}_{1},u)-B(\mathcal{B}_{2},u)|\,d \sigma(u)\] \[\leq\int_{S^{N-1}}d_{\mathcal{H}}(\mathcal{B}_{1},\mathcal{B}_{2 })d\sigma(u)\] \[=d_{\mathcal{H}}(\mathcal{B}_{1},\mathcal{B}_{2}).\] This means that \(\phi\) is Lipschitz, so it must attain a minimal value on \(\mathcal{C}^{\gamma}\) for any \(\gamma>0\). This minimum, by definition, will also be a minimum on \(\mathcal{C}^{\infty}\). Consequently, this minimal point, \(\mathcal{B}_{\min}\), must satisfy \(\phi(\mathcal{B}_{\min})=V^{CP}\), yielding \(\mathcal{C}^{0}\) non-empty. ### Convergence in Hausdorff Metric With existence of solutions settled, along with some useful properties of \(\phi\) and \(\mathcal{C}^{\gamma}\), we can move on to our main goal of this section: To prove that we can arbitrarily well approximate the entirety of \(\mathcal{C}^{0}\) by \(\mathcal{D}^{0}\) or \(\mathcal{D}^{\gamma}\). To do so we will consider \[d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{D}^{\gamma})=\max \left(\max_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}\min_{\mathcal{B}\in \mathcal{D}^{\gamma}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime}),\max_{ \mathcal{B}^{\prime}\in\mathcal{D}^{\gamma}}\min_{\mathcal{B}\in\mathcal{C}^{0 }}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\right). \tag{6.1}\] The primary goal of this section is to show that for a sequence of quadratures, \(\{(\mathcal{S}_{n},W_{n})\}_{i=1}^{\infty}\), we have \(\{\gamma_{n}\}_{i=1}^{\infty}\), with \(\gamma_{n}\to 0\) such that \(d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{D}^{\gamma_{n}}( \mathcal{S}_{n},W_{n}))\to 0\). This would imply that for any given \(\mathcal{B}^{\prime}\in\mathcal{C}^{0}\) there is some set \(\mathcal{B}\in\mathcal{D}^{\gamma_{n}}(\mathcal{S}_{n},W_{n})\) that approximates it, meaning that _all_ optimal solutions can be approximated by our discrete solutions. 
Conversely, for any \(\mathcal{B}\in\mathcal{D}^{\gamma_{n}}(\mathcal{S}_{n},W_{n})\), we know it will be close to some \(\mathcal{B}^{\prime}\in\mathcal{C}^{0}\) in Hausdorff distance, implying that our discrete solutions will get closer and closer to our continuous solutions. It turns out that if we are only interested in guaranteeing that our discrete solutions are close to a continuous solution we can drop the inclusion of \(\gamma_{n}\). In particular we will have \(\max_{\mathcal{B}^{\prime}\in\mathcal{D}^{0}(\mathcal{S}_{n},W_{n})}\min_{ \mathcal{B}\in\mathcal{C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime} )\to 0.\) Furthermore, if the continuous problem (4.1) has a unique solution, then \(d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{D}^{0}(\mathcal{S}_{n},W_{n}))\to 0.\) However, in order to even consider this distance we will first need to guarantee that \(\mathcal{D}^{\gamma}\) is a compact set in the Hausdorff metric. **Corollary 6.4**.: _Assume that \(\{(u_{i},w_{i})\}_{i=1}^{m}\subset\mathcal{S}\times[0,\infty)\) is an \(\epsilon\)-accurate quadrature with \(\delta\coloneqq\operatorname{disp}(\mathcal{S})\) satisfying \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\). We then have that \(\mathcal{D}^{\gamma}\) is compact in \(d_{\mathcal{H}}\)._ Proof.: We first note that Lemma 4.10 yields \(\mathcal{D}^{\gamma}\subset F(K^{\prime}_{\gamma})\) for \[K^{\prime}_{\gamma}=\{x\in\mathbb{R}^{N}:\|x\|\leq 12\sqrt{N}(\|C\|_{\infty}+ \gamma)\}.\] By defining \[\psi(\mathcal{B})=\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i}),\] we can then repeat the arguments of Theorem 6.3 with \(K^{\prime}_{\gamma},\psi\) replacing \(K_{\gamma},\phi\), which proves the desired result. With this established we know that (6.1) is well defined, and what remains is finding ways to bound it. We remark that \(\max_{\mathcal{B}\in\mathcal{D}^{\gamma}}\min_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\) in (6.1) can be bounded by considering the triangular inequality \[d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\leq d_{\mathcal{H}}( \mathcal{B},\mathcal{B}^{e})+d_{\mathcal{H}}(\mathcal{B}^{e},\mathcal{B}^{ \prime}), \tag{6.2}\] where \(\mathcal{B}^{e}=\bigcap_{v\in S^{N-1}}\Pi^{-}\left(v,B(\mathcal{B},v)+e\right)\) is the inflation of \(\mathcal{B}\in\mathcal{D}^{\gamma}\) for some appropriate \(e\). The first term of the right side is easily dealt with by extending the results of Lemma 4.7 to the setting of \(d_{\mathcal{H}}\). **Lemma 6.5**.: _Let \(\mathcal{B}\) be a convex and compact set and define_ \[\mathcal{B}^{e}=\bigcap_{v\in S^{N-1}}\Pi^{-}\left(v,B(\mathcal{B},v)+e\right).\] _We then have that \(d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{e})=e\)._ Proof.: Consider the alternative construction \[\widehat{\mathcal{B}}=\{x\in\mathbb{R}^{N}:d(x,\mathcal{B})\leq e\}.\] We immediately see that \(\widehat{\mathcal{B}}\) is a compact convex set satisfying \(d_{\mathcal{H}}(\mathcal{B},\widehat{\mathcal{B}})=e\), so all we need is to show \(\mathcal{B}^{e}=\widehat{\mathcal{B}}\). To prove this, we note that for all \(x\in\widehat{\mathcal{B}}\) we can decompose \(x=b+c\) where \(b\in\mathcal{B}\) and \(\|c\|\leq e\). Conversely, if \(b\in\mathcal{B}\) and \(\|c\|\leq e\) we have \(b+c\in\widehat{\mathcal{B}}\). 
Using this we get, for any \(u\in S^{N-1}\), that \[B(\widehat{\mathcal{B}},u) =\max\{\langle x,u\rangle:x\in\widehat{\mathcal{B}}\}\] \[=\max\{\langle b,u\rangle+\langle c,u\rangle:b\in\mathcal{B},\,\| c\|\leq e\}\] \[=B(\mathcal{B},u)+e.\] We may also recall from lemma 4.7 that \(B(\mathcal{B}^{e},u)=B(\mathcal{B},u)+e=B(\widehat{\mathcal{B}},u)\), for all \(u\in S^{N-1}\). This implies that \(\mathcal{B}^{e}=\widehat{\mathcal{B}}\) by Proposition 3.5, which guarantees uniqueness of representation by \(B\). The second term of the right side of (6.2) requires two steps to control. By Theorem 4.12, we can bound the mean width of \(\mathcal{B}^{e}\) by the following result. This result is almost identical to Theorem 4.13. In that result, however, we consider the set \(\mathcal{B}^{e}\) to be an inflation of \(\mathcal{B}\) only in directions \(u\in\mathcal{S}\). In the following result we consider an inflation of \(\mathcal{B}\) in all directions \(u\in S^{N-1}\). **Proposition 6.6**.: _Fix some \(\epsilon\)-accurate quadrature \(\{(u_{i},w_{i})\}_{i=1}^{m}\) and assume that \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\) with \(\delta=\operatorname{disp}(\mathcal{S})\) satisfies \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\). Let \(e=\delta\big{(}L_{C}+12\sqrt{N}(\|C\|_{\infty}+\gamma)\big{)}\) and define \(\mathcal{B}^{e}=\bigcap_{u\in S^{N-1}}\Pi^{-}\left(u,B(\mathcal{B},u)+e\right))\) for some \(\mathcal{B}\in\mathcal{D}^{\infty}\). Note here that we consider \(u\in S^{N-1}\) and not \(u\in\mathcal{S}\) as in Theorem 4.13._ _Then, for any \(\mathcal{B}\in\mathcal{D}^{\gamma}\), \(\gamma\geq 0\), we have that_ \[\mathcal{B}^{e}\in\mathcal{C}^{\gamma+\beta},\] _where \(\beta=e+32\sqrt{N}(\|C\|_{\infty}+\gamma)\,\epsilon\)._ Proof.: We know from Lemmas 4.6, 4.7 and 4.10 that \(\mathcal{B}^{e}\) satisfies \[B(\mathcal{B}^{e},u) =B(\mathcal{B},u)+\delta(L_{C}+12\sqrt{N}(\|C\|_{\infty}+\gamma))\] \[\geq\Big{(}C(u)-e\Big{)}+\delta(L_{C}+12\sqrt{N}(\|C\|_{\infty}+ \gamma))\] \[=C(u)\] for all \(u\in S^{N-1}\), which implies \(\mathcal{B}^{e}\in\mathcal{C}^{\infty}\). Furthermore, we have by the definition of \(\epsilon\)-accurate quadratures, Proposition 3.3 and Lemma 4.10 that \[\left|\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i})-\int_{S^{N-1}}B(\mathcal{B},u)d \sigma(u)\right|\leq 24\sqrt{N}(\|C\|_{\infty}+\gamma)\epsilon.\] Combining these facts with Theorem 4.12 then yields \[\int_{S^{N-1}}B(\mathcal{B}^{e},u)d\sigma(u) =\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)+e\] \[\leq\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i})+24\sqrt{N}(\|C\|_{ \infty}+\gamma)\epsilon+e\] \[\leq V^{DP}+\gamma+24\sqrt{N}(\|C\|_{\infty}+\gamma)\epsilon+e\] \[\leq V^{CP}+8\sqrt{N}\|C\|_{\infty}\epsilon+\gamma+24\sqrt{N}(\|C \|_{\infty}+\gamma)\epsilon+e\] \[\leq V^{CP}+\gamma+\beta,\] which implies \(\mathcal{B}^{e}\in\mathcal{C}^{\gamma+\beta}\). With this result, for any \(\mathcal{B}^{\prime}\in\mathcal{C}^{0}\) and \(\mathcal{B}^{e}\) defined as in Proposition 6.6, we have \(d_{\mathcal{H}}(\mathcal{B}^{\prime},\mathcal{B}^{e})\leq d_{\mathcal{H}_{ \mathcal{H}_{\mathcal{H}}}}(\mathcal{C}^{\gamma},\mathcal{C}^{0})\). This can be controlled by the following. **Proposition 6.7**.: _Consider the Hausdorff metric on the set \(F(\mathcal{C}^{\infty})\), i.e. the set of all compact subsets of \(\mathcal{C}^{\infty}\) where \(\mathcal{C}^{\infty}\) itself is also equipped with a Hausdorff metric. 
We then have that_ \[\lim_{\gamma\to 0}d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{\gamma},\mathcal{C}^ {0})=0.\] Proof.: We first recall from Theorem 6.3 that \(\mathcal{C}^{\gamma}\) is compact and that \(\phi:\mathcal{C}^{\infty}\to[V^{CP},\infty)\) defined by \(\phi(\mathcal{B})=\int_{S^{N-1}}B(\mathcal{B},u)d\sigma(u)\) is a Lipschitz continuous function with Lipschitz constant \(1\). Also, note that since \(\mathcal{C}^{0}\subset\mathcal{C}^{\gamma}\), we have \[d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{\gamma},\mathcal{C}^{0})=\max_{ \mathcal{B}\in\mathcal{C}^{\gamma}}\min_{\mathcal{B}^{\prime}\in\mathcal{C}^ {0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime}).\] Next, pick some \(\gamma,\beta>0\). Since \(\mathcal{C}^{\gamma}\) is compact, we can consider a finite open covering \(\{\mathcal{U}_{i}\}_{i=1}^{I}\) such that \(\operatorname{diam}(\mathcal{U}_{i})\leq\beta\) for all \(i=1,\ldots,I\). Note that by Lipschitz continuity of \(\phi\), we have for all \(i=1,\ldots,I\) that \(\phi(\mathcal{B})\leq\gamma+\beta\) for any \(\mathcal{B}\in\mathcal{U}_{i}\). As a consequence, if we then define \(\overline{\mathcal{U}_{i}}\) as the closure of \(\mathcal{U}_{i}\) in the Hausdorff metric, we know for all \(i=1,\ldots,I\), that \(\overline{\mathcal{U}_{i}}\) must be a closed subset of the compact set \(\mathcal{C}^{\gamma+\beta}\), which implies that \(\overline{\mathcal{U}_{i}}\) is compact as well. This implies that \(\phi\) must attain a minimum on \(\overline{\mathcal{U}}_{i}\) which lets us define \(\{\psi_{i}\}_{i=1}^{I}\) by \(\psi_{i}=\min_{\mathcal{B}\in\mathcal{U}_{i}}\phi(\mathcal{B})\). We then separate the sets that overlap with \(\mathcal{C}^{0}\) by defining \(\mathcal{I}=\{i=1,\ldots,I:\psi_{i}=V^{CP}\}\) and \(\mathcal{J}=\{i=1,\ldots,I:\psi_{i}>V^{CP}\}\). If we then define \(\psi_{min}=\min_{j\in\mathcal{J}}\psi_{j}>V^{CP}\), we can pick some \(\alpha\in\left(0,\min(\gamma,\psi_{\min}-V^{CP})\right)\) which implies that \(\phi(\mathcal{B})\geq\psi_{min}>V^{CP}+\alpha\) for any \(\mathcal{B}\in\cup_{j\in\mathcal{J}}\overline{\mathcal{U}}_{j}\). As a consequence, we have \[\bigcup_{j\in\mathcal{J}}\overline{\mathcal{U}}_{j}\cap\mathcal{C}^{\alpha}=\emptyset.\] However, since \(\{\mathcal{U}_{i}\}_{i=1}^{I}\) is a covering of \(\mathcal{C}^{\gamma}\supset\mathcal{C}^{\alpha}\) we must have that \[\mathcal{C}^{\alpha}\subseteq\bigcup_{i\in\mathcal{I}}\overline{\mathcal{U}}_{ i}.\] This implies that for every \(\mathcal{B}\in\mathcal{C}^{\alpha}\) we have \(\mathcal{B}\in\overline{\mathcal{U}}_{i^{\prime}}\) for some \(i^{\prime}\in\mathcal{I}\). Since \(\psi_{i^{\prime}}=V^{CP}\), there is some \(\mathcal{B}^{\prime}\in\overline{\mathcal{U}}_{i^{\prime}}\) such that \(\phi(\mathcal{B}^{\prime})=V^{CP}\) which implies \(\mathcal{B}^{\prime}\in\mathcal{C}^{0}\). By \(\operatorname{diam}(\overline{\mathcal{U}}_{i})=\operatorname{diam}(\mathcal{ U}_{i})\leq\beta\) we then finally get that \(d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\leq\beta\), but since \(\mathcal{B}\) was arbitrary we also have \(d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{C}^{\alpha})\leq\beta\). In summary, we see that for every \(\beta>0\) there exists some \(\alpha>0\) such that \(d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{C}^{\alpha})\leq\beta\), which implies \(\lim_{\gamma\to 0}d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{0},\mathcal{C}^{ \gamma})=0\) by monotonicity. It turns out that the triangle inequality of (6.2) is not sufficient to control (6.1). 
In particular, we also need to handle \(\max_{\mathcal{B}^{\prime}\in\mathcal{D}^{\gamma}}\min_{\mathcal{B}\in \mathcal{C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\), which equals \(0\) if \(\mathcal{C}^{0}\subset\mathcal{D}^{\gamma}\). Fortunately, a \(\gamma\) such that this is satisfied can be attained. **Lemma 6.8**.: _Fix some \(\epsilon\)-accurate quadrature \(\{(u_{i},w_{i})\}_{i=1}^{m}\) and assume that \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\) with \(\delta=\operatorname{disp}(\mathcal{S})\) satisfies \(\epsilon,\delta\leq\frac{1}{10\sqrt{N}}\)._ _Then \(\mathcal{C}^{0}\subset\mathcal{D}^{\gamma}\) for \(\gamma=16\sqrt{N}\|C\|_{\infty}(2\epsilon+\delta)+L_{C}\delta\)._ Proof.: Take any \(\mathcal{B}\in\mathcal{C}^{0}\), we then have, for all \(u\in\mathcal{S}\), that \(B(\mathcal{B},u)\geq C(u)\), which implies \(\mathcal{B}\in\mathcal{D}^{\infty}\). Furthermore, Proposition 3.3 and Corollary 4.11 give us \[\left|V^{CP}-\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i})\right|\leq 8\sqrt{N}\|C\|_{ \infty}\epsilon.\] Combining these facts with Theorem 4.12 yields \[\sum_{i=1}^{m}w_{i}B(\mathcal{B},u_{i}) \leq V^{CP}+8\sqrt{N}\|C\|_{\infty}\epsilon\] \[\leq V^{DP}+12\sqrt{N}\|C\|_{\infty}(2\epsilon+\delta)+L_{C} \delta+8\sqrt{N}\|C\|_{\infty}\epsilon\] \[\leq V^{DP}+\gamma,\] which implies \(\mathcal{B}\in\mathcal{D}^{\gamma}\). With all our technical results, we are ready to prove the second main result of this section. **Theorem 6.9**.: _Consider a sequence, \(\{(\mathcal{S}_{n},W_{n})\}_{i=1}^{\infty}\), with \(\mathcal{S}_{n}=(u_{i}^{n})_{i=1}^{I_{n}}\) and \(W_{n}=(w_{i}^{n})_{i=1}^{I_{n}}\) such that \(\{(u_{i}^{n},w_{i}^{n})\}_{i=1}^{I_{n}}\subset\mathcal{S}_{n}\times W_{n}\) forms an \(\epsilon_{n}\)-accurate quadrature. Further assume that \(\epsilon_{n}\) and \(\delta_{n}=\operatorname{disp}(\mathcal{S}_{n})\) both converge to 0 and satisfy \(\epsilon_{n},\delta_{n}\leq\frac{1}{10\sqrt{N}}\) for all \(n\in\mathbb{N}^{+}\). For ease of notation, we will denote \(\mathcal{D}^{\gamma}(\mathcal{S}_{n},W_{n})\) by \(\mathcal{D}^{\gamma}_{n}\) for any \(\gamma\geq 0\)._ _If we define_ \[\gamma_{n}=16\sqrt{N}\|C\|_{\infty}(2\epsilon_{n}+\delta_{n})+L_{C}\delta_{n},\] _we then have that_ \[\lim_{n\to\infty}d_{\mathcal{H}_{\mathcal{H}_{\mathcal{H}}}}\Big{(}\mathcal{D }^{\gamma_{n}}_{n},\mathcal{C}^{0}\Big{)}=0,\] _and_ \[\lim_{n\to\infty}\max_{\mathcal{B}\in\mathcal{D}^{0}_{n}}\min_{\mathcal{B}^{ \prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})=0.\] _Furthermore, if there is a unique optimal solution to the continuous problem, i.e. \(\mathcal{C}^{0}=\{\mathcal{B}^{CP}\}\) for some \(\mathcal{B}^{CP}\subset\mathbb{R}^{N}\), we have_ \[\lim_{n\to\infty}d_{\mathcal{H}_{\mathcal{H}}}\Big{(}\mathcal{D}^{0}_{n}, \mathcal{C}^{0}\Big{)}=0.\] Proof.: We first note that Lemma 6.8 implies \(\mathcal{C}^{0}\subset\mathcal{D}^{\gamma_{n}}_{n}\), which yields \[d_{\mathcal{H}_{\mathcal{H}}}\Big{(}\mathcal{D}^{\gamma_{n}}_{n},\mathcal{C}^ {0}\Big{)}=\max_{\mathcal{B}\in\mathcal{D}^{\gamma_{n}}_{n}}\min_{\mathcal{B}^{ \prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime}).\] Consider therefore any \(\mathcal{B}\in\mathcal{D}^{\gamma_{n}}_{n}\). 
From Lemmas 4.6, 4.10 and 6.5 along with Proposition 6.6, we get that if we define \(\mathcal{B}^{e_{n}}\) by \[\mathcal{B}^{e_{n}}=\bigcap_{v\in S^{N-1}}\Pi^{-}\left(v,B(\mathcal{B},v)+e_{n }\right),\] where \(e_{n}=\delta_{n}(L_{C}+12\sqrt{N}(\|C\|_{\infty}+\gamma_{n}))\), we have that \(d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{e_{n}})=e_{n}\) and that \(\mathcal{B}^{e_{n}}\in\mathcal{C}^{\gamma_{n}+\beta_{n}}\) for \[\beta_{n}=32\sqrt{N}(\|C\|_{\infty}+\gamma_{n})\epsilon_{n}+e_{n}.\] As a consequence, we have by the triangle inequality that \[\min_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}( \mathcal{B},\mathcal{B}^{\prime}) \leq\min_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}( \mathcal{B}^{e_{n}},\mathcal{B}^{\prime})+d_{\mathcal{H}}(\mathcal{B},\mathcal{ B}^{e_{n}})\] \[\leq\max_{\mathcal{B}^{\prime\prime}\in\mathcal{C}^{\gamma_{n}+ \beta_{n}}}\min_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}( \mathcal{B}^{\prime\prime},\mathcal{B}^{\prime})+e_{n}\] \[=d_{\mathcal{H}}(\mathcal{C}^{\gamma_{n}+\beta_{n}},\mathcal{C}^{ 0})+e_{n}.\] By noting that \(\lim_{n\to\infty}\gamma_{n}+\beta_{n}=0\), we have \(d_{\mathcal{H}_{\mathcal{H}}}(\mathcal{C}^{\gamma_{n}+\beta_{n}},\mathcal{C}^ {0})\to 0\) by Proposition 6.7. This further yields \[\max_{\mathcal{B}\in\mathcal{D}_{n}^{m}}\min_{\mathcal{B}^{\prime}\in\mathcal{ C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\to 0,\] which yields the first part of the proof. Furthermore, the second equation follows from \[\max_{\mathcal{B}\in\mathcal{D}_{n}^{0}}\min_{\mathcal{B}^{\prime}\in\mathcal{ C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\leq\max_{\mathcal{B}\in \mathcal{D}_{n}^{m}}\min_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}d_{\mathcal{H }}(\mathcal{B},\mathcal{B}^{\prime})\to 0\] Similarly, if \(\mathcal{C}^{0}=\{\mathcal{B}^{\mathrm{CP}}\}\), \[d_{\mathcal{H}_{\mathcal{H}}}\left(\mathcal{D}_{n}^{0},\mathcal{ C}^{0}\right) =\max\left(\max_{\mathcal{B}\in\mathcal{D}_{n}^{0}}\min_{\mathcal{ B}^{\prime}\in\mathcal{C}^{0}}d_{\mathcal{H}}\left(\mathcal{B},\mathcal{B}^{ \prime}\right),\max_{\mathcal{B}^{\prime}\in\mathcal{C}^{0}}\min_{\mathcal{B} \in\mathcal{D}_{n}^{0}}d_{\mathcal{H}}\left(\mathcal{B},\mathcal{B}^{\prime} \right)\right)\] \[=\max\left(\max_{\mathcal{B}\in\mathcal{D}_{n}^{0}}d_{\mathcal{H} }\left(\mathcal{B},\mathcal{B}^{\mathrm{CP}}\right),\min_{\mathcal{B}\in \mathcal{D}_{n}^{0}}d_{\mathcal{H}}\left(\mathcal{B},\mathcal{B}^{\mathrm{CP} }\right)\right)\] \[=\max_{\mathcal{B}\in\mathcal{D}_{n}^{0}}\min_{\mathcal{B}^{\prime }\in\mathcal{C}^{0}}d_{\mathcal{H}}(\mathcal{B},\mathcal{B}^{\prime})\to 0,\] which completes the proof. This theorem tells us that the optimal solution space of the continuous problem can be effectively approximated by a near-optimal solution space of a linear program. Furthermore, if the continuous solution is unique, then the optimal discrete solution space is sufficient to approximate this solution. Lastly, Theorem 6.9 also proves that all optimal solutions of our discrete problem can be made arbitrarily close to a continuous one in terms of Hausdorff distance. This complements Theorem 4.12, and provides an alternate perspective on how our discrete problem approximates the continuous optimal solutions. ## 7. Minimal Valid Contours in 2-D In the linear program (4.4), we have several constraints in place to ensure that the output corresponds to a convex set. In two dimensions, however, there is a more efficient way of phrasing this property. 
This relates to the presence of loops in the contour as discussed in e.g. [9]. To discuss this, we consider a finite \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\subset S^{1}\) and parameterise any \(u\in\mathcal{S}\) by the unique angle \(\theta(u)\in[0,2\pi)\) such that \(u=(\cos(\theta),\sin(\theta))\). Using this parameterisation, we will consider \(\mathcal{S}\) to be an ordered set \(\mathcal{S}=(u_{i})_{i=1}^{m}\) such that \(\theta(u_{i})<\theta(u_{i+1})\). Finally, we also denote \(u_{m+1}=u_{1}\), \(u_{0}=u_{m}\), and \(\theta_{i}=\theta(u_{i})\) for any \(0\leq i\leq m+1\). In (4.4) we have defined our constraints to ensure the \(B_{i}=B(\mathcal{B},u_{i})\) for some convex \(\mathcal{B}\), to rephrase those constraints for \(N=2\) we define \[\mathcal{B}= \bigcap_{i=1}^{m}\Pi^{-}\left(u_{i},B_{i}\right), \tag{7.1}\] and examine when \(B(\mathcal{B},u_{i})=B_{i}\) holds. To do so we will denote the hyperplane \(\Pi_{j}=\Pi(u_{j},B_{j})\) and the crossing point \(X_{j}=\Pi_{j}\cap\Pi_{j-1}\) for all \(j\). We see from Figure 4(a) that when \(B(\mathcal{B},u_{i})=B_{i}\), the hyperplane \(\Pi_{i}\) supports \(\mathcal{B}\). Furthermore, we can compute the length, \(L_{i}\) of the line segment \(\mathcal{B}\cap\Pi_{i}\), which equals the distance between \(X_{i}\) and \(X_{i-1}\), by \[L_{i}= \frac{B_{i+1}-\langle u_{i+1},u_{i}\rangle B_{i}}{\sin(\theta_{i+1}- \theta_{i})}+\frac{B_{i-1}-\langle u_{i-1},u_{i}\rangle B_{i}}{\sin(\theta_{i }-\theta_{i-1})}. \tag{7.2}\] The key observation comes from Figure 4(b), where the resulting contour satisfies \(B(\mathcal{B},u_{i})<B_{i}\), and as such we have \(\mathcal{B}\cap\Pi_{i}=\emptyset\). In particular, we note that \(X_{i}\) and \(X_{i-1}\) switch sides. Since Equation (7.2) is based on projecting \(X_{i}\) and \(X_{i-1}\) along \(\Pi_{i}\), if we were to compute \(L_{i}\) by (7.2), we would see that while it still provides the distance between \(X_{i}\) and \(X_{i-1}\), the sign of \(L_{i}\) is now negative. This observation tells us that the condition \(B(\mathcal{B},u_{i})=B_{i}\), for all \(i\), is equivalent to \(L_{i}\geq 0\) for all \(i\). Using this equivalence we can restate our linear program for a quadrature, \((\mathcal{S},W)\), as follows. minimise \[\sum_{i=1}^{m}w_{i}B_{i}\] (7.3) subject to \[L_{i}\geq 0 i=1,2,\ldots,m\] \[B_{i}\geq C(u_{i}) i=1,2,\ldots,m\] \[u_{i}\in S^{N-1} i=1,2,\ldots,m\] \[C(u_{i}),B_{i}\in\mathbb{R} i=1,2,\ldots,m\] Figure 5. Two cases for \(\mathcal{B}\) (purple), defined by (7.1). The condition \(\mathcal{B}\cap\Pi_{i}\neq\emptyset\) fails based on the orientation of \(X_{i}\) and \(X_{i-1}\) (red). Here, \(L_{i}\) is defined as in (7.2). Furthermore, the restrictions \(L_{i}\geq 0\) and \(B_{i}\geq C(u_{i})\) for all \(i\) imply that a \(\mathcal{B}\) defined by \(\mathcal{B}=\bigcap_{i=1}^{m}\Pi^{-}\left(u_{i},B_{i}\right)\) must be a convex set satisfying \(B(\mathcal{B},u_{i})=B_{i}\geq C_{i}\). As such there exists a collection of points \(\{p_{i}\}_{i=1}^{m}\subset\mathcal{B}\) such that \(\langle p_{i},u_{i}\rangle\geq C_{i}\), implying that \(\left(\left(p_{i}\right)_{i=1}^{m},\,\left(B_{i}\right)_{i=1}^{m}\right)\) is a feasible solution of our original linear problem (4.4). This, along with our earlier discussion, shows that (7.3) is equivalent to (4.4), but phrased with far less numerically demanding constraints. However, due to no longer explicitly storing the points \(\left(p_{i}\right)_{i=1}^{m}\), we are now limited in our explicit construction of sets. 
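For concreteness, the reduced program (7.3) can be handed directly to an off-the-shelf LP solver. The following Python sketch (our illustration, not part of the paper) sets it up for uniformly spaced directions with weights \(w_{i}=1/m\) using `scipy.optimize.linprog`; the outreach values `C` below are an arbitrary placeholder, and the positive factor \(1/\sin(2\pi/m)\) in \(L_{i}\) is dropped since it does not affect the sign constraint.

```python
# Minimal sketch of the 2-D program (7.3) for uniformly spaced directions.
# The outreach values C are an arbitrary illustrative choice.
import numpy as np
from scipy.optimize import linprog

m = 180                                    # number of directions on S^1
theta = 2 * np.pi * np.arange(m) / m       # uniformly spaced angles
w = np.full(m, 1.0 / m)                    # quadrature weights w_i = 1/m

C = 1.0 + 0.3 * np.cos(theta) ** 2         # placeholder outreach values C(u_i)

# L_i >= 0 written as -L_i <= 0; for uniform spacing
# L_i is proportional to B_{i+1} + B_{i-1} - 2 cos(2*pi/m) B_i.
dtheta = 2 * np.pi / m
A_ub = np.zeros((m, m))
for i in range(m):
    A_ub[i, (i + 1) % m] -= 1.0
    A_ub[i, (i - 1) % m] -= 1.0
    A_ub[i, i] += 2.0 * np.cos(dtheta)
b_ub = np.zeros(m)

# B_i >= C(u_i) is imposed through the variable bounds.
res = linprog(c=w, A_ub=A_ub, b_ub=b_ub,
              bounds=[(C[i], None) for i in range(m)], method="highs")
B = res.x
print("optimal objective (approximate mean width):", res.fun)
```

The supporting lines \(\Pi(u_{i},B_{i})\) of the returned \(B_{i}\) can then be intersected pairwise to recover the crossing points \(X_{i}\), and hence the contour \(\mathcal{B}^{\prime}=\bigcap_{i}\Pi^{-}(u_{i},B_{i})\).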
We can no longer define \(\mathcal{B}^{*}=\operatorname{convh}\left(\{p_{i}^{*}\}_{i=1}^{m}\right)\) from Proposition 4.2 based on our linear program outputs, and as such exclusively rely on \(\mathcal{B}^{\prime}=\bigcap_{i=1}^{m}\Pi^{-}(u_{i},B_{i})\). This construction, as mentioned in Remark 4.5 and shown in Example 5.1, is more conservative and therefore a safer choice than \(\mathcal{B}^{*}\). As such we do not lose much in considering this more efficient method. Nevertheless, since \(L_{i}\) represents the length of the \(i\)'th side this restriction allows us to consider an alternative linear program where we instead minimise \(\sum_{i}L_{i}\), which equals the circumference of \(\mathcal{B}^{\prime}\). Optimising in mean width and circumference turns out to be equivalent in the case where \(\mathcal{S}\) is uniformly distributed and \(w_{i}=1/m\) for all \(i\). When this is the case we have \(\theta_{i}-\theta_{i-1}=2\pi/m\) for all \(i=2,3,\ldots,m\) and \(2\pi+\theta_{1}-\theta_{m}=2\pi/m\), which yields \[\sum_{i=1}^{m}L_{i} =\sum_{i=1}^{m}\frac{B_{i+1}-\langle u_{i+1},u_{i}\rangle B_{i}} {\sin(\theta_{i+1}-\theta_{i})}+\frac{B_{i-1}-\langle u_{i-1},u_{i}\rangle B_ {i}}{\sin(\theta_{i}-\theta_{i-1})}\] \[=\sum_{i=1}^{m}\frac{B_{i+1}-\cos(2\pi/m)B_{i}}{\sin(2\pi/m)}+ \frac{B_{i-1}-\cos(2\pi/m)B_{i}}{\sin(2\pi/m)}\] \[=\frac{1}{\sin(2\pi/m)}\left(\sum_{i=1}^{m}B_{i+1}+\sum_{i=1}^{m} B_{i-1}-2\sum_{i=1}^{m}\cos(2\pi/m)B_{i}\right)\] \[=(2m)\frac{1-\cos(2\pi/m)}{\sin(2\pi/m)}\sum_{i=1}^{m}B_{i}/m.\] Since any optimal \((B_{i})_{i=1}^{m}\) would minimise \(\sum_{i}w_{i}B_{i}\) it must also minimise \(\sum_{i}L_{i}\), making them equivalent objective functions in this case. For a uniformly distributed \(\mathcal{S}\), we have that the optimal weights, \(W=\{w_{i}\}_{i=1}^{m}\), \(w_{i}=1/m\), yield at least a \(\frac{\pi}{2m}\)-accurate quadrature. Using the quadrature \((\mathcal{S},W)\) will therefore allow us to optimise both mean width and circumference with optimal accuracy. In addition, we can use the efficient formulation of (7.3) which significantly increases the computation speed. ## 8. Approximating the mean width In this section, we approximate the mean width of a convex shape using point samples in the sense of Definition 3.6. In two dimensions, the simplest example of an \(\epsilon\)-accurate quadrature is the uniform quadrature \(\left\{\left(\left(\cos\left(\frac{2\pi i}{m}\right),\sin\left(\frac{2\pi i}{ m}\right)\right),\frac{1}{m}\right)\right\}_{i=1}^{m}\subset S^{N-1}\). To see this, we bound the difference (3.1). By symmetry, we can split the integral into \(2m\) identical segments of \(\frac{2\pi}{2m}\) radians each, where each part has one endpoint on a quadrature point. Using the fact that \(f\) is \(L\)-Lipschitz, the approximation error is hence at most \[\frac{2m}{2\pi}\int_{0}^{\frac{2\pi}{2m}}L\sqrt{\sin^{2}(\theta)+(1-\cos(\theta ))^{2}}\,d\theta=\frac{8Lm}{\pi}\sin^{2}\left(\frac{\pi}{4m}\right)\leq\frac{ \pi}{2m}L.\] Hence, the uniform quadrature in two dimensions is \(\frac{\pi}{2m}\)-accurate. It is also straightforward to see that the uniform quadrature has dispersion \(2\sin\left(\frac{\pi}{2m}\right)<\frac{\pi}{m}\). As concrete examples of accurate quadratures in general dimensions, we may use the composite midpoint rule on the cubed hypersphere, which is defined as follows. Let \(s\in\mathbb{N}^{+}\) be the number of subdivisions per dimension. 
Let \(U=\left\{\frac{2i-s-1}{s}\colon i\in\{1,\dots,s\}\right\}\) denote \(s\) uniformly distributed points on the segment \([-1,1]\). Then we can define a grid on the faces orthogonal on the \(i\)th dimension of the hypercube by taking Cartesian products \[v_{i}\coloneqq\underbrace{U\times\cdots\times U}_{i-1\text{ times}}\times\{-1,1\}\times\underbrace{U\times\cdots\times U}_{N-i\text{ times}}.\] The combined grid becomes \(V\coloneqq\bigcup_{i=1}^{N}v_{i}\). We number the points in \(V\) from \(1\) to \(m\coloneqq 2Ns^{N-1}\). Then, \(\left\{\left(\frac{1}{\|V_{i}\|_{2}}V_{i},\frac{1}{\|V_{i}\|_{2}}\left(\frac{ 2}{s}\right)^{N-1}\right)\right\}_{i=1}^{m}\subset S^{N-1}\) defines a quadrature which we call the _cubed hypersphere quadrature_ with \(s\) subdivisions. **Proposition 8.1**.: _The cubed hypersphere quadrature with \(s\) subdivisions is \(\frac{N2^{N}\sqrt{N-1}}{s}\)-accurate and has dispersion bounded by \(\frac{\sqrt{N-1}}{s}\)._ The proof is deferred to the end of this section. Once we have an \(\epsilon\)-accurate quadrature, we can transform any set \(\{u\}_{i=1}^{m}\) with small dispersion into an accurate quadrature using the following. **Proposition 8.2**.: _Let \(\mathcal{S}=\{u_{i}\}_{i=1}^{m}\subset S^{N-1}\) have dispersion \(\delta=\operatorname{disp}(\mathcal{S})\), and \(\{(v_{j},z_{j})\}_{j=1}^{m}\) be an \(\epsilon\)-accurate quadrature. For \(j=1,\dots,m\), let \(p_{j}\) be the index of a point in \(\mathcal{S}\) closest to \(v_{j}\). Let \(p_{i}^{-1}\coloneqq\{j\in\{1,\dots,m\}\colon p_{j}=i\}\) be a set of indices of points in \(\{v_{j}\}_{j=1}^{m}\) closest to \(u_{i}\). For \(i=1,\dots,m\), let \(w_{i}=\sum_{j\in p_{i}^{-1}}z_{j}\). Then \(\{(u_{i},w_{i})\}_{i=1}^{m}\) is a \(\delta+\epsilon(1+\delta)\)-accurate quadrature._ Proof.: Let \(f\colon S^{N-1}\to\mathbb{R}\) be \(L\)-Lipschitz and have absolute value bounded by \(\left\|f\right\|_{\infty}\), then since \(\{(v_{j},z_{j})\}_{j=1}^{m}\) is an \(\epsilon\)-accurate quadrature, we have \[\left|\int_{S}f(u)\,d\sigma(u)-\sum_{j=1}^{m}f(v_{j})z_{j}\right|\leq\epsilon( L+\left\|f\right\|_{\infty}). \tag{8.1}\] Expanding the definitions of \(w_{i}\) and \(p^{-1}\), we get \[\sum_{i=1}^{m}f(u_{i})w_{i} =\sum_{i=1}^{m}f(u_{i})\sum_{j\in p_{i}^{-1}}z_{j}=\sum_{i=1}^{m} \sum_{j\in p_{i}^{-1}}f(v_{j})z_{j}+z_{j}(f(u_{i})-f(v_{j}))\] \[=\sum_{j=1}^{m}f(v_{j})z_{j}+\sum_{j=1}^{m}z_{j}(f(u_{p_{j}})-f(v _{j})).\] The second term can be bounded by \(f\) being Lipschitz continuous, giving \[\left|\sum_{j=1}^{m}z_{j}(f(u_{p_{j}})-f(v_{j}))\right|\leq\sum_{j=1}^{m}z_{j}L \left\|u_{p_{j}}-v_{j}\right\|_{2}.\] Furthermore, by the definition of \(p_{j}\) as the index of a closest points in \(\mathcal{S}\), and the definition of \(\operatorname{disp}(\mathcal{S})\), we have \(\left\|u_{p_{j}}-v_{j}\right\|_{2}\leq\delta\) for \(j=1,\ldots,m\). Finally, we can bound \(\sum_{j=1}^{m}z_{j}\leq 1+\epsilon\) by inserting the constant function \(f=1\) into (8.1). Combining everything, we get \[\left|\int_{S}f(u)\,d\sigma(u)-\sum_{j=1}^{m}f(u_{j})w_{j}\right| \leq\left|\int_{S}f(u)\,d\sigma(u)-\sum_{j=1}^{m}f(v_{j})z_{j} \right|+\delta L(1+\epsilon)\] \[\leq\epsilon(L+\left\|f\right\|_{\infty})+\delta L(1+\epsilon) \leq(\delta+\epsilon(1+\delta))(L+\left\|f\right\|_{\infty}).\] Which is the definition of being a \(\delta+\epsilon(1+\delta)\)-accurate quadrature. 
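The weight transfer of Proposition 8.2 is straightforward to implement. The sketch below (ours, with arbitrarily chosen sample sizes and a randomly drawn direction set) hands each reference weight \(z_{j}\) to the sample direction nearest to \(v_{j}\), using the two-dimensional uniform quadrature discussed above as the reference.

```python
# Illustrative implementation of the weight transfer in Proposition 8.2.
import numpy as np

def transfer_weights(S, V, Z):
    """S: (m, N) sample directions, V: (M, N) reference nodes, Z: (M,) reference weights."""
    d2 = ((V[:, None, :] - S[None, :, :]) ** 2).sum(axis=2)  # (M, m) squared distances
    p = d2.argmin(axis=1)                                    # p_j: closest sample to v_j
    W = np.zeros(len(S))
    np.add.at(W, p, Z)                                       # w_i = sum of z_j with p_j = i
    return W

rng = np.random.default_rng(0)
M, m = 2000, 100

# Reference: the uniform quadrature on S^1 with weights 1/M.
phi_ref = 2 * np.pi * np.arange(M) / M
V = np.column_stack([np.cos(phi_ref), np.sin(phi_ref)])
Z = np.full(M, 1.0 / M)

# A sample direction set (here random, purely for illustration).
phi_s = np.sort(rng.uniform(0, 2 * np.pi, m))
S = np.column_stack([np.cos(phi_s), np.sin(phi_s)])

W = transfer_weights(S, V, Z)
print(W.sum())   # close to 1, consistent with inserting f = 1 into (8.1)
```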
### Proof of Proposition 8.1 Proof.: Let \(F_{i}\) denote the \(i\)th of the \(2N\) faces of t he hypercube in \(N\) dimensions and let \(S_{i}\) be its projection on the unit hypersphere \(S^{N-1}\). Then, \[\int_{S^{N-1}}f(u)\,d\sigma(u)=\sum_{i=1}^{2N}\int_{S_{i}}f(u)\,d\sigma(u).\] We perform a transformation of variables \(u=\frac{1}{\left\|v\right\|_{2}}v\) to transform the integral over \(S_{i}\) to one over \(F_{i}\). \[\int_{S_{i}}f(u)\,d\sigma(u)=\int_{F_{i}}f\left(\frac{1}{\left\|v\right\|_{2} }v\right)\frac{1}{\left\|v\right\|_{2}}\,dv. \tag{8.2}\] The cubed hypersphere quadrature corresponds to approximating the integral (8.2) with the composite midpoint rule. That is, the integral is divided into \(s^{N-1}\) hypercubes with side lengths \(\frac{2}{s}\), each approximated by the value at their midpoint. Let the \(j\)th of these hypercubes be called \(F_{ij}\). \[\int_{F_{i}}f\left(\frac{1}{\left\|v\right\|_{2}}v\right)\frac{1}{\left\|v \right\|_{2}}\,dv=\sum_{j=1}^{s^{N-1}}\int_{F_{ij}}f\left(\frac{1}{\left\|v \right\|_{2}}v\right)\frac{1}{\left\|v\right\|_{2}}\,dv.\] Note that for all \(v\in F_{i}\), we have \(\left\|v\right\|_{2}\geq 1\). So since \(f\) is \(L\)-Lipschitz, \(f\left(\frac{1}{\left\|v\right\|_{2}}v\right)\) is also \(L\)-Lipschitz for \(v\in F_{i}\). Additionally, \(\frac{1}{\left\|v\right\|_{2}}\) is \(1\)-Lipschitz for \(v\in F_{i}\). Hence, for \(u,v\in F_{i}\), \[\left|f\left(\frac{1}{\left\|u\right\|_{2}}u\right)\frac{1}{\left\| u\right\|_{2}}-f\left(\frac{1}{\left\|v\right\|_{2}}v\right)\frac{1}{\left\|v \right\|_{2}}\right|\] \[\leq \left|f\left(\frac{1}{\left\|u\right\|_{2}}u\right)-f\left(\frac{ 1}{\left\|v\right\|_{2}}v\right)\right|\frac{1}{\left\|u\right\|_{2}}+\left|f \left(\frac{1}{\left\|v\right\|_{2}}v\right)\left(\frac{1}{\left\|u\right\|_{ 2}}-\frac{1}{\left\|v\right\|_{2}}\right)\right|\] \[\leq \left\|u-v\right\|_{2}(L+\left\|f\right\|_{\infty}).\] So \(f\left(\frac{1}{\left\|v\right\|_{2}}v\right)\frac{1}{\left\|v\right\|_{2}}\) is Lipschitz over \(F_{i}\) with constant \(L+\left\|f\right\|_{\infty}\), where \(\left\|f\right\|_{\infty}\) is an upper bound on the absolute value of \(f\). Consider the \(j\)th hypercube, let its midpoint be \(V_{j}\). Inside the hypercube, the maximum distance to the midpoint is \(\frac{\sqrt{N-1}}{s}\), so we get an error bounded by \(\frac{\sqrt{N-1}}{s}(L+\left\|f\right\|_{\infty})\). Specifically, \[\left|\int_{F_{ij}}f\left(\frac{1}{\left\|v\right\|_{2}}v\right) \frac{1}{\left\|v\right\|_{2}}\,dv-\int_{F_{ij}}f\left(\frac{1}{\left\|V_{j} \right\|_{2}}V_{j}\right)\frac{1}{\left\|V_{j}\right\|_{2}}\,dv\right|\] \[\leq\int_{F_{ij}}\frac{\sqrt{N-1}}{s}(L+\left\|f\right\|_{\infty} )\,dv.\] Next, note that the volume of \(F_{ij}\) is \(\left(\frac{2}{s}\right)^{N-1}\), so \[\int_{F_{ij}}f\left(\frac{1}{\left\|V_{i}\right\|_{2}}V_{i}\right)\frac{1}{ \left\|V_{i}\right\|_{2}}\,dv=f\left(\frac{1}{\left\|V_{i}\right\|_{2}}V_{i} \right)\ \frac{1}{\left\|V_{i}\right\|_{2}}\left(\frac{2}{s}\right)^{N-1}=f(u_{i})w_{ i},\] where \(u_{i}\) and \(w_{i}\) are the cubed hypersphere quadrature. 
Hence the \(i\)th hypercube contributes an error at most \[\left|\int_{F_{ij}}f\left(\frac{1}{\left\|v\right\|_{2}}v\right)\frac{1}{ \left\|v\right\|_{2}}\,dv-f(u_{i})w_{i}\right|\leq\frac{\sqrt{N-1}}{s}(L+ \left\|f\right\|_{\infty})\left(\frac{2}{s}\right)^{N-1}.\] Summing over the \(m=2Ns^{N-1}\) hypercubes, we get an error at most \[\left|\int_{S}f(u)\,d\sigma(u)-\sum_{i=1}^{m}f(u)w_{i}\right| \leq 2Ns^{N-1}\frac{\sqrt{N-1}}{s}(L+\left\|f\right\|_{\infty}) \left(\frac{2}{s}\right)^{N-1}\] \[=\frac{N2^{N}\sqrt{N-1}}{s}(L+\left\|f\right\|_{\infty}).\] Hence the quadrature is \(\frac{N2^{N}\sqrt{N-1}}{s}\)-accurate. The dispersion can also be bounded by considering the hypercubes \(F_{ij}\). Again, all points in the \(j\)th hypercube of the \(i\)th face are within a distance \(\frac{\sqrt{N-1}}{s}\) from the midpoint. Distances in the hypercube only get smaller when they are projected onto the unit hypersphere \(S^{N-1}\), and the projected midpoints are exactly the cubed hypersphere quadrature points. Hence, the dispersion is bounded by \(\frac{\sqrt{N-1}}{s}\). ## 9. Conclusion and Final Remarks In conclusion, this paper has presented a novel algorithm for the computation of valid environmental contours. The proposed algorithm ensures that the contours satisfy the outreach requirements while maintaining a minimal mean width. We have also presented a streamlined algorithm for two dimensions, which improves computation speed for this specific case. Both of the considered methods have been illustrated by numerical examples. Furthermore, as these methods rely on numerical integration, we also provided a generic construction for making arbitrarily accurate quadratures. Lastly, rigorous examination of convergence and existence of solutions has been conducted, ensuring the reliability and accuracy of the proposed methods. Convergence properties have been thoroughly analyzed, including the convergence of the optimal approximate mean width to the optimal mean width, as well as convergence in terms of the Hausdorff metric. These analyses ensure that any approximate solution will give an arbitrarily near-optimal contour, and that any optimal contour can be found by searching the near-optimal approximations. ## Acknowledgements The authors acknowledge financial support by the Research Council of Norway under the SCROLLER project, project number 299897 (Asmund Hausken Sande).
2306.16768
Stealth Ellis Wormholes in Horndeski theories
In this work we are revisiting the well studied Ellis wormhole solution in a generalized Horndeski theory motivated from the Kaluza-Klein compactification procedure of the more fundamental higher dimensional Lovelock gravity. We show that the Ellis wormhole is analytically supported by a gravitational theory with a non-trivial coupling to the Gauss-Bonnet term and we expand upon this notion by introducing higher derivative contributions of the scalar field. The extension of the gravitational theory does not yield any back-reacting component on the spacetime metric, which establishes the Ellis wormhole as a stealth solution in the generalized framework. We propose two simple mechanisms that dress the wormhole with an effective ADM mass. The first procedure is related to a conformal transformation of the metric which maps the theory to another Horndeski subclass, while the second one is inspired by the spontaneous scalarization effect on black holes.
Athanasios Bakopoulos, Nikos Chatzifotis, Cristian Erices, Eleftherios Papantonopoulos
2023-06-29T08:12:17Z
http://arxiv.org/abs/2306.16768v3
# Stealth Ellis Wormholes in Horndeski theories ###### Abstract In this work we are revisiting the well studied Ellis wormhole solution in a generalized Horndeski theory motivated from the Kaluza-Klein compactification procedure of the more fundamental higher dimensional Lovelock gravity. We show that the Ellis wormhole is analytically supported by a gravitational theory with a non-trivial coupling to the Gauss-Bonnet term and we expand upon this notion by introducing higher derivative contributions of the scalar field. The extension of the gravitational theory does not yield any back-reacting component on the spacetime metric, which establishes the Ellis wormhole as a stealth solution in the generalized framework. We propose two simple mechanisms that dress the wormhole with an effective ADM mass. The first procedure is related to a conformal transformation of the metric which maps the theory to another Horndeski subclass, while the second one is inspired by the spontaneous scalarization effect on black holes. ###### Contents * I Introduction * II Stealth Ellis Wormhole in Gauss-Bonnet gravity * III Wormholes in Gauss-Bonnet gravity with Higher order derivative couplings * III.1 The Stealth Ellis configuration * III.2 Massive wormholes via conformal transformations * IV Rescalarization of the Ellis wormhole: Massive wormholes * V Conclusions ## I Introduction Wormholes are one of the simplest and most exotic static solutions of Einstein's equations. The throat of a wormhole is able to connect two space-times, or sometimes two distant parts of the same universe. Conceived initially as a hypothetical structure, and after Flamm provided its mathematical notion [1], wormholes were rediscovered by Einstein and Rosen as a bridge-like structure known as the Einstein-Rosen bridge [2]. After Wheeler first coined the term "wormhole' [3], Ellis proposed a geodesically complete wormhole or 'drainhole' by introducing a scalar field minimally coupled to gravity [4]. However, it was not until the seminal article from Morris and Thorne that the first humanly traversable wormhole was mathematically derived [5]. From asymptotically flat [6; 7; 8] to asymptotically (anti)de-Sitter wormholes [9], a major drawback is inherent in these solutions in the theory of general relativity (GR). The so-called flare-out condition, which is the condition that confers and maintains the geometrical structure of the wormhole, is non-compatible with the Null Energy Condition (NEC), violating all the other conditions. Generally, this exotic matter content, undesirable in the macroscopic world, can however be avoided. It turned out that a scalar field of phantom nature, i.e. with an opposite sign in front of its kinetic term, was sufficiently exotic to support traversable wormholes [4; 10]. This long quest for physically meaningful wormhole solutions has found a prolific theoretical framework in modified theories of gravity. A number of wormhole solutions have been derived in modified theories of gravity. A remarkable result was found in [11], where the authors construct a traversable wormhole without needing any form of exotic matter. This is in fact a consequence of the introduction of quadratic terms in the curvatures at the level of the action, which allows to satisfy the flare-out conditions without spoiling the NEC. In this context, exotic matter is reduced or even not needed when wormholes solutions are realized on alternative theoretical frameworks. 
Traversable wormholes have been found in many modified gravitational theories, such as in \(f(R)\) gravity [12; 13; 14; 15], \(f(R,T)\) gravity [16], modified teleparallel gravity [17], non-minimal couplings [18; 19], extra fundamental fields [20], Einstein-Gauss-Bonnet gravity [21; 22; 23], Einstein-Scalar-Gauss-Bonnet gravity [24; 25], Brans-Dicke theory [26; 27; 28], Randal-Sundrum model [29], braneworld configurations [30], metric-Palatini gravity [31; 32], with thin shells [33; 34; 35; 36; 37], Einsteinian Cubic gravity [38], Einstein-Dirac-Maxwell [39], massive gravity [40] and by disformal transformations [41; 42; 43; 44; 45], just to name a few during the last decade. For further details, the reader may refer to a comprehensive review in [46]. During the 70's Horndeski built the most general scalar-tensor theory with equations of motion of second order for both the metric and for the scalar field [47], therefore avoiding Ostrogradsky ghosts. Although Horndeski theory was forgotten for decades, great interest resurfaced in the community when it was proved that the decoupling limit of the DGP model [48] corresponds to the Galileon theory [49]. This theory is symmetric under galilean transformation --hence its name-- of the scalar field \(\phi\to\phi+c+b_{\mu}x^{\mu}\) (\(c\), \(b_{\mu}\) being constant) and although it is constructed in flat space, possesses novel properties. A remarkable one is its Vainshtein mechanism having useful applications in cosmology since it allows the effects of the scalar field to be significant at cosmological scales but not at smaller ones [50]. Then, the covariantization of this theory was achieved [51; 52] leaving behind the shift symmetry, and getting the so called generalized Galileon theory which was proven to be equivalent to the Horndeski theory [53]. Moreover, in four dimensions, Horndeski theory can be obtained through the compactification process of the higher dimensional Lovelock theory [54]. The generality of Horndeski theory provide a number of sectors with remarkable properties. For a complete review refer to [55]. As expected, the majority of analytical solutions for compact objects in Horndeski theories have been obtained within the subclass of theories that enjoy parity symmetry \(\phi\to-\phi\) and/or shift symmetry \(\phi\to\phi+\mathrm{cst}\)[43; 56; 57; 58; 59; 60; 61; 62; 63]. However, under physical grounds there are no compelling reasons to restrict ourselves to theories with such symmetries, and in consequence, this freedom opens the possibility to explore spacetime geometries with quite different properties than GR [64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78]. In this scenario, there is a potential risk of dealing with much more involved field equations, precluding the existence of analytical solutions. It has been found that such a theory [79] is derived from the generalization of the potentials appearing in Kaluza-Klein compactifications originating from Lovelock theory [54]. This particular sector of Horndeski theories can be identified as effective string theory actions with an IR logarithmic running for the dilaton, higher order corrections and generalised Liouville type potentials. Motivated by a lack of such symmetries, in this work, we proceed to study wormhole solutions in a generalized Horndeski theory framework. Our first approach is to consider a phantom scalar field which is non-minimally coupled to the Gauss-Bonnet (GB) term. 
Solving the field equations of this theory and demanding that the redshift function is constant, we find that the Ellis wormhole is a stealth solution of the theory. The charge of the scalar field determines the throat of the Ellis wormhole, as expected. It is well known that in four dimensions the GB term is a topological term. In our case, despite the fact that the scalar field is non-trivially coupled to the GB term, there is no contribution of the GB term to the background metric. Indeed, by calculating the corresponding topological current of the GB term in a completely general spherically symmetric metric ansatz, we show that if the redshift function is constant, the GB term vanishes identically. It is in this sense that the Ellis wormhole can be supported as a stealth solution of the theory. Following up on this, we extend our gravitational action by introducing higher derivative terms, such as a non-trivial derivative coupling to the Einstein tensor and higher order derivative couplings of the scalar field. These terms can be generated in the Horndeski theory from the generalization of the potentials in the Kaluza-Klein compactification, originating from higher dimensional Lovelock gravity, as previously mentioned. We found that despite the presence of non-trivial derivative couplings, the Ellis wormhole is still supported by the extended theory, thus appearing as an effective stealth solution. We then turn our attention on whether the stealth Ellis wormhole in the presence of higher curvature terms and higher derivatives of the scalar field can be extended beyond its stealth nature. Firstly, we show that performing a conformal transformation, which maps us to a different Horndeski subclass, a massive wormhole solution can be generated in the conformal frame. Thus, the Ellis solution, originally a non-gravitating object, acquires an effective ADM mass. On the other hand, constraining ourselves onto the original gravitational theory, we propose a new method of obtaining massive wormhole solutions. In particular, since an actual Birkhoff theorem is absent in the framework of the Horndeski theories, one may construct novel solutions in a perturbative manner by allowing the higher order terms to back react on the stealth solution. This procedure resembles the spontaneous scalarization effect on black holes [80; 81]. However, since the stealth wormholes already carry a non trivial scalar charge we distinguish ourselves from this phenomenon by coining the term "rescalarization". The paper is organized as follows: In Sec. II we extract the analytic stealth wormhole solution for Scalar-Gauss-Bonnet gravity. Then, in Sec. III, it is shown that this configuration is admitted as a stealth solution of the more generic theory obtained as the generalization of the potentials in the Kaluza-Klein compactification of higher dimensional Lovelock gravity. Additionally, the massive configuration is derived through a suitable conformal transformation and some geometric properties are described. Sec. IV is devoted to construct the massive wormhole of the theory by solving the field equations perturbatively around the stealth wormhole. We review our results and conclude in Sec. V. 
## II Stealth Ellis Wormhole in Gauss-Bonnet gravity It is well known that the Ellis wormhole solution described by the line element \[ds^{2}=-dt^{2}+dx^{2}+(x^{2}+x_{0}^{2})d\Omega^{2}, \tag{1}\] where \(x_{0}\) denotes the throat radius of the wormhole, is a solution to the simple extension of the Einstein-Hilbert action by the addition of a purely phantom scalar field. The fact that the corresponding redshift function of the geometry is vanishing, which implies a zero mass for the wormhole, enables one to re-extract the solution in the context of a more general scalar-tensor theory. Our first approach shall be to consider the extension of the gravitational theory with a Gauss-Bonnet coupling term, i.e., \[S=\int d^{4}x\frac{\sqrt{|g|}}{16\pi G}\left[R-2\delta(\partial\Phi)^{2}+V[ \Phi]\mathcal{G}\right]\, \tag{2}\] where \(\mathcal{G}=R_{\alpha\beta\mu\nu}R^{\alpha\beta\mu\nu}-4R_{\mu\nu}R^{\mu\nu}+R ^{2}\) is the Gauss-Bonnet term. The \(\delta\) component of the gravitational theory may assume values of \(\pm 1\) in order to differentiate between a phantom and canonical nature of the matter content, while \(V[\Phi]\) is a coupling function of the scalar field to the Gauss-Bonnet term to be determined by the assumption that the Ellis wormhole is a solution to the theory. The corresponding equations of motion by the variation of the theory with respect to the metric and the matter content are \[G_{\mu\nu}=T_{\mu\nu}+\Theta_{\mu\nu} \tag{3}\] \[\Box\Phi=-\frac{\delta}{4}\dot{V}(\Phi)\mathcal{G}\, \tag{4}\] where \[T_{\mu\nu}=2\delta\left(\nabla_{\mu}\Phi\nabla_{\nu}\Phi-\frac{1}{2}g_{\mu\nu} (\partial\Phi)^{2}\right)\, \tag{5}\] is the stress-energy tensor associated with the kinetic term of the matter content, and, \[\Theta_{\mu\nu}=-2\nabla_{\alpha}\left[P_{\ \mu\ \nu}^{\alpha\ \beta}\nabla_{ \beta}V(\Phi)+(\mu\leftrightarrow\nu)\right], \tag{6}\] with \(P_{\alpha\beta\gamma\delta}=-\frac{1}{4}\epsilon_{\alpha\beta\mu\nu}R^{\mu\nu \kappa\lambda}\epsilon_{\kappa\lambda\gamma\delta}\) denoting the double dual of the Riemann tensor, is the stress-energy tensor associated with the variation of the Gauss-Bonnet extension and \(\dot{V}(\Phi)\) denotes the derivative of \(V(\Phi)\) with respect to \(\Phi\). The addition of the Gauss-Bonnet term was not accidental. Indeed, it is well known that in four dimensions, the Gauss-Bonnet term is purely topological and, as such, can be expressed in terms of the divergence of the topological current \[K^{\xi}=-\varepsilon^{\xi\alpha\beta\gamma}\varepsilon_{\sigma\ \mu}^{\ \rho\ \nu}\left[\Gamma_{\ \alpha\rho}^{\sigma}\partial_{\beta}\Gamma_{\ \gamma\nu}^{\mu}+\frac{2}{3}\Gamma_{\ \alpha\rho}^{\sigma}\Gamma_{\ \beta\lambda}^{\mu}\Gamma_{\ \gamma\nu}^{\lambda}\right] \tag{7}\] Assuming a most general spherical symmetric ansatz of the form \[ds^{2}=-F(x)dt^{2}+\frac{dx^{2}}{F(x)}+r^{2}(x)d\Omega^{2}, \tag{8}\] one may immediately verify that the topological current of (7) reads \[K^{\xi}=-\left\{0,\frac{2F^{\prime}(x)\left(1-2F(x)r^{\prime}(x)^{2}\right)}{r (x)^{2}},-\frac{2\cot(\theta)F^{\prime\prime}(x)}{r(x)^{2}},0\right\}\, \tag{9}\] where prime denotes the derivative with respect to \(x\). Since \(\nabla_{\xi}K^{\xi}={\cal G}\), constraining ourselves to a vanishing redshift function, i.e. \(F(x)=1\), the Gauss-Bonnet term will always vanish. 
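This vanishing can also be checked by brute force. The following sympy sketch (our own verification, not part of the original derivation) computes the curvature tensors of the ansatz (8) with \(F(x)=1\) for an arbitrary \(r(x)\) and evaluates \(\mathcal{G}\) directly.

```python
# Symbolic check that the Gauss-Bonnet invariant vanishes for
# ds^2 = -dt^2 + dx^2 + r(x)^2 dOmega^2, i.e. the ansatz (8) with F(x) = 1.
import sympy as sp

t, x, th, ph = sp.symbols('t x theta phi')
coords = [t, x, th, ph]
r = sp.Function('r')(x)
n = 4

g = sp.diag(-1, 1, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                                       + sp.diff(g[d, c], coords[b])
                                       - sp.diff(g[b, c], coords[d]))
                         for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor R^a_{bcd}
def riemann(a, b, c, d):
    expr = sp.diff(Gam[a][b][d], coords[c]) - sp.diff(Gam[a][b][c], coords[d])
    expr += sum(Gam[a][c][e] * Gam[e][b][d] - Gam[a][d][e] * Gam[e][b][c]
                for e in range(n))
    return sp.simplify(expr)

Riem = [[[[riemann(a, b, c, d) for d in range(n)] for c in range(n)]
         for b in range(n)] for a in range(n)]
Rlow = [[[[sum(g[a, e] * Riem[e][b][c][d] for e in range(n))
           for d in range(n)] for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor and Ricci scalar
Ric = [[sp.simplify(sum(Riem[a][b][a][d] for a in range(n)))
        for d in range(n)] for b in range(n)]
Rsc = sp.simplify(sum(ginv[b, d] * Ric[b][d] for b in range(n) for d in range(n)))

# The metric is diagonal, so raising an index is a simple rescaling.
Riem2 = sum(ginv[a, a] * ginv[b, b] * ginv[c, c] * ginv[d, d] * Rlow[a][b][c][d]**2
            for a in range(n) for b in range(n) for c in range(n) for d in range(n))
Ric2 = sum(ginv[a, a] * ginv[b, b] * Ric[a][b]**2 for a in range(n) for b in range(n))

GB = sp.simplify(Riem2 - 4 * Ric2 + Rsc**2)
print(GB)   # -> 0 for any r(x)
```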
Therefore, fixing the \(F(x)\) degree of freedom on the local solution space of the theory, \[ds^{2}=-dt^{2}+dx^{2}+r^{2}(x)d\Omega^{2}\, \tag{10}\] thus remaining with 3 degrees of freedom \((r(x),\Phi(x),V(\Phi))\), the equations of motion described in (3) are effectively reduced to \[G_{\mu\nu} = T_{\mu\nu}+\Theta_{\mu\nu}\, \tag{11}\] \[\Box\Phi = 0. \tag{12}\] The scalar field degree of freedom can be immediately solved under our reduced metric ansatz. Indeed, from the \(\theta\theta\) component of the gravitational equations of motion, one easily obtains that \[r^{\prime\prime}(x)+\delta r(x)\Phi^{\prime}(x)^{2}=0. \tag{13}\] The above equation states that there are 2 possible solutions depending on the sign of \(\delta\), i.e. whether the scalar field is canonical or phantom. For the case of \(\delta=+1\), the above equation contains positive definite components and the sole (acceptable) solution is the trivial Minkowski spacetime with a vanishing scalar field. On the other hand, setting \(\delta=-1\), which will be our focus for the remaining of our work, one immediately obtains that \[\Phi(x)=\Phi_{0}\pm\int\sqrt{\frac{r^{\prime\prime}(x)}{r(x)}}dx. \tag{14}\] The constant term can always be absorbed by a simple field redefinition. Consequently, one may always choose the solution of \[\Phi(x)=\int\sqrt{\frac{r^{\prime\prime}(x)}{r(x)}}dx. \tag{15}\] Having obtained the scalar solution, the corresponding \(xx\) component of the gravitational equations of motion is a simple differential equation of \(r(x)\), \[r(x)r^{\prime\prime}(x)+r^{\prime}(x)^{2}-1=0\to r(x)=\sqrt{(x-c_{1})^{2}+x_{0 }^{2}}\, \tag{16}\] where \(c_{1}\) and \(x_{0}\) are integration constants. It should be stressed at this point that the \(c_{1}\) "charge" is a simple translation in the radial coordinate and can always be absorbed with no physical implications. Indeed, setting \(\ell^{2}=(x-c_{1})^{2}\), one obtains \(d\ell^{2}=dx^{2}\). Therefore, the throat radius function of the wormhole reads \[r(x)=\sqrt{x^{2}+x_{0}^{2}}\, \tag{17}\] which yields that the scalar field is indeed of the Ellis wormhole form \[\Phi(x)=\arctan\left(\frac{x}{x_{0}}\right)\, \tag{18}\] which naturally satisfies \(\Box\Phi=0\). Finally, we deal with the \(tt\) gravitational equation of motion, which can be inverted to a differential equation of \(V(\Phi)\) with respect to \(\Phi\) and reads \[\tilde{V}(\Phi)-4\tan(\Phi)\dot{V}(\Phi)=0\to V(\Phi)=\alpha\left(\frac{2\tan( \Phi)}{3}+\frac{1}{3}\tan(\Phi)\sec^{2}(\Phi)\right). \tag{19}\] This concludes the first section of our work, where we have shown that the Ellis wormhole solution described by the doublet \[\left\{g_{\mu\nu}=diag(-1,1,(x^{2}+x_{0}^{2}),(x^{2}+x_{0}^{2})\sin^{2}\theta), \Phi(x)=\arctan\left(\frac{x}{x_{0}}\right)\right\}\, \tag{20}\] is indeed supported by the extended gravitational theory \[S=\int d^{4}x\frac{\sqrt{|g|}}{16\pi G}\left[R+2(\partial\Phi^{2})+\alpha \left(\frac{2\tan(\Phi)}{3}+\frac{1}{3}\tan(\Phi)\sec^{2}(\Phi)\right)\mathcal{ G}\right]\, \tag{21}\] with a nontrivial coupling to the Gauss-Bonnet term. ## III Wormholes in Gauss-Bonnet gravity with Higher order derivative couplings ### The Stealth Ellis configuration Following our previous result, our goal in this section is to generalize our process with the addition of higher order derivative couplings. It is well known that the presence of a phantom scalar field in a gravitational action will generate instabilities on the local solutions supported by the theory [82]. 
However, when the action contains higher order contributions on the scalar field kinetic term, one cannot immediately assume that the \(\delta=-1\) configuration will yield unstable solutions, since it is probable that the higher order kinetic term contributions are able to rectify the instability pathologies associated with a phantom scalar field [83]. In particular, due to the higher order derivative couplings, it is unclear from the definition of the conjugate momentum of the scalar field whether the matter content is indeed of phantom nature. To this end, we are considering the following further extension of the gravitational theory \[\begin{split} S=\int d^{4}x\frac{\sqrt{|g|}}{16\pi G}\left[R+2( \partial\Phi^{2})+\alpha\left(\frac{2\tan(\Phi)}{3}+\frac{1}{3}\tan(\Phi) \sec^{2}(\Phi)\right)\mathcal{G}\right.\\ \left.+V_{2}(\Phi)G^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu} \Phi+V_{3}(\Phi)(\nabla\Phi)^{4}+V_{4}(\Phi)(\nabla\Phi)^{2}(\Box\Phi)\right],\end{split} \tag{22}\] which was motivated by the work done in [79], where they considered a generic Horndeski gravity occuring from the generalization of the potentials in the Kaluza-Klein compactification originating from higher dimensional Lovelock gravity. Naturally, the complexity of the above theory leads to highly non-trivial equations of motion, which hold no real merit in being written down explicitly. Our goal is to test whether the Ellis wormhole doublet described in (20) can be supported by the theory in a similar manner to the work done in the previous section. It turns out that the simplicity of the Ellis scalar field solution, allows the entire gravitational equations of motion system to be inverted and re-expressed as differential equations of the 3 potentials with respect to the scalar field. Indeed, the three gravitational equations of motion in the orthonormal frame are found to read as follows, \[tt: 2\tan(\Phi)\dot{V}_{2}(\Phi)-V_{2}(\Phi)\left(6\tan^{2}(\Phi)-3 \right)-V_{3}(\Phi)+\dot{V}_{4}(\Phi)-4V_{4}(\Phi)\tan(\Phi)=0\, \tag{23}\] \[xx: V_{2}(\Phi)\left(2\tan^{2}(\Phi)-1\right)+3V_{3}(\Phi)-\dot{V}_{4}( \Phi)+4V_{4}(\Phi)\tan(\Phi)=0\,\] (24) \[\theta\theta: \tan(\Phi)\dot{V}_{2}(\Phi)+V_{2}(\Phi)-4V_{2}(\Phi)\tan^{2}( \Phi)-V_{3}(\Phi)+\dot{V}_{4}(\Phi)-4V_{4}(\Phi)\tan(\Phi)=0. \tag{25}\] It is important to note here that \(x_{0}\) does not appear in the gravitational equations of motion, which implies that the primary charge nature of the throat radius survives in the generalized Horndeski framework with higher order derivative couplings. It is easy to notice that the \(tt-\theta\theta\) combination of the equations of motion yields a differential equation solely in terms of \(V_{2}\). Indeed, one may easily extract that, \[tt-\theta\theta:\quad\sin(2\Phi)\dot{V}_{2}(\Phi)+4V_{2}(\Phi)\cos(2\Phi)=0\implies V _{2}(\Phi)=\beta\csc^{2}(2\Phi)\, \tag{26}\] where \(\beta\) is an integration constant. Moving on, we also notice that both the \(tt\) and the \(xx\) equations are linear algebraic equations of \(V_{3}\). From the \(xx\) component we find that, \[V_{3}(\Phi)=\frac{1}{3}\left(\beta\csc^{2}(2\Phi)-\frac{1}{2}\beta\sec^{4}( \Phi)+\dot{V}_{4}(\Phi)-4V_{4}(\Phi)\tan(\Phi)\right). 
\tag{27}\] Plugging the above result into the \(tt\) equation of motion we extract a simple differential equation of \(V_{4}\) \[\beta-2\sin^{2}(\Phi)\cos^{4}(\Phi)\dot{V}_{4}(\Phi)+V_{4}(\Phi)\sin^{3}(2\Phi)=0\, \tag{3.7}\] whose solution reads \[V_{4}(\Phi)=-\frac{1}{2}\beta\csc(\Phi)\sec^{3}(\Phi)+\gamma\sec^{4}(\Phi)\, \tag{3.8}\] while consequently, \(V_{3}\) is found to be \[V_{3}(\Phi)=\beta\csc^{2}(2\Phi). \tag{3.9}\] We note at this point that this choice of the corresponding potentials yields that the entire extension of the action under consideration vanishes on shell and it is in this sense that the Ellis wormhole is supported as a local solution of the theory. However, any divergence of the trivial Ellis wormhole on the geometry, either by considering massive wormholes or the effect of gravitational waves will switch on the Gauss-Bonnet and the higher derivative terms, which allows for a much richer structure than the initially considered gravitational theory. ### Massive wormholes via conformal transformations So far, we have been focusing strictly on the trivial massless Ellis wormhole. It should be noted that a corresponding massive configuration can always be found in the general framework of Horndeski gravity by a simple conformal transformation. Indeed, the simplicity of the scalar field solution will be a very helpful aid. We are constraining ourselves to a target spacetime with a \(Z_{2}\) symmetry on the radial coordinate in order to avoid known causal pathologies associated with time-machine configurations, such as the Ellis drainhole [4], and Schwarzschild-like asymptotic behaviour on the \(g_{tt}\) component of the metric in order to obtain a positive definite ADM mass at infinity. A way to approach this is to consider the conformal transformation of \[g_{\mu\nu}\rightarrow(1-2\zeta\cos\Phi)g_{\mu\nu} \tag{3.10}\] where \(\zeta\) is a novel parameter that will be constrained in order to have well defined non-degenerate conformal transformation. The metric we obtain yields the following line-element \[ds^{2}=-\left(1-\frac{2\zeta x_{0}}{\sqrt{x^{2}+x_{0}^{2}}}\right)dt^{2}+ \left(1-\frac{2\zeta x_{0}}{\sqrt{x^{2}+x_{0}^{2}}}\right)\left[dx^{2}+(x^{2} +x_{0}^{2})d\Omega^{2}\right]. \tag{3.11}\] In order to extract any meaningful results on the nature of the above spacetime, we should stress at this point that the corresponding area radius function \[R(x)=\sqrt{\left(1-\frac{2\zeta x_{0}}{\sqrt{x^{2}+x_{0}^{2}}}\right)(x^{2}+x _{0}^{2})}=\sqrt{x_{0}\left(x_{0}-2\zeta\sqrt{x^{2}+x_{0}^{2}}\right)+x^{2}} \tag{3.12}\] yields that \(R(x)\sim x\) at \(x\rightarrow\infty\), which implies that we can test the asymptotic behaviour in terms of the \(x\) radial coordinate. On the other hand, since we are requesting the absence of horizons, which incidentally yields the conformal transformation to be well-defined, we are constraining the positive definite \(\zeta\) parameter to \[\zeta<\frac{1}{2}. \tag{3.13}\] In addition, it is easily tested that the corresponding throat of the wormhole lies at \(x=0\), since the area radius function contains a single global minimum there under the constrain of (3.13) with a transformed throat radius \[x_{t}=\sqrt{1-2\zeta}x_{0}. \tag{3.14}\] It is now trivial to deduce that the corresponding ADM mass of the above wormhole is simply \(M_{effective}=\zeta x_{0}\), which implies that although the scalar field contains a single charge, i.e. 
the throat radius, it is always possible to construct massive wormholes with a fixed ratio of mass to throat radius defined by the coupling constant we introduced in the conformal transformation. Finally, we would like to comment that since the conformal transformation does not contain any real valued singular points, the absence of singularities in our new spacetime is immediately verified.

## IV Rescalarization of the Ellis wormhole: Massive wormholes

The Ellis wormhole stands as a fascinating solution within the general Horndeski theory, offering insights into exotic spacetime structures. However, as a stealth solution, the Ellis wormhole does not provide explicit information regarding the coupling constants that govern the underlying Horndeski theory. Unlike in GR, where a uniqueness theorem restricts the appearance of alternative solutions, the Horndeski theory lacks such a limitation. Hence, the existence of a second non-trivial solution alongside the Ellis wormhole becomes a possibility, potentially unveiling crucial details about the Horndeski theory itself. The concept of scalarization, initially discovered in the context of neutron stars [84] and black holes [85; 86; 87], presents an analogous phenomenon to consider. Scalarization refers to the emergence of scalar field configurations in the presence of a trivial background solution, significantly impacting the system's properties. In the case of the Ellis wormhole, which already possesses a distinctive scalar field characterizing its background solution, a modified term, "rescalarization," is introduced to distinguish this scenario from conventional scalarization. Rescalarization reflects the idea that the existing scalar field of the Ellis wormhole may acquire additional characteristics due to the presence of higher orders in the action, thereby shedding light on the Horndeski theory's fundamental aspects. It should be noted that in the context of black holes, scalarization arises as a result of the instability exhibited by the background Schwarzschild black hole within a certain mass range that depends on the specific effective scalar-tensor theory. Similarly, in our specific case, the process of rescalarization transpires due to the instability manifested by the background Ellis wormhole. This section delves into the intriguing phenomenon of rescalarization associated with the Ellis wormhole and explores the emergence of massive wormhole solutions within the Horndeski theory framework. By investigating the interplay between the non-trivial scalar field and the second solution, we aim to unravel the unique features and implications of rescalarized, massive wormholes. The rescalarization of the Ellis wormhole unveils new avenues for studying exotic spacetime geometries and enriches our knowledge of the diverse gravitational phenomena that can exist in the universe. In our pursuit of discovering new solutions within the generalized Horndeski theory, as discussed in the previous section, we direct our attention to the specific coupling functions described by Eqs. (2.19), (3.5), (3.8), and (3.9) in the action given by Eq. (3.1). To facilitate our analysis, we adopt the following metric ansatz, \[ds^{2}=-F(x)dt^{2}+\frac{dx^{2}}{S(x)}+(x^{2}+x_{0}^{2})d\Omega^{2}. \tag{4.1}\] Due to the complexity of the field equations, rendering analytical solutions unfeasible, we focus on the weak field approximation. Within this approximation, we assume that the strength of the theory, characterized by the coupling constant, is small.
Thus, we seek perturbative solutions by expanding around a known background solution. Since the Ellis wormhole serves as a recognized stealth solution within the theory, we select it as our background. Consequently, we choose to expand the solutions in terms of the Gauss-Bonnet coupling constant, denoted as \(\alpha\). To maintain simplicity and expand solely in one parameter, we assume that the remaining two coupling constants of the theory, \(\beta\) and \(\gamma\), are proportional to \(\alpha\). Specifically, we set \(\beta=\alpha\lambda_{1}\) and \(\gamma=\alpha\lambda_{2}\), where \(\lambda_{1,2}\) represent two dimensionless constants. In terms of the metric functions, we propose the following expansions: \[F(x) =1+\alpha f(x)+\mathcal{O}(\alpha^{2})\, \tag{4.2}\] \[S(x) =1+\alpha s(x)+\mathcal{O}(\alpha^{2})\,\] (4.3) \[\Phi(x) =\arctan\left(\frac{x}{x_{0}}\right)+\alpha h(x)+\mathcal{O}( \alpha^{2}). \tag{4.4}\] These expansions allow us to perturbatively investigate the effects of the higher-order theory coupling constant and its associated scalar field on the metric and scalar functions, providing insights into the existence of new solutions within the Horndeski theory. Upon substituting the expansions into the field equations and considering the limit as \(\alpha\to 0\), we can derive the first-order equations in \(\alpha\), \[(-2x_{0}h+xs)^{\prime}=0\, \tag{4.5}\] \[xf^{\prime}+2x_{0}h^{\prime}+s=0\,\] (4.6) \[(x^{2}+x_{0}^{2})f^{\prime\prime}+x(f^{\prime}-s^{\prime})-2s=0. \tag{4.7}\] The above equations correspond to the \((t,t)\) equation, \((r,r)\) equation, and the combination \((t,t)-(\theta,\theta)\), respectively. Remarkably, we notice that the constants \(\lambda_{1}\) and \(\lambda_{2}\) do not appear in the aforementioned equations. Therefore, at first order in \(\alpha\), the solutions depend solely on the Gauss-Bonnet term. Integrating the above system yields, \[h(x) =\frac{h_{1}x_{0}}{x}\arctan\left(\frac{x}{x_{0}}\right)-h_{1}+ \frac{h_{0}}{x}-\frac{s_{0}}{2x_{0}}\, \tag{10}\] \[f(x) =f_{0}+2h_{1}\arctan\left(\frac{x}{x_{0}}\right)\,\] (11) \[s(x) =\frac{2x_{0}}{x^{2}}\left(-h_{1}x+h_{1}x_{0}\arctan\left(\frac{ x}{x_{0}}\right)+h_{0}\right)\, \tag{12}\] where \(h_{0}\), \(h_{1}\), \(f_{0}\), and \(s_{0}\) are integration constants. To ensure finiteness of the scalar field at \(x=0\), we set \(h_{0}=0\), and for the scalar field to exhibit the same asymptotics as the Ellis wormhole at infinity, we set \(s_{0}=-2h_{1}x_{0}\). Additionally, the requirement of asymptotic flatness implies \(f_{0}=-\pi h_{1}\). As a result, there is only one independent integration constant (\(h_{1}\)). Thus, the complete solution is given by, \[F(x) =1+\alpha\left[2h_{1}\arctan\left(\frac{x}{x_{0}}\right)+\pi h_{ 1}\right]\, \tag{13}\] \[S(x) =1-\frac{2\alpha h_{1}x_{0}}{x^{2}}\left[x-x_{0}\arctan\left( \frac{x}{x_{0}}\right)\right]\,\] (14) \[\Phi(x) =\arctan\left(\frac{x}{x_{0}}\right)+\frac{\alpha h_{1}x_{0}}{x} \arctan\left(\frac{x}{x_{0}}\right). \tag{15}\] This solution is regular at \(x=0\) and lacks horizons, thus describing a traversable wormhole. It generalizes the Ellis wormhole within the framework of the generalized Horndeski theory. By expanding the solution at infinity, we find, \[F(x)=1-\frac{2\alpha h_{1}x_{0}}{x}+\mathcal{O}\left(\frac{1}{x^{2}}\right)\,\qquad\text{and}\qquad S(x)=1-\frac{2\alpha h_{1}x_{0}}{x}+\mathcal{O}\left( \frac{1}{x^{2}}\right). 
\tag{16}\] Therefore, through the process of rescalarization, the Ellis wormhole, originally a non-gravitating object, acquires an ADM mass term of \(M=\alpha h_{1}x_{0}\). We observe that the ADM mass depends on the coupling constant \(\alpha\), suggesting that the Gauss-Bonnet term introduces a mass term to the Ellis wormhole. Finally, from the expansion of the scalar field at infinity, we can deduce the new scalar charge of the solution as \(q=-x_{0}(1-\frac{1}{2}\alpha h_{1}\pi)=-x_{0}+\frac{1}{2}M\pi.\) This reveals that the Gauss-Bonnet term adds a small contribution to the scalar charge of the new solution, which is associated with the mass of the wormhole. By using the above equation, and expanding in \(\alpha\), we may express the novel throat radius \(x_{0}\) in terms of the solution parameters \(q\), \(h_{1}\) and \(\alpha\) as \(|x_{0}|=q(1+\frac{1}{2}\alpha h_{1}\pi).\) Therefore, for the same value of the scalar charge \(q\), the rescalarized wormhole is larger than the original Ellis one. It should be stressed at this point that, in contrast to the conformal transformation mechanism of extracting massive wormholes, the spacetime configuration that we reach does not seem to have a \(Z_{2}\) symmetry on the radial coordinate. Constraining ourselves to proper Minkowski asymptotics on both universes, we find that the temporal coordinate needs to be differently rescaled in each patch of spacetime. This yields causal pathologies in the solution which can be immediately remedied by taking the absolute value of the \(x\) radial coordinate in the solution. Naturally, this implies a non-differentiability issue at the throat, which can be solved by introducing a simple thin-shell of matter. ## V Conclusions This research endeavor delves into a novel exploration of wormholes within the context of generalized Horndeski theories, which encompass the Gauss-Bonnet term and additional higher-order terms inspired by the renowned Lovelock theory. Our primary objective was to prove the existence of the Ellis wormhole as a stealth solution in the extended framework and meticulously investigate the rescalarization process on the stealth geometry. Thus, we unveiled the intricate result of an effective ADM mass arising solely as an effect of the higher order terms. In the initial section of this study, we focused our attention on the Einstein-scalar-Gauss-Bonnet theory, wherein a phantom scalar field was introduced as an essential component. Through the imposition of a vanishing redshift function and the utilization of a specific metric ansatz, we succeeded in simplifying the intricate equations of motion to a more manageable form. This reduction in complexity provided us with a valuable opportunity to ascertain the scalar field and metric solutions associated with the theory. Our investigation yielded a remarkable outcome: we discovered that, under a particular coupling function, the Einstein-scalar-Gauss-Bonnet theory admits the existence of the Ellis wormhole as a stealth solution. This peculiar result marks a significant milestone as, to our knowledge, it represents the very first instance of obtaining an exact analytic spherically symmetric solution within the framework of the Einstein-scalar-Gauss-Bonnet theory. In the subsequent section, higher-order derivative couplings are included within the gravitational theory. Thereby, inspired by the compactifications of the Lovelock theory, the theoretical framework is enriched by the introduction of supplementary potentials into the action. 
Subsequently, we derived the equations of motion governing these potentials. By considering the esteemed Ellis wormhole solution as a background, we successfully reformulated the equations of motion in terms of derivatives of the potentials with respect to the scalar field \(\Phi\). Remarkably, we proved the existence of a unique form of the new potentials, which renders the generalized theory capable of accommodating the familiar Ellis wormhole as a stealth solution. The rescalarization of the Ellis wormhole within the generalized Horndeski theory has provided intriguing insights into the nature of wormholes and the underlying gravitational theory. By considering the existence of a second non-trivial solution alongside the Ellis wormhole, we have uncovered the versatility and richness of the Horndeski theory, which lacks the limitations imposed by uniqueness theorems in GR. The concept of rescalarization, coined in this study, has allowed us to explore massive wormholes and their implications within the theory. Through our analysis, we have demonstrated that the Gauss-Bonnet term plays a pivotal role in the rescalarization process. By expanding the solutions in terms of the Gauss-Bonnet coupling constant, we have derived first-order equations that depend solely on this term, revealing its significant influence on the dynamics of the rescalarized Ellis wormhole. This finding highlights the importance of considering higher-order terms in gravitational theories beyond GR to fully comprehend the behavior of exotic spacetime structures. The rescalarized solution of the Ellis wormhole within the framework of the generalized Horndeski theory manifests compelling characteristics. Specifically, it represents a regular traversable wormhole, which possesses notable properties worthy of investigation. Notably, the inclusion of the Gauss-Bonnet term in the theory leads to the emergence of an ADM mass term within the wormhole structure. This acquired mass term signifies an intriguing connection between the Gauss-Bonnet term and the mass attribute associated with the wormhole. Additionally, an analysis of the scalar charge reveals a modest contribution originating from the Gauss-Bonnet term, linked to the mass of the wormhole. In summary, we found that the connection of exotic compact objects to the gravitational theories is rather subtle, since the Ellis wormhole may appear as a stealth solution in expanded theoretical frameworks. Moreover, through the process of rescalarization, the investigation of massive wormholes and their properties provides valuable insights into the dynamics and implications of higher-order terms in the theory. This study serves as a stepping stone for future explorations into the theoretical frameworks that support wormholes and possibly other intriguing configurations. ###### Acknowledgements. The research project was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "3rd Call for H.F.R.I. Research Projects to support Post-Doctoral Researchers" (Project Number: 7212). AB in particular thanks the Physics Department at the Silesian University of Opava for hospitality and support within the project CZ.02.2.69/0.0/0.0/18_054/0014696. The work of N.C. and E.P. is supported by the research project of the National Technical University of Athens (NTUA) 65232600-ACT-MTG: _Alleviating Cosmological Tensions Through Modified Theories of Gravity_. C.E. 
is funded by Agencia Nacional de Investigación y Desarrollo (ANID) through Proyecto Fondecyt Iniciación folio 11221063, Etapa 2023.
2304.00773
Narayana's cows numbers which are concatenations of three repdigits in base $ρ$
Narayana's sequence is a ternary recurrent sequence defined by the recurrence relation $\mathcal{N}_n=\mathcal{N}_{n-1}+\mathcal{N}_{n-3}$ with initial terms $\mathcal{N}_0=0$ and $\mathcal{N}_1=\mathcal{N}_2=\mathcal{N}_3=1$. Let $\rho\geqslant2$ be a positive integer. In this study, it is proved that the $n$th Narayana's number $\mathcal{N}_n$ which is a concatenation of three repdigits in base $\rho$ satisfies $n<5.6\cdot 10^{48}\cdot \log^7\rho$. Moreover, it is shown that the largest Narayana's number which is a concatenation of three repdigits in base $\rho$ with $2 \leqslant \rho \leqslant 10$ is $58425=\mathcal{N}_{31}=\overline{3332200}_5=\overline{332223}_7.$
Pagdame Tiebekabe, Kouèssi Norbert Adédji, Nadjime Pindra, Mahouton Norbert Hounkonnou
2023-04-03T07:54:34Z
http://arxiv.org/abs/2304.00773v2
# Narayana's cows numbers which are concatenations of three repdigits in base \(\rho\) ###### Abstract Narayana's sequence is a ternary recurrent sequence defined by the recurrence relation \(\mathcal{N}_{n}=\mathcal{N}_{n-1}+\mathcal{N}_{n-3}\) with initial terms \(\mathcal{N}_{0}=0\) and \(\mathcal{N}_{1}=\mathcal{N}_{2}=\mathcal{N}_{3}=1\). Let \(\rho\geqslant 2\) be a positive integer. In this study, it is proved that the \(n\)th Narayana's number \(\mathcal{N}_{n}\) which is a concatenation of three repdigits in base \(\rho\) satisfies \(n<5.6\cdot 10^{48}\cdot\log^{7}\rho\). Moreover, it is shown that the largest Narayana's number which is a concatenation of three repdigits in base \(\rho\) with \(2\leqslant\rho\leqslant 10\) is \(58425=\mathcal{N}_{31}=\overline{3332200}_{5}=\overline{332223}_{7}\). **Keywords**: Narayana's numbers, base \(\rho\), concatenations, repdigits, linear forms in logarithms, reduction method. **2020 Mathematics Subject Classification: 11B39, 11J86, 11D61.** ## 1 Introduction The power of linear forms in logarithms in solving exponential Diophantine equations in general, and Diophantine equations involving linear recurrent sequences in particular, is well established. Since the British mathematician Alan Baker introduced this powerful tool in \(1966\), proving a landmark result about linear forms in the logarithms of algebraic numbers which helped to earn him the Fields Medal in \(1970\), several exponential Diophantine equations have been solved. For some years now, researchers in number theory have been interested in the resolution of Diophantine equations in linear recurrent sequences, and variants of these equations have been intensely studied. The determination of the terms of linear recurrent sequences which are products or concatenations of two or three repdigits has been widely studied. For details of recent work related to this problem, we refer the reader to [4, 13, 2, 1, 5, 7, 8, 9, 10]. Our work aims to be more general than previous work. In this article, following what was done by Mahadi Ddamulira et al. [4], we study the problem of writing all Narayana numbers which are concatenations of three repdigits in base \(\rho\) with \(\rho\geq 2\). To be precise, we find all solutions of the Diophantine equation \[\mathcal{N}_{n}=\overline{\underbrace{d_{1}\ldots d_{1}}_{\ell\text{ times}}\underbrace{d_{2}\ldots d_{2}}_{m\text{ times}}\underbrace{d_{3}\ldots d_{3}}_{k\text{ times}}} \tag{1}\] in non-negative integers \(n,d_{1},d_{2},d_{3},\ell,m,k\) with \(n\geqslant 0\), \(1\leqslant k\leqslant m\leqslant\ell\) and \(d_{1},d_{2},d_{3}\in\{0,1,2,\ldots,\rho-1\}\), \(d_{1}>0\). Contrary to most previous works on this question, carried out in base \(10\), we determine an upper bound on \(n\) and \(\ell+m+k\) in terms of \(\rho\) with \(\rho\geqslant 2\). As an application, we find in Section 4 all Narayana's cows numbers that are concatenations of three repdigits in the cases \(2\leqslant\rho\leqslant 10\). The main results are as follows: **Theorem 1.1**.: _Let \(\rho\geqslant 2\) be an integer. All solutions to the Diophantine equation (1) satisfy_ \[\ell+m+k<n<5.6\cdot 10^{48}\cdot\log^{7}\rho.\] Moreover, the above result implies the following. **Corollary 1.1**.: _The Diophantine equation (1) has only finitely many solutions in positive integers \(n,d_{1},d_{2},d_{3},\ell,m\) and \(k.\)_ By considering the cases \(2\leqslant\rho\leqslant 10,\) we get the following result. 
**Theorem 1.2**.: _The only Narayana numbers which are concatenations of three repdigits in base \(\rho\) with \(2\leq\rho\leq 10\) are_ \[4,6,9,13,19,28,41,60,88,129,189,277,406,595,872,1278,1873,2745,4023,18560,58425.\] _More precisely, we have: \(4=\mathcal{N}_{6}=\overline{100}_{2},\) \(6=\mathcal{N}_{7}=\overline{110}_{2},\) \(13=\mathcal{N}_{9}=\overline{111}_{3}=\overline{1101}_{2},\) \(19=\mathcal{N}_{10}=\overline{10011}_{2}=\overline{201}_{3}=\overline{103}_{4},\)_ \[\begin{array}{rclrcl}28&=&\mathcal{N}_{11}&=&\overline{1001}_{3}=\overline{130}_{4}=\overline{103}_{5}=\overline{11100}_{2},\\ 41&=&\mathcal{N}_{12}&=&\overline{1112}_{3}=\overline{221}_{4}=\overline{131}_{5},\\ 60&=&\mathcal{N}_{13}&=&\overline{111100}_{2}=\overline{330}_{4}=\overline{220}_{5}=\overline{140}_{6}=\overline{114}_{7},\\ 88&=&\mathcal{N}_{14}&=&\overline{323}_{5}=\overline{224}_{6}=\overline{154}_{7}=\overline{130}_{8}=\overline{1120}_{4}=\overline{107}_{9},\\ 129&=&\mathcal{N}_{15}&=&\overline{333}_{6}=\overline{243}_{7}=\overline{201}_{8}=\overline{1004}_{5}=\overline{2001}_{4}=\overline{153}_{9}=\overline{10000001}_{2}=\overline{129}_{10},\\ 189&=&\mathcal{N}_{16}&=&\overline{2331}_{4}=\overline{21000}_{3}=\overline{360}_{7}=\overline{230}_{9}=\overline{1224}_{5}=\overline{513}_{6}=\overline{275}_{8}=\overline{189}_{10},\\ 277&=&\mathcal{N}_{17}&=&\overline{1141}_{6}=\overline{544}_{7}=\overline{425}_{8}=\overline{337}_{9}=\overline{10111}_{4},\\ 406&=&\mathcal{N}_{18}&=&\overline{3111}_{5}=\overline{626}_{8}=\overline{1120}_{7}=\overline{501}_{9}=\overline{406}_{10},\\ 595&=&\mathcal{N}_{19}&=&\overline{1123}_{8}=\overline{731}_{9}=\overline{595}_{10},\\ 872&=&\mathcal{N}_{20}&=&\overline{11442}_{5}=\overline{1168}_{9}=\overline{1550}_{8}=\overline{872}_{10},\\ 1278&=&\mathcal{N}_{21}&=&\overline{5530}_{6},\\ 1873&=&\mathcal{N}_{22}&=&\overline{24443}_{5}=\overline{2511}_{9},\\ 2745&=&\mathcal{N}_{23}&=&\overline{11001}_{7},\\ 4023&=&\mathcal{N}_{24}&=&\overline{7667}_{8},\\ 18560&=&\mathcal{N}_{28}&=&\overline{44200}_{8},\\ 58425&=&\mathcal{N}_{31}&=&\overline{3332200}_{5}=\overline{332223}_{7}.\\ \end{array}\] For the proof of Theorem 1.1, we assume \(n>560\) and rewrite the Diophantine equation (1), in the form (11) below, in three different ways to obtain three linear forms in logarithms. We then apply a lower bound for linear forms in logarithms to obtain an upper bound on \(n\). To do this we need some preliminary results, which are discussed in the following section. ## 2 Preliminary results ### Some properties of Narayana's cows sequence Narayana's cows sequence comes from a problem about cows proposed by the Indian mathematician Narayana in the \(14\)th century. In this problem, we assume that there is one cow at the beginning and that each cow produces a calf every year from the age of four years. Narayana's cow problem counts the number of calves produced each year [3]. This problem is similar to Fibonacci's rabbit problem, and so are the answers, known as the Narayana sequence and the Fibonacci sequence. Narayana's cows sequence (A000930 in OEIS) satisfies the third-order recurrence relation \[\mathcal{N}_{n}=\mathcal{N}_{n-1}+\mathcal{N}_{n-3}\quad\text{for}\quad n\geqslant 3,\] with initial values \(\mathcal{N}_{0}=0\) and \(\mathcal{N}_{1}=\mathcal{N}_{2}=\mathcal{N}_{3}=1\) [3]. 
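Although the full proof requires the machinery of linear forms in logarithms, the statement of Theorem 1.2 itself is easy to probe numerically. The following sketch is illustrative only: it generates the sequence from the recurrence above and flags a Narayana number when its base-\(\rho\) expansion consists of at most three maximal blocks of a repeated digit and has at least three digits (one natural reading of "concatenation of three repdigits"); the search range for \(n\) is an arbitrary choice.

```python
# Brute-force sanity check of Theorem 1.2 (illustrative only).  A Narayana number is
# accepted here when, in base rho, its digit string consists of at most three maximal
# runs of a repeated digit and has length >= 3; this reading of "concatenation of
# three repdigits" is an assumption of this sketch.
from itertools import groupby

def narayana(n_max):
    """Narayana's cows numbers N_0, ..., N_{n_max}, with N_n = N_{n-1} + N_{n-3}."""
    N = [0, 1, 1, 1]
    while len(N) <= n_max:
        N.append(N[-1] + N[-3])
    return N[:n_max + 1]

def digits(value, base):
    """Base-`base` digits of a positive integer, most significant first."""
    d = []
    while value:
        value, r = divmod(value, base)
        d.append(r)
    return d[::-1]

def is_concat_of_three_repdigits(value, base):
    d = digits(value, base)
    runs = [list(g) for _, g in groupby(d)]      # maximal blocks of equal digits
    return len(d) >= 3 and len(runs) <= 3

N = narayana(40)                                  # search range chosen for illustration
for n, value in enumerate(N):
    for base in range(2, 11):
        if value > 0 and is_concat_of_three_repdigits(value, base):
            print(n, value, base, ''.join(str(x) for x in digits(value, base)))

# e.g. N_31 = 58425 appears here as 3332200 in base 5 and 332223 in base 7,
# matching the last line of Theorem 1.2.
```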
Explicitly, the characteristic polynomial of \(\mathcal{N}_{n}\) is : \[\varphi(x)=x^{3}-x^{2}-1,\] and the characteristic roots are: \[\alpha= \frac{1}{3}\left(\sqrt[3]{\frac{1}{2}(29-3\sqrt{93})}+\sqrt[3]{ \frac{1}{2}(3\sqrt{93}+29)}+1\right), \tag{2}\] \[\beta= \frac{1}{3}-\frac{1}{6}\left(1-i\sqrt{3}\right)\sqrt[3]{\frac{1} {2}(29-3\sqrt{93})}-\frac{1}{6}\left(1+i\sqrt{3}\right)\sqrt[3]{\frac{1}{2}(3 \sqrt{93}+29)},\] (3) \[\gamma= \frac{1}{3}-\frac{1}{6}\left(1+i\sqrt{3}\right)\sqrt[3]{\frac{1} {2}(29-3\sqrt{93})}-\frac{1}{6}\left(1-i\sqrt{3}\right)\sqrt[3]{\frac{1}{2}(3 \sqrt{93}+29)}. \tag{4}\] Then, the Narayana sequence can be obtained by Binet's formula: \[\mathcal{N}_{n}=a\alpha^{n}+b\beta^{n}+c\gamma^{n}. \tag{5}\] For \(n\in\mathbb{Z}_{\geqslant 0}\) the generating function of the Narayana's cows sequence is \[g(x)=\frac{1}{1-x-x^{3}}=\sum_{n=0}^{\infty}\mathcal{N}_{n+1}x^{n}.\] With the Vieta theorem, we have: \[\left\{\begin{array}{l}\alpha+\beta+\gamma=1,\\ \alpha\beta+\beta\gamma+\alpha\gamma=0,\\ \alpha\beta\gamma=1.\end{array}\right. \tag{6}\] From formula (5), we obtain \[\mathcal{N}_{0}=a+b+c=0,\] \[\mathcal{N}_{1}=a\alpha+b\beta+c\gamma=1,\] \[\mathcal{N}_{2}=a\alpha^{2}+b\beta^{2}+c\gamma^{2}=1,\] which implies \[a=\frac{1-\beta-\gamma}{(\alpha-\beta)(\alpha-\gamma)},\quad b=\frac{1- \alpha-\gamma}{(\beta-\alpha)(\beta-\gamma)},\quad\text{and}\quad c=\frac{1- \alpha-\beta}{(\gamma-\beta)(\gamma-\alpha)}.\] Also, from formula (6), we can simplify \(a,b\) and \(c\) and obtain \[a=\frac{\alpha}{\alpha^{2}-\alpha\beta-\alpha\gamma+\beta\gamma}=\frac{ \alpha}{\alpha^{2}+2\beta\gamma}=\frac{\alpha^{2}}{\alpha^{3}+2}\] \[b=\frac{\beta^{2}}{\beta^{3}+2}\quad\text{and}\quad c=\frac{\gamma^{2}}{ \gamma^{3}+2} \tag{7}\] and the minimal polynomial of \(a\) over integers is \(31x^{3}-3x-1\). Setting \(\Pi(n)=\mathcal{N}_{n}-a\alpha^{n}=b\beta^{n}+c\gamma^{n}\), we notice that \(|\Pi(n)|<\frac{1}{\alpha^{n/2}}\) for all \(n>1\). The Narayana's sequence was originally defined at positive indices. Actually, it can be extended to negative indices by defining \[\mathcal{N}_{-n}=\frac{a}{\alpha^{n}}+\frac{b}{\beta^{n}}+\frac{c}{\gamma^{n}}. \tag{8}\] So, the following recurrence relation holds for all integral indices \[\mathcal{N}_{n}=\mathcal{N}_{n-1}+\mathcal{N}_{n-3}\quad\text{for}\quad n\in \mathbb{Z}. \tag{9}\] Through a simple computation, the first few terms of \(\mathcal{N}_{n}\) at negative indices can be obtained from formulas (7) and (8), so that \(\mathcal{N}_{-1}=0,\,\mathcal{N}_{-2}=1,\,\mathcal{N}_{-3}=0,\,\mathcal{N}_{ -4}=-1\), which also satisfies relation (9). The characteristic polynomial for \((\mathcal{N}_{n})_{n\geq 0}\) is irreducible in \(\mathbb{Q}[x]\). We note that it has a real zero \(\alpha(>1)\) and two conjugate complex zeros \(\beta\) and \(\gamma\) with \(|\beta|=|\gamma|<1\). In fact, \(\alpha\approx 1.46557\). We also have the following properties of \((\mathcal{N}_{n})_{n\geq 0}\). **Lemma 2.1**.: _For the sequence \((\mathcal{N}_{n})_{n\geq 0}\), we have_ \[\alpha^{n-2}\leqslant\mathcal{N}_{n}\leqslant\alpha^{n-1},\quad\text{for} \quad n\geqslant 1.\] Proof.: Using induction, one can easily prove Lemma 2.1. Let \(\mathbb{K}:=\mathbb{Q}(\alpha,\beta)\) be the splitting field of the characteristic polynomial \(\varphi\) over \(\mathbb{Q}\). 
Then \([\mathbb{K}:\mathbb{Q}]=6\) and \([\mathbb{Q}(\alpha):\mathbb{Q}]=3.\) The Galois group of \(\mathbb{K}/\mathbb{Q}\) is given by \[\mathcal{G}:=\text{Gal}(\mathbb{K}/\mathbb{Q})\cong\{(1),(\alpha\beta),( \alpha\gamma),(\beta\gamma),(\alpha\beta\gamma)\}\cong S_{3}.\] We identify the automorphisms of \(\mathcal{G}\) with the permutation group of the zeroes of \(\varphi\). We highlight the permutation \((\alpha\beta)\), corresponding to the automorphism \(\sigma:\alpha\mapsto\beta,\beta\mapsto\alpha,\gamma\mapsto\gamma\), which we use later to obtain a contradiction on the size of the absolute value of certain bound. ### Linear forms in Logarithms We begin this section with a few reminders about logarithmic height of an algebraic number. Let \(\eta\) be an algebraic number of degree \(d,\) let \(a_{0}>0\) be the leading coefficient of its minimal polynomial over \(\mathbb{Z}\) and let \(\eta=\eta^{(1)},\ldots,\eta^{(d)}\) denote its conjugates. The quantity defined by \[h(\eta)=\frac{1}{d}\left(\log|a_{0}|+\sum_{j=1}^{d}\log\max\left(1,\left|\eta ^{(j)}\right|\right)\right)\] is called the logarithmic height of \(\eta\). Some properties of height are as follows. For \(\eta_{1},\eta_{2}\) algebraic numbers and \(m\in\mathbb{Z}\), we have \[h(\eta_{1}\pm\eta_{2}) \leq h(\eta_{1})+h(\eta_{2})+\log 2,\] \[h(\eta_{1}\eta_{2}^{\pm 1}) \leq h(\eta_{1})+h(\eta_{2}),\] \[h(\eta_{1}^{m}) =|m|h(\eta_{1}).\] If \(\eta=\frac{p}{q}\in\mathbb{Q}\) is a rational number in reduced form with \(q>0,\) then the above definition reduces to \(h(\eta)=\log(\max\{|p|,q\}).\) We can now present the famous Matveev result used in this study. Thus, let \(\mathbb{L}\) be a real number field of degree \(d_{\mathbb{L}}\), \(\eta_{1},\ldots,\eta_{s}\in\mathbb{L}\) and \(b_{1},\ldots,b_{s}\in\mathbb{Z}\setminus\{0\}\). Let \(B\geq\max\{|b_{1}|,\ldots,|b_{s}|\}\) and \[\Lambda=\eta_{1}^{b_{1}}\cdots\eta_{s}^{b_{s}}-1.\] Let \(A_{1},\ldots,A_{s}\) be real numbers with \[A_{i}\geq\max\{d_{\mathbb{L}}h(\eta_{i}),|\log\eta_{i}|,0.16\},\quad i=1,2, \ldots,s.\] With the above notations, Matveev proved the following result [12]. **Theorem 2.1**.: _Assume that \(\Lambda\neq 0\). Then_ \[\log|\Lambda|>-1.4\cdot 30^{s+3}\cdot s^{4.5}\cdot d_{\mathbb{L}}^{2}\cdot(1+ \log d_{\mathbb{L}})\cdot(1+\log B)\cdot A_{1}\cdots A_{s}.\] We also need the following result of [11]. **Lemma 2.2**.: _Let \(r\geqslant 1\) and \(H>0\) be such that \(H>(4r^{2})^{r}\) and \(H>L/(\log L)^{r}\). Then_ \[L<2^{r}H(\log H)^{r}.\] ### Reduction method The bounds on the variables obtained via Baker's theory are too large for any computational purposes. To reduce the bounds, we use reduction method due to Dujella and Petho ([6], Lemma 5a). For a real number \(X\), we write \(\|X\|:=\min\{|\ X-n\ |\colon n\in\mathbb{Z}\}\) for the distance of \(X\) to the nearest integer. **Lemma 2.3** (Dujella and Petho, [6]).: _Let \(M\) be a positive integer, \(p/q\) be a convergent of the continued fraction expansion of the irrational number \(\tau\) such that \(q>6M,\) and \(A,B,\mu\) be some real numbers with \(A>0\) and \(B>1.\) Furthermore, let_ \[\varepsilon:=\parallel\mu q\parallel-M\cdot\parallel\tau q\parallel.\] _If \(\varepsilon>0,\) then there is no solution to the inequality_ \[0<|u\tau-v+\mu|<AB^{-w} \tag{10}\] _in positive integers \(u,v\) and \(w\) with_ \[u\leqslant M\text{ and }w\geqslant\frac{\log(Aq/\varepsilon)}{\log B}.\] The following results will also be used in our proof. 
**Lemma 2.4**.: _For any non-zero real number \(x\), if \(|e^{x}-1|<\frac{1}{2}\), then \(|x|<2|e^{x}-1|.\)_ ## 3 Proof of Theorem 1.1 For the proof of Theorem 1.1 we can assume that \(n>560.\) ### The initial bound on \(n\). To begin with, we consider the Diophantine equation (1), and rewrite it as \[\mathcal{N}_{n}= \underbrace{\overline{d_{1}\ldots d_{1}d_{2}\ldots d_{2}d_{3} \ldots d_{3}}}_{\ell\text{ times}\text{ }m\text{ times}\text{ }k\text{ times}}\] \[= \underbrace{\overline{d_{1}\ldots d_{1}}}_{\ell\text{ times}}\cdot 10^{m+k}+ \underbrace{\overline{d_{2}\ldots d_{2}}}_{m\text{ times}}\cdot 10^{k}+\underbrace{\overline{d_{3} \ldots d_{3}}}_{k\text{ times}}\] \[= d_{1}\left(\frac{\rho^{\ell}-1}{\rho-1}\right)\cdot\rho^{m+k}+d _{2}\left(\frac{\rho^{m}-1}{\rho-1}\right)\cdot\rho^{k}+d_{3}\left(\frac{\rho ^{k}-1}{\rho-1}\right).\] Therefore, we have \[\mathcal{N}_{n}=\frac{1}{\rho-1}\left(d_{1}\rho^{\ell+m+k}-(d_{1}-d_{2})\rho^{ m+k}-(d_{2}-d_{3})\rho^{k}-d_{3}\right). \tag{11}\] We state and prove the following lemma which gives a relation between \(n\) and \(l+m+k\) of (11). **Lemma 3.1**.: _All solutions to the Diophantine equation (1) satisfy_ \[(\ell+m+k-1)\log\rho+\log\alpha<n\log\alpha<(\ell+m+k)\log\rho+1.\] Proof.: From (1) and Lemma 2.1, we get \[\alpha^{n-2}\leqslant\mathcal{N}_{n}<\rho^{\ell+m+k}.\] Taking logarithm on both sides, we have \[(n-2)\log\alpha<(\ell+m+k)\log\rho\] which leads to \[n\log\alpha<(\ell+m+k)\log\rho+2\log\alpha<(\ell+m+k)\log\rho+1. \tag{12}\] On the other hand, for the lower bound, (11) implies that \[\rho^{\ell+m+k-1}<\mathcal{N}_{n}\leqslant\alpha^{n-1}.\] Taking logarithm on both sides, we get \[(\ell+m+k-1)\log\rho<(n-1)\log\alpha,\] which leads to \[(\ell+m+k-1)\log\rho+\log\alpha<n\log\alpha. \tag{13}\] Comparing (12) and (13) completes the proof of Lemma 3.1. #### 3.1.1 Upper bound for \(\ell\log\rho\) in terms of \(n\) **Lemma 3.2**.: _All solutions to (1) satisfy_ \[\ell\log\rho<3.74\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n).\] Proof.: Using (5) and (11), we have \[a\alpha^{n}+b\beta^{n}+c\gamma^{n}=\frac{1}{\rho-1}\left(d_{1}\cdot\rho^{\ell+m+k} -(d_{1}-d_{2})\cdot\rho^{m+k}-(d_{2}-d_{3})\cdot\rho^{k}-d_{3}\right).\] Equivalently \[(\rho-1)a\alpha^{n}-d_{1}\cdot\rho^{\ell+m+k}=-(\rho-1)\Pi(n)-(d_{1}-d_{2}) \cdot\rho^{m+k}-(d_{2}-d_{3})\cdot\rho^{k}-d_{3}.\] Thus, we have that \[\left|(\rho-1)a\alpha^{n}-d_{1}\cdot\rho^{\ell+m+k}\right| =\left|-(\rho-1)\Pi(n)-(d_{1}-d_{2})\cdot\rho^{m+k}-(d_{2}-d_{3}) \cdot\rho^{k}-d_{3}\right|\] \[\leqslant(\rho-1)\cdot\alpha^{-n/2}+(\rho-1)\cdot\rho^{m+k}+( \rho-1)\cdot\rho^{k}+(\rho-1)\] \[<3(\rho-1)\cdot\rho^{m+k},\] where we used the fact that \(n>560\). Dividing both sides of the inequality by \(d_{1}\cdot\rho^{\ell+m+k}\) gives \[\left|\left(\frac{(\rho-1)a}{d_{1}}\right)\cdot\alpha^{n}\cdot\rho^{-(\ell+m+ k)}-1\right|<\frac{3(\rho-1)\cdot\rho^{m+k}}{d_{1}\cdot\rho^{\ell+m+k}}< \frac{3}{\rho^{\ell-1}}. \tag{14}\] Let \[\Lambda_{1}:=\left(\frac{(\rho-1)a}{d_{1}}\right)\cdot\alpha^{n}\cdot\rho^{-( \ell+m+k)}-1. \tag{15}\] We then proceed to apply Theorem 2.1 on (15). We have first to observe that \(\Lambda_{1}\neq 0\). 
Indeed, if it were zero, we would then get that \[a\alpha^{n}=\frac{d_{1}}{\rho-1}\cdot\rho^{\ell+m+k}.\] In this case therefore, applying the automorphism \(\sigma\) of the Galois group \(\mathcal{G}\) on both sides of the preceeding equation and taking absolute values, we obtain \[\left|\left(\frac{d_{1}}{\rho-1}\cdot\rho^{\ell+m+k}\right)\right|=|\sigma(a \alpha^{n})|=|c\gamma^{n}|<1,\] which is false. Thus, we have \(\Lambda_{1}\neq 0\). Theorem 2.1 is then applied on (15) with the following parameters : \[\eta_{1}:=\frac{(\rho-1)a}{d_{1}},\,\eta_{2}:=\alpha,\,\eta_{3}:=\rho,\,b_{1} :=1,\,b_{2}:=n,\,b_{3}:=-\ell-m-k,\,s:=3.\] From Lemma 3.1, we have \(\ell+m+k<n\). Consequently, we choose \(B:=n\). Notice that \(\mathbb{K}:=\mathbb{Q}(\eta_{1},\eta_{2},\eta_{3})=\mathbb{Q}(\alpha)\), since \(a=\frac{\alpha^{2}}{\alpha^{3}+2}\). Moreover its minimal polynomial over integers is \(31x^{3}-3x-1\). Therefore, \(d_{\mathbb{K}}:=[\mathbb{K}:\mathbb{Q}]=3\). Using the properties of the logarithmic height, we estimate \(h(\eta_{1})\) as follows : \[h(\eta_{1}) =h\left(\frac{(\rho-1)a}{d_{1}}\right)\leqslant h(\rho-1)+h(a)+h (d_{1})\] \[\leqslant 2\log\rho+\frac{1}{3}\log 31\leqslant 4\log\rho,\] which holds for \(\rho\geqslant 2\). Similarly, we have \(h(\eta_{2})=h(\alpha)=\dfrac{\log\alpha}{3}\) and \(h(\eta_{3})=h(\rho)=\log\rho\). Therefore, we choose \[A_{1}:=12\log\rho,\,A_{2}:=\log\alpha,\,\text{and}\,A_{3}:=3\log\rho.\] By Theorem 2.1, we get \[\log|\Lambda_{1}| >-1.4\cdot 30^{6}\cdot 3^{4.5}\cdot 3^{2}\cdot(1+\log 3)\cdot(1+\log n )\cdot 12\log\rho\cdot\log\alpha\cdot 3\log\rho\] \[>-3.73\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n)\] which when compared with (14) gives \[(\ell-1)\log\rho-\log 3<3.73\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n),\] leading to \[\ell\log\rho<3.74\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n). \tag{16}\] #### 3.1.2 Upper bound for \(m\log\rho\) in terms of \(n\) **Lemma 3.3**.: _All solutions to (1) satisfy_ \[m\log\rho<3.5\cdot 10^{26}\cdot\log^{3}\rho\cdot(1+\log n)^{2}.\] Proof.: Rewriting (5), we obtain \[(\rho-1)a\alpha^{n}-(d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2}))\rho^{m+k}=-(\rho-1) \Pi(n)-(d_{2}-d_{3})\cdot\rho^{k}-d_{3},\] which shows that \[\left|(\rho-1)a\alpha^{n}-(d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})) \rho^{m+k}\right| =\left|-(\rho-1)\Pi(n)-(d_{2}-d_{3})\cdot\rho^{k}-d_{3}\right|\] \[\leqslant(\rho-1)\cdot\alpha^{-n/2}+(\rho-1)\cdot\rho^{k}+\rho-1\] \[<2\cdot\rho^{k+1}.\] Dividing both sides of the inequality by \((d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2}))\rho^{m+k}\), we have that \[\left|\left(\dfrac{(\rho-1)a}{d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})}\right) \cdot\alpha^{n}\cdot\rho^{-m-k}-1\right|<\dfrac{2\cdot\rho^{k+1}}{(d_{1}\cdot \rho^{\ell}-(d_{1}-d_{2}))\rho^{m+k}}<\dfrac{2}{\rho^{m-1}}. \tag{17}\] Let \[\Lambda_{2}:=\left(\dfrac{(\rho-1)a}{d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})} \right)\cdot\alpha^{n}\cdot\rho^{-m-k}-1.\] Using similar argument as in \(\Lambda_{1}\), we apply Theorem 2.1 on \(\Lambda_{2}\). We notice that \(\Lambda_{2}\neq 0\). If it were, then we would have that \[a\alpha^{n}=\left(\dfrac{d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})}{\rho-1}\right) \cdot\rho^{m+k}.\] Applying the automorphism \(\sigma\) of the Galois group \(\mathcal{G}\) on both sides, and taking the absolute values, we obtain \[1<\left|\left(\frac{d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})}{\rho-1}\right)\cdot \rho^{m+k}\right|=|\sigma(a\alpha^{n})|=|c\gamma^{n}|<1,\] which is false. Therefore, \(\Lambda_{2}\neq 0\). 
We then proceed to apply Theorem 2.1with the following parameters: \[\eta_{1}:=\frac{(\rho-1)a}{d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})},\,\eta_{2}:= \alpha,\,\eta_{3}:=\rho,\,b_{1}:=1,\,b_{2}:=n,\,b_{3}:=-m-k,\,s=3.\] Since \(m+k<n\), we take \(B:=n\). Again taking \(\mathbb{K}:=\mathbb{Q}(\eta_{1},\eta_{2},\eta_{3})=\mathbb{Q}(\alpha)\), we have that \(d_{\mathbb{K}}:=[\mathbb{K}:\mathbb{Q}]=3\). Next, we use the properties of the logarithmic height to estimate \(h(\eta_{1})\) as before, and obtain \[h(\eta_{1}) =h\left(\frac{(\rho-1)a}{d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2})}\right)\] \[\leqslant h((\rho-1)a)+h(d_{1}\cdot\rho^{\ell}-(d_{1}-d_{2}))\] \[\leqslant h(\rho-1)+h(a)+\log 2+h(d_{1}\cdot\rho^{\ell})+h(d_{1}-d_ {2})\] \[\leqslant\frac{1}{3}\log 31+4\log(\rho-1)+\ell\log\rho+2\log 2\] \[\leqslant 3.74\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n)+4\log( \rho-1)+\frac{1}{3}\log 31+2\log 2\] \[<3.75\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n),\] where we used the fact that \(\ell\log\rho<3.74\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n).\) We then take \[A_{1}:=1.125\cdot 10^{14}\cdot\log^{2}\rho\cdot(1+\log n),\,A_{2}:=\log \alpha,\,\text{and}\,A_{3}:=3\log\rho.\] Theorem 2.1 says that \[\log|\Lambda_{2}| >-1.4\cdot 30^{6}\cdot 3^{4.5}\cdot 3^{2}\cdot(1+\log 3)\cdot(1+ \log n)\cdot(1.125\cdot 10^{14}\cdot\log^{2}\rho\cdot(1+\log n))\cdot\log\alpha \cdot 3\log\rho\] \[>-3.49\cdot 10^{26}\cdot\log^{3}\rho\cdot(1+\log n)^{2}\] which when compared with (17) gives \[(m-1)\log\rho-\log 2<3.49\cdot 10^{26}\cdot\log^{3}\rho\cdot(1+\log n)^{2},\] which simplifies to \[m\log\rho<3.5\cdot 10^{26}\cdot\log^{3}\rho\cdot(1+\log n)^{2}. \tag{18}\] #### 3.1.3 Upper bound for \(\ell+m+k\) and \(n\) Rewriting (5), we obtain \[(\rho-1)a\alpha^{n}-\left(d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}- (d_{2}-d_{3})\right)\cdot\rho^{k}=-(\rho-1)\Pi(n)-d_{3},\] which shows that \[\left|(\rho-1)a\alpha^{n}-\left(d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2 })\cdot\rho^{m}-(d_{2}-d_{3})\right)\cdot\rho^{k}\right| =|-(\rho-1)\Pi(n)-d_{3}|\] \[\leqslant(\rho-1)\cdot\alpha^{-n/2}+(\rho-1)<2(\rho-1).\] Dividing both sides of the inequality by \((\rho-1)a\alpha^{n}\), we have that \[\left|\left(\frac{d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-(d_{2}-d_ {3})}{(\rho-1)a}\right)\cdot\alpha^{-n}\cdot\rho^{k}-1\right|<\frac{2}{a\alpha ^{n}}<\frac{5}{\alpha^{n}}. \tag{19}\] Now, we let \[\Lambda_{3}:=\left(\frac{d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-( d_{2}-d_{3})}{(\rho-1)a}\right)\cdot\alpha^{-n}\cdot\rho^{k}-1.\] Using similar arguments as in \(\Lambda_{1}\) and \(\Lambda_{2}\), we apply Theorem 2.1 on \(\Lambda_{3}\). We notice that \(\Lambda_{3}\neq 0\). If it were, then we would have \[a\alpha^{n}=\left(\frac{d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-(d _{2}-d_{3})}{\rho-1}\right)\cdot\rho^{k}.\] Applying the automorphism \(\sigma\) of the Galois group \(\mathcal{G}\) on both sides, and taking the absolute values, we obtain \[1<\left|\left(\frac{d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-(d_{2} -d_{3})}{\rho-1}\right)\cdot\rho^{k}\right|=|\sigma(a\alpha^{n})|=|c\gamma^{n} |<1,\] which is false. Therefore, \(\Lambda_{3}\neq 0\). We proceed to apply Theorem 2.1 with the following parameters : \[\eta_{1}:=\left(\frac{d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-(d_{ 2}-d_{3})}{(\rho-1)a}\right),\;\eta_{2}:=\alpha,\,\eta_{3}:=\rho,\,b_{1}:=1,\, b_{2}:=-n,\,b_{3}:=k,\,s:=3.\] Since \(k<n\), we take \(B:=n\). 
Again, taking \(\mathbb{K}=\mathbb{Q}(\eta_{1},\eta_{2},\eta_{3})=\mathbb{Q}(\alpha)\), we have that \(d_{\mathbb{K}}:=[\mathbb{K}:\mathbb{Q}]=3\). Nest, we use the properties of the logarithmic height to estimate \(h(\eta_{1})\) as before, and obtain: \[h(\eta_{1}) =h\left(\frac{d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m} -(d_{2}-d_{3})}{(\rho-1)a}\right)\] \[\leqslant h(d_{1}\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-( d_{2}-d_{3}))+h((\rho-1)a)\] \[\leqslant h(d_{1}\cdot\rho^{\ell+m})+h((d_{1}-d_{2})\cdot\rho^{m })+h(d_{2}-d_{3})+\log 2+h(\rho-1)+h(a)\] \[\leqslant h(d_{1})+(\ell+m)h(\rho)+h(d_{1}-d_{2})+mh(\rho)+h(d_{2} -d_{3})+\log 2+h(\rho-1)+h(a)\] \[\leqslant h(d_{1})+(\ell+m)h(\rho)+h(d_{1})+h(d_{2})+\log 2+mh( \rho)+h(d_{2})+h(d_{3})+\log 2+h(\rho-1)+h(a)\] \[\leqslant 2h(d_{1})+2h(d_{2})+h(d_{3})+(2m+\ell)h(\rho)+h(\rho-1)+h(a )+2\log 2\] \[<6\log(\rho-1)+2\times 3.5\cdot 10^{26}\cdot\log^{3}\rho\cdot(1+ \log n)^{2}+3.74\cdot 10^{13}\cdot\log^{2}\rho\cdot(1+\log n)+\frac{1}{3}\log 31+2 \log 2\] \[<1.4\cdot 10^{27}\cdot\log^{3}\rho\cdot(1+\log n)^{2}.\] Note that in above estimate, we have used the estimates from (16) and (18). We then take \[A_{1}:=4.2\cdot 10^{27}\cdot\log^{3}\rho\cdot(1+\log n)^{2},\,A_{2}:=\log \alpha,\,\text{and}\,A_{3}:=3\log\rho.\] Theorem 2.1 says that \[\log|\Lambda_{3}| >-1.4\cdot 30^{6}\cdot 3^{4.5}\cdot 3^{2}\cdot(1+\log 3)\cdot(1+\log n )\cdot(4.2\cdot 10^{27}\cdot\log^{3}\rho\cdot(1+\log n)^{2})\cdot\log\alpha\cdot 3 \log\rho\] \[>-1.31\cdot 10^{40}\cdot\log^{4}\rho\cdot(1+\log n)^{3}\] and comparison of this inequality with (19) gives \[n\log\alpha-\log 5<1.31\cdot 10^{40}\cdot\log^{4}\rho\cdot(1+\log n)^{3},\] which satisfies to \[n<2.75\cdot 10^{41}\cdot\log^{4}\rho\cdot\log^{3}n,\quad\text{with }1+\log n<2 \log n.\] Next, we apply Lemma 2.2 that enable us to find an upper bound of \(n\), with following parameters \[r:=3,\quad L:=n,\quad\text{and}\quad H:=2.75\cdot 10^{41}\cdot\log^{4}\rho.\] Therefore, we have \[n<2^{3}\cdot 2.75\cdot 10^{41}\cdot\log^{4}\rho\cdot(95.42+4\log\log\rho)^{3},\] which leads to \[n<5.6\cdot 10^{48}\cdot\log^{7}\rho.\] The above inequality holds because form \(\rho\geqslant 2,\) we have \(95.42+4\log\log\rho<136\log\rho.\) By Lemma 3.1, we have that \[\ell+m+k<n<5.6\cdot 10^{48}\cdot\log^{7}\rho.\] This completes the proof of Theorem 1.1. ## 4 The study of the cases \(2\leqslant\rho\leqslant 10\) In this section, we explicitly determine all the Narayana numbers which are concatenations of three repdigits in base \(\rho,\) with \(\rho\) between \(2\) and \(10.\) So our result in this case is Theorem 1.2. Proof of Theorem 1.2.: Then in this range Theorem 1.1 allows us to deduce that all solutions to Diophantine equation (1) satisfy \[\ell+m+k<n<2\times 10^{51}.\] The next step is therefore the reduction of the upper bound above in order to identify the real range in which the possible solutions of (1) are found. For this, we proceed in three steps as follow. **Step 1:** Using (14), let \[\Gamma_{1}:=-\log(\Lambda_{1}+1)=(\ell+m+k)\log\rho-n\log\alpha-\log\left( \frac{(\rho-1)a}{d_{1}}\right).\] Notice that (14) is rewritten as \[\left|e^{-\Gamma_{1}}-1\right|<\frac{3}{\rho^{\ell-1}}.\] Observe that \(-\Gamma_{1}\neq 0\), since \(e^{-\Gamma_{1}}-1=\Lambda_{1}\neq 0\). 
Assume that \(\ell\geqslant 4\), then \[\left|e^{-\Gamma_{1}}-1\right|<\frac{3}{\rho^{\ell-1}}<\frac{1}{2}.\] Therefore, by Lemma 2.4, we have \[|\Gamma_{1}|<\frac{6}{\rho^{\ell-1}}.\] Substituting \(\Gamma_{1}\) in the above inequality with its value and dividing through by \(\log\alpha\), we obtain \[\left|(\ell+m+k)\left(\frac{\log\rho}{\log\alpha}\right)-n+\left(\frac{\log(d_{ 1}/(\rho-1)a)}{\log\alpha}\right)\right|<\frac{6/\log\alpha}{\rho^{\ell-1}}.\] Thus to apply Lemma 2.3 we can choose \[\tau:=\frac{\log\rho}{\log\alpha},\quad\mu:=\frac{\log(d_{1}/(\rho-1)a)}{\log \alpha},\quad A:=\frac{6}{\log\alpha},\quad B:=\rho,\quad\text{and}\quad w:= \ell-1\quad\text{with}\quad 1\leqslant d_{1}\leqslant\rho-1.\] Because \(\ell+m+k<n<2\times 10^{51},\) we can take \(M:=2\times 10^{51}\). So, for the remaining proof, we use _Mathematica_ to apply Lemma 2.3. For the computations, if the first convergent \(q_{t}\) such that \(q_{t}>6M\) does not satisfy the condition \(\varepsilon>0,\) then we use the next convergent until we find the one that satisfies the conditions. Thus, we have that \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(\rho\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline \(q_{t}\) & \(q_{115}\) & \(q_{93}\) & \(q_{107}\) & \(q_{108}\) & \(q_{86}\) & \(q_{101}\) & \(q_{107}\) & \(q_{95}\) & \(q_{93}\) \\ \hline \(\varepsilon\geqslant\) & 0.28 & 0.21 & 0.13 & 0.24 & 0.03 & 0.03 & 0.02 & 0.05 & 0.03 \\ \hline \(\ell-1\leqslant\) & 183 & 114 & 92 & 78 & 70 & 64 & 61 & 57 & 55 \\ \hline \end{tabular} Therefore, \[1\leqslant\ell\leqslant\frac{\log((6/\log\alpha)\cdot q_{115}/0.28)}{\log 2}+1 \leqslant 184.\] **Step 2:** For the next step we have to reduce the upper bound on \(m\). To do this, let us consider \[\Gamma_{2}:=-\log(\Lambda_{2}+1)=(m+k)\log\rho-n\log\alpha+\log\left(\frac{d_ {1}\cdot\rho^{\ell}-(d_{1}-d_{2})}{(\rho-1)a}\right).\] Thus inequalities (17) become \[\left|e^{-\Gamma_{2}}-1\right|<\frac{2}{\rho^{m-1}}<\frac{1}{2},\] which holds for \(m\geqslant 3.\) It follows from Lemma 2.4 that \[\left|(m+k)\left(\frac{\log\rho}{\log\alpha}\right)-n+\left(\frac{\log((d_{1} \cdot\rho^{\ell}-(d_{1}-d_{2}))/(\rho-1)a)}{\log\alpha}\right)\right|<\frac{4 /\log\alpha}{\rho^{m-1}}. 
\tag{20}\] Hence, since the conditions of Lemma 2.3 are satisfied, we may now apply it to inequality (20) with the following data \[\tau:=\frac{\log\rho}{\log\alpha},\quad\mu:=\frac{\log((d_{1}\cdot\rho^{\ell} -(d_{1}-d_{2}))/(\rho-1)a)}{\log\alpha},\quad A:=\frac{4}{\log\alpha},\quad B: =\rho,\quad\text{and}\quad w:=m-1\] with \(1\leqslant d_{1},d_{2}\leqslant\rho-1\) and \(1\leqslant\ell\leqslant 184.\) As \(m+k<n<2\times 10^{51},\) we can take \(M:=2\times 10^{51}.\) With Mathematica we got the following results \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(\rho\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline \(q_{t}\) & \(q_{115}\) & \(q_{93}\) & \(q_{107}\) & \(q_{108}\) & \(q_{86}\) & \(q_{101}\) & \(q_{107}\) & \(q_{95}\) & \(q_{93}\) \\ \hline \(\varepsilon\geqslant\) & 0.0009 & 0.0001 & 0.001 & 0.0002 & 0.0001 & 0.0006 & 0.0002 & \(10^{-6}\) & 0.00003 \\ \hline \(m-1\leqslant\) & 191 & 120 & 95 & 82 & 73 & 66 & 63 & 61 & 58 \\ \hline \end{tabular} In all cases we can conclude that \[1\leqslant m\leqslant\frac{\log((4/\log\alpha)\cdot q_{115}/0.0009)}{\log\rho}+ 1\leqslant 192.\] **Step 3:** Finaly, to reduce the bound on \(n\) we have to choose \[\Gamma_{3}:=\log(\Lambda_{3}+1)=(m+k)\log\rho-n\log\alpha+\log\left(\frac{d_{1 }\cdot\rho^{\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-(d_{2}-d_{3})}{(\rho-1)a}\right).\] By inequalities (19) we have that \[\left|e^{-\Gamma_{3}}-1\right|<\frac{5}{\rho^{n}}<\frac{1}{2},\] which is valid for \(n\geqslant 3\) and \(\rho\geqslant 2\). It follows from Lemma 2.4 that \[\left|k\left(\frac{\log\rho}{\log\alpha}\right)-n+\frac{\log((d_{1}\cdot\rho^ {\ell+m}-(d_{1}-d_{2})\cdot\rho^{m}-(d_{2}-d_{3}))/(\rho-1)a)}{\log\alpha} \right|<\frac{10/\log\alpha}{\rho^{n}}. \tag{21}\] Now we have to apply Lemma 2.3 to (21) by taking the following parameters \[\tau:=\frac{\log\rho}{\log\alpha},\quad\mu:=\frac{\log((d_{1}\cdot\rho^{\ell +m}-(d_{1}-d_{2})\cdot\rho^{m}-(d_{2}-d_{3}))/(\rho-1)a)}{\log\alpha},\quad A: =\frac{10}{\log\alpha},\quad B:=\rho\] and \(w:=n\) with \(1\leqslant d_{1},d_{2}\leqslant\rho-1,\,1\leqslant\ell\leqslant 184\) and \(1\leqslant m\leqslant 183\). Using the fact that \(k<n<2\times 10^{51}\), we can take \(M:=2\times 10^{51}\) \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \(\rho\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline \(q_{t}\) & \(q_{115}\) & \(q_{93}\) & \(q_{107}\) & \(q_{108}\) & \(q_{86}\) & \(q_{101}\) & \(q_{107}\) & \(q_{95}\) & \(q_{93}\) \\ \hline \(\varepsilon\geq\) & \(10^{-6}\) & \(10^{-7}\) & \(10^{-6}\) & \(10^{-7}\) & \(10^{-7}\) & \(10^{-8}\) & \(10^{-7}\) & \(10^{-7}\) & \(10^{-8}\) \\ \hline \(n\leqslant\) & 201 & 126 & 100 & 87 & 76 & 72 & 67 & 63 & 64 \\ \hline \end{tabular} It follows from the above table that \[1\leqslant n\leqslant\frac{\log((10/\log\alpha)\cdot q_{115}/10^{-6})}{\log 2} \leqslant 201,\] which is valid for all \(\rho\) such that \(2\leqslant\rho\leqslant 10\). In the light of the above results, we need to check equation (1) in the cases \(2\leqslant\rho\leqslant 10\) for \(1\leq d_{1},d_{2},d_{3}\leq 9\), \(1\leq n\leq 201,\,1\leq k\leq 201,\,1\leq\ell\leq 184\) and \(1\leq m\leq 192\). A quick inspection using Maple reveals that Diophantine equation (1) has only the solution listed in the statement of Theorem 1.2. This completes the proof of Theorem 1.2. ## Acknowledgements The first author is partially supported by Universite de Kara (Togo) and the second author is supported by IMSP, Institut de Mathematiques et de Sciences Physiques de l'Universite d'Abomey-Calavi. 
This project was initiated when the first author visited IMSP for a Number Theory research school. He thanks the authorities for the warm hospitality and the working environment.
2307.09561
Non-distributive description logic
We define LE-ALC, a generalization of the description logic ALC based on the propositional logic of general (i.e. not necessarily distributive) lattices, and semantically interpreted on relational structures based on formal contexts from Formal Concept Analysis (FCA). The description logic LE-ALC allows us to formally describe databases with objects, features, and formal concepts, represented according to FCA as Galois-stable sets of objects and features. We describe ABoxes and TBoxes in LE-ALC, provide a tableaux algorithm for checking the consistency of LE-ALC knowledge bases with acyclic TBoxes, and show its termination, soundness and completeness. Interestingly, consistency checking for LE-ALC is in PTIME for acyclic and completely unravelled TBoxes, while the analogous problem in the classical ALC setting is PSPACE-complete.
Ineke van der Berg, Andrea De Domenico, Giuseppe Greco, Krishna B. Manoorkar, Alessandra Palmigiano, Mattia Panettiere
2023-07-18T19:19:20Z
http://arxiv.org/abs/2307.09561v3
# Non-distributive description logic+ ###### Abstract We define \(\mathrm{LE}\)-\(\mathcal{ACC}\), a generalization of the description logic \(\mathcal{ACC}\) based on the propositional logic of general (i.e. not necessarily distributive) lattices, and semantically interpreted on relational structures based on formal contexts from Formal Concept Analysis (FCA). The description logic \(\mathrm{LE}\)-\(\mathcal{ACC}\) allows us to formally describe databases with objects, features, and formal concepts, represented according to FCA as Galois-stable sets of objects and features. We describe ABoxes and TBoxes in \(\mathrm{LE}\)-\(\mathcal{ACC}\), provide a tableaux algorithm for checking the consistency of \(\mathrm{LE}\)-\(\mathcal{ACC}\) knowledge bases with acyclic TBoxes, and show its termination, soundness and completeness. Interestingly, consistency checking for \(\mathrm{LE}\)-\(\mathcal{ACC}\) with acyclic TBoxes is in PTIME, while the complexity of the consistency checking of classical \(\mathcal{ACC}\) with acyclic TBoxes is PSPACE-complete. _Keywords: Description logic, Tableaux algorithm, Formal Concept Analysis, LE-logics._ ## 1 Introduction Description Logic (DL) [2] is a class of logical formalisms, typically based on classical first-order logic, and widely used in Knowledge Representation and Reasoning to describe and reason about relevant concepts in a given application domain and their relationships. Since certain laws of classical logic fail in certain application domains, in recent years, there has been a growing interest in developing versions of description logics on weaker (non-classical) propositional bases. For instance, in [20], an intuitionistic version of the DL \(\mathcal{ACC}\) has been introduced for resolving some inconsistencies arising from the classical law of excluded middle when applying \(\mathcal{ACC}\) to legal domains. In [6, 19], many-valued (fuzzy) description logics have been introduced to account for uncertainty and imprecision in processing information in the Semantic Web, and recently, frameworks of non-monotonic description logics have been introduced [14, 18, 15]. One domain of application in which there is no consensus as to how classical logic should be applied is Formal Concept Analysis (FCA). In this setting, formal concepts arise from formal contexts \(\mathbb{P}=(A,X,I)\), where \(A\) and \(X\) are sets (of objects and features respectively), and \(I\subseteq A\times X\). Specifically, formal concepts are represented as Galois-stable tuples \((B,Y)\) such that \(B\subseteq A\) and \(Y\subseteq X\) and \(B=\{a\in A\mid\forall y(y\in Y\Rightarrow aIy)\}\) and \(Y=\{x\in X\mid\forall b(b\in B\Rightarrow bIx)\}\). The formal concepts arising from a formal context are naturally endowed with a partial order (the sub-concept/super-concept relation) as follows: \((B_{1},Y_{1})\leq(B_{2},Y_{2})\) iff \(B_{1}\subseteq B_{2}\) iff \(Y_{2}\subseteq Y_{1}\). This partial order is a complete lattice, which is in general non-distributive. The failure of distributivity in the lattice of formal concepts introduces a tension between classical logic and the natural logic of formal concepts in FCA. This failure motivated the introduction of lattice-based propositional (modal) logics as the (epistemic) logics of formal concepts [10, 9]. Complete relational semantics of these logics is given by _enriched formal contexts_ (cf. Section 2.2), relational structures \(\mathbb{F}=(\mathbb{P},\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\) based on formal contexts. 
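As a concrete illustration of these notions, the following self-contained Python sketch computes all formal concepts of a small toy context (the three-object context below is an invented example, not taken from the paper) and exhibits a failure of distributivity in the resulting concept lattice.

```python
# Formal concepts of a toy context (A, X, I) and a witness for non-distributivity.
# The specific context is an invented illustration.
from itertools import chain, combinations

A = {'a', 'b', 'c'}                 # objects
X = {'x', 'y', 'z'}                 # features
I = {('a', 'x'), ('b', 'y'), ('c', 'z')}

def up(B):                          # features shared by all objects in B
    return frozenset(f for f in X if all((o, f) in I for o in B))

def down(Y):                        # objects having all features in Y
    return frozenset(o for o in A if all((o, f) in I for f in Y))

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# Every formal concept arises as (down(up(B)), up(B)) for some B ⊆ A.
concepts = {(down(up(B)), up(B)) for B in powerset(A)}

def meet(c1, c2):                   # greatest common subconcept: intersect the extents
    B = c1[0] & c2[0]
    return (B, up(B))

def join(c1, c2):                   # smallest common superconcept: intersect the intents
    Y = c1[1] & c2[1]
    return (down(Y), Y)

def concept_of(obj):                # concept generated by a single object
    return (down(up({obj})), up({obj}))

c1, c2, c3 = concept_of('a'), concept_of('b'), concept_of('c')
lhs = meet(c1, join(c2, c3))                    # c1 ∧ (c2 ∨ c3)
rhs = join(meet(c1, c2), meet(c1, c3))          # (c1 ∧ c2) ∨ (c1 ∧ c3)

print(len(concepts))                                 # -> 5: top, bottom and three incomparable concepts
print(sorted(lhs[0]), sorted(rhs[0]), lhs == rhs)    # -> ['a'] [] False: distributivity fails
```

The five concepts obtained here form the modular, non-distributive lattice usually denoted \(M_{3}\), which is one simple way of seeing why lattice-based, rather than Boolean, propositional logic is the natural base for reasoning about formal concepts.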
In this paper, we introduce LE-\(\mathcal{ALC}\), a lattice-based version of \(\mathcal{ALC}\) which stands in the same relation to the lattice-based modal logic of formal concepts [12] as classical \(\mathcal{ALC}\) stands in relation to classical modal logic: the language and semantics of LE-\(\mathcal{ALC}\) is based on enriched formal contexts and their associated modal algebras. Thus, just like the language of \(\mathcal{ALC}\) can be seen as a hybrid modal logic language interpreted on Kripke frames, the language of LE-\(\mathcal{ALC}\) can be regarded as a hybrid modal logic language interpreted on enriched formal contexts. FCA and DL are different and well known approaches in the formal representation of concepts (or categories). They have been used together for several purposes [1, 4, 17]. Thus, providing a DL framework which allows us to describe formal contexts (possibly enriched, e.g. with additional relations on them) would be useful in relating these frameworks both at a theoretical and at a practical level. Proposals to connect FCA and DL have been made, in which concept lattices serve as models for DL concepts. Shilov and Han [21] interpret the positive fragment of \(\mathcal{ALC}\) concept names over concept lattices and show that this interpretation is compatible with standard Kripke models for \(\mathcal{ALC}\). A similar approach is used by Wrum [22] in which complete semantics for the (full) Lambek calculus is defined on concept lattices. The approach of the present paper for defining and interpreting non-distributive description logic and modal logic in relation with concept lattices with operators differs from the approaches mentioned above in that it is based on duality-theoretic insights (cf. [10]). This allows us not only to show that the DL framework introduced in the present paper is consistent with the standard DL setting and its interpretation on Kripke models, but also to show that several properties of these logics and the meaning of their formulas can also be "lifted" from the classical (distributive) to non-distributive settings (cf. [7, 12, 8] for extended discussions). The main technical contribution of this paper is a tableaux algorithm for checking the consistency of LE-\(\mathcal{ALC}\) ABoxes. We show that the algorithm is terminating, sound and complete. Interestingly, this algorithm has a polynomial time complexity, compared to the complexity of the consistency checking of classical \(\mathcal{ALC}\) ABoxes which is PSPACE-complete. The algorithm also constructs a model for the given ABox which is polynomial in size. Thus, it also implies that the corresponding hybrid modal logic has the finite model property. _Structure of the paper._ In Section 2, we give the necessary preliminaries on the DL \(\mathcal{ALC}\), lattice-based modal logics and their relational semantics. In Section 3, we introduce the syntax and the semantics of LE-\(\mathcal{ALC}\). In Section 4, we introduce a tableaux algorithm for checking the consistency of LE-\(\mathcal{ALC}\) ABoxes and show that it is terminating, sound and complete. In Section 5, we conclude and discuss some future research directions. ## 2 Preliminaries ### Description logic \(\mathcal{ALC}\) Let \(\mathcal{C}\) and \(\mathcal{R}\) be disjoint sets of primitive or atomic _concept names_ and _role names_. The set of _concept descriptions_ or compound concept names over \(\mathcal{C}\) and \(\mathcal{R}\) are defined recursively as follows. 
\[C:=A\mid\top\mid\bot\mid C\wedge C\mid C\lor C\mid\neg C\mid\exists r.C\mid \forall r.C\] where \(A\in\mathcal{C}\) and \(r\in\mathcal{R}\). An _interpretation_ is a tuple \(\mathrm{I}=(\Delta^{\mathrm{I}},\cdot^{\mathrm{I}})\) s.t. \(\Delta^{\mathrm{I}}\) is a non-empty set and \(\cdot^{\mathrm{I}}\) maps every concept name \(A\in\mathcal{C}\) to a set \(A^{\mathrm{I}}\subseteq\Delta^{\mathrm{I}}\), and every role name \(r\in\mathcal{R}\) to a relation \(r^{\mathrm{I}}\subseteq\Delta^{\mathrm{I}}\times\Delta^{\mathrm{I}}\). This mapping extends to all concept descriptions as follows: \[\begin{array}{lcl}\top^{\mathrm{I}}&=&\Delta^{\mathrm{I}}&\bot^{\mathrm{I}}= \varnothing\\ (C\wedge D)^{\mathrm{I}}&=&C^{\mathrm{I}}\cap D^{\mathrm{I}}&(C\lor D)^{ \mathrm{I}}=C^{\mathrm{I}}\cup D^{\mathrm{I}}\\ (\exists r.C)^{\mathrm{I}}&=&\{d\in\Delta^{\mathrm{I}}\mid\exists e((d,e)\in r ^{\mathrm{I}}\&\&\ e\in C^{\mathrm{I}}\}&(\neg C)^{\mathrm{I}}=\Delta^{\mathrm{ I}}\setminus C^{\mathrm{I}}\\ (\forall r.C)^{\mathrm{I}}&=&\{d\in\Delta^{\mathrm{I}}\mid\forall e((d,e)\in r ^{\mathrm{I}}\Rightarrow e\in C^{\mathrm{I}}\}&\end{array}\] Let \(\mathcal{S}\) be a set of individual names disjoint from \(\mathcal{C}\) and \(\mathcal{R}\), such that for every \(a\) in \(\mathcal{S}\), \(a^{\mathrm{I}}\in\Delta^{\mathrm{I}}\). For any \(a,b\in\mathcal{S}\), any \(C\in\mathcal{C}\) and \(r\in\mathcal{R}\), an expression of the form \(a:C\) (resp. \((a,b):r\)) is an \(\mathcal{ALC}\)_concept assertion_ (resp. _role assertion_). A finite set of \(\mathcal{ALC}\) concept and role assertions is an \(\mathcal{ALC}\)_ABox_. An assertion \(a:C\) (resp. \((a,b):r\)) is _satisfied_ in an interpretation I if \(a^{\mathrm{I}}\in C^{\mathrm{I}}\) (resp. if \((a^{\mathrm{I}},b^{\mathrm{I}})\in r^{\mathrm{I}}\)). An \(\mathcal{ALC}\)_TBox_ is a finite set of expressions of the form \(C_{1}\equiv C_{2}\). An interpretation I _satisfies_\(C_{1}\equiv C_{2}\) iff \(C_{1}^{\mathrm{I}}=C_{2}^{\mathrm{I}}\). An \(\mathcal{ALC}\)_knowledge base_ is a tuple \((\mathcal{A},\mathcal{T})\), where \(\mathcal{A}\) is an \(\mathcal{ALC}\) ABox, and \(\mathcal{T}\) is an \(\mathcal{ALC}\) TBox. An interpretation I is a _model_ for a knowledge base \((\mathcal{A},\mathcal{T})\) iff it satisfies all members of \(\mathcal{A}\) and \(\mathcal{T}\). A knowledge base \((\mathcal{A},\mathcal{T})\) is _consistent_ if there is a model for it. An ABox \(\mathcal{A}\) (resp. TBox \(\mathcal{T}\)) is _consistent_ if the knowledge base \((\mathcal{A},\varnothing)\) (resp. \((\varnothing,\mathcal{T})\)) is consistent. An \(\mathcal{ALC}\)_concept definition_ in \(T\) is an expression of the form \(A\equiv C\) where \(A\) is an atomic concept. We say that \(A\)_directly uses_\(B\) if there is a concept definition \(A\equiv C\) in \(\mathcal{T}\) such that \(B\) occurs in \(C\). We say that \(A\)_uses_\(B\) if \(A\) directly uses \(B\), or if there is a concept name \(B^{\prime}\) such that \(A\) uses \(B^{\prime}\) and \(B^{\prime}\) directly uses \(B\). A finite set \(\mathcal{T}\) of concept definitions is an _acyclic_ TBox if 1. there is no concept name in \(\mathcal{T}\) that uses itself, 2. no concept name occurs more than once on the left-hand side of a concept definition in \(\mathcal{T}\). Checking the consistency of a knowledge base is a key problem in description logics, usually solved via tableaux algorithms. 
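Deciding consistency asks whether some interpretation satisfying a knowledge base exists; the easier direction, checking a fixed interpretation, follows the clauses above directly and can be sketched in a few lines. The toy domain, the atomic concepts and the role used below are invented for illustration.

```python
# Evaluating ALC concept descriptions over a fixed finite interpretation.
# The domain, the atomic concepts A, B and the role r below are invented for illustration.

Delta = {1, 2, 3}
atomic = {'A': {1, 2}, 'B': {2, 3}}
roles = {'r': {(1, 2), (2, 3)}}

def ext(C):
    """Extension C^I of a concept description C, written as nested tuples."""
    if C == 'top':
        return set(Delta)
    if C == 'bot':
        return set()
    if isinstance(C, str):
        return set(atomic[C])
    op = C[0]
    if op == 'not':
        return Delta - ext(C[1])
    if op == 'and':
        return ext(C[1]) & ext(C[2])
    if op == 'or':
        return ext(C[1]) | ext(C[2])
    r, body = roles[C[1]], ext(C[2])
    if op == 'exists':              # (exists r.C)^I
        return {d for d in Delta if any((d, e) in r and e in body for e in Delta)}
    if op == 'forall':              # (forall r.C)^I
        return {d for d in Delta if all((d, e) not in r or e in body for e in Delta)}
    raise ValueError(C)

print(ext(('exists', 'r', ('and', 'A', 'B'))))   # -> {1}: only 1 has an r-successor in A ∧ B (namely 2)
print(ext(('forall', 'r', 'B')))                 # -> {1, 2, 3}: every r-successor (if any) lies in B
# An ABox assertion a:C is satisfied iff a^I ∈ C^I; e.g. with a^I = 1:
print(1 in ext(('exists', 'r', 'A')))            # -> True
```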
In the \(\mathcal{ALC}\) case, checking the consistency of any knowledge base is EXPTIME-complete while checking the consistency of a knowledge base with acyclic TBoxes is PSPACE-complete [2]. ### Basic normal non-distributive modal logic and its semantics The logic introduced in this section is part of a family of lattice-based logics, sometimes referred to as _LE-logics_ (cf. [11]), which have been studied in the context of a research program on the logical foundations of categorization theory [10, 9, 8, 12]. Let \(\mathsf{Prop}\) be a (countable) set of atomic propositions. The language \(\mathcal{L}\) is defined as follows: \[\varphi:=\bot\mid\top\mid p\mid\varphi\land\varphi\mid\varphi\lor\varphi\mid \Box\varphi\mid\Diamond\varphi,\] where \(p\in\mathsf{Prop}\), and \(\Box\in\mathcal{G}\) and \(\Diamond\in\mathcal{F}\) for finite sets \(\mathcal{F}\) and \(\mathcal{G}\) of unary \(\Diamond\)-type (resp. \(\Box\)-type) modal operators. The _basic_, or _minimal normal \(\mathcal{L}\)-logic_ is a set \(\mathbf{L}\) of sequents \(\varphi\vdash\psi\), with \(\varphi,\psi\in\mathcal{L}\), containing the following axioms for every \(\Box\in\mathcal{F}\) and \(\Diamond\in\mathcal{G}\): \[\begin{array}{ sets. A _formal concept_ of a polarity \(\mathbb{P}=(A,X,I)\) is a tuple \(c=(B,Y)\) such that \(B\subseteq A\) and \(Y\subseteq X\), and \(B=Y^{\downarrow}\) and \(Y=B^{\uparrow}\). The subset \(B\) (resp. \(Y\)) is the _extension_ (resp. the _intension_) of \(c\) and is denoted by \([\![c]\!]\) (resp. \(([\![c]\!])\). It is well known (cf. [13]) that the sets \(B\) and \(Y\) are Galois-stable, and that the set of formal concepts of a polarity \(\mathbb{P}\), with the order defined by \[c_{1}\leq c_{2}\quad\text{iff}\quad[\![c_{1}]\!]\subseteq[\![c_{2}]\!]\quad \text{iff}\quad([\![c_{2}]\!])\subseteq([\![c_{1}]\!],\] forms a complete lattice \(\mathbb{P}^{+}\), namely the _concept lattice_ of \(\mathbb{P}\). For the language \(\mathcal{L}\) defined above, an _enriched formal \(\mathcal{L}\)-context_ is a tuple \(\mathbb{F}=(\mathbb{P},\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\), where \(\mathcal{R}_{\Box}=\{R_{\Box}\subseteq A\times X\mid\Box\in\mathcal{G}\}\) and \(\mathcal{R}_{\Diamond}=\{R_{\Diamond}\subseteq X\times A\mid\Diamond\in \mathcal{F}\}\) are sets of \(I\)_-compatible_ relations, that is, for all \(\Box\in\mathcal{G}\), \(\Diamond\in\mathcal{F}\), \(a\in A\), and \(x\in X\), the sets \(R_{\Box}^{(0)}[x]\), \(R_{\Box}^{(1)}[a]\), \(R_{\Diamond}^{(0)}[a]\), \(R_{\Diamond}^{(1)}[x]\) are Galois-stable in \(\mathbb{P}\). For each \(\Box\in\mathcal{G}\) and \(\Diamond\in\mathcal{F}\), their associated relations \(R_{\Box}\) and \(R_{\Diamond}\) provide their corresponding semantic interpretations as operations \([R_{\Box}]\) and \(\langle R_{\Diamond}\rangle\) on the concept lattice \(\mathbb{P}^{+}\) defined as follows: For any \(c\in\mathbb{P}^{+}\), \([R_{\Box}]c=(R_{\Box}^{(0)}[(\![c]\!]),I^{(1)}[R_{\Box}^{(0)}[(\![c]\!]]))\quad \text{and}\quad\langle R_{\Diamond}\rangle c=(I^{(0)}[R_{\Diamond}^{(0)}[ \![[c]\!]]),R_{\Diamond}^{(0)}[\![[c]\!]])\). We refer to the algebra \(\mathbb{F}^{+}=(\mathbb{P}^{+},\{[R_{\Box}]\}_{\Diamond\in\mathcal{G}},\{ \langle R_{\Diamond}\rangle\}_{\Diamond\in\mathcal{F}})\) as the _complex algebra_ of \(\mathbb{F}\). A _valuation_ on such an \(\mathbb{F}\) is a map \(V\colon\mathsf{Prop}\to\mathbb{P}^{+}\). For each \(p\in\mathsf{Prop}\), we let \([\![p]\!]:=[\![V(p)]\!]\) (resp. \((\![p]\!):=(\![\![V(p)]\!)\)) denote the extension (resp. 
intension) of the interpretation of \(p\) under \(V\). A _model_ is a tuple \(\mathbb{M}=(\mathbb{F},V)\) where \(\mathbb{F}=(\mathbb{P},\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\) is an enriched formal context and \(V\) is a valuation on \(\mathbb{F}\). For every \(\varphi\in\mathcal{L}\), we let \([\![\varphi]\!]_{\mathbb{M}}:=[\![V(\varphi)]\!]\) (resp. \((\![\varphi]\!)_{\mathbb{M}}:=(\![V(\varphi)]\!)\)) denote the extension (resp. intension) of the interpretation of \(\varphi\) under the homomorphic extension of \(V\). The following 'forcing' relations can be recursively defined as follows: \(\begin{array}{lcl}\mathbb{M},a\Vdash p&\text{iff }a\in[\![p]\!]_{\mathbb{M}}& \mathbb{M},x\succ p&\text{iff }x\in(\![p]\!)_{\mathbb{M}}\\ \mathbb{M},a\Vdash\top&\text{always}&\mathbb{M},x\succ\top&\text{iff }aIx\text{ for all }a\in A\\ \mathbb{M},x\succ\bot&\text{always}&\mathbb{M},a\Vdash\bot&\text{iff }aIx\text{ for all }x\in X\\ \mathbb{M},a\Vdash\varphi\land\psi&\text{iff }\mathbb{M},a\Vdash\varphi\text{ and }\mathbb{M},a\Vdash\psi&\text{if }\mathbb{M},x\succ\varphi\land\psi&\text{iff }(\forall a\in A)\ (\mathbb{M},a\Vdash\varphi\land\psi \Rightarrow aIx)\\ \mathbb{M},x\succ\varphi\lor\psi&\text{iff }\mathbb{M},x\succ\varphi\text{ and } \mathbb{M},x\succ\psi&\text{if }\mathbb{M},a\Vdash\varphi\lor\psi&\text{if }(\forall x\in X)\ (\mathbb{M},x\succ\varphi\lor\psi \Rightarrow aIx).\end{array}\) As to the interpretation of modal formulas, for every \(\Box\in\mathcal{G}\) and \(\Diamond\in\mathcal{F}\): \(\begin{array}{lcl}\mathbb{M},a\Vdash\Box\varphi&\text{iff }(\forall x\in X)( \mathbb{M},x\succ\varphi\Rightarrow aR_{\Box}x)&\mathbb{M},x\succ\Box\varphi& \text{iff }(\forall a\in A)(\mathbb{M},a\Vdash\Box\varphi \Rightarrow aIx)\\ \mathbb{M},x\succ\Diamond\varphi&\text{iff for all }a\in A\text{, if }\mathbb{M},a\Vdash\varphi& \text{then }xR_{\Diamond}a&\mathbb{M},a\Vdash\Diamond\varphi& \text{iff }(\forall x\in X)(\mathbb{M},x\succ\Diamond\varphi \Rightarrow aIx)\end{array}\) The definition above ensures that, for any \(\mathcal{L}\)-formula \(\varphi\), \(\begin{array}{lcl}\mathbb{M},a\Vdash\varphi&\text{iff }a\in[\![\varphi]\!]_{\mathbb{M}}, \quad\text{and}\quad\mathbb{M},x\succ\varphi&\text{iff }x\in(\![\varphi]\!]_{\mathbb{M}}.\\ \mathbb{M}\models\varphi\vdash\psi&\text{iff }\quad[\![\varphi]\!]_{\mathbb{M}} \subseteq[\![\psi]\!]_{\mathbb{M}}\quad\text{iff }\quad([\![\psi]\!]_{\mathbb{M}} \subseteq(\![\varphi]\!)_{\mathbb{M}}.\end{array}\) The interpretation of the propositional connectives \(\vee\) and \(\wedge\) in the framework described above reproduces the standard notion of join and the meet of formal concepts used in FCA. The interpretation of the operators \(\Box\) and \(\Diamond\) is motivated by algebraic properties and duality theory for modal operators on lattices (cf. [12, Section 3] for an expanded discussion). In [8, Proposition 3.7], it is shown that the semantics of LE-logics is compatible with Kripke semantics for classical modal logic, and thus, LE-logics are indeed generalizations of classical modal logic. 
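To see the operations \([R_{\Box}]\) and \(\langle R_{\Diamond}\rangle\) at work, the sketch below instantiates them on a small enriched context. The context and the additional relations are invented for illustration, and \(I\)-compatibility is checked explicitly rather than assumed.

```python
# [R_box]c and <R_dia>c on a toy enriched formal context.
# The context and the relations R_box, R_dia are invented; I-compatibility is checked, not assumed.

A = {'a', 'b', 'c'}
X = {'x', 'y', 'z'}
I = {('a', 'x'), ('b', 'y'), ('c', 'z')}

def up(B):   return frozenset(f for f in X if all((o, f) in I for o in B))   # I^(1)[B]
def down(Y): return frozenset(o for o in A if all((o, f) in I for f in Y))   # I^(0)[Y]

R_box = {('a', 'x')}                      # R_box ⊆ A × X
R_dia = {(f, o) for (o, f) in I}          # R_dia ⊆ X × A, here simply the converse of I

def stable_objects(B):  return down(up(B)) == frozenset(B)
def stable_features(Y): return up(down(Y)) == frozenset(Y)

# I-compatibility: every "row" and "column" of the extra relations must be Galois-stable.
assert all(stable_objects({o for o in A if (o, f) in R_box}) for f in X)
assert all(stable_features({f for f in X if (o, f) in R_box}) for o in A)
assert all(stable_objects({o for o in A if (f, o) in R_dia}) for f in X)
assert all(stable_features({f for f in X if (f, o) in R_dia}) for o in A)

def box(c):   # [R_box]c = (R_box^(0)[intent], I^(1)[R_box^(0)[intent]])
    _, Y = c
    ext = frozenset(o for o in A if all((o, f) in R_box for f in Y))
    return (ext, up(ext))

def dia(c):   # <R_dia>c = (I^(0)[R_dia^(0)[extent]], R_dia^(0)[extent])
    B, _ = c
    intent = frozenset(f for f in X if all((f, o) in R_dia for o in B))
    return (down(intent), intent)

def concept_of(obj): return (down(up({obj})), up({obj}))

print(box(concept_of('a')))   # -> (frozenset({'a'}), frozenset({'x'})): fixed by [R_box]
print(box(concept_of('b')))   # -> the bottom concept: no object is R_box-related to feature y
print(dia(concept_of('a')))   # with R_dia = the converse of I, <R_dia> acts as the identity on concepts
```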
This interpretation is further justified in [8, Section 4] by noticing that, under the interpretations of the relation \(I\) as \(aIx\) iff "object \(a\) has feature \(x\)" and \(R=R_{\Box}=R_{\Diamond}^{-1}\) as \(aRx\) iff "there is evidence that object \(a\) has feature \(x\)", then, for any concept \(c\), the extents of concepts \(\Box c\) and \(\Diamond c\) can be interpreted as "the set of objects which _certainly_ belong to \(c\)" (upper approximation), and "the set of objects which _possibly_ belong to \(c\)" (lower approximation) respectively. Thus, the interpretations of \(\Box\) and \(\Diamond\) have similar meaning in the LE-logic as in the classical modal logic. A similar justification regarding similarity of epistemic interpretations of \(\Box\) in classical and lattice-based modal logics is discussed in [9]. This transfer of meaning of modal axioms from classical modal logic to LE-logics has been investigated as a general phenomenon in [7, Section 4.3], [12]. ## 3 LE Description logic In this section, we introduce the non-classical DL LE-\(\mathcal{ALC}\), so that LE-\(\mathcal{ALC}\) will be in same relation with LE-logic as \(\mathcal{ALC}\) is with classical modal logic. This similarity extends to the models we will introduce for LE-\(\mathcal{ALC}\): in the same way as Kripke models of classical modal logic are used as models of \(\mathcal{ALC}\), enriched formal contexts, which provide complete semantics for LE-logic, will serve as models of LE-\(\mathcal{ALC}\). In this specific respect, LE-\(\mathcal{ALC}\) can be seen as a generalization of the positive fragment (i.e. the fragment with no negations in concept names) of \(\mathcal{ALC}\) in which we do not assume distributivity laws to hold for concepts. Consequently, the language of LE-\(\mathcal{ALC}\) contains individuals of two types, usually interpreted as the _objects_ and _features_ of the given database or categorization. Let OBJ and FEAT be disjoint sets of individual names for objects and features. The set \(\mathcal{R}\) of the role names for LE-\(\mathcal{ALC}\) is the union of three disjoint sets of relations: (1) the singleton set \(\{I\mid I\subseteq\textsf{OBJ}\times\textsf{FEAT}\}\); (2) a set \(\mathcal{R}_{\Box}=\{R_{\Box}\subseteq\textsf{OBJ}\times\textsf{FEAT}\mid \Box\in\mathcal{G}\}\); (3) a set \(\mathcal{R}_{\Diamond}=\{R_{\Diamond}\subseteq\textsf{FEAT}\times\textsf{OBJ }\mid\Diamond\in\mathcal{G}\}\). While \(I\) is intended to be interpreted as the incidence relation of formal concepts, and encodes information on which objects have which features, the relations in \(\mathcal{R}_{\Box}\) and \(\mathcal{R}_{\Diamond}\) encode additional relationships between objects and features (cf. [8] for an extended discussion). For any set \(\mathcal{C}\) of atomic concept names, the language of LE-\(\mathcal{ALC}\) concepts is: \(C:=D\mid C_{1}\wedge C_{2}\mid C_{1}\lor C_{2}\mid\top\mid\bot\mid\langle R_{ \Diamond}\rangle C\mid[R_{\Box}]C\) where \(D\in\mathcal{C}\), \(R_{\Box}\in\mathcal{R}_{\Box}\) and \(R_{\Diamond}\in\mathcal{R}_{\Diamond}\). This language matches the language of LE-logic, and has an analogous intended interpretation on the complex algebras of enriched formal contexts (cf. Section 2.2). As usual, \(\vee\) and \(\wedge\) are to be interpreted as the smallest common superconcept and the greatest common subconcept as in FCA. The constants \(\top\) and \(\bot\) are to be interpreted as the largest and the smallest concept, respectively. 
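As a minimal illustration of the concept grammar just given, the following Python sketch (again ours, with hypothetical names) encodes LE-\(\mathcal{ALC}\) concepts as a small syntax tree, one constructor per clause of the grammar.

```python
from dataclasses import dataclass
from typing import Union

# C ::= D | C1 AND C2 | C1 OR C2 | TOP | BOT | <R_dia>C | [R_box]C

@dataclass(frozen=True)
class Atom:
    name: str          # a primitive concept name D

@dataclass(frozen=True)
class Top:
    pass

@dataclass(frozen=True)
class Bot:
    pass

@dataclass(frozen=True)
class And:
    left: "Concept"
    right: "Concept"

@dataclass(frozen=True)
class Or:
    left: "Concept"
    right: "Concept"

@dataclass(frozen=True)
class Box:
    role: str          # the role name of some R_box
    arg: "Concept"

@dataclass(frozen=True)
class Dia:
    role: str          # the role name of some R_dia
    arg: "Concept"

Concept = Union[Atom, Top, Bot, And, Or, Box, Dia]

def subconcepts(c):
    """The set sub(C) of subformulas of C (this set is used later when measuring ABox terms)."""
    if isinstance(c, (Atom, Top, Bot)):
        return {c}
    if isinstance(c, (And, Or)):
        return {c} | subconcepts(c.left) | subconcepts(c.right)
    return {c} | subconcepts(c.arg)

# A concept of the same shape as those in Example 1 below: [R][R](C1 AND C2).
example = Box("R", Box("R", And(Atom("C1"), Atom("C2"))))
print(len(subconcepts(example)))   # 5 subconcepts
```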
We do not include \(\neg C\) as a valid concept name in our language, since there is no canonical and natural way to interpret negations in non-distributive settings. The concept names \(\langle R_{\Diamond}\rangle C\) and \([R_{\Box}]C\) in LE-\(\mathcal{ALC}\) are intended to be interpreted as the operations \(\langle R_{\Diamond}\rangle\) and \([R_{\Box}]\) defined by the interpretations of their corresponding role names in enriched formal contexts, analogously to the way in which \(\exists r\) and \(\forall r\) in \(\mathcal{ALC}\) are interpreted on Kripke frames. We do not use the symbols \(\forall r\) and \(\exists r\) in the context of LE-\(\mathcal{ALC}\) because, as discussed in Section 2.2, the semantic clauses of modal operators in LE-logic use universal quantifiers, and hence using the same notation verbatim would be ambiguous or misleading. TBox assertions in LE-\(\mathcal{ALC}\) are of the shape \(C_{1}\equiv C_{2}\), where \(C_{1}\) and \(C_{2}\) are concepts defined as above.4 The ABox assertions are of the form: Footnote 4: As is standard in DL (cf. [2] for more details), general concept inclusion of the form \(C_{1}\sqsubseteq C_{2}\) can be rewritten as concept definition \(C_{1}\equiv C_{2}\wedge C_{3}\), where \(C_{3}\) is a new concept name. \[aR_{\Box}x,\quad xR_{\Diamond}a,\quad aIx,\quad a:C,\quad x::C,\quad\neg\alpha,\] where \(\alpha\) is any of the first five ABox terms. We refer to the terms of first three types as _relational terms_. The interpretations of the terms \(a:C\) and \(x::C\) are: "object \(a\) is a member of concept \(C\)", and "feature \(x\) is in the description of concept \(C\)", respectively. An _interpretation_ for LE-\(\mathcal{ALC}\) is a tuple \(\mathrm{I}=(\mathbb{F},\cdot^{\mathrm{I}})\), where \(\mathbb{F}=(\mathbb{P},\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\) is an enriched formal context, and \(\cdot^{\mathrm{I}}\) maps: 1. individual names \(a\in\mathsf{OBJ}\) (resp. \(x\in\mathsf{FEAT}\)), to some \(a^{\mathrm{I}}\in A\) (resp. \(x^{\mathrm{I}}\in X\)); 2. relation names \(I\), \(R_{\Box}\) and \(R_{\Diamond}\) to relations \(I^{\mathrm{I}}\), \(R^{\mathrm{I}}_{\Box}\) and \(R^{\mathrm{I}}_{\Diamond}\) in \(\mathbb{F}\); 3. any primitive concept \(D\) to \(D^{\mathrm{I}}\in\mathbb{F}^{+}\), and other concepts as follows: \[\begin{array}{ll}\bot^{\mathrm{I}}=(X^{\downarrow},X)&\top^{\mathrm{I}}=(A, A^{\uparrow})&(C_{1}\wedge C_{2})^{\mathrm{I}}=C_{1}^{\mathrm{I}}\wedge C_{2}^{ \mathrm{I}}\\ (C_{1}\lor C_{2})^{\mathrm{I}}=C_{1}^{\mathrm{I}}\lor C_{2}^{\mathrm{I}}&([R_{ \Box}]C)^{\mathrm{I}}=[R_{\Box}^{\mathrm{I}}]C^{\mathrm{I}}&(\langle R_{ \Diamond}\rangle C)^{\mathrm{I}}=\langle R^{\mathrm{I}}_{\Diamond}\rangle C ^{\mathrm{I}}\end{array}\] where the operators \([R_{\Box}^{\mathrm{I}}]\) and \(\langle R_{\Diamond}^{\mathrm{I}}\rangle\) are defined as in Section 2.2. The satisfiability relation for an interpretation \(\mathrm{I}\) is defined as follows: 1. \(\mathrm{I}\models C_{1}\equiv C_{2}\) iff \(\llbracket C_{1}^{\mathrm{I}}\rrbracket=\llbracket C_{2}^{\mathrm{I}}\rrbracket\) iff \(\llbracket C_{2}^{\mathrm{I}}\rrbracket=\llbracket C_{1}^{\mathrm{I}}\rrbracket\). 2. \(\mathrm{I}\models a:C\) iff \(a^{\mathrm{I}}\in\llbracket C^{\mathrm{I}}\rrbracket\) and \(\mathrm{I}\models x::C\) iff \(x^{\mathrm{I}}\in(\llbracket C^{\mathrm{I}}\rrbracket)\). 3. \(\mathrm{I}\models aIx\) (resp. \(aR_{\Box}x\), \(xR_{\Diamond}a\)) iff \(a^{\mathrm{I}}I^{\mathrm{I}}x^{\mathrm{I}}\) (resp. 
\(a^{\mathrm{I}}R_{\Box}^{\mathrm{I}}x^{\mathrm{I}}\), \(x^{\mathrm{I}}R_{\Diamond}^{\mathrm{I}}a^{\mathrm{I}}\)). 4. \(\mathrm{I}\models\neg\alpha\), where \(\alpha\) is any ABox term, iff \(\mathrm{I}\not\models\alpha\). An interpretation \(\mathrm{I}\) is a _model_ for an LE-\(\mathcal{ALC}\) knowledge base \((\mathcal{A},\mathcal{T})\) if \(\mathrm{I}\models\mathcal{A}\) and \(\mathrm{I}\models\mathcal{T}\). The framework of LE-\(\mathcal{ALC}\) formally brings FCA and DL together in two important ways: (1) the concepts of LE-\(\mathcal{ALC}\) are naturally interpreted as formal concepts in FCA; (2) the language of LE-\(\mathcal{ALC}\) is designed to represent knowledge and reasoning in the setting of enriched formal contexts. ## 4 Tableaux algorithm for ABox of LE-\(\mathcal{ALC}\) In this section, we define a tableaux algorithm for checking the consistency of LE-\(\mathcal{ALC}\) ABoxes. An LE-\(\mathcal{ALC}\) ABox \(\mathcal{A}\) contains a _clash_ iff it contains both \(\beta\) and \(\neg\beta\) for some relational term \(\beta\). The expansion rules below are designed so that the expansion of \(\mathcal{A}\) will contain a clash iff \(\mathcal{A}\) is inconsistent. The set \(sub(C)\) of sub-formulas of any LE-\(\mathcal{ALC}\) concept name \(C\) is defined as usual. A concept name \(C^{\prime}\)_occurs_ in \(\mathcal{A}\) (in symbols: \(C^{\prime}\in\mathcal{A}\)) if \(C^{\prime}\in sub(C)\) for some \(C\) such that one of the terms \(a:C\), \(x::C\), \(\neg a:C\), or \(\neg x::C\) is in \(\mathcal{A}\). A constant \(b\) (resp. \(y\)) _occurs_ in \(\mathcal{A}\) (\(b\in\mathcal{A}\), or \(y\in\mathcal{A}\)), iff some term containing \(b\) (resp. \(y\)) occurs in it. The tableaux algorithm below constructs a model \((\mathbb{F},\cdot^{\text{I}})\) for every consistent \(\mathcal{A}\), where \(\mathbb{F}=(\mathbb{P},\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\) is such that, for any \(C\in\mathcal{A}\), some \(a_{C}\in A\) and \(x_{C}\in X\) exist such that, for any \(a\in A\) (resp. any \(x\in X\)), \(a\in\llbracket C\rrbracket\) (resp. \(x\in(\llbracket C\rrbracket^{\text{I}})\) iff \(aIx_{C}\) (resp. \(a_{C}Ix\)). We call \(a_{C}\) and \(x_{C}\) the _classifying object_ and the _classifying feature_ of \(C\), respectively. To make our notation more easily readable, we will write \(a_{\Box C},\,x_{\Box C}\) (resp. \(a_{\Diamond C},\,x_{\Diamond C}\)) instead of \(a_{[R_{\Box}]C},\,x_{[R_{\Box}]C}\) (resp. \(a_{\langle R_{\Diamond}\rangle C}\), \(x_{\langle R_{\Diamond}\rangle C}\)) Moreover, for every \(R_{\Box}\in\mathcal{R}_{\Box}\) and \(R_{\Diamond}\in\mathcal{R}_{\Diamond}\), we will also impose the condition that \(a\in\llbracket[R_{\Box}]C\rrbracket\) (resp. \(x\in(\!(R_{\Diamond})C\!)\rrbracket\)) iff \(aR_{\Box}x_{C}\) (resp. \(xR_{\Diamond}a_{C}\)), where \(a_{C}\) and \(x_{C}\) are the classifying object and the classifying feature of \(C\), respectively. Note that we can always assume w.l.o.g. that any consistent ABox \(\mathcal{A}\) is satisfiable in a model with classifying objects and features (cf. Theorem 3). ``` Input: An LE-\(\mathcal{ALC}\) ABox \(\mathcal{A}\). Output: whether \(\mathcal{A}\) is inconsistent. 1:if there is a clash in \(\mathcal{A}\)thenreturn "inconsistent". 2:if no expansion rule is applicable to \(\mathcal{A}\)thenreturn "consistent". 3: pick any applicable expansion rule \(R\), apply\(R\) to \(\mathcal{A}\) and proceed recursively. 
```
**Algorithm 1** tableaux algorithm for checking LE-\(\mathcal{ALC}\) ABox consistency

Below, we list the expansion rules. The commas in each rule are metalinguistic conjunctions, hence every tableau is non-branching.

- _Creation rule_: for every concept name \(C\) occurring in \(\mathcal{A}\), add \(a_{C}:C\) and \(x_{C}::C\).
- _Basic rule_: from \(b:C\) and \(y::C\), add \(bIy\).
- _Appending rules_: from \(bIx_{C}\), add \(b:C\); from \(a_{C}Iy\), add \(y::C\).
- _Rules \(\top\) and \(\bot\)_: add \(b:\top\) for every object \(b\), and \(y::\bot\) for every feature \(y\), occurring in the expansion.
- _Rules \(\wedge_{A}\) and \(\vee_{X}\)_: from \(b:C_{1}\wedge C_{2}\), add \(b:C_{1}\) and \(b:C_{2}\); from \(y::C_{1}\lor C_{2}\), add \(y::C_{1}\) and \(y::C_{2}\).
- _Rules \(\wedge_{X}\) and \(\vee_{A}\)_: from \(x::C_{1}\wedge C_{2}\), \(b:C_{1}\), and \(b:C_{2}\), add \(bIx\); from \(a:C_{1}\lor C_{2}\), \(y::C_{1}\), and \(y::C_{2}\), add \(aIy\).
- _Rules \(\Box\) and \(\Diamond\)_: from \(b:[R_{\Box}]C\) and \(y::C\), add \(bR_{\Box}y\); from \(y::\langle R_{\Diamond}\rangle C\) and \(b:C\), add \(yR_{\Diamond}b\).
- _Adjunction rules \(R_{\Box}\) and \(R_{\Diamond}\)_: from \(bR_{\Box}y\), add \(\blacklozenge bIy\) and \(bI\Box y\); from \(yR_{\Diamond}b\), add \(\Diamond bIy\) and \(bI\blacksquare y\).
- _Adjunction rules \(adj_{\Box}\) and \(adj_{\Diamond}\)_: from \(\blacklozenge b:C\), add \(b:[R_{\Box}]C\); from \(\blacksquare y::C\), add \(y::\langle R_{\Diamond}\rangle C\).
- _Negation rules \(\neg_{b}\) and \(\neg_{y}\)_: from \(\neg(b:C)\), add \(\neg(bIx_{C})\); from \(\neg(y::C)\), add \(\neg(a_{C}Iy)\).

Remark 1 (Branching): Note that no expansion rule above involves branching. Thus, unlike tableaux algorithms for \(\mathcal{ALC}\), Algorithm 1 does not involve any branching. New elements are added to \(\mathcal{A}\) only via adjunction and creation rules.
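The following Python sketch (ours, not the authors') illustrates the control flow of Algorithm 1 on this rule set under an assumed tuple encoding of terms: rules are functions that return the terms they can add, and the loop stops when a clash appears or no rule adds anything new. Only the clash check and two representative rules (the basic and appending rules) are spelled out; the remaining rules would follow the same pattern.

```python
# Hypothetical tuple encoding of ABox terms:
#   ("I", a, x)      for  a I x           ("member", a, C)  for  a : C
#   ("descr", x, C)  for  x :: C          ("not", t)        for  the negation of a relational term t
RELATIONAL = ("I", "R_box", "R_dia")

def has_clash(abox):
    """A clash: some relational term occurs together with its negation."""
    return any(t[0] in RELATIONAL and ("not", t) in abox for t in abox)

def basic_rule(abox):
    """Basic rule: from a : C and x :: C, add a I x."""
    members = [t for t in abox if t[0] == "member"]
    descrs = [t for t in abox if t[0] == "descr"]
    return {("I", a, x) for (_, a, C) in members for (_, x, C2) in descrs if C == C2}

def appending_rule(abox, feature_of, object_of):
    """Appending rules: from a I x_C add a : C, and from a_C I y add y :: C.

    feature_of / object_of map classifying features x_C / objects a_C to the concept C."""
    new = set()
    for t in abox:
        if t[0] == "I":
            _, a, x = t
            if x in feature_of:
                new.add(("member", a, feature_of[x]))
            if a in object_of:
                new.add(("descr", x, object_of[a]))
    return new

def expand(abox, rules):
    """Algorithm 1: saturate under the rules, reporting a clash as soon as it appears."""
    abox = set(abox)
    while True:
        if has_clash(abox):
            return "inconsistent", abox
        new = set().union(*(rule(abox) for rule in rules)) - abox
        if not new:
            return "consistent", abox
        abox |= new

# Tiny run: a : C together with x_C :: C forces a I x_C, which clashes with its negation.
abox0 = {("member", "a", "C"), ("descr", "x_C", "C"), ("not", ("I", "a", "x_C"))}
print(expand(abox0, [basic_rule])[0])   # inconsistent
```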
Example 1: Let \(\mathcal{A}=\{b:[R_{\Box}][R_{\Box}]C_{1},\,b:[R_{\Box}][R_{\Box}]C_{2},\,y::[R_{\Box}](C_{1}\wedge C_{2}),\,\neg(bR_{\Box}y)\}\). It is easy to check that \(\mathcal{A}\) has no LE-\(\mathcal{ALC}\) model. The algorithm applies to \(\mathcal{A}\) as follows (we only show the part of the expansion needed to exhibit the clash):

\begin{tabular}{|c|c|c|c|} \hline Rule & Premises & Added terms & \\ \hline Creation & & \(x_{\Box C_{1}}::[R_{\Box}]C_{1}\), \(x_{\Box C_{2}}::[R_{\Box}]C_{2}\), \(x_{C_{1}\wedge C_{2}}::C_{1}\wedge C_{2}\) & \\ \(\Box\) & \(x_{\Box C_{i}}::[R_{\Box}]C_{i}\), \(b:[R_{\Box}][R_{\Box}]C_{i}\) & \(bR_{\Box}x_{\Box C_{i}}\) & \(i=1,2\) \\ \(R_{\Box}\) & \(bR_{\Box}x_{\Box C_{i}}\) & \(\blacklozenge bIx_{\Box C_{i}}\) & \(i=1,2\) \\ Appending & \(\blacklozenge bIx_{\Box C_{i}}\) & \(\blacklozenge b:[R_{\Box}]C_{i}\) & \(i=1,2\) \\ \hline \end{tabular}

By applying the same process to \(\blacklozenge b:[R_{\Box}]C_{1}\), \(\blacklozenge b:[R_{\Box}]C_{2}\) and \(x_{C_{1}}::C_{1}\), \(x_{C_{2}}::C_{2}\), we add the terms \(\blacklozenge\blacklozenge b:C_{1}\) and \(\blacklozenge\blacklozenge b:C_{2}\) to the tableau. Then the further tableau expansion is as follows:

\begin{tabular}{|c|c|c|} \hline Rule & Premises & Added terms \\ \hline \(\wedge_{X}\) & \(x_{C_{1}\wedge C_{2}}::C_{1}\wedge C_{2}\), \(\blacklozenge\blacklozenge b:C_{1}\), \(\blacklozenge\blacklozenge b:C_{2}\) & \(\blacklozenge\blacklozenge bIx_{C_{1}\wedge C_{2}}\) \\ Appending & \(\blacklozenge\blacklozenge bIx_{C_{1}\wedge C_{2}}\) & \(\blacklozenge\blacklozenge b:C_{1}\wedge C_{2}\) \\ \(adj_{\Box}\) (twice) & \(\blacklozenge\blacklozenge b:C_{1}\wedge C_{2}\) & \(b:[R_{\Box}][R_{\Box}](C_{1}\wedge C_{2})\) \\ \(\Box\) & \(b:[R_{\Box}][R_{\Box}](C_{1}\wedge C_{2})\), \(y::[R_{\Box}](C_{1}\wedge C_{2})\) & \(bR_{\Box}y\) \\ \hline \end{tabular}

Thus, there is a clash between \(\neg(bR_{\Box}y)\) and \(bR_{\Box}y\) in the expansion.

Example 2: Let \(\mathcal{A}=\{\neg(bIy),y::C_{1},\neg(b:C_{2}),b:C_{1}\lor C_{2},bR_{\Box}y\}\). The following table shows the tableau expansion for \(\mathcal{A}\). Let \(\mathcal{W}\coloneqq\{C_{1},C_{2},C_{1}\lor C_{2}\}\).

\begin{tabular}{|c|c|c|} \hline Rule & Premises & Added terms \\ \hline Initial & & \(\neg(bIy)\), \(y::C_{1}\), \(\neg(b:C_{2})\), \(b:C_{1}\lor C_{2}\), \(bR_{\Box}y\) \\ Creation & & \(a_{C}:C\), \(x_{C}::C\), \(C\in\mathcal{W}\) \\ Basic & \(a_{C}:C\), \(x_{C}::C\), \(C\in\mathcal{W}\) & \(a_{C}Ix_{C}\), \(C\in\mathcal{W}\) \\ \(\lor_{X}\) & \(x_{C_{1}\lor C_{2}}::C_{1}\lor C_{2}\) & \(x_{C_{1}\lor C_{2}}::C_{1}\), \(x_{C_{1}\lor C_{2}}::C_{2}\) \\ Basic & \(a_{C_{1}}:C_{1}\), \(x_{C_{1}\lor C_{2}}::C_{1}\) & \(a_{C_{1}}Ix_{C_{1}\lor C_{2}}\) \\ Basic & \(a_{C_{2}}:C_{2}\), \(x_{C_{1}\lor C_{2}}::C_{2}\) & \(a_{C_{2}}Ix_{C_{1}\lor C_{2}}\) \\ Appending & \(a_{C_{1}}Ix_{C_{1}\lor C_{2}}\), \(a_{C_{2}}Ix_{C_{1}\lor C_{2}}\) & \(a_{C_{1}}:C_{1}\lor C_{2}\), \(a_{C_{2}}:C_{1}\lor C_{2}\) \\ \(R_{\Box}\) & \(bR_{\Box}y\) & \(\blacklozenge bIy\), \(bI\Box y\) \\ \(\neg_{b}\) & \(\neg(b:C_{2})\) & \(\neg(bIx_{C_{2}})\) \\ \hline \end{tabular}

Note that no expansion rule is applicable anymore. It is clear that the tableau does not contain any clashes. Thus, this ABox has a model.
By the procedure described in Section 4.2, this model is given by \(\mathcal{R}_{\Box}=\{R_{\Box}\}\), \(\mathcal{R}_{\Diamond}=\{R_{\Diamond}\}\), \(A=\{a_{C_{1}},a_{C_{2}},a_{C_{1}\lor C_{2}},b,\blacklozenge b\}\), \(X=\{x_{C_{1}},x_{C_{2}},x_{C_{1}\lor C_{2}},y,\Box y\}\), \(I=\{(a_{C},x_{C})\mid C\in\mathcal{W}\}\cup\{(a_{C_{1}},x_{C_{1}\lor C_{2}}),(a_{C_{2}},x_{C_{1}\lor C_{2}}),(\blacklozenge b,y),(b,\Box y)\}\), \(R_{\Box}=\{(b,y)\}\), \(R_{\Diamond}=\varnothing\).

### Termination of the tableaux algorithm

In this section, we show that Algorithm 1 always terminates for any finite LE-\(\mathcal{ALC}\) ABox \(\mathcal{A}\). Since no rule branches out, we only need to check that the number of new individuals added by the expansion rules is finite. Note that the only rules for adding new individuals are the creation and adjunction rules. The creation rules add one new object and feature for every concept \(C\) occurring in the expansion of \(\mathcal{A}\). Thus, it is enough to show that the number of individuals and new concepts added by applying adjunction rules is finite. To do so, we will show that any individual constant introduced by means of any adjunction rule contains only finitely many modal operators applied to a constant occurring in \(\mathcal{A}\) or added by the creation rule, and that any new concept name added contains only finitely many \(\Box\) and \(\Diamond\) operators applied to a concept occurring in \(\mathcal{A}\).

Definition 1: The \(\Diamond\)-depth \(\Diamond_{\mathcal{D}}\) and \(\Box\)-depth \(\Box_{\mathcal{D}}\) of a concept \(C\) are defined as follows:
1. if \(C\) is an atomic concept, then \(\Diamond_{\mathcal{D}}(C)=\Box_{\mathcal{D}}(C)=0\);
2. \(\Diamond_{\mathcal{D}}(\langle R_{\Diamond}\rangle C)=\Diamond_{\mathcal{D}}(C)+1\) and \(\Box_{\mathcal{D}}(\langle R_{\Diamond}\rangle C)=\Box_{\mathcal{D}}(C)\);
3. \(\Diamond_{\mathcal{D}}([R_{\Box}]C)=\Diamond_{\mathcal{D}}(C)\) and \(\Box_{\mathcal{D}}([R_{\Box}]C)=\Box_{\mathcal{D}}(C)+1\);
4. \(\Diamond_{\mathcal{D}}(C_{1}\lor C_{2})=\max(\Diamond_{\mathcal{D}}(C_{1}),\Diamond_{\mathcal{D}}(C_{2}))\) and \(\Box_{\mathcal{D}}(C_{1}\lor C_{2})=\min(\Box_{\mathcal{D}}(C_{1}),\Box_{\mathcal{D}}(C_{2}))\);
5. \(\Diamond_{\mathcal{D}}(C_{1}\wedge C_{2})=\min(\Diamond_{\mathcal{D}}(C_{1}),\Diamond_{\mathcal{D}}(C_{2}))\) and \(\Box_{\mathcal{D}}(C_{1}\wedge C_{2})=\max(\Box_{\mathcal{D}}(C_{1}),\Box_{\mathcal{D}}(C_{2}))\).

Definition 2: The \(\Box\)-depth \(\Box_{\mathcal{D}}\) and \(\Diamond\)-depth \(\Diamond_{\mathcal{D}}\) of any constants \(b\) and \(y\) are:
1. if \(b,y\in\mathcal{A}\), then \(\Box_{\mathcal{D}}(b)=\Diamond_{\mathcal{D}}(b)=\Box_{\mathcal{D}}(y)=\Diamond_{\mathcal{D}}(y)=0\);
2. \(\Box_{\mathcal{D}}(a_{C})=\Diamond_{\mathcal{D}}(x_{C})=0\), \(\Diamond_{\mathcal{D}}(a_{C})=-\Diamond_{\mathcal{D}}(C)\), and \(\Box_{\mathcal{D}}(x_{C})=-\Box_{\mathcal{D}}(C)\);
3. \(\Box_{\mathcal{D}}(\blacklozenge b)=\Box_{\mathcal{D}}(b)+1\), \(\Box_{\mathcal{D}}(\Diamond b)=\Box_{\mathcal{D}}(b)\), \(\Diamond_{\mathcal{D}}(\blacklozenge b)=\Diamond_{\mathcal{D}}(b)\), \(\Diamond_{\mathcal{D}}(\Diamond b)=\Diamond_{\mathcal{D}}(b)-1\);
4. \(\Box_{\mathcal{D}}(\Box y)=\Box_{\mathcal{D}}(y)-1\), \(\Diamond_{\mathcal{D}}(\Box y)=\Diamond_{\mathcal{D}}(y)\), \(\Box_{\mathcal{D}}(\blacksquare y)=\Box_{\mathcal{D}}(y)\), \(\Diamond_{\mathcal{D}}(\blacksquare y)=\Diamond_{\mathcal{D}}(y)+1\).
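The two depth measures of Definition 1 translate directly into code. The following Python sketch (ours; concepts are encoded as nested tuples to keep the snippet self-contained, and all names are illustrative) computes \(\Diamond_{\mathcal{D}}\) and \(\Box_{\mathcal{D}}\) of a concept, together with the depths of the classifying constants \(a_{C}\) and \(x_{C}\) from Definition 2.

```python
# Concepts as nested tuples:  ("atom", "D"), ("and", C1, C2), ("or", C1, C2),
#                             ("box", C), ("dia", C)
def dia_depth(C):
    tag = C[0]
    if tag == "atom":
        return 0
    if tag == "dia":
        return dia_depth(C[1]) + 1
    if tag == "box":
        return dia_depth(C[1])
    if tag == "or":
        return max(dia_depth(C[1]), dia_depth(C[2]))
    if tag == "and":
        return min(dia_depth(C[1]), dia_depth(C[2]))
    raise ValueError(tag)

def box_depth(C):
    tag = C[0]
    if tag == "atom":
        return 0
    if tag == "dia":
        return box_depth(C[1])
    if tag == "box":
        return box_depth(C[1]) + 1
    if tag == "or":
        return min(box_depth(C[1]), box_depth(C[2]))
    if tag == "and":
        return max(box_depth(C[1]), box_depth(C[2]))
    raise ValueError(tag)

# Definition 2, item 2: depths of the classifying constants a_C and x_C.
def depths_of_a_C(C):
    return {"box": 0, "dia": -dia_depth(C)}

def depths_of_x_C(C):
    return {"box": -box_depth(C), "dia": 0}

# Example: C = [R_box][R_box](C1 AND C2), as in Example 1 above.
C = ("box", ("box", ("and", ("atom", "C1"), ("atom", "C2"))))
print(box_depth(C), dia_depth(C))          # 2 0
print(depths_of_a_C(C), depths_of_x_C(C))  # {'box': 0, 'dia': 0} {'box': -2, 'dia': 0}
```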
The following lemma is key to give bounds on the \(\Box\)-depth and \(\Diamond\)-depth of new concept names added in a tableau expansion. Lemma 1: _For any individual names \(b,y\) and for any \(R_{\Box}\in\mathcal{R}_{\Box},R_{\Diamond}\in\mathcal{R}_{\Diamond}\), 1. If \(bR_{\Box}y\) is added to a tableau expansion, but \(bR_{\Box}y\not\in\mathcal{A}\), then \(b:[R_{\Box}]C\) and \(y::C\) already occur in a previous expansion of \(\mathcal{A}\) for some \(C\). 2. If \(yR_{\Diamond}b\) is added to a tableau expansion, but \(yR_{\Diamond}b\not\in\mathcal{A}\), then \(y::\langle R_{\Diamond}\rangle C\) and \(b:C\) already occur in a previous expansion of \(\mathcal{A}\) for some \(C\). 3. If \(bIy\) is added to a tableau expansion by any rule other than the adjunction rules \(R_{\Box}\) or \(R_{\Diamond}\) applied to some term occurring in \(\mathcal{A}\), then the tableau can (and hence, if \(\mathcal{A}\) is consistent, it will at some point) be expanded with the terms \(b:C\) and \(y::C\) (in zero or more steps) for some \(C\). 4. If \(bIy\) is added to the expansion as described in the previous item, then either: (i) The terms \(b:C\) and \(y::C^{\prime}\) occur in some previous expansion of \(\mathcal{A}\) for some \(C\), \(C^{\prime}\) such that \(\Diamond_{\mathcal{D}}(C)=\Diamond_{\mathcal{D}}(C^{\prime})\) and \(\Box_{\mathcal{D}}(C)=\Box_{\mathcal{D}}(C^{\prime})\). (ii) \(b=\blacklozenge d\) (resp. \(b=\Diamond d\)) for some \(d\), and the terms \(d:[R_{\Box}]C\) and \(y::C\) (resp. \(y::\langle R_{\Diamond}\rangle C\) and \(b:C\)) occur in some previous expansion of \(\mathcal{A}\) for some \(C\). (iii) \(y=\blacksquare w\) (resp. \(y=\Box w\)) for some \(w\), and the terms \(w::\langle R_{\Diamond}\rangle C\) and \(b:C\) (resp. \(b:[R_{\Box}]C\) and \(w::C\)) occur in some previous expansion of \(\mathcal{A}\) for some \(C\). 5. If \(b:C\) is added to the tableau by some expansion rule, there is \(d:C^{\prime}\) s.t. (i) \(d:C^{\prime}\in\mathcal{A}\) or is added by applying the creation rule. (ii) \(b\) is obtained by applying some finite combination of \(\Diamond\) and \(\blacklozenge\) to \(d\). (iii) \(\Diamond_{\mathcal{D}}(C^{\prime})+\Diamond_{\mathcal{D}}(d)\leq\Diamond_{ \mathcal{D}}(C)+\Diamond_{\mathcal{D}}(b)\), and \(\Box_{\mathcal{D}}(C)+\Box_{\mathcal{D}}(b)\leq\Box_{\mathcal{D}}(C^{\prime})+ \Box_{\mathcal{D}}(d)\). 6. If \(y::C\) is added to the tableau by some expansion rule, there is \(w::C^{\prime}\) s.t. (i) \(w::C^{\prime}\in\mathcal{A}\) or is added by applying the creation rule. (ii) \(y\) is obtained by applying some finite combination of \(\Box\) and \(\blacksquare\) to \(w\). (iii) \(\Diamond_{\mathcal{D}}(C)+\Diamond_{\mathcal{D}}(y)\leq\Diamond_{\mathcal{D}}(C^{ \prime})+\Diamond_{\mathcal{D}}(w)\), and \(\Box_{\mathcal{D}}(C^{\prime})+\Box_{\mathcal{D}}(w)\leq\Box_{\mathcal{D}}(C)+ \Box_{\mathcal{D}}(y)\). Proof: Items 1 and 2 follow from the observation that new terms of the type \(bR_{\Box}y\) and \(yR_{\Diamond}b\) are only added through the expansion rules for terms of the forms \(b:[R_{\Box}]C\) and \(y::\langle R_{\Diamond}\rangle C\), respectively. For item 3, the cases where \(bIy\) is introduced with the expansion rules for \(b:C\) or \(y::C\) are straightforward. 
If the expansion rule for \(y::C_{1}\wedge C_{2}\) is applied, then from the term \(x_{C_{1}\wedge C_{2}}::C_{1}\wedge C_{2}\) we can get \(bIx_{C_{1}\wedge C_{2}}\) (since both \(b:C_{1}\) and \(b:C_{2}\) must be present), finally obtaining \(b:C_{1}\wedge C_{2}\) from the appending rule. The \(b:C_{1}\lor C_{2}\) case is analogous. The only other rule that can add \(bIy\) is the adjunction rule. However, note that this can only happen if \(yR_{\Diamond}b\) or \(bR_{\Box}y\) is present. By item 1, if the term \(bR_{\Box}y\) is added then \(b:[R_{\Box}]C\) and \(y::C\) are in the tableau and it also adds the terms \(\blackloyIy\) and \(bI\Box y\). Note that since \(b:[R_{\Box}]C\) and \(y::C\) are in the tableau, \(\blackloz:C\) and \(\Box y::[R_{\Box}]C\) must also be in it. The first term can be obtained from \(b:[R_{\Box}]C\) adding \(bR_{\Box}x_{C}\) to the tableau and applying the adjunction rule and then the appending rule. Using the fact that \(a_{\Box C}:[R_{\Box}]C\) is in the tableau after applying the creation rule, \(\Box y::[R_{\Box}]C\) can be obtained similarly. Therefore, the required condition is satisfied for both \(\blackloyIy\) and \(bI\Box y\). We can deal with the terms of the form \(yR_{\Diamond}b\) analogously. For item 4, the only non-trivial case is when \(\blackloyIy,bI\Box y\) or \(\Diamond bIy,bI\blacksquare y\) are added via an adjunction rule. In the first case, \(bR_{\Box}y\) must be present, meaning that item 1 is applicable and hence for some \(C\), both \(b:[R_{\Box}]C\) and \(y::C\) appear in the tableau, satisfying the thesis. The other case is treated analogously. We prove items 5 and 6 by simultaneous induction on the number of expansion rules applied. The rules which can add new terms of the form \(b:C\) and \(y::C\) are the expansion rules for terms of the form \(b:C_{1}\wedge C_{2}\), \(y::C_{1}\lor C_{2}\), the appending rules, and the adjunction rules. If \(b:C\) is obtained from \(b:C\wedge C^{\prime}\), either the latter is present in the original tableau and the thesis follows trivially, or the induction hypothesis applies and it follows by transitivity. The case where \(y::C\) comes from \(y::C\lor C^{\prime}\) is analogous. If \(b:[R_{\Box}]C\) is obtained from \(\blackloz:C\) via an adjunction rule, then it suffices to apply the induction hypothesis to \(\blackloz:C\), noticing that no black operators can appear in the starting tableau. The adjunction case for \(y::\langle R_{\Diamond}\rangle C\) is similar. Without loss of generality, we only treat the case where the appending rule is used to add a term of the form \(b:C\). Notice that for the appending rule to be applicable we must have \(bIx_{C}\) in the tableau. Then by item 4, either: (i) There exist terms \(b:C_{1}\) and \(x_{C}::C_{2}\) in the tableau such that \(\Diamond_{\mathcal{D}}(C_{1})=\Diamond_{\mathcal{D}}(C_{2})\) and \(\Box_{\mathcal{D}}(C_{1})=\Box_{\mathcal{D}}(C_{2})\). (ii) \(b=\blacklozd\) (resp. \(b=\Diamond d\)) for some \(d\), and there exist terms \(d:[R_{\Box}]C_{2}\) and \(x_{C}::C_{2}\) (resp. \(x_{C}::\langle R_{\Diamond}\rangle C_{2}\) and \(b:C_{2}\)) in the tableau for some \(C_{2}\). (iii) \(x_{C}=\blacksquare w\) (resp. \(x_{C}=\Box w\)) for some \(w\), and there exist terms \(w::\langle R_{\Diamond}\rangle C_{2}\) and \(b:C_{2}\) (resp. \(b:[R_{\Box}]C_{2}\) and \(w::C_{2}\)) in the tableau for some \(C_{2}\). 
In case (i), if \(C\equiv C_{2}\), the thesis follows easily, else we apply the induction hypothesis to \(x_{C}::C_{2}\) to find a term \(w::C_{2}^{\prime}\) in the original tableau such that \[\Diamond_{\mathcal{D}}(C_{1})=\Diamond_{\mathcal{D}}(C_{2})+\Diamond_{ \mathcal{D}}(x_{C})\leq\Diamond_{\mathcal{D}}(C_{2}^{\prime})+\Diamond_{ \mathcal{D}}(w), \tag{2}\] \[\Box_{\mathcal{D}}(C_{2}^{\prime})+\Box_{\mathcal{D}}(w)\leq\Box_{\mathcal{D}}( C_{2})+\Box_{\mathcal{D}}(x_{C})=\Box_{\mathcal{D}}(C_{1})-\Box_{\mathcal{D}}(C), \tag{3}\] where \(x_{C}\) is obtained by applying \(n\)\(\Box\)-operators to \(w\) for some \(n\) (note that \(x_{C}\) can not be obtained by application of \(\blacksquare\)-operators). Thus, we have \(w=x_{C_{3}}\) such that \(C=[R_{\Box}]_{1}\cdots[R_{\Box}]_{n}C_{3}\). Since \(x_{C_{3}}::C_{2}^{\prime}\) is in the original tableau, it must have been added by a creation rule, meaning that \(C_{2}^{\prime}\equiv C_{3}\). Thus, we have \(\Box_{\mathcal{D}}(w)=-\Box_{\mathcal{D}}(C_{2}^{\prime})\), \(\diamond_{\mathcal{D}}(w)=0\), \(\diamond_{\mathcal{D}}(C_{2}^{\prime})=\diamond_{\mathcal{D}}(C)\), and \(\Box_{\mathcal{D}}(C_{2}^{\prime})=\Box_{\mathcal{D}}(C)-n\). Using these equalities in (3) and (2) we obtain \(\diamond_{\mathcal{D}}(C_{1})+\diamond_{\mathcal{D}}(b)\leq\diamond_{\mathcal{D }}(C)+\diamond_{\mathcal{D}}(b)\quad\text{and}\quad\Box_{\mathcal{D}}(C)+ \Box_{\mathcal{D}}(b)\leq\Box_{\mathcal{D}}(C_{1})+\Box_{\mathcal{D}}(b)\). Thus, if \(b:C_{1}\in\mathcal{A}\), then it is the witness we needed, otherwise it is sufficient to apply the induction hypothesis to \(b:C_{1}\), and the result follows by transitivity. In case (ii), suppose \(d:[R_{\Box}]C_{2}\) and \(x_{C}::C_{2}\) are both in the tableau. If \(C\equiv C_{2}\), then the proof follows easily applying the induction hypothesis once to \(b:C_{2}\) if it is not in the original tableau. Otherwise, we can apply the induction hypothesis to \(x_{C}::\langle R_{\diamond}\rangle C_{2}\), obtaining, by the same argument as in case (i), \(\diamond_{\mathcal{D}}(C_{2})\leq\diamond_{\mathcal{D}}(C)\) and \(\Box_{\mathcal{D}}(C)\leq\Box_{\mathcal{D}}(C_{2})\). Therefore, \(\diamond_{\mathcal{D}}([R_{\Box}]C_{2})+\diamond_{\mathcal{D}}(d)=\diamond_{ \mathcal{D}}(C_{2})+\diamond_{\mathcal{D}}(d)=\diamond_{\mathcal{D}}(C_{2})+ \diamond_{\mathcal{D}}(\blacklozenge d)\leq\diamond_{\mathcal{D}}(C)+ \diamond_{\mathcal{D}}(b)\), \(\Box_{\mathcal{D}}(C)+\Box_{\mathcal{D}}(b)\leq\Box_{\mathcal{D}}(C_{2})+ \diamond_{\mathcal{D}}(\blacklozenge\blacklozenge\blacklozenge\blacklozenge d )=\Box_{\mathcal{D}}(C_{2})+\Box_{\mathcal{D}}(d)+1=\) \(\Box_{\mathcal{D}}([R_{\Box}]C_{2})+\Box_{\mathcal{D}}(d)\). Thus, if \(d:[R_{\Box}]C_{2}\in\mathcal{A}\), then it is the witness we need; otherwise, it is sufficient to apply the induction hypothesis a second time to \(d:[R_{\Box}]C_{2}\), and the result then follows by transitivity. The proof for the remaining subcase, where \(b:C^{\prime}\) and \(x_{C}::\langle R_{\diamond}\rangle C^{\prime}\) are both present in the tableau, is done similarly. The proof for case (iii) is analogous to (ii) and therefore omitted. Definition 3: The \(\Box\)-depth (resp. \(\diamond\)-depth) of an ABox \(\mathcal{A}\) is \(\Box_{\mathcal{D}}(\mathcal{A})\coloneqq\max\{\Box_{\mathcal{D}}(C^{\prime} )\ |\ C^{\prime}\in\mathcal{A}\}\) (resp. \(\diamond_{\mathcal{D}}(\mathcal{A})\coloneqq\max\{\diamond_{\mathcal{D}}(C^{ \prime})\ |\ C^{\prime}\in\mathcal{A}\}\)). 
Corollary 1: _Let \(C\) be any concept name added to the tableau expansion at some step. Then \(\Box_{\mathcal{D}}(C)\leq\Box_{\mathcal{D}}(\mathcal{A})\), and \(\diamond_{\mathcal{D}}(C)\leq\diamond_{\mathcal{D}}(\mathcal{A})\)._ Proof: By item 5 of Lemma 1, for any \(b:C\) added to the tableau we must have another term \(d:C^{\prime}\) in \(\mathcal{A}\) or added by a creation rule, such that \(\Box_{\mathcal{D}}(C)\leq\Box_{\mathcal{D}}(C)+\Box_{\mathcal{D}}(b)\leq\Box_ {\mathcal{D}}(C^{\prime})+\Box_{\mathcal{D}}(d)=\Box_{\mathcal{D}}(C^{\prime})\). The first inequality holds because \(\Box_{\mathcal{D}}(b)\) is always non-negative, and the equality follows from the fact that, as \(d\) is in the original tableau or added by a creation rule, its \(\Box\)-depth is zero. The proof for the \(\diamond\)-depth can be shown in a similar manner using item 6 of Lemma 1. Definition 4: For any concept ABox term of the form \(t\equiv a:C\) or \(t\equiv x::C\), \(size(t)=1+|sub(C)|\). For any relational term \(\beta\), \(size(\beta)=2\). For any LE-\(\mathcal{ALC}\) ABox \(\mathcal{A}\), \(size(\mathcal{A})=\sum_{t\in\mathcal{A}}size(t)\). Theorem 3.1 (Termination): _For any ABox \(\mathcal{A}\), the tableaux algorithm 1 terminates in a finite number of steps which is polynomial in \(size(\mathcal{A})\)._ Proof: New individuals are added to the tableau only in the following ways: (1) individuals of the form \(a_{C}\) or \(x_{C}\) can be added by creation rules; (2) individuals of the form \(\Box y\), \(\blacksquare y\), \(\diamond b\), and \(\blacklozenge b\) can be added through the expansions rules for \(bR_{\Box}x\) and \(yR_{\Diamond}a\). As to (1), by Corollary 1, the \(\Box\)-depth (resp. \(\Diamond\)-depth) of any \(C\) appearing in an expansion of \(\mathcal{A}\) is bounded by \(\Box_{\mathcal{D}}(\mathcal{A})\) (resp. \(\Diamond_{\mathcal{D}}(\mathcal{A})\)). Moreover, no new propositional connective is ever added to create a new concept name in any of the rules. Therefore, the total number of concept names occurring in an expansion of \(\mathcal{A}\) is bounded by \(size(\mathcal{A})*(\Box_{\mathcal{D}}(\mathcal{A})+\Diamond_{\mathcal{D}}( \mathcal{A}))\). Thus, only finitely many constants of type (1) can be added. For (2), for any individual name \(b\) added by some expansion rule, \(b\) occurs in \(b:C\) for some \(C\). By Lemma 1 (5), there is a term \(d:C^{\prime}\in\mathcal{A}\) s.t. \(\Box_{\mathcal{D}}(b)+\Box_{\mathcal{D}}(C)\leq\Box_{\mathcal{D}}(d)+\Box_{ \mathcal{D}}(C^{\prime})=\Box_{\mathcal{D}}(C^{\prime})\). Therefore, \(\Box_{\mathcal{D}}(b)\) is bounded by \(\Box_{\mathcal{D}}(\mathcal{A})\). On the other hand, by item 6 of the same lemma we also have \(0\leq\Diamond_{\mathcal{D}}(C^{\prime})+\Diamond_{\mathcal{D}}(d)\leq\Diamond _{\mathcal{D}}(C)+\Diamond_{\mathcal{D}}(b)\). The first inequality follows from the fact that \(d\in\mathcal{A}\), and thus \(\Diamond_{\mathcal{D}}(d)=0\) or \(d=a_{C^{\prime}}\), and thus \(\Diamond_{\mathcal{D}}(d)=-\Diamond_{\mathcal{D}}(C^{\prime})\). Therefore, we must have \(-\Diamond_{\mathcal{D}}(C)\leq\Diamond_{\mathcal{D}}(b)\), meaning that \(\Diamond_{\mathcal{D}}(b)\) is bounded below by \(-\Diamond_{\mathcal{D}}(\mathcal{A})\). Thus, the number of connectives \(\Diamond\) and \(\blacklozenge\) in \(b\) is bounded by \(\Box_{\mathcal{D}}(\mathcal{A})+\Diamond_{\mathcal{D}}(\mathcal{A})\). 
Repeating the same argument for the individual names of type \(y\), the total number of new constant names occurring in an expansion of \(\mathcal{A}\) is bounded by \(size(\mathcal{A})*(\Box_{\mathcal{D}}(\mathcal{A})+\Diamond_{\mathcal{D}}(\mathcal{A}))\). Thus, only finitely many constants of type (2) are added. Overall, the size of the tableau expansion (and hence of the model) is \(O((size(\mathcal{A})*(\Box_{\mathcal{D}}(\mathcal{A})+\Diamond_{\mathcal{D}}(\mathcal{A})))^{2}*(|\mathcal{R}_{\Box}|+|\mathcal{R}_{\Diamond}|))\). Since the tableaux algorithm for LE-\(\mathcal{ALC}\) does not involve any branching, the above theorem implies that the time complexity of checking the consistency of an LE-\(\mathcal{ALC}\) ABox \(\mathcal{A}\) using the tableaux algorithm is \(Poly(size(\mathcal{A}))\).

### Soundness of the tableau algorithm

For any consistent ABox \(\mathcal{A}\), we let its _completion_ \(\overline{\mathcal{A}}\) be its maximal expansion (which exists due to termination). If there is no clash in \(\overline{\mathcal{A}}\), we construct a model \((\mathbb{F},\cdot^{\mathrm{I}})\) where \(A\) and \(X\) are the sets of names of objects and features occurring in the expansion, and for any \(a\in A\), \(x\in X\), and any role names \(R_{\Box}\in\mathcal{R}_{\Box}\), \(R_{\Diamond}\in\mathcal{R}_{\Diamond}\), we have \(aIx\), \(aR_{\Box}x\), \(xR_{\Diamond}a\) iff such relational terms explicitly occur in \(\overline{\mathcal{A}}\). Let \(\mathbb{F}=(A,X,I,\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\) be the relational structure obtained in this manner. We define an interpretation \(\mathrm{I}\) on it as follows. For any object name \(a\) and feature name \(x\), we let \(a^{\mathrm{I}}\coloneqq a\) and \(x^{\mathrm{I}}\coloneqq x\). For any atomic concept \(D\), we define \(D^{\mathrm{I}}=(x_{D}{}^{\downarrow},a_{D}{}^{\uparrow})\). Next, we show that \(\mathrm{I}\) is a valid interpretation for LE-\(\mathcal{ALC}\). To this end, we need to show that \(\mathbb{F}\) is an enriched formal context, i.e. that all \(R_{\Box}\) and \(R_{\Diamond}\) are \(I\)-compatible, and that \(D^{\mathrm{I}}\) is a concept in the concept lattice \(\mathbb{P}^{+}\) of \(\mathbb{P}=(A,X,I)\). The latter condition is shown in the next lemma, and the former in the subsequent one.

Lemma 2: \(x_{D}^{\downarrow\uparrow}=a_{D}^{\uparrow}\) _and \(a_{D}^{\uparrow\downarrow}=x_{D}^{\downarrow}\) for any \(D\in\mathcal{C}\)._

Proof: By the creation rules, we always have \(a_{D}:D\) and \(x_{D}::D\) in \(\overline{\mathcal{A}}\), so that, by the basic rule, \(a_{D}Ix_{D}\in\overline{\mathcal{A}}\). Therefore, we always have \(x_{D}^{\downarrow\uparrow}\subseteq a_{D}^{\uparrow}\). Suppose \(a_{D}Iy\) and \(bIx_{D}\) for some \(y\in X\), \(b\in A\). Then by the appending rules we have \(y::D\in\overline{\mathcal{A}}\). This, along with \(bIx_{D}\in\overline{\mathcal{A}}\), implies \(bIy\in\overline{\mathcal{A}}\) (since \(bIx_{D}\) yields \(b:D\) by the appending rule, and then \(bIy\) follows by the basic rule). Thus, we also have \(a_{D}^{\uparrow}\subseteq x_{D}^{\downarrow\uparrow}\). We can prove the other equality analogously.
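The model construction described at the start of this subsection can be sketched in Python as follows (our illustration; the tuple encoding of terms is the same assumed in the earlier sketch): the carriers are the individual names occurring in the completion, each relation holds exactly for the relational terms that occur explicitly, and each atomic concept \(D\) is interpreted as \((x_{D}^{\downarrow},a_{D}^{\uparrow})\).

```python
def model_from_completion(completed_abox, classifying):
    """Build (A, X, I, R_box, R_dia) and the interpretation of atomic concepts
    from a clash-free completion.

    `classifying` maps each atomic concept D to its classifying pair (a_D, x_D).
    Terms are tuples, with ("R_box", a, x) and ("R_dia", x, a) for the extra relations."""
    A, X = set(), set()
    I, R_box, R_dia = set(), set(), set()
    for t in completed_abox:
        if t[0] == "I":
            _, a, x = t
            A.add(a); X.add(x); I.add((a, x))
        elif t[0] == "R_box":
            _, a, x = t
            A.add(a); X.add(x); R_box.add((a, x))
        elif t[0] == "R_dia":
            _, x, a = t
            A.add(a); X.add(x); R_dia.add((x, a))
        elif t[0] == "member":
            A.add(t[1])
        elif t[0] == "descr":
            X.add(t[1])

    def down(x):   # x^down within this model
        return {a for a in A if (a, x) in I}

    def up(a):     # a^up within this model
        return {x for x in X if (a, x) in I}

    atomic_interpretation = {D: (down(x_D), up(a_D))
                             for D, (a_D, x_D) in classifying.items()}
    return (A, X, I, R_box, R_dia), atomic_interpretation
```

Lemma 2 is what guarantees that the pair \((x_{D}^{\downarrow},a_{D}^{\uparrow})\) computed this way is Galois-stable, i.e. a genuine formal concept of the constructed polarity.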
Lemma 3: _All the relations \(R_{\Box}\in\mathcal{R}_{\Box}\) and \(R_{\Diamond}\in\mathcal{R}_{\Diamond}\) in \(\mathbb{F}=(\mathbb{P},\mathcal{R}_{\Box},\mathcal{R}_{\Diamond})\) are \(I\)-compatible._ Proof: We need to show that for any \(b\in A\) and \(y\in X\), and any \(\Box\in\mathcal{G}\) and \(\Diamond\in\mathcal{F}\), (1) \(R_{\Box}^{(0)}[y]=(\Box y)^{\downarrow}\), (2) \(R_{\Box}^{(1)}[b]=(\blacklozenge b)^{\uparrow}\), (3) \(R_{\Diamond}^{(0)}[b]=(\Diamond b)^{\uparrow}\), and (4) \(R_{\Diamond}^{(1)}[y]=(\blacksquare y)^{\downarrow}\). We prove only (1) and (2). The proofs for (3) and (4) are analogous. 1. For any \(b\in A\), if \(bR_{\Box}y\in\mathcal{A}\), then \(bI\Box y\) can be added by the adjunction rule, and thus \(R_{\Box}^{(0)}[y]\subseteq(\Box y)^{\downarrow}\). If \(bR_{\Box}y\notin\mathcal{A}\), then \(bI\Box y\) is not added by applying adjunction rule to some \(bR_{\Box}y\) in the original tableau. Thus, by item 1 of Lemma 1, \(b:C,\Box y::C\in\overline{\mathcal{A}}\). Since \(\Box y::C\) can only be added by the appending rule if \(a_{C}I\Box y\in\overline{\mathcal{A}}\), and since this term can only be introduced by applying the adjunction rule to the term \(\blacklozenge a_{C}Iy\), some concept \(C^{\prime}\) exists such that \(\blacklozenge a_{C}:C^{\prime},y::C^{\prime}\in\overline{\mathcal{A}}\) (again by item 3 of Lemma 1). Then by the adjunction rule we have \(a_{C}:[R_{\Box}]C^{\prime}\in\overline{\mathcal{A}}\). Since \(b:C\), \(x_{\Box C^{\prime}}::C\), and \(y::C^{\prime}\) are all in \(\overline{\mathcal{A}}\), \(bIx_{\Box C^{\prime}}\) and \(b:[R_{\Box}]C^{\prime}\) must be in it as well. This, along with \(y::C^{\prime}\in\overline{\mathcal{A}}\), ensures that \(bR_{\Box}y\) is added to the tableau expansion at some step, and we can conclude that \((\Box y)^{\downarrow}\subseteq R_{\Box}^{(0)}[y]\), as desired. 2. For every \(b\in A\), if \(bR_{\Box}y\in\mathcal{A}\), then by the adjunction rule we add \(\blacklozenge bIy\). Thus, \(R_{\Box}^{(1)}[b]\subseteq(\blacklozenge b)^{\uparrow}\). If \(bR_{\Box}y\notin\mathcal{A}\), then by item 1 of Lemma 1, some terms \(\blacklozenge b:C\) and \(y::C\) must occur in \(\overline{\mathcal{A}}\) for some \(C\). So we have \(y::C\) and (by an adjunction rule) \(b:[R_{\Box}]C\), and hence \(bR_{\Box}y\) must occur in \(\overline{\mathcal{A}}\). So \(\blackloyIy\in\overline{\mathcal{A}}\) implies \(bR_{\Box}y\in\overline{\mathcal{A}}\). Thus, \((\blacklozenge b)^{\uparrow}\subseteq R_{\Box}^{(1)}[b]\), as desired. From the lemmas above, it immediately follows that the tuple \(M=(\mathbb{F},\cdot^{\downarrow})\), with \(\mathbb{F}\) and \(\cdot^{\downarrow}\) defined at the beginning of the present section, is a model for LE-\(\mathcal{ALC}\). The following lemma states that the interpretation of any concept \(C\) in the model \(M\) is completely determined by the terms of the form \(bIx_{C}\) and \(a_{C}Iy\) occurring in the tableau expansion. Lemma 4: _Let \(M=(\mathbb{F},\cdot^{\downarrow})\) be the model defined by the construction above. Then for any concept \(C\) and individuals \(b\), \(x\) occurring in \(\overline{\mathcal{A}}\),_ _(1) \(b\in[\![C]\!]_{M}\) iff \(bIx_{C}\in\overline{\mathcal{A}}\) (2) \(x\in(\![C]\!]_{M}\) iff \(a_{C}Ix\in\overline{\mathcal{A}}\)._ Proof: By induction on the complexity of \(C\). The base case (when \(C\) is atomic) is immediate by the construction of the model. 
For \(C=\top\), by rule \(\top\), and \(x_{\top}::\top\) from the creation rule, \(bIx_{T}\in\overline{\mathcal{A}}\) for any \(b\in A\). Therefore, \(x_{\top}^{\downarrow}=A=[\![\top]\!]\). For item 2, for any \(y\), and if \(a_{T}Iy\in\overline{\mathcal{A}}\), then by the appending rule \(y::\top\in\overline{\mathcal{A}}\). Then by \(\top\) and the basic rule \(bIy\in\overline{\mathcal{A}}\) for all \(b\). Thus, \((\![\top]\!]=A^{\uparrow}\subseteq a_{\top}^{\uparrow}\). Moreover, if \(y\in(\![\top]\!]\), then \(bIy\in\overline{\mathcal{A}}\) for any \(b\). In particular \(a_{T}Iy\in\overline{\mathcal{A}}\). Thus, \((\![\top]\!]=a_{\top}^{\uparrow}\). The proof for \(\perp\) is analogous. For the induction step, we have four cases. 1. Suppose \(C=C_{1}\lor C_{2}\). For the first claim, notice that \(b\in[\![C_{1}\lor C_{2}]\!]\) iff \(\forall y(y\in(\![C_{1}]\!]\cap(\![C_{2}]\!]\Rightarrow bIy)\). By the induction hypothesis, this is equivalent to \[\forall y(y::C_{1}\in\overline{\mathcal{A}}\;\;\&\;\;y::C_{2}\in\overline{ \mathcal{A}}\implies bIy\in\overline{\mathcal{A}}).\] By the creation rule for \(C_{1}\lor C_{2}\), we have \(x_{C_{1}\lor C_{2}}::C_{1}\lor C_{2}\), and consequently both \(x_{C_{1}\lor C_{2}}::C_{1}\) and \(x_{C_{1}\lor C_{2}}::C_{2}\) are added to the tableau. Thus, if the condition \(y::C_{1}\) & \(y::C_{2}\Rightarrow bIy\) is satisfied for any \(y\) in \(\overline{\mathcal{A}}\), then \(bIx_{C_{1}\lor C_{2}}\in\overline{\mathcal{A}}\). So \(b\in[\![C_{1}\lor C_{2}]\!]\) implies that \(bIx_{C_{1}\lor C_{2}}\in\overline{\mathcal{A}}\). Conversely, if \(bIx_{C_{1}\lor C_{2}}\in\overline{\mathcal{A}}\), then by the appending rule \(b:C_{1}\lor C_{2}\in\overline{\mathcal{A}}\). Thus, for any \(y::C_{1}\) and \(y::C_{2}\in\overline{\mathcal{A}}\), \(bIy\in\overline{\mathcal{A}}\) due to rule \(\lor_{A}\). Hence, \(bIx_{C_{1}\lor C_{2}}\in\overline{\mathcal{A}}\) implies \(\forall y(y::C_{1}\in\overline{\mathcal{A}}\ \ \&\ y::C_{2}\in\overline{\mathcal{A}} \implies bIy\in\overline{\mathcal{A}})\). As observed before, this is equivalent to \(y\in(\![C_{1}\lor C_{2}]\!]\), as desired. For the second claim, notice that \(x\in(\![C_{1}\lor C_{2}]\!]\) iff \(x\in(\![C_{1}]\!]\) and \(x\in(\![C_{2}]\!]\). By induction hypothesis, this is equivalent to \(x::C_{1}\) and \(x::C_{2}\) occurring in \(\overline{\mathcal{A}}\). By the creation rule for \(C_{1}\lor C_{2}\), \(a_{C_{1}\lor C_{2}}:C_{1}\lor C_{2}\in\overline{\mathcal{A}}\). Since \(x::C_{1},x::C_{2}\in\overline{\mathcal{A}}\), we have \(a_{C_{1}\lor C_{2}}Ix\in\overline{\mathcal{A}}\) by the rule \(\lor_{X}\). Conversely, if \(a_{C_{1}\lor C_{2}}Ix\in\overline{\mathcal{A}}\), then \(x::C_{1}\lor C_{2}\in\overline{\mathcal{A}}\) by the appending rules, which implies \(x::C_{1},x::C_{2}\in\overline{\mathcal{A}}\), or equivalently, \(x\in(\![C_{1}\lor C_{2}]\!]\). 2. The proof for \(C=C_{1}\wedge C_{2}\) is similar to the previous one. 3. Suppose \(C=[R_{\Box}]C_{1}\). For the first claim, note that \(b\in[\![R_{\Box}]C_{1}]\!]\) iff \(\forall y(y\in(\![C_{1}]\!]\Rightarrow bR_{\Box}y)\). By induction hypothesis, this is equivalent to \(\forall y(y::C_{1}\in\overline{\mathcal{A}}\Rightarrow bR_{\Box}y\in\overline {\mathcal{A}})\). Since \(x_{C_{1}}::C_{1}\in\overline{\mathcal{A}}\), by the creation rule for \(C_{1}\), it follows that \(bR_{\Box}x_{C_{1}}\in\overline{\mathcal{A}}\). By the adjunction rule, this implies \(bI\Box x_{C_{1}}=bIx_{\Box C_{3}}\in\overline{\mathcal{A}}\). 
Conversely, if \(bIx_{\Box C_{1}}\in\overline{\mathcal{A}}\), then by the appending rule also \(b:[R_{\Box}]C_{1}\in\overline{\mathcal{A}}\). That is, for any \(y\), if \(y::C_{1}\in\overline{\mathcal{A}}\), then \(bR_{\Box}y\in\overline{\mathcal{A}}\) by the expansion rule for \(\Box\). As observed before, this implication is equivalent to \(b\in[\![R_{\Box}]C_{1}]\!]\), as desired. For the second claim, notice that \(y\in(\![R_{\Box}]C_{1}]\!]\) iff \(\forall b(b\in[R_{\Box}]C_{1}\Rightarrow bIy)\). Equivalently (as proved previously), for all \(b\), if \(b:[R_{\Box}]C_{1}\in\overline{\mathcal{A}}\), implies \(bIy\in\overline{\mathcal{A}}\). Combining this with the fact that the creation rule for \([R_{\Box}]C_{1}\) implies \(a_{\Box C_{1}}::[R_{\Box}]C_{1}\in\overline{\mathcal{A}}\), this implies that \(a_{\Box C_{1}}Iy\in\overline{\mathcal{A}}\) as well. Conversely, suppose \(a_{\Box C_{1}}Iy\in\overline{\mathcal{A}}\). Then for any \(b\), if \(b:[R_{\Box}]C_{1}\in\overline{\mathcal{A}}\), then \(bIy\in\overline{\mathcal{A}}\). This is equivalent to \(y\in(\![R_{\Box}]C_{1}]\!]\). 4. The proof for \(C=\langle R_{\Diamond}\rangle C_{1}\) is similar to the previous one. Theorem 2.2 (Soundness): _The model \(M=(\mathbb{F},\cdot^{1})\) defined above satisfies the ABox \(\mathcal{A}\)._ Proof: We proceed by cases. 1. By construction, \(M\) satisfies all terms of the form \(bR_{\Box}y\), \(bIy\), or \(yR_{\Diamond}b\) in \(\mathcal{A}\). 2. By construction, any relational term is satisfied by \(M\) iff it explicitly occurs in \(\overline{\mathcal{A}}\). Thus, either \(M\) satisfies all terms of the form \(\neg(bR_{\Box}y)\), \(\neg(bIy)\), or \(\neg(yR_{\Diamond}b)\) occurring in \(\mathcal{A}\), or some expansion of \(\mathcal{A}\) contains a clash. 3. For the terms of the form \(b:C\), \(y::C\), \(\neg(b:C)\), or \(\neg(y::C)\), we have \(b\in[\![C]\!]\) iff \(bIx_{C}\in\overline{\mathcal{A}}\), and \(y\in(\![C]\!]\) iff \(a_{C}Iy\in\overline{\mathcal{A}}\) (Lemma 4). For any \(b:C\), \(y::C\), \(\neg(b:C)\), or \(\neg(y::C)\) occurring in \(\mathcal{A}\), we respectively add \(bIx_{C}\), \(a_{C}Iy\), \(\neg(bIx_{C})\), or \(\neg(a_{C}Iy)\) to \(\overline{\mathcal{A}}\) via the expansion rules, and thus \(M\) satisfies the constraints. The following corollary is an immediate consequence of the termination and soundness of the tableau procedure. Corollary 2 (Finite Model Property): _For any consistent LE-\(\mathcal{ALC}\) ABox \(\mathcal{A}\), some model of \(\mathcal{A}\) exists the size of which is polynomial in \(size(\mathcal{A})\)._ Proof: The model \(M\) of Theorem 2.2 is the required witness. The polynomial bound on the size of \(M\) follows from the proof of Theorem 2. ### Completeness of the tableau algorithm In this section, we prove the completeness of the tableau algorithm. The following lemma is key to this end, since it shows that every model for an LE-\(\mathcal{ALC}\) ABox can be extended to a model with classifying object and features. Lemma 5: _For any ABox \(\mathcal{A}\), any model \(M=(\mathbb{F},\,^{1})\) of \(\mathcal{A}\) can be extended to a model \(M^{\prime}=(\mathbb{F}^{\prime},\,^{1^{\prime}})\) such that \(\mathbb{F}^{\prime}=(A^{\prime},X^{\prime},I^{\prime},\{R^{\prime}_{\Box}\}_{ \Box\in\mathcal{G}},\{R^{\prime}_{\Diamond}\}_{\Diamond\in\mathcal{F}})\), \(A\subseteq A^{\prime}\) and \(X\subseteq X^{\prime}\), and moreover for every \(\Box\in\mathcal{G}\) and \(\Diamond\in\mathcal{F}\):_ _1. 
There exists \(a_{C}\in A^{\prime}\) and \(x_{C}\in X^{\prime}\) such that:_ \[C^{1^{\prime}}=(I^{\prime(0)}[x_{C}^{1^{\prime}}],I^{\prime(1)}[a_{C}^{1^{ \prime}}]),\quad a_{C}^{1^{\prime}}\in[\![C^{1^{\prime}}]\!],\quad x_{C}^{1^{ \prime}}\in(\![C^{1^{\prime}}]\!], \tag{4}\] _2. For every individual \(b\) in \(A\) there exist \(\Diamond b\) and \(\blacklozenge b\) in \(A^{\prime}\) such that:_ \[I^{\prime(1)}[\blacklozenge b]=R^{\prime(1)}_{\Box}[b^{1^{\prime}}]\quad \text{and}\quad I^{\prime(1)}[\Diamond b]=R^{\prime(0)}_{\Diamond}[b^{1^{ \prime}}], \tag{5}\] _3. For every individual \(y\) in \(X\) there exist \(\Box y\) and \(\blacksquare y\) in \(X^{\prime}\) such that:_ \[I^{\prime(0)}[\blacksquare y]=R^{\prime(1)}_{\Diamond}[y^{1^{\prime}}]\quad\text{ and}\quad I^{\prime(0)}[\Box y]=R^{\prime(0)}_{\Box}[y^{1^{\prime}}]. \tag{6}\] _4. For any \(C\), \([\![C^{1}]\!]=[\![C^{1^{\prime}}]\!]\cap A\) and \((\![C^{1}]\!]=(\![C^{1^{\prime}}]\!]\cap X\)._ Proof: Fix \(\Box\in\mathcal{G}\) and \(\Diamond\in\mathcal{F}\). Let \(M^{\prime}\) be defined as follows. For every concept \(C\), we add new elements \(a_{C}\) and \(x_{C}\) to \(A\) and \(X\) (respectively) to obtain the sets \(A^{\prime}\) and \(X^{\prime}\). For any \(J\in\{I,R_{\Box}\}\), any \(a\in A^{\prime}\) and \(x\in X^{\prime}\), we set \(aJ^{\prime}x\) iff one of the following holds: 1. \(a\in A\), \(x\in X\), and \(aJx\); 2. \(x\in X\), and \(a=a_{C}\) for some concept \(C\), and \(bJx\) for all \(b\in[\![C^{1}]\!]\); 3. \(a\in A\), and \(x=x_{C}\) for some concept \(C\), and \(aJy\) for all \(y\in(\![C^{1}]\!]\); 4. \(a=a_{C_{1}}\) and \(x=x_{C_{2}}\) for some \(C_{1}\), \(C_{2}\), and \(bJy\) for all \(b\in[\![C_{1}^{1}]\!]\), and \(y\in(\![C_{2}^{1}]\!]\). We set \(xR^{\prime}_{\Diamond}a\) iff one of the following holds: 1. \(a\in A\), \(x\in X\), and \(xR_{\Diamond}a\); 2. \(x\in X\), and \(a=a_{C}\) for some concept \(C\), and \(xR_{\Diamond}b\) for all \(b\in[\![C^{1}]\!]\); 3. \(a\in A\), and \(x=x_{C}\) for some concept \(C\), and \(yR_{\Diamond}a\) for all \(y\in(\![C^{1}]\!]\); 4. \(a=a_{C_{1}}\) and \(x=x_{C_{2}}\) for some \(C_{1}\), \(C_{2}\), and \(yR_{\Diamond}b\) for all \(b\in[\![C_{1}^{1}]\!]\), \(y\in(\![C_{2}^{1}]\!]\). For any \(b\in A\), \(y\in X\), let \(\blacklozenge b=a_{\Box(cl(b))}\), \(\Diamond b=a_{\Diamond(cl(b))}\), \(\blacksquare y=x_{\Diamond(cl(y))}\), and \(\Box y=x_{\Box(cl(y))}\), where \(cl(b)\) (resp. \(cl(y)\)) is the smallest concept generated by \(b\) (resp. \(y\)). For any \(C\), let \(C^{l^{\prime}}=(I^{\prime(0)}[x_{C}],I^{\prime(1)}[a_{C}])\). Then \(M^{\prime}\) is as required. Theorem 4.1 (Completeness): _Let \(\mathcal{A}\) be a consistent ABox and \(\mathcal{A}^{\prime}\) be obtained via the application of any expansion rule applied to \(\mathcal{A}\). Then \(\mathcal{A}^{\prime}\) is also consistent._ Proof: If \(\mathcal{A}\) is consistent, by Lemma 5, a model \(M^{\prime}\) of \(\mathcal{A}\) exists which satisfies (4), (5) and (6). The statement follows from the fact that any term added by any expansion rule is satisfied by \(M^{\prime}\) where we interpret \(a_{C}\), \(x_{C}\), \(\blacklozenge b\), \(\Diamond b\), \(\Box y\), \(\blacksquare y\) as in Lemma 5. Remark 2: The algorithm can easily be extended to acyclic TBoxes, via the unravelling technique (cf. [3] for details). 
## 5 Conclusion and future work

In this paper, we define a two-sorted non-distributive description logic LE-\(\mathcal{ALC}\) to describe and reason about formal concepts arising from (enriched) formal contexts from FCA. We describe ABox and TBox terms for the logic and define a tableaux algorithm for it. This tableaux algorithm decides the consistency of ABoxes and acyclic TBoxes, and provides a procedure to construct a model when the input is consistent. We show that this algorithm is computationally more efficient than the tableaux algorithm for \(\mathcal{ALC}\). This work can be extended in several interesting directions.

Dealing with cyclic TBoxes and RBox axioms. In this paper, we introduced a tableaux algorithm only for knowledge bases with acyclic TBoxes. We conjecture that the following statement holds for general (i.e. possibly cyclic) TBoxes.

Conjecture. The tableaux algorithm introduced in this paper can be extended to check the consistency of any knowledge base \((\mathcal{A},\mathcal{T})\) (with possibly cyclic TBox axioms) in time polynomial in \(size(\mathcal{A}\cup\mathcal{T})\).

Developing such an algorithm is a research direction we are currently pursuing. Another aspect we intend to develop in future work concerns giving a complete axiomatization for LE-\(\mathcal{ALC}\). RBox axioms are used in description logics to describe the relationships between different relations in knowledge bases and the properties of these relations, such as reflexivity, symmetry, and transitivity. It would be interesting to see if it is possible to obtain necessary and/or sufficient conditions on the shape of RBox axioms for which a tableaux algorithm can be obtained. This has an interesting relationship with the problem, in LE-logic, of providing computationally efficient proof systems for various extensions of LE-logic in a modular manner [16, 5].

Generalizing to other semantic frameworks. The non-distributive DL introduced in this paper is semantically motivated by a relational semantics for LE-logics which establishes a link with FCA. A different semantics for the same logic, referred to as graph-based semantics [12], provides another interpretation of the same logic as a logic suitable for evidential and hyper-constructivist reasoning. In the future, we intend to develop description logics for reasoning in the framework of graph-based semantics, to appropriately model evidential and hyper-constructivist settings.

Generalizing to more expressive description logics. The DL LE-\(\mathcal{ALC}\) is the non-distributive counterpart of \(\mathcal{ALC}\). A natural direction for further research is to explore the non-distributive counterparts of extensions of \(\mathcal{ALC}\) such as \(\mathcal{ALCI}\) and \(\mathcal{ALCIN}\).

Description logic and Formal Concept Analysis. The relationship between FCA and DL has been studied and used in several applications [1, 4, 17]. The framework of LE-\(\mathcal{ALC}\) formally brings FCA and DL together, both because its concepts are naturally interpreted as formal concepts in FCA, and because its language is designed to represent knowledge and reasoning in enriched formal contexts. Thus, these results pave the way to the possibility of establishing a closer and more formally explicit connection between FCA and DL, and of using this connection in theory and applications.
2306.09648
Learning CO$_2$ plume migration in faulted reservoirs with Graph Neural Networks
Deep-learning-based surrogate models provide an efficient complement to numerical simulations for subsurface flow problems such as CO$_2$ geological storage. Accurately capturing the impact of faults on CO$_2$ plume migration remains a challenge for many existing deep learning surrogate models based on Convolutional Neural Networks (CNNs) or Neural Operators. We address this challenge with a graph-based neural model leveraging recent developments in the field of Graph Neural Networks (GNNs). Our model combines graph-based convolution Long-Short-Term-Memory (GConvLSTM) with a one-step GNN model, MeshGraphNet (MGN), to operate on complex unstructured meshes and limit temporal error accumulation. We demonstrate that our approach can accurately predict the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with impermeable faults. Our results exhibit a better accuracy and a reduced temporal error accumulation compared to the standard MGN model. We also show the excellent generalizability of our algorithm to mesh configurations, boundary conditions, and heterogeneous permeability fields not included in the training set. This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures.
Xin Ju, François P. Hamon, Gege Wen, Rayan Kanfar, Mauricio Araya-Polo, Hamdi A. Tchelepi
2023-06-16T06:47:47Z
http://arxiv.org/abs/2306.09648v1
# Learning CO\({}_{2}\) plume migration in faulted reservoirs with Graph Neural Networks ###### Abstract Deep-learning-based surrogate models provide an efficient complement to numerical simulations for subsurface flow problems such as CO\({}_{2}\) geological storage. Accurately capturing the impact of faults on CO\({}_{2}\) plume migration remains a challenge for many existing deep learning surrogate models based on Convolutional Neural Networks (CNNs) or Neural Operators. We address this challenge with a graph-based neural model leveraging recent developments in the field of Graph Neural Networks (GNNs). Our model combines graph-based convolution Long-Short-Term-Memory (GConvLSTM) with a one-step GNN model, MeshGraphNet (MGN), to operate on complex unstructured meshes and limit temporal error accumulation. We demonstrate that our approach can accurately predict the temporal evolution of gas saturation and pore pressure in a synthetic reservoir with impermeable faults. Our results exhibit a better accuracy and a reduced temporal error accumulation compared to the standard MGN model. We also show the excellent generalizability of our algorithm to mesh configurations, boundary conditions, and heterogeneous permeability fields not included in the training set. This work highlights the potential of GNN-based methods to accurately and rapidly model subsurface flow with complex faults and fractures. keywords: Graph neural networks, carbon storage, two-phase flow, surrogate model, deep learning + Footnote †: journal: Elsevier ## 1 Introduction In geological carbon storage (GCS), large amounts of supercritical CO\({}_{2}\) are injected into subsurface reservoirs for permanent storage and must be monitored over very long periods of time to ensure safe and effective storage [1; 2]. Underground formations often exhibit high degrees of heterogeneity characterized by stratigraphic layering [3] and the presence of faults and fractures [4; 5]. These geological features critically impact the migration and the long-term behavior of CO\({}_{2}\) plumes. Moreover, faults and fractures could potentially lead to hazards such as induced seismicity or leakage [6; 7]. To ensure the safety and effectiveness of CO\({}_{2}\) injection projects, faults and fractures must be accurately modeled in high-fidelity (HF) numerical simulations. As a result, there is a strong interest in building numerical models based on unstructured polyhedral meshes that conform to the complex geological features of the porous medium [8; 9; 10]. In addition, managing geological uncertainty in large-scale storage operations requires running a large number of accurate numerical predictions of CO\({}_{2}\) plume migration over decades to centuries [11; 12; 13]. This leads to extremely high computational costs for planning CO\({}_{2}\) storage projects with faults. Data-driven deep-learning-based (DL) surrogate models for subsurface flow problems have shown great potential to complement HF numerical simulations and reduce the computational burden of uncertainty quantification studies. Data-driven deep-learning (DL) models rely on data to learn the underlying physics. They approximate the input and output of interest by building statistical models from simulation data generated by HF simulators. These methods target the minimization of the data loss between prediction fields and label data and can efficiently achieve converged solutions with satisfactory accuracy [14]. 
Previous DL-based models have shown excellent accuracy in predicting flow dynamics [11] and better computational efficiency than HF reservoir simulators [15; 16]. Mo et al. developed a DL surrogate model that integrates an autoregressive model with a convolutional-NNs-based (CNNs) encoder-decoder network to forecast CO\({}_{2}\) plume migration in random 2D permeability fields [17]. Tang et al. [15; 16] combined a residual U-Net (R-U-Net) with convLSTM networks to predict the temporal evolution of the saturation and pressure fields in 2D and 3D oil production simulations. Their recurrent R-U-Net model was later applied to CO\({}_{2}\) storage with coupled flow and geomechanics [18]. Wen et al. [19] developed an R-U-Net-based surrogate model for CO\({}_{2}\) plume migration, where injection durations, injection rates, and injection locations are encoded as channels of input images. Recently, Wen et al. [20; 21] combined U-Net and Fourier neural operator [20] by adding convolutional information in the Fourier layer, which yields significantly improved cost-accuracy trade-off. However, these existing data-driven surrogate models are limited to Cartesian meshes with simple geometries, which fails to predict in unstructured meshes with stencils that vary in size and shape [22; 23]. For example, CNNs are designed for image processing and exploit the Cartesian structure of pixels, implying that these models can only efficiently operate on regular grids [16; 18]. This limitation significantly undermines the applicability of these surrogates to field-scale models with complex geological features such as faults and fractures. To overcome these limitations, here we aim to construct a DL surrogate model based on a graph neural network (GNN) that can capture the flow dynamics in realistic subsurface flow problems modeled with complex unstructured meshes conforming to faults and fractures. The key feature of GNNs is to represent unstructured simulation data as graphs with nodes and edges, in which the nodes represent cell-centered data (e.g., pore pressure, phase saturation). In contrast, the edges represent cell-to-cell connectivity and face-centered data (e.g., transmissibility, Darcy fluxes). This is key to enabling the DL surrogate to operate with unstructured-mesh-based simulation data containing complex internal structures. Recently, a class of GNNs named message-passing neural networks (MPNNs) has demonstrated its efficiency in learning forward dynamics [24; 25; 26; 27]. In MPNNs, a learnable message-passing function is designed to propagate information over the graph through a local aggregation process [23]. Using the local aggregation process works effectively on embedding spaces and helps the model learn better representations. This aggregation operation is pivotal to enable a node to incorporate information from its neighbors, enriching the own representation of the node in the embedding space. As a result, these improved representations produce more accurate predictions and contribute to the potency of MPNNs in handling unstructured graph-based data. Of particular relevance to this work is the MPNN methodology named MeshGraphNet (MGN) proposed by Pfaff et al. (2020) [25], in which the training graph is constructed from an HF simulation mesh. The authors demonstrated that by encoding various physical quantities-depending on underlying physics-as edge features of a graph, MGN could be a fast surrogate trained from unstructured HF simulations. 
This work also shows that MGN-based surrogate models have the ability to generalize to meshes unseen during training and can capture internal boundaries more accurately than CNN-based models. Wu et al. [28] applied the MGN architecture to an oil-and-gas problem and developed a hybrid architecture to learn the dynamics of reservoir simulations on Cartesian meshes. The hybrid architecture used a U-Net (a variant of CNNs [29]) for the pore pressure and MGN for the phase saturation. Notably, these GNN surrogate models all focus on next-state predictions, i.e., they approximate the next state of a physical system from the current state and advance in time in an autoregressive manner. However, recent work has shown that next-state models are prone to suffer from substantial temporal error accumulation [25; 30] when autoregressively rolling out for a long period. This limitation is problematic to predict CO\({}_{2}\) plume behavior, as CO\({}_{2}\) operations often require the simulation of multiple decades of injection for a commercial-scale project. Therefore, in this work, we introduce a graph-based recurrent neural network to mitigate temporal error accumulation and achieve a reliable long-term prediction. The proposed recurrent GNN framework includes (1) a modified MGN that encodes and processes the current physical state into embedding spaces of the entire graph and (2) a graph-based recurrent convLSTM (GConvLSTM) [31] model that predicts the next state based on the embeddings computed by MGN and on recurrent memories from past states. In comparison to the original next-state MGN predictor, the proposed algorithm, referred to as MeshGraphNet-Long Short-Term Memory (MGN-LSTM), can better mitigate temporal error accumulation and significantly improves the performance for the long-term prediction of CO\({}_{2}\) plume behavior. Our MGN-LSTM algorithm can accurately approximate HF simulations on unstructured meshes and is generalizable to meshes, boundary conditions, and permeability fields unseen during training. Our main contributions include: (1) using GNN to overcome the current limitations of previous surrogate models to handle complex simulation data on unstructured meshes in the context of CO\({}_{2}\) geological storage in faulted reservoirs; (2) introducing the accurate MGN-LSTM architecture reducing temporal error accumulation; and (3) demonstrating the generalizability of MGN-LSTM to unseen meshes and boundary conditions as well as its stable extrapolation to future states. This paper proceeds as follows. In Section 2, we introduce the two-phase flow equations applicable to CO\({}_{2}\) geological storage. Section 3 describes the proposed surrogate model (MGN-LSTM) and associated data-processing and training procedures. In Section 4, the MGN-LSTM is used to predict saturation and pore pressure fields in two-phase flow (CO\({}_{2}\)-brine) problems. Section 5.1 demonstrates the improved accuracy of MGN-LSTM compared to the standard MGN algorithm. Section 6 concludes the work and suggests future research directions. ## 2 Problem statement ### Governing equations of CO\({}_{2}\)-brine flow In this work, we consider miscible two-phase (gas and aqueous) two-component (H\({}_{2}\)O and CO\({}_{2}\)) flow in a compressible porous medium. We employ a 2D domain in the \(x-y\) plane for simplicity, but the model and algorithms presented here can be extended to 3D. 
The H\({}_{2}\)O component is only present in the aqueous phase, while the CO\({}_{2}\) component can be present in both the aqueous and the gas phases. We denote the aqueous and the gas phases using the subscripts \(a\) and \(g\), respectively. The mass conservation of each component reads: \[\frac{\partial}{\partial t}\left(\phi\sum_{\ell=1}^{2}x_{c\ell} \rho_{\ell}s_{\ell}\right)+\nabla\cdot\left(\sum_{\ell=1}^{2}x_{c\ell}\rho_{ \ell}\nu_{\ell}\right)+\sum_{\ell=1}^{2}x_{c\ell}\rho_{\ell}q_{\ell}=0,\qquad c =\{\mathrm{H}_{2}O,CO_{2}\}, \tag{1}\] where \(\phi\) is the porosity, \(x_{c\ell}\) is the mass fraction of component \(c\) in phase \(\ell\), \(\rho_{\ell}\) is the density of phase \(\ell\), \(s_{\ell}\) is the saturation of phase \(\ell\), \(\nu_{\ell}\) is the Darcy velocity of phase \(\ell\), and \(q_{\ell}\) is the source flux for phase \(\ell\). Using the multiphase extension of Darcy's law, we write that the Darcy velocity is proportional to the gradient of the pressure: \[\mathbf{v}_{\ell}=-\frac{k_{r\ell}}{\mu_{\ell}}\bar{\mathbf{k}}\cdot \nabla p_{\ell},\qquad\ell=\{a,g\}, \tag{2}\] where \(k_{r\ell}\) is the relative permeability of phase \(\ell\), \(\mu_{\ell}\) is the viscosity of phase \(\ell\), \(\bar{\mathbf{k}}\) is the permeability tensor, and \(p_{\ell}\) is the phase pressure. In this work, we assume that the permeability tensor is diagonal with an equal value for each entry. The system is closed with the following constraints \[s_{g}+s_{a}=1,\] (saturation constraint) (3) \[p_{g}-p_{a}=p_{c}(s_{g}),\] (capillary pressure constraint) (4) \[x_{\mathrm{H}_{2}O,\ell}+x_{\mathrm{CO}_{2},\ell}=1,\qquad\ell \in\{a,g\},\] (component fraction constraints) (5) as well as standard thermodynamics constraints on fugacities. The partitioning of the mass of the CO\({}_{2}\) component between the gas phase and the aqueous phase is determined as a function of pressure, temperature, and salinity using the model of Duan and Sun [32]. The gas phase densities and viscosities are computed using the Span-Wagner [33] and Fenghour-Wakeham [34] correlations, respectively, while the brine properties are obtained using the methodology of Phillips et al. [35]. The relative permeabilities are computed with the Brooks-Corey model as \(k_{rg}(s_{g})=0.95(s_{g}/s_{g,max})^{2}\) and \(k_{ra}(s_{a})=((s_{a}-s_{a,min})/(1-s_{a,min}))^{6}\). The capillary pressure \(p_{c}\) is computed from \(s_{g}\) using the Leverett J-function relationship. The domain is initially saturated with brine, with an initial pressure of 10 MPa and an initial temperature of 143.76 degrees Celsius. We use analytical (Carter-Tracy) aquifer boundary conditions. A well injects pure supercritical CO\({}_{2}\) at a rate of 0.058 kg/s for 950 days, assuming a storage reservoir with unit meter thickness. ### Finite-volume discretization on unstructured polygonal meshes Considering a mesh with \(n_{C}\) cells, the system of equations (1)-(2) is discretized with a cell-centered, fully implicit (backward-Euler) finite-volume scheme based on a two-point flux approximation (TPFA) and single-point upstream weighting. The primary variables are chosen to be the gas (CO\({}_{2}\)-rich) phase pressure \(p=p_{g}\), the overall component densities \(\rho_{\mathrm{H}_{2}O}\), and \(\rho_{\mathrm{CO}_{2}}\), where an overall component density represents the mass of a given component per unit volume of mixture. The primary variables can be related to the variables of equation (1) using the formulas given in [36]. 
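To make the constitutive relations above more tangible, the short sketch below evaluates the Brooks-Corey relative permeability curves exactly as written in this section. The endpoint saturations \(s_{g,max}\) and \(s_{a,min}\) are not given numerically in the text, so the values used here are illustrative placeholders.

```python
import numpy as np

# Brooks-Corey relative permeabilities as given in the text:
#   k_rg(s_g) = 0.95 * (s_g / s_g_max)^2
#   k_ra(s_a) = ((s_a - s_a_min) / (1 - s_a_min))^6
# The endpoint saturations below are illustrative placeholders (not stated in the text).
S_G_MAX = 0.8   # assumed maximum gas saturation
S_A_MIN = 0.2   # assumed residual (minimum) aqueous saturation

def k_rg(s_g: np.ndarray) -> np.ndarray:
    """Gas relative permeability."""
    return 0.95 * (np.clip(s_g, 0.0, S_G_MAX) / S_G_MAX) ** 2

def k_ra(s_a: np.ndarray) -> np.ndarray:
    """Aqueous relative permeability."""
    s_eff = np.clip((s_a - S_A_MIN) / (1.0 - S_A_MIN), 0.0, 1.0)
    return s_eff ** 6

if __name__ == "__main__":
    s_g = np.linspace(0.0, S_G_MAX, 5)
    s_a = 1.0 - s_g                      # saturation constraint s_g + s_a = 1
    print(np.column_stack([s_g, k_rg(s_g), k_ra(s_a)]))
```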
At each time step, the nonlinear system of discretized equations is solved with Newton's method with damping to update all the primary variables in a fully coupled fashion. All the simulations are performed with GEOS, an open-source multiphysics simulator targeting geological carbon storage and other subsurface energy systems [37; 38; 7]. Unstructured polygonal meshes are well suited to represent complex faults and to perform local spatial refinement around the wells. In this work, we focus on the class of perpendicular bisector (PEBI) grids [39; 8; 9; 10; 40] generated with MRST (Matlab Reservoir Simulation Toolbox) [41] to mesh a 1 km x 1 km x 1 m domain containing an injector well and two straight impermeable faults in a conforming fashion (see an example mesh shown in Figs. 1(a)). As PEBI meshes are orthogonal by construction, the flow simulations can be performed with the TPFA finite-volume scheme without compromising the accuracy of the solution. ## 3 Recurrent GNN surrogate model In this section, we first describe the graph representation of unstructured mesh-based simulation data. Then, we introduce the key aspects of the deep-learning-based surrogate model for two-phase subsurface flow problems, including the learning task, model architecture, training procedure, and data preparation. ### Input and output graph representation To leverage the capabilities of graph-based machine learning, we represent the unstructured mesh-based input and output data at a given time as a directed graph with properties. The graph is denoted by \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) where \(\mathcal{V}\) and \(\mathcal{E}\) are node and edge sets, respectively. As shown in Fig. 1(a), each mesh cell \(i\) is represented by a graph node \(i\in\mathcal{V}\). Two adjacent cells \(i\) and \(j\) connected with non-zero transmissibility are represented by two graph edges, with edge \((i,j)\in\mathcal{E}\) pointing to node \(j\) and edge \((j,i)\) pointing to node \(i\). The transmissibility \(T_{ij}\) at each connection between cells \(i\) and \(j\) is computed as a function of grid geometry and rock permeability as explained in [41; 42], and is equal to zero when two cells are separated by a fault. For a given set of mesh and permeability, the graph structure remains fixed for all time steps. The properties associated with node \(i\) at time \(t_{n}\) are termed as node feature, \(\mathbf{x}_{i}^{n}:\mathcal{V}\rightarrow\mathbb{R}^{n_{N}}\). The properties corresponding to edge \((i,j)\) are referred to as edge feature, \(\mathbf{f}_{(i,j)}:\mathcal{E}\rightarrow\mathbb{R}^{n_{E}}\) and are independent of time. The dimensions of the node and edge features, \(n_{N}\) and \(n_{E}\), are specified below. As illustrated in Fig. 1(d), the node feature \(\mathbf{x}_{i}^{n}\) consists of a combination of dynamic variables at time \(t_{n}\), \(y_{i}^{n}\in\mathbb{R}\), and static model features \(\mathbf{m}_{i}\in\mathbb{R}^{n_{M}}\), where \(n_{M}\) is the dimension of the static model features. Specifically, \(y_{i}^{n}\) denotes Figure 1: Construction of the graph representation from an unstructured mesh: (a) an example mesh with two impermeable faults and one injector; highlighted areas are zoomed-in sections of (b) cells along fault line #2; (c) internal cells and their graph representations; and (d) input node features and edge features as well as node output (node label). 
the state variable whose dynamics are learned by the neural network, such as phase saturation, \(s_{g,i}\), or pore pressure, \(p_{g,i}\). Note that in this work, for each dynamic variable (\(s_{g}\) or \(p_{g}\)), we train one individual prediction model. The static model parameters \(\mathbf{m}_{i}\) include the scalar permeability \(k_{i}\) (one dimension), the cell volume \(V_{i}\) (one dimension), the cell center coordinates \(\mathbf{x}_{i}\) (two dimensions), and the cell type \(\mathbf{n}_{i}\) (a one-hot vector of dimension 4), such that the dimension of the model parameters is \(n_{M}=1+1+2+4=8\) and the dimension of the node features is \(n_{N}=n_{M}+1=9\). As shown in Fig. 1(b) and (c), the cell type is used to identify cells playing a key role in the simulation, such as the cells along the faults and the cells where source terms (well) or boundary conditions are imposed. The fourth cell type includes the remaining cells (not along faults, and not where source and boundary conditions are imposed). The edge feature \(\mathbf{f}_{(i,j)}\) of edge \((i,j)\) is constructed to enrich the graph connectivity information with the (signed) distance between cell centers, \(\mathbf{x}_{j}-\mathbf{x}_{i}\in\mathbb{R}^{2}\) (see Fig. 1(d)), and its absolute value, such that \(n_{E}=3\). The schematic expressions of node and edge features used in this study are given in Fig. 1(d). We explored four configurations of node and edge inputs; the corresponding variables are detailed in Table 1. ### MeshGraphNet-Long Short-Term Memory (MGN-LSTM) The proposed MGN-LSTM model is designed to learn the spatio-temporal evolution of the selected dynamic variable (pressure or CO\({}_{2}\) saturation) of the two-phase flow problem defined in Section 2.1. Given the initial state \(\mathbf{Y}^{0}=[y_{1}^{0},\dots,y_{n_{C}}^{0}]^{T}\) and the static model features \(\mathbf{M}=[(\mathbf{m}_{1})^{T},\dots,(\mathbf{m}_{n_{C}})^{T}]^{T}\) defined at the nodes of \(\mathcal{G}\), and given the features at all edges \(\mathbf{F}\), we compute the sequence of dynamic variables \((\hat{\mathbf{Y}}^{1},\dots,\hat{\mathbf{Y}}^{n_{T}})\) in an autoregressive way as follows: \[\hat{\mathbf{Y}}^{0}=\mathbf{Y}^{0}, \tag{6}\] \[\hat{\mathbf{Y}}^{n+1}=f_{\text{MGN-LSTM},\theta}(\mathcal{G},\hat{\mathbf{Y}}^{n},\mathbf{M},\mathbf{F}),\qquad n\in\{0,\dots,n_{T}-1\}, \tag{7}\] where \(f_{\text{MGN-LSTM},\theta}\) is the surrogate model parameterized by the model weights \(\theta\). The number of cells in the mesh is denoted by \(n_{C}\) and the number of temporal snapshots by \(n_{T}\). Once the model is trained, inference follows the same autoregressive rollout of Eq. (7), with all learnable parameters kept fixed. The training process will be discussed in detail in the next section. In MGN-LSTM, the temporal evolution of the dynamic variable is captured by learning directly on the graph using latent node and edge features derived from the physical node and edge features reviewed in Section 3.1. Figure 2: Workflow schematic for the recurrent GNN model proposed in this article, referred to as MGN-LSTM. Given a domain in which supercritical CO\({}_{2}\) is injected in the center, the goal of MGN-LSTM is to learn a sequence of dynamic variables. The domain is discretized using an unstructured mesh that conforms to two fault lines and an injection well. A GNN framework based on MGN is used to predict physical quantities from time step \(n\) to time step \(n+1\) until the end of the simulation.
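To illustrate the graph representation of Section 3.1, the sketch below assembles the 9-dimensional node features and 3-dimensional edge features for a small, made-up set of cells and faces, dropping connections whose transmissibility is zero (faulted faces). The toy data, helper names, and the use of a PyTorch Geometric `Data` container are our own assumptions for the example, not the authors' implementation.

```python
import torch
from torch_geometric.data import Data

# Toy inputs (assumed): per-cell quantities and a face list.
s_g    = torch.tensor([0.0, 0.3, 0.1, 0.0])                 # dynamic variable y_i^n
k      = torch.tensor([50.0, 45.0, 60.0, 55.0])             # permeability (mD)
vol    = torch.tensor([1.0e4, 1.2e4, 0.9e4, 1.1e4])         # cell volume
center = torch.tensor([[10., 10.], [20., 10.], [10., 20.], [20., 20.]])
ctype  = torch.tensor([3, 1, 0, 3])                         # e.g. 0=fault, 1=well, 2=boundary, 3=internal
# faces: (cell_i, cell_j, transmissibility); T_ij = 0 across a fault
faces  = [(0, 1, 2.1), (0, 2, 1.7), (1, 3, 0.0), (2, 3, 1.9)]

# Node features x_i^n = [y_i^n, m_i] with m_i = [k, V, x, y, one_hot(cell type)] -> 9 dims
node_x = torch.cat(
    [s_g.unsqueeze(1), k.unsqueeze(1), vol.unsqueeze(1), center,
     torch.nn.functional.one_hot(ctype, num_classes=4).float()], dim=1)

# Edges: two directed edges per connection with non-zero transmissibility
src, dst, edge_f = [], [], []
for i, j, t_ij in faces:
    if t_ij == 0.0:                                   # faulted connection -> no graph edge
        continue
    for a, b in ((i, j), (j, i)):
        d = center[b] - center[a]                     # signed distance x_j - x_i
        src.append(a); dst.append(b)
        edge_f.append(torch.cat([d, d.norm().unsqueeze(0)]))  # [dx, dy, |x_j - x_i|] -> 3 dims

graph = Data(x=node_x,
             edge_index=torch.tensor([src, dst], dtype=torch.long),
             edge_attr=torch.stack(edge_f))
print(graph)   # Data(x=[4, 9], edge_index=[2, 6], edge_attr=[6, 3])
```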
A graph-based recurrent neural network framework based on Graph ConvLSTM [31] is employed to better capture and represent temporal dynamics, building on the recent work of Pfaff et al. [25]. Specifically, our algorithm, sketched in Fig. 2 and described in the next sections, combines the encoder-processor-decoder procedure of MGN [25] with a graph-based sequence model named Graph ConvLSTM [31] to perform the learning task. Let us consider a one-step prediction, i.e., the computation of \(\hat{\mathbf{Y}}^{n+1}\) using \(\hat{\mathbf{Y}}^{n}\) in Eq. (7). First, the encoder-processor steps detailed in Section 3.2.1 map the physical node/edge features to the latent node/edge features. Then, as described in Section 3.2.2, the latent space variables are used as input to the Graph ConvLSTM algorithm, which aims at retaining spatial-temporal information encoded in the recurrent memories. Finally, the output of Graph ConvLSTM is decoded and mapped to the physical space using the procedure of Section 3.2.3. At that point, the one-step prediction of the dynamic variable is complete. In the following subsections, we define the main components of the proposed architecture. #### 3.2.1 Encoder and processor The encoder is the first step of the prediction in MGN-LSTM. Using the graph representation of Section 3.1, we compute the initial latent feature vectors at time \(t_{n}\), \(\mathbf{v}_{i}^{n,0}\in\mathbb{R}^{n_{H}}\) and \(\mathbf{e}_{(i,j)}^{n,0}\in\mathbb{R}^{n_{H}}\), from the physical feature vectors, \(\mathbf{x}_{i}^{n}\) and \(\mathbf{f}_{(i,j)}\). The hyperparameter \(n_{H}\) denotes the size of the latent vectors. The computation of the latent vectors is done using the node and edge multilayer perceptrons (MLPs), denoted respectively by \(\text{mlp}_{v}^{0}\) and \(\text{mlp}_{e}^{0}\), as follows: \[\mathbf{v}_{i}^{n,0} =\text{mlp}_{v}^{0}\left(\mathbf{x}_{i}^{n}\right), \tag{8}\] \[\mathbf{e}_{(i,j)}^{n,0} =\text{mlp}_{e}^{0}\left(\mathbf{f}_{(i,j)}\right).\] The graph equipped with the initial latent node and edge features computed by the encoder at time \(t_{n}\) in Eq. (8) is the input to the processor. The processor consists of \(m\) message-passing steps computed in sequence. At step \(\ell\in\{1,\dots,m\}\) in the sequence, each graph edge feature \(\mathbf{e}_{(i,j)}^{n,\ell}\) is updated using its value at the previous message-passing step \(\ell-1\) and the values of the adjacent node features at step \(\ell-1\), as follows \[\mathbf{e}_{(i,j)}^{n,\ell}=\text{mlp}_{e}^{\ell}\left(\left[\mathbf{e}_{(i,j)}^{n,\ell-1},\mathbf{v}_{i}^{n,\ell-1},\mathbf{v}_{j}^{n,\ell-1}\right]\right),\qquad\ell\in\{1,\dots,m\} \tag{9}\] to obtain the updated value. In Eq. (9), the operator \([\cdot]\) concatenates the given arguments on the feature dimension. The mapping in each step of the message passing is computed using an MLP with a residual connection and the Rectified Linear Unit (ReLU) as the non-linear activation function. Then, each graph node \(\mathbf{v}_{i}^{n,\ell}\) is updated using its value at the previous message-passing step, \(\ell-1\), and the aggregation of its incident edge features at step \(\ell\): \[\mathbf{v}_{i}^{n,\ell}=\text{mlp}_{v}^{\ell}\left(\left[\mathbf{v}_{i}^{n,\ell-1},\sum_{j\in\text{adj}(i)}\mathbf{e}_{(i,j)}^{n,\ell}\right]\right),\qquad\ell\in\{1,\dots,m\} \tag{10}\] where adj(\(i\)) is the set of nodes connected to node \(i\). Using Eqs. (9)-(10), the processor computes an updated set of node features that are then used by the Graph ConvLSTM to produce recurrent memories.
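A minimal PyTorch sketch of the encoder of Eq. (8) and a single message-passing step of Eqs. (9)-(10) is given below. The latent size of 100 and the two-layer MLP structure with ReLU and LayerNorm follow the appendix tables; the class name, the sum aggregation via `index_add_`, and the overall organization are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

LATENT = 100  # latent size n_H, as in Appendix 9.1

def mlp(in_dim: int, out_dim: int = LATENT) -> nn.Sequential:
    """Two-layer MLP with ReLU and LayerNorm, following Tables 3-4."""
    return nn.Sequential(nn.Linear(in_dim, LATENT), nn.ReLU(),
                         nn.Linear(LATENT, out_dim), nn.LayerNorm(out_dim))

class EncoderProcessorStep(nn.Module):
    """Encoder (Eq. 8) plus one message-passing step (Eqs. 9-10)."""
    def __init__(self, node_dim: int = 9, edge_dim: int = 3):
        super().__init__()
        self.enc_v = mlp(node_dim)          # mlp_v^0
        self.enc_e = mlp(edge_dim)          # mlp_e^0
        self.mp_e  = mlp(3 * LATENT)        # mlp_e^l acting on [e_ij, v_i, v_j]
        self.mp_v  = mlp(2 * LATENT)        # mlp_v^l acting on [v_i, sum_j e_ij]

    def forward(self, x, edge_index, edge_attr):
        v = self.enc_v(x)                   # Eq. (8), nodes
        e = self.enc_e(edge_attr)           # Eq. (8), edges
        src, dst = edge_index               # edge (i, j): i = src, j = dst
        # Eq. (9): edge update with residual connection
        e = e + self.mp_e(torch.cat([e, v[src], v[dst]], dim=1))
        # Eq. (10): sum-aggregate incident edge messages at the receiving node
        agg = torch.zeros_like(v).index_add_(0, dst, e)
        v = v + self.mp_v(torch.cat([v, agg], dim=1))
        return v, e

# usage with the toy graph built earlier (assumed available as `graph`):
# v, e = EncoderProcessorStep()(graph.x, graph.edge_index, graph.edge_attr)
```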
The update of the edge-based messages, \(\mathbf{e}_{(i,j)}^{n,\ell}\), is key to the accuracy of the MGN flow predictions as it propagates information between neighboring graph nodes (i.e., between neighboring mesh cells). This design choice differentiates MGN from other classical GNN frameworks relying only on node features (see [43]), such as GCN and GraphSAGE. Moreover, leveraging edge information makes it possible to capture nontrivial topological information regarding connectivity and transmissibility, which play an important role in HF simulations and could be used to infuse more physics into the data-driven model (see Section 4.5). #### 3.2.2 Graph-based convolutional LSTM model To limit the temporal error accumulation and improve prediction accuracy, we complement MGN with a variant of Convolutional Long Short-Term Memory (ConvLSTM) that operates on graph data, named Graph Convolutional LSTM (GConvLSTM) [31]. The latter is obtained by replacing the convolutional operator in ConvLSTM with a graph operator. Specifically, we follow the choice made in [31] and replace the Euclidean 2D convolution kernel with the Chebyshev spectral convolutional kernel [44], whose hyperparameters are given in Appendix 9.1.3. Graph spectral filters are known to perform effectively on graph-based data with a small number of parameters thanks to their isotropic nature [31]. The goal of the Graph ConvLSTM step at time \(t_{n}\) is to compute the cell state \(\mathbf{C}^{n}\) and the hidden state \(\mathbf{H}^{n}\). This is done as a function of the latent representation of node features computed by the processor of MGN at time \(t_{n}\) (Section 3.2.1) and of the recurrent memories \(\mathbf{C}^{n-1}\) and \(\mathbf{H}^{n-1}\). Using the terminology of ConvLSTM, the GConvLSTM architecture involves a set of memory cells, namely the cell state \(\mathbf{C}^{n}\) and the hidden state \(\mathbf{H}^{n}\). GConvLSTM also relies on input gates \(\mathbf{i}^{n}\), output gates \(\mathbf{o}^{n}\), and forget gates \(\mathbf{f}^{n}\) defined in [31]. These gates are based on a graph convolution operator and are used to control the flow of information into, out of, and within the memory cells. By construction, the cell state \(\mathbf{C}^{n}\in\mathbb{R}^{n_{C}\times n_{H}}\) and the hidden state \(\mathbf{H}^{n}\in\mathbb{R}^{n_{C}\times n_{H}}\) exhibit temporal dynamics and can contain spatial structural information of the graph-based input \(\mathbf{V}^{n,m}=[(\mathbf{v}_{1}^{n,m})^{T},\ldots,(\mathbf{v}_{n_{C}}^{n,m})^{T}]^{T}\) at time \(t_{n}\). The functions of GConvLSTM are computed as follows: \[\mathbf{i}^{n} =\sigma\big{(}\mathbf{W}_{xi}*_{\mathcal{G}}\mathbf{V}^{n,m}+\mathbf{W}_{hi}*_{\mathcal{G}}\mathbf{H}^{n-1}+\mathbf{b}_{i}\big{)}\,, \tag{11}\] \[\mathbf{f}^{n} =\sigma\big{(}\mathbf{W}_{xf}*_{\mathcal{G}}\mathbf{V}^{n,m}+\mathbf{W}_{hf}*_{\mathcal{G}}\mathbf{H}^{n-1}+\mathbf{b}_{f}\big{)}\,,\] \[\mathbf{C}^{n} =\mathbf{f}^{n}\odot\mathbf{C}^{n-1}+\mathbf{i}^{n}\odot\tanh\big{(}\mathbf{W}_{xc}*_{\mathcal{G}}\mathbf{V}^{n,m}+\mathbf{W}_{hc}*_{\mathcal{G}}\mathbf{H}^{n-1}+\mathbf{b}_{c}\big{)}\,,\] \[\mathbf{o}^{n} =\sigma\big{(}\mathbf{W}_{xo}*_{\mathcal{G}}\mathbf{V}^{n,m}+\mathbf{W}_{ho}*_{\mathcal{G}}\mathbf{H}^{n-1}+\mathbf{W}_{co}\odot\mathbf{C}^{n}+\mathbf{b}_{o}\big{)}\,,\] \[\mathbf{H}^{n} =\mathbf{o}^{n}\odot\tanh\left(\mathbf{C}^{n}\right)\,,\] where \(*_{\mathcal{G}}\) denotes the graph convolution operator, \(\odot\) denotes the Hadamard product, and \(\sigma(\cdot)\) is the sigmoid activation function.
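The gate structure of Eq. (11) can be made explicit with a hand-written cell built on Chebyshev graph convolutions, as sketched below. The `torch_geometric_temporal` library provides a ready-made GConvLSTM layer that could be used instead; the minimal cell here only illustrates the equations, and the treatment of the peephole term \(\mathbf{W}_{co}\) as an elementwise parameter is our own assumption.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import ChebConv

class GConvLSTMCell(nn.Module):
    """Minimal GConvLSTM cell mirroring Eq. (11); Chebyshev filter size K as in Appendix 9.1.3."""
    def __init__(self, in_dim: int, hidden: int, K: int = 8):
        super().__init__()
        # one Chebyshev graph convolution per weight matrix W_x* / W_h*
        self.conv_x = nn.ModuleDict({g: ChebConv(in_dim, hidden, K, bias=False) for g in "ifco"})
        self.conv_h = nn.ModuleDict({g: ChebConv(hidden, hidden, K, bias=False) for g in "ifco"})
        self.w_co = nn.Parameter(torch.zeros(hidden))            # peephole weight W_co (elementwise)
        self.bias = nn.ParameterDict({g: nn.Parameter(torch.zeros(hidden)) for g in "ifco"})

    def forward(self, v, edge_index, h, c):
        # v: latent node features V^{n,m} from the processor, shape [n_C, in_dim]
        i = torch.sigmoid(self.conv_x["i"](v, edge_index) + self.conv_h["i"](h, edge_index) + self.bias["i"])
        f = torch.sigmoid(self.conv_x["f"](v, edge_index) + self.conv_h["f"](h, edge_index) + self.bias["f"])
        c = f * c + i * torch.tanh(self.conv_x["c"](v, edge_index) + self.conv_h["c"](h, edge_index) + self.bias["c"])
        o = torch.sigmoid(self.conv_x["o"](v, edge_index) + self.conv_h["o"](h, edge_index) + self.w_co * c + self.bias["o"])
        h = o * torch.tanh(c)
        return h, c   # hidden state H^n and cell state C^n

# usage (assumed shapes): h, c = GConvLSTMCell(100, 100)(v, graph.edge_index, h, c)
```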
\(\mathbf{W}\) and \(\mathbf{b}\) are respectively the weights of the graph convolutional kernel and the bias term. \(\mathbf{i}^{n}\) determines how much of the new input is incorporated into the cell state. \(\mathbf{f}^{n}\) controls the information to eliminate from the previous cell state. \(\mathbf{o}^{n}\) determines how much of the cell state is output to the next time step. In MGN-LSTM, the input vector \(\mathbf{V}^{n,m}\in\mathbb{R}^{n_{C}\times n_{H}}\) is the latent representation of node features computed by the processor of MGN at time \(t_{n}\) (Section 3.2.1). In Eq. (11), the vectors \(\mathbf{C}^{n-1}\) and \(\mathbf{H}^{n-1}\) are the recurrent memories obtained from the previous time step. After evaluating Eq. (11), the updated \(\mathbf{H}^{n}\) is decoded into the next-step physical state as explained in Section 3.2.3. More details regarding GConvLSTM gates and states can be found in [31]. #### 3.2.3 Decoder The decoder maps the updated hidden state \(\mathbf{H}^{n}\) computed by GConvLSTM to the dynamic node-based properties in the physical space using an MLP as follows: \[\hat{y}_{i}^{n+1}=\text{mlp}_{v}^{m+1}\left(\mathbf{h}_{i}^{n}\right) \tag{12}\] where \(\mathbf{h}_{i}^{n}\) contains the rows of the hidden state vector \(\mathbf{H}^{n}\) corresponding to the updated latent vector of graph node \(i\), and \(m\) is the number of steps performed by the processor. Detailed illustrations of the encoder, processor, LSTM cell, and decoder are given in Appendices 9.1.1, 9.1.2, and 9.1.3, respectively. ### Loss function and Optimizer We train MGN-LSTM on training time steps by minimizing the misfit between the true node label (HF simulation results) and the predicted node label. We use the per-node root mean square error (RMSE) loss to quantify the data mismatch for each time step. The loss function reads: \[\mathcal{L}_{\text{MGN-LSTM}}=\sum_{n=1}^{n_{P}}\sqrt{\frac{1}{n_{B}}\sum_{i=1}^{n_{B}}\left\|y_{i}^{n}-\hat{y}_{i}^{n}\right\|_{2}^{2}} \tag{13}\] where \(n_{P}\) denotes the number of training rollout steps over which the loss is accumulated, \(n_{B}\) denotes the number of nodes in a batch of training meshes, \(y_{i}^{n}\) denotes the true output in the data set, and \(\hat{y}_{i}^{n}\) is the output predicted by MGN-LSTM, as formalized in Eq. (7). During training, the learning weights of \(f_{\text{MGN-LSTM},\theta}\) are updated based on the gradient of the loss function through back-propagation. Unlike next-step models such as MeshGraphNet [25], MGN-LSTM can propagate gradients throughout the entire sequence, allowing the model to utilize information from previous steps to produce stable predictions with small error accumulation. Furthermore, recurrent memories output by the LSTM in MGN-LSTM retain information from previous inputs, which can be utilized to inform future predictions. An adaptive moment estimation (ADAM) optimizer is used and the learning rate is gradually decreased from \(10^{-4}\) to \(10^{-6}\). With the dataset and a fixed set of hyperparameters at hand, each epoch takes 40s to 100s on an NVIDIA A100 GPU depending on the architecture dimension. The values of specific training hyperparameters are given in Appendix 9.1.4. ## 4 Results In this section, we first describe the dataset considered to train and test the MGN-LSTM model. We illustrate that MGN-LSTM can accurately predict the CO\({}_{2}\) saturation plume and pore pressure evolution in the presence of impermeable faults.
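Assuming a `model` that chains the encoder-processor, the GConvLSTM cell, and the decoder sketched above, the rollout loss of Eq. (13) could be accumulated and back-propagated through the whole sequence as follows. Tensor shapes and the model signature are illustrative assumptions, not the authors' implementation.

```python
import torch

def rollout_loss(model, graph, y_seq, static_m, hidden_dim=100):
    """Sum of per-step RMSE losses (Eq. 13) over an autoregressive rollout of one mesh."""
    n_steps = y_seq.shape[0] - 1                     # y_seq: [n_steps + 1, n_C] ground-truth states
    n_cells = y_seq.shape[1]
    y_hat = y_seq[0]                                 # \hat{Y}^0 = Y^0
    h = torch.zeros(n_cells, hidden_dim)             # H^0
    c = torch.zeros(n_cells, hidden_dim)             # C^0
    loss = torch.zeros(())
    for n in range(n_steps):
        x = torch.cat([y_hat.unsqueeze(1), static_m], dim=1)      # node features [y^n, m_i]
        y_hat, h, c = model(x, graph.edge_index, graph.edge_attr, h, c)
        loss = loss + torch.sqrt(torch.mean((y_seq[n + 1] - y_hat) ** 2))
    return loss

# one training step (sketch):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = rollout_loss(model, graph, y_seq, static_m)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```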
At the end of this section, we also show how the saturation prediction accuracy can be improved by incorporating more physical properties (for instance, relative permeability) in the node and edge features of the graph. The results of MGN-LSTM will be compared with those obtained with standard LSTM in Section 5.1. ### Data description and MGN-LSTM training setup We generate a total of 500 realizations of the synthetic geological models of size 1 km x 1 km x 1 m. The domain shape as well as the position of the two impermeable faults are fixed across all realizations. The coordinates of the endpoints of fault line 1 are (100m, 300m) and (400m, 600m), and the coordinates of the endpoints of fault line 2 are (400m, 500m) and (800m, 800m). The synthetic models differ in their geological parameters (permeability), mesh configuration, and well location. In each synthetic model, we first randomly generate the location of the injection well constrained within a prescribed 200 x 200 m box in the center of the domain to ensure that the injector is not placed too close to the boundary. Then, a PEBI mesh conforming exactly to the specified faults and refined around the well is generated. After that, we create a geomodel in which the permeability values are assigned to each cell according to a randomly generated Von Karman distribution [45] using SGeMS [46]. Specifically, the mean and standard deviation of log-permeability are 3.912 ln(mD) and 0.5 ln(mD), respectively, which results in an average permeability of 50 mD in the reservoir. A constant porosity of 0.2 is assigned to all cells. Fig. 3 shows three sampled geomodels and meshes used for training. High-fidelity numerical simulation is then performed for each model using the GEOS simulator. MGN-LSTM is trained with 450 input meshes, in which each mesh has an 11-step rollout of simulation data, representing 550 days of CO\({}_{2}\) injection. The rollout division is linear with 50 days per step. The trained model is then tested on 50 unseen meshes for 11-step rollouts and 19-step rollouts with temporal extrapolation to 950 days. The average number of graph nodes and graph edges in the dataset are 1885 and 4500 respectively. We use two separate MGN-LSTM instances of the same architecture to predict the two different dynamical quantities, namely gas saturation and pore pressure. The only difference between these two models is the dynamical quantity used to form node features and node labels during the training and inference stages. In this study, we use 'detrending' scaling [18] for all fields of node/edge features. Specifically, the preprocessing can be expressed as: \[\tilde{\mathbf{x}}_{i}^{n}=\frac{\mathbf{x}_{i}^{n}-\text{mean}([\mathbf{x}_ {1}^{n},\ldots,\mathbf{x}_{n_{S}}^{n}])}{\text{std}([\mathbf{x}_{1}^{n}, \ldots,\mathbf{x}_{n_{S}}^{n}])},\quad i=1,\ldots,n_{S},\ \ n=1,\ldots,n_{T}, \tag{14}\] where \(n_{S}\) is the number of training samples. With this approach, a field of node features at time \(t_{n}\), \(\mathbf{x}_{i}^{n}\), can be normalized by subtracting the mean field (over all \(n_{S}\) samples) and dividing each element by its standard deviation at that time step. 
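A minimal sketch of the 'detrending' scaling of Eq. (14) is shown below. For simplicity it assumes the training samples can be stacked into a single array with a common field layout; in the actual dataset the meshes differ between samples, so in practice the same operation would be applied over whatever layout the fields share.

```python
import numpy as np

def detrend(x_train: np.ndarray, eps: float = 1e-8):
    """'Detrending' scaling of Eq. (14).

    x_train: array of shape [n_S, n_T, ...] (samples, time steps, field values).
    The mean/std fields are computed over the sample axis, separately per time step.
    """
    mean = x_train.mean(axis=0, keepdims=True)   # mean field at each time step
    std = x_train.std(axis=0, keepdims=True)
    return (x_train - mean) / (std + eps), mean, std

# example with random data: 450 samples, 11 snapshots, 1885 cells (average mesh size)
x = np.random.rand(450, 11, 1885)
x_scaled, mu, sigma = detrend(x)
print(x_scaled.shape, mu.shape)    # (450, 11, 1885) (1, 11, 1885)
```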
### Evaluation metrics To quantify the prediction accuracy for gas saturation, we use the plume saturation error, \(\delta^{s_{g}}\), introduced in [21] and defined as: \[\delta^{s_{g}} =\frac{1}{\sum_{i,n}I_{i}^{n}}\sum_{n=1}^{n_{T}}\sum_{i=1}^{n_{C}}I_{i}^{n}\left|s_{g,i}^{n}-\hat{s}_{g,i}^{n}\right|, \tag{15}\] \[I_{i}^{n} =1\quad\text{ if }\quad\left(s_{g,i}^{n}>0.01\right)\lor\left(\left|\hat{s}_{g,i}^{n}\right|>0.01\right),\] where \(I_{i}^{n}=1\) indicates that a mesh cell has a non-zero gas saturation in either the ground truth or the prediction, \(s_{g}^{n}\) denotes the gas saturation from HF simulations (ground truth), \(\hat{s}_{g}^{n}\) is the predicted gas saturation, \(n_{T}\) is the number of temporal snapshots, including training ranges (11 steps, 550 days) and extrapolated ranges (8 steps, 400 days), and \(n_{C}\) is the number of cells in the mesh, which can vary between simulation models. As discussed in [21], \(\delta^{s_{g}}\) provides a strict metric to evaluate CO\({}_{2}\) gas saturation because it focuses on the accuracy within the separate phase plume. We use the relative error \(\delta^{p_{g}}\) defined below to evaluate the prediction accuracy for pore pressure: \[\delta^{p_{g}}=\frac{1}{n_{C}n_{T}}\sum_{n=1}^{n_{T}}\sum_{i=1}^{n_{C}}\frac{\left|p_{g,i}^{n}-\hat{p}_{g,i}^{n}\right|}{p_{g,init}}, \tag{16}\] where \(p_{g}^{n}\) denotes the ground truth pore pressure given by the HF simulation, \(\hat{p}_{g}^{n}\) is the predicted pore pressure, and \(p_{g,init}\) is the initial reservoir pressure, which remains identical (10 MPa) for all test/training cases. In the following sections, we use these metrics to illustrate the accuracy of MGN-LSTM in two steps. First, in Section 4.3, we consider a representative mesh in the test set and demonstrate the ability of MGN-LSTM to capture the complex plume dynamics for time steps beyond the training period. Then, in Section 4.4, we consider 10 representative test meshes to illustrate the reliability of MGN-LSTM predictions for unseen well locations, permeability fields, and meshes yielding very different CO\({}_{2}\) plume shapes. ### Predicting complex spatio-temporal dynamics beyond the training period In this section, we consider mesh 468 from the test set as an example. The mesh and location of the well are highlighted in the insets of Fig. 4. This test case has a CO\({}_{2}\) plume saturation error (Eq. (15)) within the interquartile range of the test ensemble and can therefore be considered as representative of the accuracy of the surrogate model predictions (see Fig. 6). This particular configuration is illustrated here because its well location is close to the faults and therefore produces an interesting saturation plume. Figure 4 compares the MGN-LSTM prediction of the CO\({}_{2}\) saturation fields with the HF simulation results at five snapshots (\(t\) = 200, 400 days in the training period, \(t\) = 600, 800, 950 days beyond the training period). MGN-LSTM and HF simulations exhibit an excellent match. Despite the presence of impermeable faults near the injector, we observe that (1) MGN-LSTM can accurately capture the complex temporal evolution of the CO\({}_{2}\) plume in the presence of impermeable faults, and (2) MGN-LSTM can extrapolate beyond the training horizon with a mild error accumulation at the saturation front. The MGN-LSTM pore pressure predictions are presented in Fig. 5. They also exhibit a very good agreement with the HF simulation results during and after the training period.
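The two error metrics defined above translate directly into a few lines of NumPy; the sketch below assumes the true and predicted fields are stored as arrays of shape \([n_{T}, n_{C}]\) for a single mesh.

```python
import numpy as np

def plume_saturation_error(s_true: np.ndarray, s_pred: np.ndarray) -> float:
    """Plume saturation error of Eq. (15); inputs have shape [n_T, n_C]."""
    mask = (s_true > 0.01) | (np.abs(s_pred) > 0.01)      # indicator I_i^n
    return float(np.sum(np.abs(s_true - s_pred)[mask]) / np.sum(mask))

def relative_pressure_error(p_true: np.ndarray, p_pred: np.ndarray,
                            p_init: float = 10.0e6) -> float:
    """Relative pore-pressure error of Eq. (16); p_init = 10 MPa in all cases."""
    return float(np.mean(np.abs(p_true - p_pred) / p_init))
```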
Unlike for the saturation variable, the prediction errors are distributed over the entire domain due to the elliptic behavior of the pressure variable. Figure 3: Heterogeneous permeability realizations with two fixed impermeable faults and one injection well for three cases. The coordinates of well location in each sampled case is shown on the top, and the insets show the enlarged vicinity of each well. Figure 4: Temporal evolution of the CO\({}_{2}\) saturation plume. The first and second rows respectively show the CO\({}_{2}\) saturation fields from HF and MGN-LSTM for a representative test case (mesh 468) at five different times. The third row shows the saturation error between MGN-LSTM and HF. The first three columns represent predictions within the training period and the last two columns represent the model predictions for time steps beyond the training period. Figure 5: Temporal evolution of the pore pressure. The first and second rows respectively show the pore pressure fields from HF and MGN-LSTM for a representative test case (mesh 468) at five different times. The third row shows the pressure error between MGN-LSTM and HF. The first two columns show predictions within the training period and the last three columns show the model predictions for time steps beyond the training period. Figure 6: Accuracy of the MGN-LSTM prediction of (a) gas saturation and (b) pore pressure after rolling out 550 and 950 days for all test meshes. The meshes discussed in Section 4.4 are highlighted as triangles and shown on top of the boxplot of the 950-day rollout results. ### Generalizability to meshes, well locations, and permeabilities not seen during training In this section, we consider 10 representative meshes from the test set (meshes 484, 494, 469, 490, 497 for the saturation and meshes 467, 451, 464, 450, 472 for pressure) to demonstrate that MGN-LSTM generalizes well for different meshes, boundary conditions (well locations) and permeability fields that are not included in the training set. As shown in Fig. 6, the 10 test cases have a saturation error within the interquartile range of the test ensemble and are therefore representative of the surrogate accuracy. The results after 950 days are presented in Figs. 7-8 for the CO\({}_{2}\) saturation and the pore pressure, respectively. The figures show that the permeability field and well location relative to the faults vary drastically from one case to the other. Due to the complex interplay between the saturation front and the faults, the differences in initial setup yield very different plume shapes. Still, MGN-LSTM achieves an excellent agreement with the HF simulation results after 950 days for both saturation and pressure. MGN-LSTM displays a remarkable generalizability considering the high dimensionality of the problem and the training data size. The median saturation errors in the CO\({}_{2}\) plume prediction for the training set and the extrapolated ranges in the testing set are only 1.0% and 1.3%, respectively. This degree of accuracy in the prediction of CO\({}_{2}\) plume migration is sufficient for many practical applications, including the estimation of sweep efficiencies [21]. Similarly, the pressure prediction exhibits an excellent accuracy. The median pore pressure errors for the training set and the extrapolated ranges in the testing set stand at 0.16% and 0.19%. This confirms the reliability of MGN-LSTM predictions for complex configurations unseen during training. 
Figure 7: CO\({}_{2}\) saturation predictions at 950 days for five distinct test meshes. From top to bottom, the rows show reservoir permeability, HF simulation of CO\({}_{2}\) saturation, MGN-LSTM prediction of CO\({}_{2}\) saturation, and MGN-LSTM prediction errors. The coordinates of the well location in each test case are shown on top of the first row and the insets show the enlarged vicinity of each well. Figure 8: Pore pressure predictions at 950 days for five distinct test meshes. From top to bottom, the rows show reservoir permeability, HF simulation of pore pressure, MGN-LSTM prediction of pore pressure, and MGN-LSTM prediction errors. The coordinates of the well location in each test case are shown on top of the first row and the insets show the enlarged vicinity of each well. ### Improving performance using augmented physics-based graph node/edge features In this section, we describe a methodology to improve the prediction accuracy of MGN-LSTM. In the previous sections, the edge feature of MGN-LSTM only accounted for distance-related information (see Section 3.1). But, more physics insights can be infused into the algorithm by incorporating more information in the edge and node features. Specifically, we explore three modifications of the gas saturation model discussed in earlier sections, focusing on the incorporation of additional features. The first modification involves the inclusion of static transmissibility (\(T_{ij}\)) as an additional edge feature, the second variation introduces phase relative permeability (\(k_{r,i}\)) as an additional node feature, and the third variation combines both static transmissibility and phase relative permeability as extra edge and node features, respectively. The transmissibility remains fixed throughout rollout steps. The relative permeability of the gas phase \(k_{r,i}\) is computed as a function of the predicted gas saturation, \(s_{g,i}^{n-1}\). The input and output variables for each case are summarized in Table 1, and a short illustrative sketch of this feature augmentation is given below. These three modifications are evaluated separately and incrementally, and are only applied to the prediction of CO\({}_{2}\) saturation. We train each experiment with different random seeds. We evaluate the effect of augmented features on model performance by measuring the 11-step rollout plume error, which only includes the training range, namely the first 550 days. Figure 9 confirms that adding more physical information in the node/edge features clearly improves CO\({}_{2}\) saturation prediction accuracy. Using the transmissibility as an edge feature yields a mild error reduction in \(\delta^{s_{g}}\), while the largest improvement is observed for the addition of relative permeability in the node feature with a near 10%-reduction in \(\delta^{s_{g}}\). This experiment demonstrates that incorporating more physics into the architecture can improve prediction accuracy even in a purely data-driven framework. ## 5 Discussion ### Comparison between MGN-LSTM and standard MGN The standard MGN approach [25] suffers from temporal error accumulation and often requires mitigation strategies to maintain a stable long-term prediction. In the present study, we rely on a noise injection technique [30; 25] to enable a stable rollout of standard MGN. We compare standard MGN and MGN-LSTM with the same encoder-processor-decoder architecture. As in MGN-LSTM, two separate standard MGN models are trained for predicting gas saturation and pore pressure. The accuracy improvements discussed in Section 4.5 are not used here.
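Returning to the feature augmentations of Section 4.5 above, the sketch below shows one possible way to append the relative permeability \(k_{r,i}(s_{g,i}^{n-1})\) to the node features and the static transmissibility \(T_{ij}\) to the edge features, corresponding to the combined configuration of Table 1. The function and variable names, and the reuse of the Brooks-Corey gas relative permeability with an assumed endpoint saturation, are illustrative only.

```python
import torch

S_G_MAX = 0.8   # assumed endpoint saturation, as in the earlier sketch

def augment_features(node_x, edge_attr, s_g_prev, trans):
    """Append k_r,i(s_g^{n-1}) to node features and T_ij to edge features (cf. Table 1).

    node_x:   [n_C, 9] baseline node features
    edge_attr:[n_E, 3] baseline edge features
    s_g_prev: [n_C]    gas saturation predicted at the previous step
    trans:    [n_E]    static transmissibility per directed edge (0 across faults)
    """
    k_r = 0.95 * (torch.clamp(s_g_prev, 0.0, S_G_MAX) / S_G_MAX) ** 2   # Brooks-Corey gas rel. perm.
    node_aug = torch.cat([node_x, k_r.unsqueeze(1)], dim=1)             # node input: 9 + 1 columns
    edge_aug = torch.cat([edge_attr, trans.unsqueeze(1)], dim=1)        # edge input: 3 + 1 columns (T_ij)
    return node_aug, edge_aug
```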
The noise injection strategy and training procedure for MGN are detailed in Appendix 9.2, as well as the model parameters used in the study. The goal of this ablation-nature comparison is to demonstrate the ability of LSTM to constrain temporal errors. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Case & Node & Edge & Node \\ & input & input & output \\ \hline Baseline & \(s_{g,i}^{n}\), \(V_{i},k_{i},\mathbf{n_{i}},\mathbf{x_{i}}\) & \(\begin{bmatrix}\mathbf{x}_{j}-\mathbf{x}_{i},\\ \left|\mathbf{x}_{j}-\mathbf{x}_{i}\right|\end{bmatrix}\) & \(s_{g,i}^{n+1}\) \\ \hline Static transmissibility & \(s_{g,i}^{n}\), \(V_{i},k_{i},\mathbf{n_{i}},\mathbf{x}_{i}\) & \(\begin{bmatrix}\mathbf{x}_{i}-\mathbf{x}_{j},\\ \left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\end{bmatrix}\) & \(s_{g,i}^{n+1}\) \\ \hline Relative permeability & \(s_{g,i}^{n}\), \(V_{i},k_{i},\mathbf{n_{i}},\mathbf{x}_{i},k_{r,i}(s_{g,i}^{n-1})\) & \(\begin{bmatrix}\mathbf{x}_{i}-\mathbf{x}_{j},\\ \left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\end{bmatrix}\) & \(s_{g,i}^{n+1}\) \\ \hline Static transmissibility & \(s_{g,i}^{n}\), \(V_{i},k_{i},\mathbf{n_{i}},\mathbf{x}_{i},k_{r,i}(s_{g,i}^{n-1})\) & \(\begin{bmatrix}\mathbf{x}_{i}-\mathbf{x}_{j},\\ \left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\end{bmatrix}\), & \(s_{g,i}^{n+1}\) \\ \hline \end{tabular} \end{table} Table 1: Input and output of the surrogate model: \(s_{g,i}^{n}\), \(V_{i},k_{i},\mathbf{n_{i}},\mathbf{x}_{i},k_{r,i}(s_{g,i}^{n-1})\) denote gas saturation, cell volume, scalar permeability, cell type, cell center, cell relative permeability, respectively, for a given cell. Note that \(k_{r,i}\) takes the gas saturation at the previous prediction step, \(s_{g,i}^{n-1}\), to update its value. Here, \(\mathbf{n_{i}}\) is a one-hot vector of size 4, encoding whether a cell is an internal cell, injector, cell along fault lines, and boundary cell. \(\mathbf{T}_{i,j}\) denotes the static transmissibility between cells \(i\) and \(j\). Figure 10: Prediction accuracy for (a) gas saturation and (b) pore pressure after rolling out 550 and 950 days for mesh 468 (see Section 4.3). Figure 9: Effects of injecting more physics information on the 11-step rollout (550 days of injection) performances of MGN-LSTM at predicting CO\({}_{2}\) saturation. The error bars correspond to min/median/max performance across \(12\) random seeds. “Trans” denotes the case with transmissibility included in the edge features, “Rela. Perm.” denotes the case with relative permeability included in the node features, and “Trans + Rela. Perm.” combines the two approaches. Next, we briefly illustrate the difference in prediction accuracy between MGN-LSTM and standard MGN for the meshes considered in Sections 4.3 and 4.4. Considering mesh 468 of Section 4.3 first, we compare the predictions of standard MGN and MGN-LSTM after the training period in Fig. 10. It is clear that for both saturation and pressure, the standard MGN predictions start deviating significantly from the HF data after 800 days. The standard MGN error accumulates at the front for CO\({}_{2}\) saturation and is more diffused for pressure. In both cases however, the MGN-LSTM error remains constrained thanks to the addition of the LSTM cell. Considering now the 10 meshes of Section 4.4, Fig. 11 confirms the poor prediction accuracy of standard MGN after 950 days of injection for both CO\({}_{2}\) saturation and pressure compared to MGN-LSTM. We summarize the respective accuracy of MGN-LSTM and standard MGN by comparing their ensemble results. 
Figure 12 shows the boxplots of error metrics \(\delta^{S_{g}}\) and \(\delta^{P_{g}}\) evaluated at (1) 550 days (the end of training) and (2) 950 days (400 days of extrapolation). We observe that in both cases, MGN-LSTM outperforms standard MGN in terms of accuracy for the prediction of CO\({}_{2}\) saturation as well as pressure. This is particularly the case after the end of the training period (i.e., between 550 days and 950 days), as the standard MGN saturation prediction accuracy deteriorates significantly due to error accumulation. ### Computational efficiency To analyze the computational efficiency, we compare the inference times of MGN-LSTM, standard MGN, and the high-fidelity simulator, GEOS, as demonstrated in Table 2. Utilizing the previously discussed dataset with 1,885 cells per mesh on average, MGN-LSTM requires an average of 0.31 seconds for a 19-step rollout on an NVIDIA Figure 11: Prediction accuracy for (a) gas saturation and (b) pore pressure after rolling out 550 and 950 days for all test meshes of Section 4.4. Tesla A100 GPU to process a single batch. This contrasts with standard MGN's shorter average time requirement of 0.07 seconds on the same hardware. The inference time for MGN-LSTM is around four times that of standard MGN, primarily due to the auto-regressive prediction of dynamic predictions in LSTM. The reduction of this overhead is an area of focus for future work. Comparing these surrogate models with GEOS demonstrates a significant performance gain. Specifically, on the same dataset, MGN-LSTM exhibits a nearly 160-fold reduction in execution time in comparison to GEOS, which operates on a CPU Intel(R) Xeon(R) E5-2680 v4 2.10GHz. We expect that this performance gain will be even more prominent with a larger mesh size. \begin{table} \begin{tabular}{|l|c|c|c|} \hline & MGN-LSTM avg. & Standard MGN avg. & GEOS \\ & inference time (s)\({}^{\text{a}}\) & inference time (s)\({}^{\text{b}}\) & run time (s)\({}^{\text{c}}\) \\ \hline 11-step rollout (550 days) & 0.18 & 0.04 & 22.12 \\ \hline 19-step rollout (950 days) & 0.31 & 0.07 & 49.02 \\ \hline \end{tabular} * On an NVIDIA Tesla A100 GPU, single-batch inference run * On an NVIDIA Tesla A100 GPU, single-batch inference run * On an Intel Xeon E5–2695 v4, single-core serial run \end{table} Table 2: Average inference times for MGN-LSTM and standard MGN with 11-step rollout (550 days) and 19-step rollout (950 days). The inference times are compared with the GEOS run time after 550 days and 950 days. Figure 12: Boxplots of the test ensembles for plume saturation and pressure errors of MGN-LSTM and standard MGN. Panels (a) and (c) compare the CO\({}_{2}\) saturation predictions of MGN-LSTM and standard MGN after rolling out 11 steps (550 days) and 19 steps (950 days), respectively, on the test ensembles. Panel (b) and (d) compare the pore pressure predictions of MGN-LSTM and standard MGN after rolling out 11 steps (550 days) and 19 steps (950 days), respectively, on the test ensembles. MGN-LSTM is in orange and standard MGN is in blue. ## 6 Concluding remarks This paper aims at demonstrating the applicability of deep-learning-based surrogate models for subsurface flow simulations with complex geological structures. We present a graph-based neural surrogate model that can operate on unstructured meshes and naturally handle geological fault structures. 
Our model combines a graph-based Long-Short-Term-Memory (LSTM) cell with a one-step graph neural network model, MeshGraphNet (MGN), to control temporal error accumulation. The model is trained on 450 high-fidelity, unstructured simulation results. The mesh configuration, well location, and reservoir permeability field vary from one simulation to the other. The accuracy and performance of MGN-LSTM were analyzed on a set of 50 test cases. Our results demonstrate that MGN-LSTM can accurately predict the temporal evolution of gas saturation and pore pressure in a CO\({}_{2}\) injection case with two impermeable faults. The model exhibits excellent generalizability to mesh configurations, well locations, and permeability fields not included in the training set. Furthermore, our comparison study shows that MGN-LSTM is more accurate than standard MGN and predicts dynamic fields with a smaller temporal error accumulation thanks to the LSTM cell. These conclusions have key implications in the field of CO\({}_{2}\) storage since accurately capturing uncertainties related to geological structures is critical. By enabling the construction of surrogates on unstructured meshes, MGN-LSTM has the potential to accelerate the quantification of uncertainties in CO\({}_{2}\) injection cases with complex geological structures such as faults and fractures. Future work in this area should address several topics. We observed that using full-graph embeddings to retain recurrent memories in LSTM could require a significant amount of memory, limiting the size of the model that could be trained. Future work could explore alternative architecture designs to avoid using full-graph embeddings. Furthermore, the model training is currently limited to 2D cases, and scaling up to larger domains remains an open challenge. Finally, combining data assimilation techniques with the proposed surrogate model is also of interest. ## 7 Code Availability The MGN-LSTM model architecture code and training dataset used in this study can be accessed at [https://github.com/IsaacJu-debug/gnn_flow](https://github.com/IsaacJu-debug/gnn_flow) upon the publication of this manuscript. ## 8 Acknowledgements We are grateful for the funding provided by TotalEnergies through the FC-MAELSTROM project. We also acknowledge the Livermore Computing Center and TotalEnergies for providing the computational resources used in this work. We thank Nicola Castelletto for his insights, fruitful discussions, and help with PEBI mesh generation and support in GEOS. Portions of this work were performed during a summer internship of the first author with TotalEnergies in 2022. ## 9 Appendix ### Details of MGN-LSTM #### 9.1.1 Encoder This subsection gives the detailed structure for the node level MLP (mlp\({}_{v}\)) and edge level MLP (mlp\({}_{e}\)), as shown in Table 3. #### 9.1.2 Processor As discussed in Section 3.2.1, the processor is a stack of \(m\) graph neural network (GNN) layers, each one performing a round of message passing. The hyperparameter values used for the processor are detailed in Table 4. Except for the input dimension, the structure of mlp\({}_{v}\) and mlp\({}_{e}\) employed in the processor are the same with the ones used in the encoder (Table 3). #### 9.1.3 Graph Convolutional LSTM cell The GConvLSTM cell employs identical input and output channel dimensions, both set to the processor's latent size of 100. 
#### 9.1.3 Graph Convolutional LSTM cell

The GConvLSTM cell employs identical input and output channel dimensions, both set to the processor's latent size of 100. The filter size of the Chebyshev spectral convolutional kernel is set to 8, with symmetric normalization applied to the graph Laplacian [31].

#### 9.1.4 Training hyperparameters

All hyperparameters employed in the training process are listed in Table 5.

\begin{table} \begin{tabular}{l l} \hline \hline Node MLP (\(\text{mlp}_{v}\)) & Edge MLP (\(\text{mlp}_{e}\)) \\ \hline Input: \(x\) & Input: \(x\) \\ \(x\) = Linear(\(8,100\))(\(x\)) & \(x\) = Linear(\(3,100\))(\(x\)) \\ \(x\) = RELU(\(x\)) & \(x\) = RELU(\(x\)) \\ \(x\) = Linear(\(100,100\))(\(x\)) & \(x\) = Linear(\(100,100\))(\(x\)) \\ \(x\) = LayerNorm(\(100\))(\(x\)) & \(x\) = LayerNorm(\(100\))(\(x\)) \\ \hline \hline \end{tabular} \end{table} Table 3: Details of node and edge encoders

\begin{table} \begin{tabular}{l l} \hline \hline Node MLP (\(\text{mlp}_{v}\)) & Edge MLP (\(\text{mlp}_{e}\)) \\ \hline Input: \(x\) & Input: \(x\) \\ \(x\) = Linear(\(8,100\))(\(x\)) & \(x\) = Linear(\(3,100\))(\(x\)) \\ \(x\) = RELU(\(x\)) & \(x\) = RELU(\(x\)) \\ \(x\) = Linear(\(100,100\))(\(x\)) & \(x\) = Linear(\(100,100\))(\(x\)) \\ \(x\) = LayerNorm(\(100\))(\(x\)) & \(x\) = LayerNorm(\(100\))(\(x\)) \\ \hline \hline \end{tabular} \end{table} Table 4: Hyperparameters used for processor

\begin{table} \begin{tabular}{l|l} \hline \hline Parameter name & value \\ \hline Loss function & RMSE \\ Number of epochs & 500 \\ Batch size & 10 \\ Learning rate & 0.001 \\ Weight decay & \(5\text{e}-4\) \\ Optimizer & Adam \\ Optimizer scheduler & cos \\ \hline \hline \end{tabular} \end{table} Table 5: Hyperparameters used for training

### Details of MeshGraphNet (MGN) baseline

Unlike MGN-LSTM, which is trained on entire sequences of steps, MGN training only involves minimizing the data mismatch between the predicted node label and the true node label for a single time step. Moreover, MGN utilizes the same encoder-processor-decoder components and training hyperparameters as those used in MGN-LSTM.

#### 9.2.1 Noise injection in MGN

Following the noise injection technique given in [25], we apply zero-mean Gaussian noise to the field of dynamical quantities, such as gas saturation, during training. The standard deviation of the noise, \(s_{n}\), is a hyperparameter requiring tuning to achieve a stable rollout. In this work, we have found \(s_{n}=0.05\cdot s\) to be a good choice, where \(s\) is the standard deviation of the dynamical quantities in the dataset. We remark that the proposed MGN-LSTM model does not require perturbing the corresponding dynamical quantities in the training dataset during training.
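As a concrete illustration of the noise-injection step described above, the following is a minimal PyTorch sketch, assuming node-wise tensors of dynamical quantities. The helper name `inject_training_noise` and the per-feature estimate of the dataset standard deviation are our own illustrative choices, not taken from the released code.

```python
# Minimal sketch of the training-time noise injection described in Section 9.2.1.
import torch

def inject_training_noise(dynamic_fields: torch.Tensor, noise_scale: float = 0.05) -> torch.Tensor:
    """Perturb dynamical quantities (e.g., gas saturation) with zero-mean Gaussian noise.

    The noise standard deviation is s_n = noise_scale * s, where s is the standard
    deviation of the dynamical quantities (here estimated per feature from the input).
    """
    s = dynamic_fields.std(dim=0, keepdim=True)           # per-feature standard deviation
    noise = torch.randn_like(dynamic_fields) * (noise_scale * s)
    return dynamic_fields + noise

# Example: perturb the gas-saturation channel of a 1,885-node mesh before a
# single-step MGN training pass (MGN-LSTM training would skip this step).
saturation = torch.rand(1885, 1)
noisy_saturation = inject_training_noise(saturation)
```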
2307.06652
A Chronological History of X-Ray Astronomy Missions
In this chapter, we briefly review the history of X-ray astronomy through its missions. We follow a temporal development, from the first instruments onboard rockets and balloons to the most recent and complex space missions. We intend to provide the reader with detailed information and references on the many missions and instruments that have contributed to the success of the exploration of the X-ray universe. We have not included missions that are still operating, providing the worldwide community with high-quality observations. Specific chapters for these missions are included in a dedicated section of the handbook.
Andrea Santangelo, Rosalia Madonia, Santina Piraino
2023-07-13T09:47:04Z
http://arxiv.org/abs/2307.06652v1
# A Chronological History of X-Ray Astronomy Missions

###### Abstract

In this chapter we briefly review the history of X-ray astronomy through its missions. We follow a temporal development, from the first instruments onboard rockets and balloons to the most recent and complex space missions. We intend to provide the reader with detailed information and references on the many missions and instruments that have contributed to the success of the exploration of the X-ray Universe. We have not included missions that are still operating, providing the worldwide community with high-quality observations. Specific chapters for these missions are included in a dedicated section of the handbook.

X-ray astronomy; X-ray balloons; X-ray rockets; X-ray space missions; History of X-ray astronomy.

## Introduction

Earth's atmosphere is not (fortunately!) transparent to X-rays. In the figure published in 1968 by Riccardo Giacconi and colleagues [1], a figure in many ways now historical, the attenuation of the electromagnetic radiation penetrating the atmosphere due to atmospheric absorption is presented as a function of the wavelength (see Figure 1). To explore the Universe in X-rays or in the soft gamma rays, it is therefore necessary to fly instrumentation onboard rockets, balloons, or satellites, and this presented new technological challenges at the end of the 1950s. The development of X-ray astronomy therefore had to wait for the development of rockets capable of carrying instrumentation into the upper layers of the atmosphere. Its history thus coincides with the 'space race', which began after the end of World War II and experienced a decisive acceleration with the launch of Sputnik in 1957, and Yuri Gagarin's first human space flight ever on April 12, 1961 [2]. Following the best-known narrative, one usually traces the birth of X-ray astronomy to the program of the AS&E-MIT and especially to the flight of the rocket launched by Giacconi, Paolini, Rossi, and Gursky on June 18, 1962 from White Sands (New Mexico), which led to the discovery of the first celestial source of X-rays, Sco X-1 [3]. However, the story, as we will see later, is more complex. This paper, although detailed, is not exhaustive; further reading can be found in Giacconi's book "Secrets of the Hoary Deep" [6], in Hirsh's "Glimpsing an Invisible Universe" [7], and in the review chapter of Pounds, "Forty years on from Aerobee 150: a personal perspective" [8].

Figure 1: Atmospheric absorption as a function of the wavelength (bottom axis). The solid lines indicate the fraction of the atmosphere, expressed in units of 1 atmosphere pressure (right vertical axis) or in terms of altitude (left vertical axis), at which half of the incoming celestial radiation is absorbed by the atmosphere. Whereas radio and visible wavelengths (blue rectangle) can reach the Earth's surface without being absorbed, infrared, ultraviolet and X-rays are strongly absorbed. (Credit: High Energy Astrophysics Group, University of Tübingen)

## The early years of X-ray astronomy

At the beginning of the 20th century, there was great interest among scientific communities in the study of the Earth's atmosphere. The emanation power of newly discovered radioactive elements, with new types of radiation, and the discovery of cosmic rays (then called 'penetrating radiation') are probably the main reasons for this interest. Whether or not some layer of the upper atmosphere could be ionized was of particular interest for investigation\({}^{2}\).
Footnote 2: How suggested by Swann [13]: The subject of the ionization of the upper atmosphere is one of extreme importance to students (SIC). From various points of view there are indications that the upper atmosphere is to be treated as a region of high electrical conductivity. Weaking on radio waves, Merle Tuve and Gregory Breit [9] noted an interference phenomenon hypothetically due to the existence of an ionized reflecting layer in the upper atmosphere (the Kennelly-Heaviside layer or E layer) [10]3. Between 1925 and 1930, Edward O. Hulburt published different papers on the reflecting properties of the Kennelly-Heaviside layer of the atmosphere [11; 12]. He suggested that this should have been related to some Sun activity, because the ionization of the atmosphere could only be due to absorption of the Sun's ultraviolet light or, more likely, X-rays. When, immediately after World War II, the U.S. military offered research organizations and scientific institutions the opportunity to fly scientific instruments aboard V-2 rockets, developed during the war by Wernher von Braun, Edward Hulburt, head of the Naval Research Laboratory (NRL), enthusiastically accepted the offer to further investigate the reflective power of the atmosphere. Herbert Friedman was working in Hulburt's department. He was interested in studying the Sun's UV and X-rays to understand their role in the formation of the ionosphere. Footnote 3: The work of Tuve and Breit started a new research’s branch that in the end brought to the invention of the Radar Using a combination of filters and gas mixtures, Friedman built several photomultiplier tubes, each sensitive in a narrow frequency range. With the V-2 number 49 flight, launched in September 1949 from White Sands [15], Friedman and colleagues confirmed the hypothesis that the ionization of the atmosphere above 87 km was due to solar X-rays emitted by the sun corona [16]. For the first time X-ray instrumentation had been launched above the Earth's atmosphere. Further de velopments in the field were obtained thanks to the construction of a new type of rocket, the Aerobee serie, by James van Allen. Using Aerobee rockets, Friedman and colleagues conducted a series of night flights to search for stellar sources that, as the sun, could emit UV and X-ray radiation [16; 17]. Only an upper limit of \(10^{-8}\) ergs cm\({}^{-2}\) s\({}^{-1}\) A\({}^{-1}\) was obtained. Herbert Friedman (see Figure 2) was a pioneer of X-ray astronomy: he obtained the first X-ray image of the sun with a pinhole camera, and flew the first Bragg spectrometer for measuring hard X rays. The first satellite, SOLRAD, for long-term monitoring of the sun was also conceived and developed by Friedman. The prewar interest in the physics of the upper atmosphere and its interaction with the solar radiation, was also strong in the UK, at the Imperial College and at the University College London (UCL)4. In 1942, the Gassiot Committee, a committee concerned with upper atmospheric research, of the Royal Society began to cooperate with the Meteorological Research Committee in order to consider the use of rockets for a program for atmospheric physics [19]. The war and the postwar economic difficulty of the nation slowed down the program, but eventually the common interest of military institutions and scientific groups gave the scientific community the possibility to fly instrumentation on board of rockets and later satellites. 
Most remarkable was the agreement between the head of the Physics department at UCL, Figure 2: Left: Herbert Friedman (1916-2000) was certainly a pioneer in X-ray research of the celestial sources. Right: Friedman’s U.S. Patent No. 2,475,603, for an adaptation of the tube used in a Geiger-Mueller counter. Thanks to a reduced background, Friedman’s tube design increased the counter’s sensitivity to weak sources. The details of the figure can be found at [18] Credit: Public Domain Harrie Massey5, and Sir Arnold Hall, director of the Royal Aircraft Establishment (RAE) Farborough6. The result was the funding of the Skylark Program and the formation of space research group at UCL (Robert Boyd), Imperial College (Percival Albert Sheppard), Birmingham University (James Sayers), Queen's University of Belfast (David Bates and Karl George Emeleus), and University College Wales, Aberystwyth (William Granville Beynon). The early Skylark missions were dedicated to the study of the sun, the Lyman-\(\alpha\) and X-ray emission, as well as the study of upper atmosphere, as already said. The International Geophysical Year (1957/58) was a motivating occasion for the development of projects and studies. Indeed, the first successful test launch of a Skylark rocket occurred on the 13th of February 1957 from Woomera Range, Adelaide Australia. The choice of Woomera was due to the personal Australian relationships of H. Massey7. Nine months later on November 13th the first Skylark rocket with a scientific payload was launched from Woomera. The program was very successful, new instruments for X-ray were developed by a group of young scientist of the UCL, among them Ken Pounds who in 1960 received an assistant lectureship at Leicester with a three years grant of \(\pm\)13006 from the Department of Scientific and Industrial Research. This generous fund was indubitably a consequence of the launch of the Sputnik I. The two new instrument developed by UCL and then the Leicester group were: a photographic emulsion protected in an amoured steel cassette with the filter mounted behind aluminum and beryllium foils8; and a Proportional Counter Spectrometer (PCS) that according to Ken Pounds was to became the workhorse detector in X-ray astronomy [19]9. Footnote 5: According to Pounds ”... Sir Harrie Massey... was the key player in establishing the UK as the clear leader –after the USA and the Soviet Union– in the early years of space research [19]. Footnote 6: We cite Ken Pounds [19]: ”[...] on 13 May 1953, when the chairman of the Gassiot Committee was about to leave for Shenley to play in the annual UCL staff-students cricket match. Massey’s response to the question, ’would there be interest in using rockets available from the Ministry for scientific research?” was an immediate ’yes’ [...]” Footnote 7: Massey was born on May 16, 1908 in Invermay, Victoria, Australia. Footnote 8: “This device was flown successfully in over 20 Skylark flights during the 1960s, providing the first direct broad band X-ray spectra over a wide range of solar activity [20]” The study of solar X-rays were pioneered in the Netherlands by Kees de Jager, who started a laboratory for space science at the University of Utrecht. He was supported by the atomic physicist Rolf Mewe, who developed theoretical models for data interpretation. Also the Cosmic-Ray Working Group at the University of Leiden, in the sixties, worked on X-ray with rockets and balloons in collaboration with the Nagoya University and ISAS Institute in Japan. 
Eventually in 1983 these groups, together with the Groningen University joined in the SRON (Stichting Ruimte Onderzoek Nederland) with the aim to develop instruments for space science missions. As already mentioned, the turning point for space activities in general, and therefore for X-ray astronomy, was the so called 'Sputnik shock' of 1957. New opportunities appeared and space research was welcomed and financed. In the US, in September 1959, Bruno Rossi, chairman of the board of the American Science & Engineering (AS&E), a startup high-tech company formed in Cambridge a year earlier by Martin Annis, suggested to Riccardo Giacconi, who was called from Prince ton to become head of the Space Science Division of the AS&E, to develop some research program on X-ray astronomy. In the next few months, on February 1960, two proposals were submitted to the newly formed NASA: one, rather visionary, to develop a X-Ray telescope (Wolter type), and another for a rocket mission to investigate the emission (or scattering) of X-Rays from the moon and from the Crab Nebula. NASA accepted the first proposal and refused the second one. In an oral interview Nancy Roman, of NASA, said that this proposal was not funded because, in her mind, it was impossible to detect such emission [21]9. Nevertheless, certain of the importance of rocket mission and waiting for the realisation of the telescope, Giacconi sent the proposal to the Air Force Cambridge Research Laboratory that, on the contrary to NASA, funded a series of rocket launches. The first and second Aerobee rocket launch failed but the third one was very successful: it changed the history of astronomy and our perception of the Universe. It is fair to mention that in December 1960, a year before the discovery of Sco X-1, Philip Fisher of the Lookheed Company, had submitted a proposal to NASA to search for cosmic X-ray sources [4]. Nevertheless the launches of the Loockheed rockets (Aerobee 4.69 and 4.70) occurred on September 30 1962, and March 15 1963, after the discovery of Sco X-1. However, according to Fisher [5], his results were not properly taken into account and cited in the subsequent scientific meetings focused on X-Ray Astronomy. Footnote 9: According to Nancy Roman: “Yes. The first X-ray work was ’62, if I remember right, and that was funded by the Air Force. I didn’t fund it. I guess you can blame me for being too good a scientist or you can blame me for not having foresight. Giaconni came to me with a proposal to fly an experiment to measure solar X-rays scattered off the moon, and it was, to me, absolutely clear that that was impossible. Still is.[...] If they had come to me to say they wanted to do a sky survey in X-ray, I think, admittedly in hindsight, that I would have supported them, because I was very much aware of the desirability of finding out something about new wave length regions. But I could not see supporting an experimental rocket to measure reflected solar X-rays from the moon.” Note that the misspelling of the name of Giacconi is already in the original transcript. ## Rockets and balloons in the 60s and 70s ### Rockets On June 18, 1962 an Airforce Aerobee rocket was launched from White Sand missile range in New Mexico with the aim and the appropriate instrumentation to search for X-ray emission from celestial objects. 
The pioneering AS&E-MIT experiment discovered the first extrasolar source of X-rays (see Figure 3), a diffuse X-ray background component, and the probable existence of a second source in the proximity of the Cygnus constellation [3]. The payload consisted of three Geiger counters, each with seven mica windows comprising an area of 20 cm\({}^{2}\); the detectors were sensitive between 2 and 8 Å. As predicted by Nancy Roman, no X-rays from the moon were observed. That was just the beginning: an intense program based on rocket launches was started. Rockets observed for only a few minutes, from a maximum altitude of \(\sim 200\) km. The list of the rocket experiments launched until 1970 is included in Appendix 1. The large majority of rocket launches were performed by US scientists. However, the UK participated in the early race of X-ray astronomy with the Skylark launches. In particular, Skylark launches SL118 and SL119, from Woomera, Australia, provided for the first time a survey of X-ray sources in the Southern Hemisphere.

Figure 3: The discovery plot which marked the beginning of X-ray astronomy. The azimuthal distribution of the count rates of the Geiger-Müller detectors flown in the June 1962 flight is shown. (Credit [3])

### Balloons

The short flight time of the sounding rockets was a clear limit for the study of the X-ray sources, especially once their variability was revealed. With the use of aerostatic balloons, long-duration observations on the order of hours became possible, even if from a lower altitude. Different research groups undertook intense and fruitful balloon campaigns. In particular, the MIT group of George W. Clark, Gordon P. Garmire and Minoru Oda, on leave from Tokyo University, was involved in a robust and successful program of X-ray sky observations with balloons. A detailed list of the MIT balloon flights is reported in Appendix 2. Efforts to fly balloon missions for X-ray astronomy were undertaken by other institutions worldwide. In particular, the following groups were rather active: Leiden University; the GIFCO group at Bologna University and the TESRE institute of the Consiglio Nazionale delle Ricerche (CNR); the Centre d'Etudes Nucleaires (CEN) Saclay (France); the TATA Institute in Mumbai (India); Rice University, Houston (Texas); Adelaide University (Australia); and Nagoya University (Japan). A detailed list of the balloon flights launched by these institutions is reported in Appendix 3. We wish to mention in particular that a balloon program in the hard X-rays (20-200 keV) was pursued by the Institut für Astronomie und Astrophysik der Universität Tübingen (AIT) and the Max-Planck-Institut für Extraterrestrische Physik in Garching (MPE) from 1973 to 1980, with nine successful balloon flights from Texas and Australia [22]. The program was led by Joachim Trümper, who started German X-ray astronomy in Tübingen before moving to the directorship at MPE. A detailed list of the balloon flights launched by these institutions is reported in Appendix 4. The instruments were built and operated by MPE and AIT and consisted of scintillation counters with NaI(Tl) crystals [23; 24]. The close cooperation between AIT and MPE continued in the 80s with the construction and operation of the High Energy X-Ray Experiment (HEXE), used during three successful balloon campaigns (May 1980 and September 1981 launched from Palestine/Texas, as well as November 1982 launched from Uberaba/Brazil).
At the beginning of the 1970s, the main world wide available balloon launch site was the NSBF facility in Palestine, Texas, USA. In the period between 1967 to 1976, the average flight duration was about 10-15 hours, with a few exceptions (4 flights lasted 40-60 hours and only one, in 1974, up to 120 hours). The opening in 1975 of the Stratospheric Balloon Launch Base of Trapani-Milo in Sicily provided the opportunity to use transatlantic flights whith long and stable durations, and the capability to carry payloads with mass up to 2-3 tons at altitudes above 38-42 km, perfect to realize X-Ray investigations of cosmic sources [25]. The flight campaign started with a precursor flight operated by the Italian Trapani-Milo ground operation crew and a launch team from NSBF-NASA. The payload had a total weight of 1500 kg out of which 500 kg of scientific experiments and flight services. The flight started on August 5, 1975, and safely landed on the US east coast after a flight of 81 hours. Several successful balloon experiment were performed by the Istituto di Astrofisica Spaziale of Rome (IAS), in collaboration with others Italian and international institutes10. The list of the major transatlantic Balloon missions is reported in Appendix 4. Footnote 10: Among them the Istituto di Fisica Cosmica of Milan (IFC), the Laboratorio di Tecnologie e Studio delle Radiazioni Extraterrestri of Bologna (TESRE), the Istituto di Fisica Cosmica con Applicazione all’Informatica of Palermo (IFCAI), and international Institutions such as University of Southampton and RAL (UK), TATA Institute (Mumbai, India), Tubingen University (Germany), ADFA (Australia), CNES (France), INTA (Spain), etc [25] ## Uhuru and the others, opening the age of the satellites in the early 70s The first satellite designed for cosmic X-ray observation was the US Vanguard 3 satellite, launched on September 18, 1959. It operated until December 11, 1959. The payload consisted of ion chambers provided by NRL that were intended to detect (solar) X-rays (and Lyman-alpha). Unfortunately as noted in [26] "the Van Allen Belt radiation swamped the detectors most of the time and no useful X-ray data were obtained". On October 13, 1959 the US Explorer 7 satellite was launched from Cape Canaveral. It operated until August 24, 1961, and, like Vanguard 3, carried, among other experiments, ion chambers provided by NRL. The goal was to detect (solar) X-rays (and Lyman-alpha). Unfortunately, no useful X-ray data were obtained similar to the cas of Vanguard 3 [26]. ## 4 Uhuru The Small Astronomical Satellite 1 (SAS-1) was the first of small astronomy satellites developed by NASA, and was entirely devoted to the observations of cosmic X-ray sources (see Figure 4). It was launched on December 12, 1970 from the Italian San Marco launch platform off the coast of Kenya, operated by the Italian Centro Ricerche Aerospaziali. December 12 was the seventh anniversary of the independence of Kenya, and in recognition of the kind hospitality of the Kenyan people, Marjorie Townsend, the NASA mission project manager, named the successfully launched mission 'Uhuru', Swahili for 'freedom'. Uhuru was launched into a nearly equatorial circular orbit of about 560 km apogee and 520 km perigee, with an inclination of 3\({}^{\circ}\) and an orbital period of 96 minutes. The mission ended in March 1973. The X-ray detectors consisted of two sets of large-area proportional counters sensitive (with more than 10 percent efficiency) to X-ray photons in the 1.7-18 keV range. 
The lower limit was determined by the attenuation of the beryllium windows of the counter plus a thin thermal shroud, needed to maintain the temperature stability of the spacecraft. The upper energy limit was determined by the transmission properties of the filling gas. Pulse-shape discrimination and anticoincidence techniques were used to reduce the particle background and high-energy photons [27]. The main features of the mission are reported in Table 1. The main science achievement of Uhuru was, without doubt, the completion of the first X-ray all-sky survey down to a sensitivity of 0.5 mCrab (between 10 and 100 times better than what was achievable with rockets). Uhuru detected 339 X-ray sources of different classes: X-ray binaries, supernova remnants, Seyfert galaxies, and clusters of galaxies, for which diffuse X-ray emission was discovered [28] (Figure 5).

\begin{table} \begin{tabular}{l l l} \hline Instrument & Set 1 & Set 2 \\ \hline Bandpass (keV) & 1.7-18 & 1.7-18 \\ Eff Area (cm\({}^{2}\)) & 840 & 840 \\ Field of view (FWHM) & 0.52\({}^{\circ}\)\(\times\) 5.2\({}^{\circ}\) & 5.2\({}^{\circ}\)\(\times\) 5.2\({}^{\circ}\) \\ Timing resolution (s) & 0.192 & 0.384 \\ \hline Sensitivity (ergs cm\({}^{-2}\) sec\({}^{-1}\)) & \multicolumn{2}{c}{1.5\(\times\)10\({}^{-11}\)} \\ \hline \end{tabular} \end{table} Table 1: Uhuru

Figure 4: Left: Marjorie Townsend discusses the SAS-1 X-ray Explorer Satellite’s performance with Bruno Rossi during preflight tests at NASA’s Goddard Space Flight Center. Marjorie Townsend was the first woman to become a satellite project manager at NASA. Right: a schematic of the satellite. All major basic elements of an X-ray satellite are shown. (Credit NASA)

Figure 5: The map of the X-ray sky after Uhuru, according to the fourth Uhuru catalogue. Figure adapted from [28].

## Apollo 15 and Apollo 16

On July 26, 1971, the Apollo 15 lunar mission carried, inside the Scientific Instrument Module (SIM) of the Service Module, an X-ray fluorescence spectrometer (XRFS [29; 30]) and a gamma-ray spectrometer (GRS), with the aim of studying the composition of the lunar surface. Similarly, on April 16, 1972, the same suite of instruments was flown on Apollo 16. The XRFS was manufactured by AS&E. The main objective was indeed to study the Moon's surface from lunar orbit, in order to better understand the Moon's overall chemical composition (see [31]). On the way back from the Moon to the Earth (i.e., during the 'trans-Earth coast') the XRFS observed parts of the X-ray sky. The prime objective of the Apollo observations during the trans-Earth coast was to understand the nature of the X-ray sources discovered earlier (e.g., Cyg X-1, Sco X-1) by observing them continuously for approximately half an hour to an hour, which was unique at that time. Uhuru could only observe for approximately 1 or 2 min per sighting. Preliminary results were reported in the Apollo 15 and 16 Preliminary Science Reports [32; 33]. Further results from the trans-Earth coast observations include a mysterious (Type I?) burst seen by Apollo 15 (see [34]) and a gamma-ray burst seen by Apollo 16 [35; 36].

## SAS-3

The second small satellite for X-ray astronomy, SAS-3, was launched on May 7, 1975, again from the Italian San Marco launch facility. Its initial orbit was equatorial. SAS-3 was designed as a spinning satellite. The spin rate was controlled by a gyroscope that could be commanded to stop rotation.
In this way, all instruments could be pointed providing about 30 minutes of continuous exposures on sources such as a pulsars, bursters, or transients. The nominal spin period was 95 minutes, which was also the orbital period having an inclination of 3\({}^{\circ}\) and an altitude of 513 km. The scientific payload [38], designed and built at MIT, consisted of four X-ray instruments (see Figure 7): * 2 rotating modulation collimator systems (RMCS [39]), each of which had an effective area of 178 cm\({}^{2}\), and consisted of a modulation collimator and proportional counters active in the energy bands of 2-6 and 6-11 keV. The collimator Figure 6: The Scientific Instrument Module of Apollo 15 (Credit HEAG@UCSD [37]). had an overall FOV of 12\({}^{\circ}\times\)12\({}^{\circ}\), with a FWHM of 4.5 arcmin, centered on the direction parallel to the spin axis (satellite +Z-axis). * 3 crossed slat collimators (SME [40]) each with a proportional counter. They were designed to monitor a large portion of the sky in a wide band of directions centered on the plane perpendicular to the rotation axis of the satellite. Each detector had an on-axis effective area of 75 cm\({}^{2}\). The collimators defined 3 long, narrow FoVs which intersected on the +Y axis and were inclined with respect to the YZ plane of the satellite at angles of -30\({}^{\circ}\), 0\({}^{\circ}\), and +30\({}^{\circ}\), respectively. During the scanning mode, an X-ray source would appear in the 3 detectors. Three lines could then be obtained, and their intersection determined the source position. The central collimator had a field of view of 2\({}^{\circ}\times\)120\({}^{\circ}\) with FWHM 1\({}^{\circ}\times\)32\({}^{\circ}\). The left and right collimators had narrower, but similar responses, i.e., 0.5\({}^{\circ}\times\)32\({}^{\circ}\) (FWHM) and 1.0\({}^{\circ}\times\)100\({}^{\circ}\) (FW). The proportional counters were filled with argon and were sensitive in the range 5-15 keV. In addition the central detector featured a xenon counter, located behind the argon detector, that extended the energy range to 60 keV. Over the energy range 1.5-6 keV, 1 count/s was equivalent to 1.5 \(\times\) 10\({}^{-10}\)erg cm\({}^{-2}\) sec\({}^{-1}\) for a Crab-like spectrum. In any given orbit, at the nominal 95 min spin period, 60% of the sky was scanned by the center slat detector with an effective area from 300-1125 cm\({}^{2}\). * 3 tube collimators (TC [41]), sensitive to X-rays in the range 0.4-55 keV, located above and below, each of which with an effective area of 80 cm\({}^{2}\). The third was along 'the left' with an effective area of 115 cm\({}^{2}\) of the slat collimators, that defined a 1.7\({}^{\circ}\) circular FOV. The tube collimator above the slat collimator was inclined at an angle of 5 degrees above the Y-axis, and could therefore be used as a background reference for the other tube collimators aligned along the Y-axis. * 1 low-energy detector system (LEDS [42]) to the right of the slat collimators. It consisted of a set of 4 grazing incidence, parabolic reflection concentrators with 2 independent gas-flow counters sensitive to X-rays in the range 0.15-1.0 keV, and with an effective area of 20 cm\({}^{2}\). The major scientific objectives were reaching a position accuracy of bright X-ray sources to \(\sim\) 15 arcs; study of a selected sample of sources over the energy range 0.1-55 keV and search the sky for X-ray novae, flares, and other transient phenomena. 
The science highlights of the mission included the discovery of a dozen X-ray burst sources [41], among which include the Rapid Burster [43] the first discovery of X-ray from an highly magnetic White Dwarf (WD) binary system, AM Her [44], the discovery of X-ray from Algol and HZ 43 [39], the precise location of about 60 X-ray sources and the survey of the Soft X-ray background (0.1-0.28 kev) [45]. ## Heao-1 In 1977, NASA started launching a series of very large scientific payloads called High Energy Astronomy Observatories (HEAO). They were launched by Atlas Centaur rockets. The payloads were about 2.5 m\(\times\)5.8 m in size and \(\sim\)3 000 kg in mass [46; 47; 48]. The telemetry rate was large, \(\sim\)6 400 bits/s compared to the 1 000 bits/s typical of earlier satellites. The first of these missions, HEAO-1 (HEAO-A before launch) surveyed the X-ray sky almost three times over the 0.2 keV-10 MeV energy band, and provided nearly constant monitoring of X-ray sources near the ecliptic poles. More detailed studies of a number of objects were made through pointed observations lasting typically 3-6 hours. HEAO-1, operated from August 12, 1977 to January 9, 1979 in a satellite orbit at 432 km, with 23\({}^{\circ}\) inclination and a period of 93.5 min. The science payload included four major instruments (for the details see Table 2): * a Large Area Sky Survey experiment (LASS) consisting of a proportional-counter array (seven modules), sensitive in the 0.25-25 keV energy range, designed to survey the sky for discrete sources [49] * a smaller proportional-counter array, the Cosmic X-ray Experiment (CXE), designed principally to study the diffuse X-ray background from 0.215-60 keV [50; 51]. It consisted of six proportional counters: * Low Energy Detectors (LED), 2 detectors operating in the 0.15-3.0 keV energy range * Medium Energy Detector (MED) operating in the 1.5-20 keV range * High Energy Detector (HED), 3 detectors in the 2.5-60 keV energy range * a Modulation Collimator (MC) experiment, covering the energy range 0.9-13.3 keV, with 2 detectors (MC1and MC2). It was designed to determine accurate (\(\sim\)1') celestial positions [52], * a high-energy experiment, the Hard X-Ray/Low Energy Gamma Ray Experiment [53; 48], extending to \(\sim\)10 MeV, consisting in seven inorganic phoswich scintillator detectors: * Low Energy Detectors, 2 detectors in the 15-200 keV range Figure 7: Left: a schematic diagram of the instruments of the science payload. SAS3 was already a complex mission. Note that it also had onboard a set of 4 grazing incidence concentrators. Right: an artistic impression of the SAS-3 satellite. (Credit NASA) * Medium Energy Detectors operating in the 80 keV-2 MeV range * High Energy Detector in the range 120 keV-10 MeV Comprehensive catalogs of X-ray sources (one for each experiment) were obtained (see Figure 8). The large area of LASS, and the occasional pointed mode, with \(1^{\circ}\times 4^{\circ}\) FWHM collimation, enabled studies of rapid temporal variability, with e.g., the discovery of aperiodic variability in Cyg X-1 down to a time scale of 3 ms [54], discovered the first eclipse in a low-mass binary system (X1658-298) [55; 56], the 5-Hz quasi-periodic oscillation (QPO) in the 'normal-branch' mode of Cyg X-2 [57] and variability on the time scale of tens of milliseconds in an X-ray burst [58]. 
The CXE experiment provided a complete flux-limited High Galactic Latitude \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Payload & A1(LASS) & \multicolumn{3}{c}{A2(CXE)} & \multicolumn{3}{c}{A3(MC)} & \multicolumn{3}{c}{A4} \\ Detector & & LED & MED & HED & MC1 & MC2 & LED & MED & HED \\ \hline Energy & 0.25-25 & 0.15-3 & 1.5-20 & 2.5-60 & 0.9-13.3 & 15-200 & 80-2000 & 120-10000 \\ Range (keV) & & & & & & & & \\ Eff Area (cm\({}^{2}\)) & 1350-1900 & 2\(\times\)400 & 800 & 3\(\times\)800 & 2\(\times\)400 & 300 & 2\(\times\)100 & 4\(\times\)45 & 100 \\ & & & & 1.5\({}^{\circ}\times\)3\({}^{\circ}\) & & & & \\ FOV & 1\({}^{\circ}\times\)4\({}^{\circ}\)-1\({}^{\circ}\times\)0.5\({}^{\circ}\) & & 3\({}^{\circ}\times\)3\({}^{\circ}\) & 4\({}^{\circ}\times\)4\({}^{\circ}\) & 1.7\({}^{\circ}\times\)20\({}^{\circ}\) & 17\({}^{\circ}\) & 37\({}^{\circ}\) \\ & & & 3\({}^{\circ}\times\)6\({}^{\circ}\) & & & & & \\ \hline \hline \end{tabular} \end{table} Table 2: HEAO-1 payload. Figure 8: The HEAO 1 A-1 X-Ray Source Catalog includes results from the first six months of data from HEAO-1, during which time a scan of the entire sky was completed. It contains positions and intensities for 842 sources. Half of the sources remained unidentified at the time of catalogue publication (1984). (Credit NASA). Survey (85 sources) which yielded improved X-ray luminosity functions for active galactic nuclei and clusters of galaxies [59], a classification among AGN types [60], a measurement of the diffuse X-ray background from 3-50 keV [61; 51]. The celestial positions, accurate to about 1 arcmin, obtained with the MC experiment, led to several hundred optical identifications and source classifications. The results from the high-energy instrument included the observation of the high-energy spectra of AGN, which were key for understanding the origin of the diffuse background [62], the discovery of the binary system, LMC X-4, with \(\sim 30\) d periodic on-off states, and the second example (after Her X-1) of cyclotron absorption in a binary system, 4U0115+63 [63]. ## Late 1970s and the 1980s: the program in the US Thanks to Uhuru and HEAO-1, a new sky had been revealed and X-ray astronomy entered a new mature phase thanks to collimators and proportional counters. A key step forward was now necessary: X-ray focusing. A step that had been prepared by Riccardo Giacconi since the beginning of the 1960s, with robust R&D plans. ## Einstein All efforts to develop X-ray focusing telescopes resulted in a proposal to NASA for a focused Large Orbiting X-Ray Telescope (LOXT), whose team was assembled by Giacconi in 1970. Indeed, the second of NASA's three High Energy Astrophysical Observatories, HEAO-2, renamed Einstein after launch, revolutionized X-ray astronomy thanks to its Wolter Type-I grazing-incidence X-ray focusing optics [64] (see Figure 9). It was the first high-resolution imaging X-ray telescope launched into space [65]. Focusing enabled not only a much better position constraint, but also was key to dramatically reduce the particle background, since the volume of the detector was now significantly smaller than before. The HEAO-2 sensitivity was then several hundred times better than any previous mission. Thanks to its few arcsec angular resolution, tens of arcmin field-of-view, and greater sensitivity, it was now possible to study the diffuse emission, to image extended objects, and detect a large number of faint sources. 
It was a revolutionary mission in X-ray astronomy, and its scientific outcome completely changed our view of the X-ray sky. Einstein operated from November 12, 1978 to April 26, 1981, in a satellite orbit at 465-476 km, with 23.5\({}^{\circ}\) inclination and a period of 94 minutes. The scientific payload consisted of four instruments covering the energy range 0.2-20 keV, which could be rotated, one at a time, into the focal plane of the Optics (see Table 3 for the details of the instrument parameters): * an Imaging Proportional Counter (IPC [66; 65]), operating in the 0.4-4.0 keV with high sensitivity * a High Resolution Imager (HRI [67]) operating in the 0.15-3.0 keV range * a Solid State Spectrometer (SSSn [68]) in the 0.5-4.5 keV range with moderate sensitivity * a Focal Plane Crystal Spectrometer (FPCS [69]) in the 0.42-2.6 keV range with very high spectral resolution \(\frac{E}{\Delta E}\) of 50-100 for E\(<\)0.4 keV, \(\frac{E}{\Delta E}\) of 100-1000 for E\(>\)0.4 keV). Einstein also carried a non-focusing Monitor Proportional Counter array (MPC, [70]) to measure the higher-energy emission (2- 15 keV) of bright sources in the view direction of the main telescopes, and an Objective Grating Spectrometer (OGS [71]), with 500 mm\({}^{-1}\) & 1000 mm\({}^{-1}\), energy resolution \(\frac{E}{\Delta E}\sim\)50 was used in conjunction with HRI. Many fundamental and far reaching results were obtained [72]: The high spatial resolution morphological studies of supernova remnants; the many faint sources resolved in M31 and the Magellanic Clouds; the first study of the X-ray emission from the hot intra-cluster medium in clusters of galaxies revealing cooling inflow and cluster evolution; the discovery of X-ray jets from Cen A and M87 aligned with radio jets; the First medium and Deep X-ray surveys. On top of this, Einstein discovered thousands of "serendipitous" sources. Einstein was also the first X-ray NASA mission to have a Guest Observer program. Figure 9: Left: A schematic view of the Einstein satellite. Right: The Einstein view of the galactic center of the Andromeda Galaxy (M31). The power of focusing appears in the many point sources resolved (both figures credit NASA). ## 10 Late 1970s and the 1980s: the program in Europe ## 11 Copernicus Copernicus or Orbiting Astronomical Observatory 3 (OAO-3) was a collaborative effort between the USA (NASA) and the UK (SERC). The main instrument on board was the the Princeton University UV telescope (PEP) consisting of a Cassegrain telescope with an 80 cm primary mirror, a 7.5 cm secondary, and a Paschen-Runge spectrometer. In addition, the mission carried an X-ray astronomy instrument developed by the Mullard Space Science Laboratory (MSSL) of UCL. OAO-3 was launched on August 21, 1972 into a circular orbit of 7 123 km radius and an inclination of 35\({}^{\circ}\). Although some of the instruments ceased to work, it operated for almost nine years until February 1981. The X-ray Experiment (UCLXe) consisted of 4 co-aligned X-ray detectors observing in the energy-range 0.7-10 keV: the collimated proportional counter (CPC), and 3 Wolter type-0 grazing incidence telescopes (WT-0). At the focus of the telescopes 2 proportional counters (PC1, PC2) and 1 channel photo-multiplier (CHP) were used. In Table 4 we report the main parameters of the instruments [73]. Science highlights of the mission were: the discovery of several long period pulsars (e.g. 
X Per) [74]; the discovery of absorption dips in Cyg X-1 [75]; the Long-term monitoring of pulsars and other bright X-ray binaries [76]; the observed rapid intensity variability from Cen A [77]. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Instrument & \multicolumn{2}{c}{CPC} & \multicolumn{3}{c}{WT-0} \\ & & pc1 & pc2 & chip \\ \hline Bandpass (nm) & \(0.1-0.3\) & \(0.3-0.9\) & \(0.6-1.8\) & \(2-7\) \\ Eff Area (cm\({}^{2}\)) & \(17.8\) & \(5.5\) & \(12.5\) & \(22.9\) \\ & & \(1^{\prime}\) & \(2^{\prime}\) & \\ FOV (FWHM) & – & \(3^{\prime}\) & \(6^{\prime}\) & \(10^{\prime}\) \\ & & \(10^{\prime}\) & \(10^{\prime}\) & \\ \hline Energy range (keV) & \multicolumn{3}{c}{0.7-10} \\ \hline \hline \end{tabular} \end{table} Table 4: Aside an instrument for UV astronomy, COPERNICUS carried on-board 4 X-ray detectors. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Payload & \multicolumn{6}{c}{WolterType 1} & MPC & OGS \\ & IPC & HRI & SSS & FPCS & & \\ \hline Bandpass (keV) & \(0.4-4\) & \(0.15-3\) & \(0.5-4.5\) & \(0.42-2.6\) & \(1.5-20\) & \\ Eff Area (cm\({}^{2}\)) & \(100\) & \(5\)-\(20\) & \(200\) & \(0.1\)-\(1\) & \(667\) & \\ & & & & \(6^{\prime}\) & & \\ Field of view (FOV) & \(75^{\prime}\) & \(25^{\prime}\) & \(6^{\prime}\) & \(2^{\prime}\times 20^{\prime}\) & \(1.5^{\circ}\) & \\ & & & & \(3^{\prime}\times 30^{\prime}\) & & \\ Spatial resolution & \(\sim 1^{\prime}\) & \(\sim 2^{\prime\prime}\) & & & \\ \(\frac{E}{\Delta E}\) & & & \(3-25\) & \(50-100^{*}\) & & \\ & & & & \(100-1000^{**}\) & \(\sim 20\%\) & \(\sim 50\) \\ \hline \hline \end{tabular} \end{table} Table 3: HEAO 2 science payload ## Ans ANS (Astronomische Nederlandse Satelliet) was a collaboration between the Netherlands Institute for Space Research (NIRV) and NASA. Launched on August 30, 1974, the mission re-entered the atmosphere on June 14, 1977. Its orbit should have been circular with a radius of 500 km, but due to a failure of the first-stage guidance, the final orbit was highly inclined (98\({}^{\circ}\)) and elliptic (258 km perigee and 1173 km apogee) with a period \(\sim\)99 min. ANS took on-board three instruments: An ultraviolet telescope spectrometer (UVT) [78] by the University of Groningen; a soft X-ray experiment (SXX), [79] developed by the University of Utrecht, that consisted of two parts known as Utrecht soft and medium X-ray detectors; and a hard X-ray experiment (HXX) [80] of the AS&E-MIT group. In particular: the UVT instrument consisted of a Cassegrain telescope followed by a grating spectrometer of the Wadsworth-type; the Utrecht soft X-ray (USXD) consisted of a grazing-incidence parabolic collector while the Utrecht medium X-ray detector (UMXD) was a 1.7 \(\upmu\) titanium proportional counter; the HXX experimental package contained three major components: a collimator assembly, a large area detector (LAD) unit, and a Bragg-crystal spectrometer (BCS) tuned for detection of the silicon lines. The details of these experiments are summarized in Table 5. ANS scientific highlights include the discovery of X-ray bursts, flash of X-rays of several seconds, emitted by neutron stars in binary accreting systems [81], the detection of X-rays from Stellar Coronae (Capella) [82], the first detection of X-ray flares from UV Ceti and YZ CMi [83] ## Aniel V The Ariel V Satellite, developed by a joint collaboration of UK and US, was launched from the San Marco platform on October 15, 1974, into a low inclination (2.8\({}^{\circ}\)), near-circular orbit at an altitude of \(\sim\) 520 km. 
The orbital period was 95 min. The mission ended on March 14, 1980. The British Science Research Council managed the project for the UK, The NASA GSFC for the US. Ariel V was dedicated to the monitoring of the X-ray sky. The science payload included 6 instruments. Four, aligned with the spin axis, were devoted to a detailed study of a \begin{table} \begin{tabular}{l c c c c c} \hline \hline Instrument & UVT & \multicolumn{3}{c}{SXX} & HXX \\ & & usXD & umXD & lad & bcs \\ \hline Bandpass (keV) & & \(0.2-0.28\) & \(1-7\) & \(1-30\) & \(1-4.2\) \\ Bandpass (Å) & 1550Å\(-\)3300Å & & & & \\ Eff Area (cm\({}^{2}\)) & 266 & 144 & 45 & 40 & 6 \\ FOV (FWHM) & 2.5\({}^{\prime}\times\)2.5\({}^{\prime}\) & 34\({}^{\prime}\) & 38\({}^{\prime}\times\)75\({}^{\prime}\) & 10\({}^{\prime}\) & 3\({}^{\circ}\) & 3\({}^{\circ}\) \\ \hline \hline \end{tabular} \end{table} Table 5: ANS small region of the sky within \(\sim\)10\({}^{\circ}\) of the satellite pole. The set included: a rotation modulation collimator (RMC or Exp-A), consisting of rotation collimators and three different detectors, a photo-multiplier, an electron multiplier and a proportional counter; a high resolution proportional counter spectrometer (Exp-C); a Bragg Crystal Spectrometer (Exp-D) operating in the energy band 2-8 keV, that used a honeycomb collimator; and a scintillator telescope (ST or Exp-F). The remaining 2 instruments were arranged in a direction perpendicular to the spin axis. The all-sky monitor (ASM or Exp-G), the only experiment of the mission developed by the US, utilized two X-ray pin-hole cameras to image the sky in order to monitor transient X-ray phenomena, and all the strong X-ray sources for long-term temporal effects; the Sky Survey Instrument (SSI or Exp-B) [84] consisted of two pairs of proportional counters (LE and HE) [85]. Ariel V performed long-term monitoring of numerous X-ray sources. It also discovered several long period (minutes) X-ray pulsars [86] and several bright X-ray transients probably containing a Black Hole (e.g. A0620-00=Nova Mon 1975) [87], [88]. It also discovered iron line emission in extragalactic sources [89] and established Seyfert I galaxies (AGN) as a class of X-ray emitters. In Table 6 we report details of the scientific payload of the mission. ## 4 Cos-b Cos-B was an ESA mission built by the so called Caravane Collaboration that included: The Laboratory for Space Research, Leiden, The Netherlands; Istituto di Fisica Cosmica e Informatica del CNR, Palermo, Italy; Laboratorio di Fisica Cosmica e Tecnologie Relative del CNR, Milano, Italy; Max-Planck-Institut fur Extraterrestrische Physik, Garching, Germany; Service d'Electronique Physique, CEN de Saclay, France; Space Science Department of ESA, ESTEC, Noordwijk, The Netherlands The principal scientific objective was to provide a view of the gamma-ray Universe, nevertheless it took onboard a proportional counter sensitive to 2-12 keV X-rays. As one can read in [90] "This detector was intended to provide synchronization of possible pulsed gamma-ray emission from pulsating X-ray sources. 
The \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & & \multicolumn{3}{c}{Aligned} & \multicolumn{3}{c}{Offset} \\ Instrument & A (rmc) & C & D & F (st) & G (asm) & \multicolumn{3}{c}{B (ssi)} \\ & & & & & & & le & he \\ \hline Bandpass (keV) & 0.3-30 & 1.3-28.6 & 2-8 & 26-1200 & 3-6 & 1.2-5.8 & 2.4-19.8 \\ Eff Area (cm\({}^{2}\)) & – & – & – & 8 & & & 290 \\ FOV (FWHM) & 10\({}^{\circ}\)-20\({}^{\circ}\) & 3.5\({}^{\circ}\) & & & & & \\ \hline Energy range (keV) & & & 0.7-10 & & & & \\ \hline \hline \end{tabular} \end{table} Table 6: Ariel V payload consisted of 6 instruments; 4 aligned with the spin axes (Exp A, Exp C, Exp D, Exp F) and two were offset (Exp G, Exp B). pulsar synchronizer was also used for monitoring the intensity of radiation from X-ray sources." ## 6 Ariel VI UK6, named Ariel VI after launch, was launched from the Wallops Island Launch Center in the USA on June 2, 1979. The orbit was elliptical with an apogee of 650 km and a perigee of 600 at an inclination of 55 \({}^{\circ}\). Ariel VI was a national UK mission but, in comparison with the success of its predecessor Ariel V, much less successful due to the problems caused by the interference with powerful military radars. In fact strong magnetic fields severely hammered the command encoder and the pointing operations. Ariel VI carried 3 scientific instruments: one was a cosmic ray experiment consisting of Cerenkov and gas scintillation counters, and the other 2 were X-ray instruments. The soft X-ray Telescope (here SXT) developed by MSSL in collaboration with the University of Birmingham, consisted of four grazing-incidence hyperboloid mirrors that reflected X-rays through an aperture/filter to four continuous-flow propane gas detectors [91]. The medium X-ray proportional counter (MXPC) developed by the Leicester University consisted of 4 multi-layered Xe-proportional counters [92]. Ariel VI continued to observe until February 1982. Table 7 shows some of the features of the X-ray instruments. Although partially, the observations carried with Ariel VI brought some results like the observation of phase variable iron line emission of the source GX 1+4 [93] or the spectral observation of Active Galaxies [92] ## 7 Exosat The European Space Agency's (ESA's) X-ray Observatory, EXOSAT [94], was active from May 1983 to April 1986. It was launched into a highly eccentric orbit (e\(\sim\) 0.93) with a 90.6 hr period, inclination of 73\({}^{\circ}\), at an apogee and perigee of 191,000 km and 350 km respectively (at the beginning of the mission). This - at that time - peculiar orbit was chosen to enable long (from 76 hr or 90 hr), uninterrupted observations during a single orbit. Due also to the great distance from the Earth \begin{table} \begin{tabular}{l c c} \hline \hline Experiment & \begin{tabular}{c} SXT \\ Grazing telescope + Xe-prop. counter \\ \end{tabular} & MXPC \\ \hline Bandpass (keV) & 0.1–2 & 1–50 \\ Eff Area (cm\({}^{2}\)) & 65 & 300 \\ FOV (FWHM) & 1.2\({}^{\circ}\)–4.6\({}^{\circ}\) & 3\({}^{\circ}\) \\ \hline \hline \end{tabular} \end{table} Table 7: Ariel VI featured two X-ray instruments onboard the SXT and the MXPC. (\(\sim\)50 000 km), EXOSAT was almost always visible from the ground station at Villarranca in Spain during science instruments operations. The payload of the satellite consisted of the Low Energy Telescopes (LE), composed of two identical Wolter-I telescopes. 
Each could operate in imaging mode by means of Channel Multiplier Array (CMA) or Position Sensitive Detector (PSD) or in spectroscopy mode with gratings behind the optics and the CMA in the focal plane, [95]; the Medium Energy Instrument (ME), the main instrument in the lunar occultation mode [96]; the Gas Scintillation Proportional Counter (GSPC [97]). The characteristics of the instruments are shown in Table 8. During the performance verification phase the PSDs of the two LE failed. About half a year later one of the channel plates failed. However, overall the LE functioned up to the end, and discoveries were made using the X-ray grating spectrometers (built by SRON)[98; 99]. Most notable were the discovery of quasi-periodic oscillations (QPOs) of low mass X-ray binaries, the soft excesses from AGN, the red and blue shifted iron K line from SS433, the characterization of many orbital periods of low mass X-ray binaries, and the discovery of new transient sources. The scientific highlights of the EXOSAT mission are reported in a special issue of the Memorie della Societa Astronomica Italiana [100]. ## Late 1970s and the 1980s: The program in Japan Japan, thanks to the leadership of Minoru Oda, contributed to X-ray astronomy with several missions, becoming a well recognized country in space science. The first of those missions was CoRSa-b renamed, after the successful launch, Hakucho. ## Hakucho Hakucho, Japanese for swan (like one of the archetypal X-ray sources, Cyg X-1) developed by the Institute of Space and Astronautical Science (ISAS), was launched \begin{table} \begin{tabular}{l c c c c c} \hline Instrument & \multicolumn{3}{c}{LE} & ME & GSPC \\ & cam & psd & Spectrometer & & \\ \hline Bandpass (keV) & 0.04–2 & 0.1–2 & – & 1–50 & 2–20 \\ Eff Area (cm\({}^{2}\)) & 0.4–10 & – & – & 1800 & \(\sim\) 10–100 \\ FOV & 2.2\({}^{\circ}\) & 1.5\({}^{\circ}\) & & & \\ Angular res (FWHM) & \multicolumn{3}{c}{on axis} & \multicolumn{1}{c}{\multirow{2}{*}{–}} & \multirow{2}{*}{45\({}^{\prime}\times 45^{\prime}\)} & \multirow{2}{*}{45\({}^{\prime}\)} \\ & 12\({}^{\prime\prime}\) & 50\({}^{\prime\prime}\) & & & \\ Energy res (FWHM) & none & \(\Delta\)E/E= 44/E (keV)\({}^{1/2}\) \% & & 21\% at 6 keV (Ar) & 27/E (keV)\({}^{1/2}\) \% \\ & & & 18\% at 22 keV (Xe) & & \\ \hline \end{tabular} \end{table} Table 8: EXOSAT’s science payload: the LE, ME and GSPC. from the Kagoshima Space Center on February 21, 1979. It was placed into a near-circular orbit with an apogee of 572 km, a perigee of 545 km, an inclination of 29.9\({}^{\circ}\) and an orbital period of 96 minutes [101]. It was the second of the series CoRSa (Cosmic Radiation Satellite)11. The mission ended on April 16, 1985. Its main goal was the study of transient phenomena using three different instruments: the Very Soft X-ray experiment (VSX), based on four units of proportional counters with very thin polypropylene windows. Two of the counters were oriented along the spin axis (VXP) and two were offset (VXV); the Soft X-ray instrument (SFX) that included six proportional counters with Beryllium windows. Two were equipped with a coarse modulation collimator (CMC), two with a fine modulation collimator (FMC), and the last two aimed at scanning the sky (SVC) operated in offset mode; the Hard X-ray (HDX) detector that consisted of 2 Na(T1) scintillators with an offset of 2.7\({}^{\circ}\). Table 9 summarizes the principal characteristics of the mission payload [102]. 
Hakucho data led to the discovery of many burst sources, and the soft X-ray transient sources Cen X-4 and Apl X-1. Footnote 11: Unfortunately the first of the satellite of the series Corsa-a failed to reach the orbit. ## Hinotori Hinotori, Japanese for Phoenix or Firebird, was the first of the series of Astra satellites. It was dedicated to the study of solar phenomena, in particular to solar flares during the solar maximum. It was launched from the Kagoshima Space Center (now Ichinoura) on February 21, 1981, and operated until October 8, 1982. The orbit was near-circular with an apogee altitude of 603 km, perigee of 548 km, inclination of 31.3\({}^{\circ}\), and period of 96.20 min. For the solar flare studies, Hinitori carried onboard the Solar X-ray Telescope (SXT), equipped with two sets of bi-grid modulation collimators for the imaging of the hard X-ray emission, using the rotating modulation collimator technique. In addition, the Solar X-ray Aspect Sensor (SXA) was a system of collimating lenses to determine the flare position with a resolution of 5 arcsec. The Soft X-ray Crystal Spectrometer (SOX, [103]) enabled the spectroscopy of X-ray emission lines from highly ionized iron during a flare. It consisted of coarse (SOX1) and fine (SOX2) Bragg spectrometers. Three additional instruments enabled \begin{table} \begin{tabular}{l c c c c c} \hline \hline Instrument & \multicolumn{2}{c}{VSX} & \multicolumn{2}{c}{SFX} & HDX \\ & vxp & vxv & cmc & fmc & svc & \\ \hline Bandpass (keV) & 0.1–1 & & 1.5–30 & & 10–100 \\ Eff Area (cm\({}^{2}\)) & \(\sim\) 78 & 69 & fmc1 40 & & \\ & & & fmc2 83 & & \(\sim\) 45 \\ FOV (FWHM) & \(6.3^{\circ}\times 2.9^{\circ}\) & \(17.6^{\circ}\) & \(24.9^{\circ}\times 2.9^{\circ}\) & \(5.8^{\circ}\) & \(4.4^{\circ}\times 10.0^{\circ}\) & \(50.3^{\circ}\times 1.7^{\circ}\) \\ \hline \hline \end{tabular} \end{table} Table 9: HAKUCHO (CoRSa-b). The payload carried three instruments for the detection of very soft (VSX), soft (SFX), and hard (HDX) X-rays. the monitoring of the flares over a large energy band: the Soft X-ray Flare Monitor (FLM), the Hard X-ray Flare Monitor (HXM), and the Solar Gamma Ray Detector (SGR [104]). The FLM was a gas (Xe) scintillation proportional counter while the HXM and SGR detectors were NaI(T1) and CsI(T1) scintillation counters, respectively. The 8 counters of the HXM instrument had different characteristics as reported in Table 10[105]. In addition to the aforementioned instruments, Hinotori hosted a particle ray monitor (PXM), a plasma electron density measurement instrument (IMP), and a plasma electron temperature measurement instrument (TEL). All instruments were co-aligned with the spacecraft spin (Z) axis that was set 1\({}^{\circ}\) off the sun center and therefore no additional driving mechanism for the detectors was necessary. The main scientific results of Hinotori include: the time profile and spectrum of the X-ray flares [106], [107]; monitoring of the electron flux above 100 keV; discovery of high-temperature phenomena reaching up to 50 million \({}^{\circ}\)C and clouds of light-speed electrons floating in coronas [108]. ## Tenna Tenma, Japanese for Pegasus, developed by ISAS, was the second satellite of the ASTRO series (ASTRO-B), and the second Japanese satellite for X-ray astronomy. It was launched on February 1983 and placed into a near-circular orbit with apogee of 501 km, perigee of 497 km, and inclination angle of 31.5 degrees. The orbital period was 96 minutes. 
Its scientific payload consisted of four instruments: a scintillation proportional counter (SPC), an X-ray focusing collector (XFC), a transient sources monitor (TSM), and a radiation belt and Gamma-ray monitor (RBM/GBD, [109]). In particular, the SPC, devoted to spectral and temporal studies, consisted of 10 GSPC divided in 3 groups (SPC-A, B, C) of four, four, and two units, respectively. The XFC, consisting of mirrors and position sensitive proportional counters, was designed to observe very soft X-ray sources. The TSM served as an X-ray monitor because of its wide FOV. It included two detector groups: an Hadamard X-ray telescope system (HXT) and a scanner counting system (ZYT). Two small scintillation counters monitored the non-X-ray background and the gamma-ray burst \begin{table} \begin{tabular}{l c c c c c c c c} \hline Instrument & SXT & SXA & SOX & FLM & HXM & SGR & PXM \\ & & & sox1 & sox2 & & hxm1 & hxm2 & \\ \hline Bandpass (keV) & 17–40 & & 1.72-1.99Å & 1.83-1.89Å & 2–12 & 17–40 & 40–340 & 200–6 700 & 100-800 \\ Area (cm\({}^{2}\)) & 113 & & 6.69Å & 2.36Å & 0.5 & 57 & 62 & 2.2 \\ Ang. Res. (FWHM) & 30\({}^{\prime\prime}\) & 5\({}^{\prime\prime}\) & & & & & & \\ Time Res (ms) & \(\sim\) 6\(\times\)10\({}^{3}\) & & 6–10\(\times\)10\({}^{3}\) & 125 & 7.8 & 125 & 128ch/4s & 125/ch \\ Energy Res & & & 2mÅ & 0.15mÅ & & & 0.1 E\({}^{1/2}\) MeV & \\ \hline \end{tabular} \end{table} Table 10: HINOTRI (ASTRO-A). The payload consisted of a solar flare telescope (SXT), an analyzer (SXA), a spectrometer for soft X-ray (SOX), monitors for hard X-rays (HXM) and gamma-rays (SGR), for both solar flares (FLM) and for particle emission PXM). emissions. The entire payload was mostly aligned with the stabilized spin-axes (Z) of the satellite. Detailed information about the instruments are reported in Table 11. Tenma observations continued intermittently until 11 November 1985. The main results of the mission was the discovery and study of the iron line region of many classes of sources. Tenma science highlights include: The Discovery of hot plasma of several tens of millions of degrees located along the Galactic plane [110]; The discovery of the iron absorption line in the energy spectra of X-ray bursts, which was red-shifted in the strong gravitational field of the neutron star [111], [112], [113]; The identification in low-mass X-ray binaries of X-ray emission regions on the surface of the neutron star and in the accretion disk [114]. ## Ginga Ginga, Japanese for Galaxy, ASTRO-C before launch, was launched on Feb 5, 1987 and operated until November 1, 1991. Astro-C was the result of a collaboration between Japanese research institutions, the University of Leicester and the Rutherford-Appleton Laboratory in the UK, and the Los Alamos National Laboratory (USA). Ginga followed a near circular orbit at a perigee distance of 505 km and an apogee of 675 km. It was originally planned to make a circular orbit of 630 km but atmospheric conditions at launch constrained the satellite into an elliptic orbit. The inclination of the orbit was 31\({}^{\circ}\), and the period was 96 min. The primary mission objective was the study of the time variability of X-rays from active galaxies such as Seyfert galaxies, BL Lac objects, and quasars in the energy range 1.5-30 KeV. Accurate timing analysis of galactic X-ray sources was also one of the goals of the mission [115]. 
The payload of the satellite consisted of three instruments: a Large Area Proportional Counter (LAC, [116]), an All Sky Monitor (ASM, [117]), and a Gamma-ray Burst Detector (GBD, [118]). The LAC consisted of eight multi-cells proportional counters. The ASM consisted of 2 identical gas proportional counters. Each counter was equipped with a collimator which had 3 different FOVs. The GBD included two detectors: a proportional counter and a scintillation spectrometer. The characteristics of these instruments are summarized in Table 12. \begin{table} \begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{Instrument} & \multicolumn{2}{c}{SPC} & \multirow{2}{*}{XFC} & \multicolumn{2}{c}{TSM} & RBM/GBD \\ & & & & & & hxt & zyt & \\ \hline Bandpass (keV) & \multicolumn{2}{c}{2-60} & 0.1-2 & 2-25 & 1.5-25 & 10-100 \\ Energy Res (FWHM) & \multicolumn{2}{c}{9.5\% at 5.9 keV} & & & & \\ Area (cm\({}^{2}\)) & \multicolumn{2}{c}{320} & 80 & 15 & 114 & 280 & 14 \\ (FOV) FWHM & 3.1\({}^{\circ}\) & 2.5\({}^{\circ}\) & 3.8\({}^{\circ}\) & 1.4\({}^{\circ}\)\(\times\)5\({}^{\circ}\) & 40\({}^{\circ}\)\(\times\)40\({}^{\circ}\) & 2\({}^{\circ}\)\(\times\)25\({}^{\circ}\) & 1sr \\ \hline \end{tabular} \end{table} Table 11: Tenma.The payload consisted of four instruments: the SPC, the XFC, the TSM, and a the RBM/GBD monitor. ## 10 Late 1970s and the 1980s: The program in Russia and India The X-ray astronomy program of the Soviet Union had modest beginnings in the 1970s with the FILIN X-ray experiment aboard the Salyut-4 space station. It continued in the 1980s with experiments onboard the Astron (1983-1988) and Mir (1987-2000) space stations. This latter program had a strong European involvement. These programs generally suffered from a limited observation time allocation because of other commitments of the manned spacecraft. During the late 1970s and into the 1980s, the Soviet program was focused on studies of gamma-ray bursts. The Konus experiments on the Venera 11-14 spacecraft yielded major advances in this field [119; 120]. A notable result at X-ray wavelengths was the discovery of an unusual gamma burst on March 5, 1979 with sustained X-ray emission that exhibited periodic pulsations [121]. ## 11 Filin/salyut-4 The FILIN X-ray instrument aboard the manned orbiting space station Salyut-4 (December 1974) consisted of three detectors sensitive in the 2-10 keV range [123] and a smaller proportional counter for soft X-ray studies (0.2-2 keV), with a rather large FOV (see Table 13). Gas flow proportional counters were used as the detectors. A gas flow system supplied a gas mixture for the counters. To determine the source coordinates, two star sensors were installed. The X-ray detectors, all optical sensors, and the gas flow system were mounted on the outside of the station, while the power supply and electronics was inside. Scanning observations were carried out for about 1 month and pointed observations for about 2 months; studies included observations of Sco X-1, Her X-1, and Cyg X-1 [123] and the X-ray nova A0620-00 [46; 124] \begin{table} \begin{tabular}{l l l} \hline Instrument & sFilin & Filin \\ \hline Bandpass (keV) & 0.2-2 & 2-10 \\ Area (cm\({}^{2}\)) & 40 & 450 \\ FoV (FWHM) & \(3^{\circ}\times 10^{\circ}\) & \\ \hline \end{tabular} \end{table} Table 13: FILIN. The x-ray instrument on-board the Salyut-4 space station. 
\begin{table} \begin{tabular}{l l l l} \hline Instrument & LAC & ASM & GBD \\ & & & pc sc \\ \hline Bandpass (keV) & 1.5-37 & 1-20 & 1.5-500 \\ Energy Res (FWHM) & 18\% at 6 keV & & \\ Area (cm\({}^{2}\)) & 4500 & 70 & 63 60 \\ FOV (FWHM) & \(0.8^{\circ}\times 1.7^{\circ}\) & \(1^{\circ}\times 45^{\circ}\) & all sky \\ \hline \end{tabular} \end{table} Table 12: Ginga (ASTRO-C). The primary instrument on-board was the LAC. The ASM and the GBD completed the payload. ## Skr-02m The experiment SKR-02M on the Astron station (1983) consisted of a large proportional counter of effective area \(\sim 0.17\) m\({}^{2}\), sensitive from 2 to 25 keV [125]. The field of view was 3\({}^{\circ}\times\) 3\({}^{\circ}\) (FWHM). Data were sent via telemetry in 10 energy channels. Results have been reported from studies of the Crab nebula and pulsar, Her X-1, A0535 + 26, and Cen X-3 [125]. The prolonged low state of Her X-1 in 1983 was studied, and the 1984 turn-on was reported [126]. ## Kvant/Mir The Röntgen X-ray observatory was launched in 1987 aboard the Kvant module which docked to the MIR space station. The complement of detectors [127] included a sensitive high energy 15-200 keV X-ray experiment (HEXE, [128]), a coded-mask system for imaging high-energy photons (TTM, [129]), and a gas scintillation proportional counter (GSPC, [130]). It also carried two gamma-ray experiments, which reached down to 30-40 keV [127]. Röntgen was an international endeavor with contributions from Germany, UK, ESA, and the Netherlands. The highlight of the mission was the discovery and study (with Ginga) of the X-ray emission from SN1987A [131]. A high energy tail in the spectrum of the X-ray nova GS2000 + 25 was discovered [132], further indicating its similarity to the black-hole candidate A0620-00. Timing results for the Her X-1 pulsar in 1987-1988 showed it to be continuing its spin-up [133]. The principal characteristics of the Röntgen X-ray observatory are reported in Table 14. \begin{table} \begin{tabular}{l l l l} \hline Instrument & TTM & GSPC & HEXE \\ \hline Energy range (keV) & 2-32 & 4-100 & 15-200 \\ Energy resolution (FWHM) & 18\% at 6 keV & 10.5\% at 6 keV & 30\% at 60 keV \\ Area (cm\({}^{2}\)) & 600 & 300 & 800 \\ FOV (FWHM) & \(7.8^{\circ}\times 7.8^{\circ}\) & \(3^{\circ}\times 3^{\circ}\) & \(1.6^{\circ}\times 1.6^{\circ}\) \\ \hline \end{tabular} \end{table} Table 14: Röntgen X-ray observatory. ## Aryabhata Aryabhata, named after the Indian mathematician and astronomer of the fifth century, was the first satellite of India completely designed and built by the Indian Space Research Organization (ISRO). It was launched on April 19, 1975 from the Russian rocket launch site Kapustin Yar. Its orbit had a perigee of 563 km, an apogee of 619 km, and an inclination of 50.7\({}^{\circ}\). The period was 96.46 min. The mission ended in March 1981 and the satellite reentered the Earth's atmosphere on February 10, 1992. Three instruments, dedicated to aeronomy, solar physics, and X-ray astronomy, were on board. The X-ray detector consisted of a proportional counter filled with a mixture of Ar, CO\({}_{2}\) and He, and operated in parallel mode, in the energy range from 2.5 to 115 keV. The effective area was \(\sim 15.4\,\)cm\({}^{2}\) and the FOV was circular, with \(12.5^{\circ}\) (FWHM). In particular, Aryabhata made observations of Cyg X-1, finding a hardening in its spectrum [134], and of two other X-ray sources, namely GX17+2 and GX9+9 [135]. 
## Bhaskara Two satellites, Bhaskara I and II, were developed by ISRO and named after the two famous Indian mathematicians Bhaskara (or Bhaskara I) of the 7th century and Bhaskara II (or Bhaskaracharya, Bhaskara the teacher) of the 12th century. We report here only on Bhaskara I, since Bhaskara II didn't carry X-ray instruments. Bhaskara I was launched on June 7, 1979 from Kapustin Yar. Its orbital perigee and apogee were 512 km and 557 km respectively, the inclination was \(50.7^{\circ}\), and the period was 95.20 min. The mission ended on February 17, 1989, after almost 10 years. The main objectives of the mission were: 1) to conduct observations of the Earth yielding data for hydrology, forestry, and geology applications; 2) to conduct ocean-surface studies using a SAtellite MIcrowave Radiometer (SAMIR), and 3) among other minor investigations, to conduct investigations in X-ray astronomy. The X-ray instrument consisted of a Pinhole X-ray Survey camera operating in the energy range between 2 and 10 keV, with the purpose of observing transient sources and the long term variability of steady sources. At the image plane of the camera there was a position-sensitive proportional counter. The detector operated successfully during the first month after launch. However, it then had to be turned off, and when it was turned on again some time later it no longer operated correctly; the reason for the malfunction was never found. ## The golden age of X-Ray astronomy, from the 1990s to the present The 1990s can be considered as a sort of renaissance of X-ray astronomy. These were years of significant missions that brought X-ray astronomy into its full maturity. The decade began with the launch of the Soviet mission Granat (December 1989) and of the German mission ROSAT (June 1990), followed soon after by the Japanese ASCA. In the mid-1990s, BeppoSAX and RXTE were launched, and in the late 1990s, Chandra and XMM-Newton. ### The program in the US #### Ulysses The Ulysses mission was a joint mission between NASA and ESA to explore the solar environment at high ecliptic latitudes. Launched on October 6, 1990, it reached Jupiter for its 'gravitational slingshot' in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. In addition to its solar environment instruments, Ulysses also carried onboard plasma instruments to study the interstellar and Jovian regions, as well as two instruments for studying X- and Gamma-rays of both solar and cosmic origins. The mission could send data in 4 different telemetry modes at rates of 128, 256, 512, and 1024 b/s. The time resolution of the Gamma-ray burst instrument depended on the chosen data rate. The maximum telemetry allocation for the instrument was about 40 b/s. The Ulysses solar X-ray and cosmic gamma-ray burst experiment (GRB) had 3 main objectives: to study and monitor solar flares, to detect and localize cosmic gamma-ray bursts, and the in-situ detection of Jovian auroras. Ulysses was the first satellite carrying a gamma burst detector which went outside the orbit of Mars. This resulted in a triangulation baseline of unprecedented length, thus allowing major improvements in burst localization accuracy. The instrument was turned on 9 November 1990. The GRB consisted of 2 CsI scintillators (called the Hard X-ray detectors) and 2 Si surface barrier detectors (called the Soft X-ray detectors). The detectors were mounted on a 3 m boom to reduce background generated by the spacecraft's radioisotope thermoelectric generator. 
The hard X-ray detectors operated in the range 15-150 keV. The detectors consisted of two 3 mm thick by 51 mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector varied its operating mode depending on the measured count-rate, the ground command, or a change in spacecraft telemetry mode. The trigger level was normally set for 8-sigma above background corresponding to a sensitivity \(2\times 10^{-6}\) erg cm\({}^{-2}\) s\({}^{-1}\)[136]. When a burst trigger was recorded, the instrument switched to high resolution data, recording a 32-kbit memory for a slow telemetry read out. Burst data consisted of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the 2 detectors. There were also 16 channel energy spectra from the sum of the 2 detectors (taken either in 1, 2, 4, 16, or 32 second integration). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integration and 4 energy channels (with shortest integration time being 8 s). Again, the outputs of the 2 detectors were summed. The soft X-ray detectors consisted of two 500 \(\mu\)m thick, 0.5 cm\({}^{2}\) area Si surface barrier detectors. A 100 mg cm\({}^{-2}\) beryllium foil front window rejected the low energy X-rays and defined a conical field of view of 75\({}^{\circ}\) (half-angle). These detectors were passively cooled and operated in the temperature range \(-\)35 \({}^{\circ}\)C to \(-\)55 \({}^{\circ}\)C. This detector had 6 energy channels, covering the range 5-20 keV. Ulysses results have been mainly about the Sun and its influence on nearby space [137]. ## Bbxrt The Broad Band X-ray Telescope (BBXRT, [138]) was flown on the space shuttle Columbia (STS-35) as part of the ASTRO-1 payload (December 2, 1990 - December 11, 1990). It was designed and built by the Laboratory for High Energy Astrophysics at NASA/GSFC. BBXRT was the first focusing X-ray telescope operating over a broad energy range 0.3-12 keV, with moderate energy resolution (90 eV at 1 keV and 150 eV at 6 keV). It consisted of two identical co-aligned telescopes each with a segmented Si(Li) solid state spectrometer (detector A and B) with five pixels. The telescope consisted of two sets of nested grazing-incidence mirrors, whose geometry was close to the ideal paraboloidal/ hyperboloidal surfaces (modified Wolter type-I ). This simplified fabrication and made possible nesting many shells to yield a large geometric area. The effective on-axis areas was 0.03 m\({}^{2}\) at 1.5 keV and 0.015 m\({}^{2}\) at 7 keV. The focal plane consisted of a 5-element lithium drifted silicon detector with energy resolution of about 100 eV FWHM. Despite operational difficulties with the pointing systems, the BBXRT obtained high quality spectra from some 50 selected objects [139], both Galactic and extragalactic. Results included the resolved iron K line in the binaries Cen X-3 and Cyg X-2 [141], evidence of line broadening in NGC 4151 [142], and the study of cooling flow in clusters [143]. Details are reported in Table 15. ## Rxte The Rossi X-ray Timing Explorer (RXTE, [144]) was launched on December 30, 1995 from the NASA Kennedy Space Center. The mission was managed and controlled by NASA/GSFC. RXTE featured unprecedented time resolution in combination with moderate spectral resolution to explore the time variability of the X-ray sources. Time scales from microseconds to months were studied in the spectral range from 2 to 250 keV. 
Originally designed for a required lifetime of two years \begin{table} \begin{tabular}{l l} \hline Instrument & BBXRT on STS-35 \\ \hline Bandpass (keV) & 0.3-12 \\ Eff Area (cm\({}^{2}\)) (at 1.5 keV) & 765 \\ Eff Area (cm\({}^{2}\)) (at 7 keV) & 300 \\ FOV (diameter) & \(17.4^{\prime}\) \\ Central pixel FOV diameter & \(4^{\prime}\) \\ Angolar resolution & \(2^{\prime}-6^{\prime}\) \\ Energy resolution (eV, FWHM) at 1 90 \\ keV & Energy resolution (eV, FWHM) at 6 150 \\ \hline \end{tabular} \end{table} Table 15: BBXRT with a goal of five, RXTE completed 16 years of observations (!) before being decommissioned on January 5, 2012. The spacecraft was designed and built by the Applied Engineering and Technology Directorate at NASA/GSFC. The launch vehicle was a Delta II rocket that put RXTE into a low-earth circular orbit at an altitude of 580 km, corresponding to an orbital period of about 90 minutes, with an inclination of 23 degrees. Operations were managed at GSFC. The mission carried onboard two pointed, collimated instruments: the Proportional Counter Array (PCA, [145]) developed by GSFC to cover the lower part of the energy range, and the High Energy X-ray Timing Experiment (HEXTE, [146]) developed by the University of California at San Diego, covering the upper energy range. The PCA was an array of five proportional counters with a total collecting area of 6 500 cm\({}^{2}\). Each unit consisted of a layer Propane veto, 3 layers Xenon, each split in two, and a Xenon veto layer. HEXTE consisted of two clusters each containing four NaI/CsI phoswich scintillation counters. Each cluster could 'rock'along mutually orthogonal directions to provide background measurements (1.5 or 3.0 degrees away from the source) every 16 to 128 s. Automatic gain control was provided by using a 241Am radioactive source mounted in each detector's field of view. Part of the RXTE scientific payload, was an All-Sky Monitor (ASM) from MIT that scanned about 80% of the sky every orbit, allowing monitoring at time scales of 90 minutes or longer. The ASM [147] consisted of three wide-angle shadow cameras equipped with proportional counters with a total collecting area of 90 cm\({}^{2}\). The main details of the mission are reported in Table 16. RossiXTE was an extremely successful and productive mission. Science highlights include: The discovery of kilohertz Quasi-Periodic Oscillations (KHz QPOs) in NS systems [148]and High Frequency QPOs in BH systems[149]; The discovery of the first accreting millisecond pulsar, SAX J1808.4-3658[150], followed by several more accreting millisecond pulsars; The detection of X-ray afterglows from Gamma Ray Bursts[151]; the observation of the Bursting Pulsar over a broad range of luminosity, providing stringent test of accretion theories. \begin{table} \begin{tabular}{l l l l} \hline Instrument & ASM & PCA & HEXTE \\ \hline Bandpass (keV) & 2-10 & 2-60 & 15-250 \\ Eff Area (cm\({}^{2}\)) & 90 & 6 500 & 2\(\times\)800 \\ FOV & \(6^{\circ}\times 90^{\circ}\) e.u. 
& \(1^{\circ}\) & \(1^{\circ}\) \\ Time resolution & 80\% of the sky 1 \(\mu\)sec & 8 \(\mu\)sec \\ & in 90 minutes & & \\ Energy resolution & & \(<18\%\)(6 keV) & 15\%(60 keV) \\ Spatial resolution & \(3^{\prime}\times 15^{\prime}\) & & \\ \hline Sensitivity (milliCrab) & 30 & 0.1 & 1000(360 cts)/cluster \\ Background & & 2 mCrab & 50 count/s/cluster \\ \hline \end{tabular} \end{table} Table 16: Rossi XTE ## 5 Usa on Board Argos The Unconventional Stellar Aspect (USA [152]) experiment was a multi-purpose experiment based on an X-ray sensor. The main objectives included both basic research in X-ray astronomy and the test of X-ray sensors in space. The experiment was launched on February 23, 1999 from Vandenberg AFB, CA aboard the Advanced Research and Global Observation Satellite (ARGOS). USA operated from April 1999 to November 2000. It consisted of a pair of large-area gas scintillation proportional counters sensitive to 1-15 keV mounted on a two-axis pointing system (see Table 17). It was a reflight of the SPARTAN 1 instrument flown on the Space Shuttle Discovery in June 1985. USA included precise (roughly microsecond accuracy) time-tagging of events using an integrated GPS receiver. The experiment was used to provide a new atmospheric diagnostic based on atmospheric column density determinations. Energy-resolved photon extinction curves of X-ray celestial sources occulted by the Earth's atmosphere were used. This research was therefore the first to study the neutral atmosphere using X-ray source occultations, and complements UV airglow remote sensing techniques used aboard ARGOS that were insensitive to nighttime neutral density. The great majority of the observations made with USA had either neutron star or black hole sources as targets. Four prominent transients active during the life of USA were XTE 111 18+480, XTE J1550-564, XTE J1859+226 and GRS 1915+105.. ## 6 The program in Europe ### Rosat The Roentgen Satellite, ROSAT, a collaboration between Germany, the US, and the UK, was launched on June 1, 1990 into a near-circular orbit with perigee of 539 km and apogee 554 km. The inclination was of 53\({}^{\circ}\). The mission operated for almost 9 years until February 12, 1999. The first 6 months of the mission were dedicated to the all sky-survey, followed by the pointing phase. ROSAT obtained the first X-ray \begin{table} \begin{tabular}{l l} \hline Instrument & USA \\ \hline Bandpass (keV) & 0.5-25 \\ Eff Area (cm\({}^{2}\)) & 2000 at 3 keV \\ Field of view (FOV) & \(1.2^{\circ}\times 1.2^{\circ}\) \\ Time resolution \(\mu\)s & 2 \\ Energy resolution & 0.17 (1 keV at 5.9 keV), 128 raw PHA channel \\ \hline \end{tabular} \end{table} Table 17: USA on board ARGOS and XUV all-sky surveys using an imaging telescope with an X-ray sensitivity of about a factor of 1 000 better than that of UHURU. The main instrument of the ROSAT observatory was a fourfold nested Wolter-I X-ray telescope (XRT) whose focal plane assembly consisted of a carousel carrying three imaging X-ray detectors. Two of them were position sensitive proportional counters (PSPCs). The third imaging detector was a high resolution imager (HRI) provided by NASA [153]. The Wide Field Camera (WFC) was an EUV telescope (with Wolter-Schwarzschild mirrors) designed and built by the UK by the University of Leicester. The focal plane instrumentation of the WFC consisted of a curved micro-channel plate (MCP) [154]. The main information on ROSAT's scientific payload is reported in Table 18. ROSAT was a very successful mission. 
Main highlights include: * X-ray all-sky survey catalog, more than 150 000 objects * XUV all-sky survey catalog (479 objects) * Source catalogs from the pointed phase (PSPC and HRI) containing \(\sim\)100 000 serendipitous sources * Detailed morphology of supernova remnants and clusters of galaxies. * Detection of shadowing of diffuse X-ray emission by molecular clouds. * Detection (Finally!) of pulsations from Geminga. * Detection of isolated neutron stars. * Discovery of X-ray emission from comets. * Observation of X-ray emission from the collision of Comet Shoemaker-Levy with Jupiter. ## 0.1 Bepposax The SAX Mission (1996 - 2002) was a program of the Italian Space Agency (ASI) with participation of the NIVR. The mission was supported by a consortium of institutes in Italy together with institutes in the Netherlands, and the Space Science Department of ESA. The acronym SAX stands for 'Satellite per Astronomia X', \begin{table} \begin{tabular}{l c c c} \hline Instrument & \multicolumn{2}{c}{XRT} & \multicolumn{2}{c}{WFC} \\ Telescope & \multicolumn{2}{c}{Wolter-I} & Wolter-Schwarzschild \\ Focal length (m) & \multicolumn{2}{c}{2.40} & \multicolumn{2}{c}{0.525} \\ \hline Detector & PSPC & HRI & MCP \\ \hline Bandpass (keV) & 0.1–2 & & 0.062–0.21 \\ Energy Res (FWHM) & 45\% at 1 keV & & \\ Area (cm\({}^{2}\)) & 420 & 80 & 17.1 \\ FOV & \(2^{\circ}\) & \(36^{\prime}\times 36^{\prime}\) & \(5^{\circ}\) \\ FWHM & \(0.8^{\circ}\times 1.7^{\circ}\) & \(1^{\circ}\times 45^{\circ}\) & \\ \hline \end{tabular} \end{table} Table 18: ROSAT carried onboard the XRT and a WFC. Italian for 'X-Ray Astronomy Satellite'. SAX was launched on April 30, 1996 by an Atlas Centaur rocket directly into a 600 km, 96 min orbit at 3.9\({}^{\circ}\) inclination. The satellite thus nearly avoided the South Atlantic Anomaly and took full advantage of the screening effect of the Earth's magnetic field in reducing the cosmic ray induced background, an aspect particularly relevant for the high energy instruments. After the successful launch was renamed BeppoSAX in honor of Giuseppe "Beppo" Occhialini. The BeppoSAX mission ended on April 30 2002, and the satellite re-enterd into the atmosphere on April 29, 2003. The payload of BeppoSAX consisted of five narrow field instruments (NFIs) and a wide field camera (WFC) [155]. The NFIs consisted of four instruments: 1) the Low Energy Concentrator Spectrometer (LECS), a low energy telescope with a thin window position sensitive gas scintillation proportional counter in its focal plane [156]; 2) the Medium Energy Concentrator Spectrometers (MECS), a medium energy set of three identical grazing incidence telescopes with double cone geometry, with position sensitive gas scintillation proportional counters in their focal planes [157]; 3) a collimated High Pressure Gas Scintillation Proportional Counter (HPGSPC) [158]; and 4) a collimated Phoswich Detector System (PDS) [159]. The WFC observation were performed by two coded mask proportional counters (PSMPC) [160] that provided access to large regions of the sky. Table 19 reports the main parameters of this instrumentation. The main feature of BeppoSAX was its broad band, extending from the fraction of keV to more than 100 keV. This allowed the study of broad band spectra of many classes of galactic and extragalactic objects. 
BeppoSAX produced outstanding results on many classes of galactic and extra galactic sources, however the most spectacular breakthrough of the mission was the discovery of the X-ray afterglows of GRB which allowed the discovery of their optical counterparts confirming the extra-galactic nature of the GRBs [2; 161]. \begin{table} \begin{tabular}{l c c c c c} \hline & & & NFI & & WFC \\ Instrument & LECS & MECS & HPGSPC & PDS & PSMPC \\ \hline Bandpass (keV) & 0.1–10 & 1.3–10 & 3–120 & 15–300 & 2–30 \\ Energy Res (FWHM) & 32\% at 0.28 keV & 8\% at 6 keV & 10\% at 6 keV & 15\% at 60 keV & 20\% at 60 kev \\ & 22 @0.25 keV & 150 @6 keV & & & \\ Area (cm\({}^{2}\)) & 50 @6 keV & 101 @8 keV & 450 & 640 & 600 \\ Angular Resolution & 3.5\({}^{\prime}\) @0.25 keV & 2.1\({}^{\prime}\) @6 keV & & collimated & 5\({}^{\prime}\) \\ FOV (FWHM) & & 0.5\({}^{\circ}\) & 1\({}^{\circ}\times 1^{\circ}\) & 1.3\({}^{\circ}\) & 27\({}^{\circ}\times 27^{\circ}\) \\ \hline \end{tabular} \end{table} Table 19: BeppoSAX. ## The program in Japan ### Asca The satellite ASCA, Japanese for "flying bird", _Asuka_, and also an acronym for Advanced Satellite for Cosmology and Astrophysics (ASTRO-D before launch) was launched on February 20, 1993 and operated until July 15, 2000. Its orbit was a near circular one with perigee and apogee distances of 523 km and 615 km respectively and an inclination of 31.1\({}^{\circ}\). It was the first satellite that used Charged Coupled Devices (CCDs) detectors for X-ray astronomy. It combined imaging capability with broad band-pass, good spectral resolution and large effective area [162]. The scientific payload consisted of 4 identical grazing-incidence X-ray telescope (XRT) each composed of 120 nested gold-coated aluminum foil surfaces and equipped with an imaging spectrometer at its focal plane. Two of the focal detectors were CCD cameras (Solid Imaging Spectrometers or SIS), the other two scintillation proportional counters (Gas Imaging Spectrometer or GIS) [163]. Table 20 summarizes the characteristics of the instruments on board of ASCA. ### Suzaku ASTRO-E2 [164] was the successor of ASTRO-E lost after launch on February 10, 2000. After its successful launch on July 10, 2005, ASTRO-E2 was renamed Suzaku after the mythical Vermilion bird of Asian mythology, the guardian of the southern skies. The mission ended on September 2, 2015. It was developed at ISAS in collaboration NASA/GSFC. It entered a circular orbit at 570 km altitude with an inclination angle of 31\({}^{\circ}\). The orbital period was about 96 minutes. Suzaku carried five Wolter type-I X-ray telescopes (XRT) [165]. One of them, XRT-S with a fo \begin{table} \begin{tabular}{l c c c} \hline Instrument & \multicolumn{2}{c}{XRT} & SIS & GIS \\ & at 1 keV at 7 keV & & \\ \hline Bandpass (keV) & 0.5-12 & 0.4-12 & 0.8-12 \\ Energy resolution & & 2\% at 5.9 keV & 8\% at 5.9 keV \\ Area (cm\({}^{2}\)) & 1200 & 600 & 105 & 50 \\ Angular resolution & 3\({}^{\prime\prime}\) FWHM & 30\({}^{\prime\prime}\) & 0.5\({}^{\prime}\) at 5.9 keV \\ FWHM & 24\({}^{\prime}\) & 16\({}^{\prime}\) & & \\ FOV & 24\({}^{\prime}\) at 1 keV & 22\({}^{\prime}\)\(\times\)22\({}^{\prime}\) & 50\({}^{\prime}\) \\ \hline \end{tabular} \end{table} Table 20: ASCA (ASTRO-D). The satellite carried 4 x-ray telescopes (XRT). cal length of 4.5, featured as X-Ray Spectrometer (XRS) on the focal plane the first X-ray microcalorimeter flown on an orbiting observatory. 
The X-ray microcalorimeters, the Adiabatic Demagnetization Refrigerator (ADR), and the liquid helium tank were supplied by the NASA/GSFC. The solid neon tank surrounding the helium tank and the mechanical cooler to cool the neon tank were built by ISAS/JAXA. XRS was expected to have an unprecedented energy resolution of 6 - 7 eV (FWHM) over the 0.3 - 12 keV energy band. Unfortunately the XRS failed about 29 days after launch, due to the malfunction of the cooling system and the consequent complete loss of liquid He. The other four (XRT-Is), with a focal length of 4.75 m, were devoted to the X-ray Imaging Spectrometer (XIS). The scientific payload of Suzaku was completed by the non-imaging, collimated Hard X-ray Detector (HXD). These instruments operated for ten years, until the end of the mission [166]. The X-ray Imaging Spectrometer (XIS) was composed of four X-ray sensitive imaging CCD cameras (three front-illuminated and one back-illuminated) of unprecedented energy resolution, each accommodated at the focus of the dedicate telescope (xrt-i0, xrt-i1, xrt-i2, xrt-i3) [167]. The HXD was a combination of silicon pin diodes and gso scintillators [168]. All main characteristics of the SUZAKU instruments are summarized in Table 21. ## Hitomi HITOMI, previously ASTRO-H and NEw X-ray Telescope (NEXT), was the result of a international collaboration led by JAXA, the Japanese Aerospace Exploration Agency, that involved many institutions, among others the NASA/GSFC, ESA, CSA, SRON. Hitomi means eye's pupil and probably symbolizes a black hole. The main scientific goal of Hitomi was, namely, the exploration of the structure of \begin{table} \begin{tabular}{l c c c c c} \hline & \multicolumn{3}{c}{XRT} & \\ Instrument & XRT-S & \multicolumn{2}{c}{XRT-I} & & \\ XRS & \multicolumn{2}{c}{XIS} & \multicolumn{2}{c}{HXD} & \\ & & FI & BI & pin & gso \\ \hline Angular resolution & \multicolumn{3}{c}{\(\sim 2^{\prime}\)} & & \\ Bandpass (keV) & 0.3-12 & 0.2-12 & & 10-60 & \\ Area (cm\({}^{2}\)) & 190 & 340 & 390 & \(\sim\)145 & 315 \\ Energy resolution (eV) \(\sim\)6.5 at 6 keV & 130 at 6 eV & 3\(\times\)10\({}^{3}\) FWHM & 7.6/(E MeV) 0.5 \% FWHM \\ FOV & 2.9\({}^{\prime}\)\(\times\)2.9\({}^{\prime}\) & 18\({}^{\prime}\)\(\times\)18\({}^{\prime}\) & 34\({}^{\prime}\)\(\times\)34\({}^{\prime}\) & 4.5\({}^{\circ}\)\(\times\)4.5\({}^{\circ}\) \\ \hline \end{tabular} \end{table} Table 21: SUZAKU (ASTRO-E2) carried onboard the XRS (failed after a few weeks), the XIS, and the HXD. the universe: galaxy clusters, black holes, the formation of heavy elements, and the study of physics in extreme conditions with the use of high-resolution spectroscopy combined with a wide-band energy coverage [169]. Hitomi was equipped with a Soft X-ray spectrometer (SXS), a Soft X-ray imager (SXI), a hard X-ray imager (HXI), and two non-focusing soft Gamma-ray detectors (SGDs). The SXS consisted of a microcalorimeter array of 36-pixel system with an energy resolution \(<\)7 eV between 0.3 and 12 keV [170]. The SXI used 4 CCDs cameras [171]. The HXI consisted of four layers of 0.5 mm thick double-sided silicon strip detectors and one layer of newly developed 0.75 mm thick CdTe double-sided cross-strip detector [172]. The SGD was developed to measure soft \(\gamma\)-rays via reconstruction of Compton scattering in the semiconductor Compton camera [173]. These instruments together covered a wide energy range between 0.3-600 keV. The science payload of the mission is summarized in Table 22. 
Hitomi featured four optics: two units for the SXT with focal length 5.6 m, one pointed for the SXS, and the other for the SXI; the other two units were hard X-ray telescopes of 12 m focal length and had at their focal plane the HXIs. The mission was launched on February 17, 2016 and contact was lost on March 26, 2016. This was due to a series of incidents with the attitude control system leading to an uncontrolled spin rate and breakup of weak mechanical elements. ### The program in Russia and India #### Granat The Granat mission was launched on December 1, 1989 into a highly eccentric 98-hour (4-day) orbit which enabled long uninterrupted observations. The perigee and apogee of the orbit were 2 000 and 20 0000 km, respectively, and the inclination was 51.9\({}^{\circ}\). After an initial period of pointed observations, Granat went into survey mode \begin{table} \begin{tabular}{l c c c c} \hline Instrument & SXS & SXI & HXI & SGD \\ \hline Bandpass (keV) & 0.3-12 & 0.4-12 & 5-80 & 60-600 \\ Energy Res. (FWHM) (keV at keV) & \(<\)7\(\times\)10\({}^{-3}\) at 6 & \(<\)0.2 at 6 & \(<\)2 at 60 & \(<\)4 at 60 \\ Area (cm\({}^{2}\) at keV) & 300 at 6 & 350 at 6 & 300 at 30 & \(>\) 20 at 100 \\ & 250 at 1 & 370 at 1 & & \\ Effective FOV & 3\({}^{\prime}\)\(\times\)3\({}^{\prime}\) & 38\({}^{\prime}\)\(\times\)38\({}^{\prime}\) & 9\({}^{\prime}\)\(\times\)9\({}^{\prime}\) & 0.6\({}^{\circ}\)\(\times\)0.6\({}^{\circ}\) \\ Time resolution (\(\mu\)s) & 5 & 4/2/0.5/0.1 sNOTA & 25.6 & 25.6 \\ Angular resolution & \(\sim\)1.2\({}^{\prime}\) & \(\sim\)1.3\({}^{\prime}\) & 1.7\({}^{\prime}\) at 30 keV & \\ \hline \end{tabular} \end{table} Table 22: HITOMI (ASTRO-H) consisted of 2 SXTs and 2 HXTs. in September 1994 and operated until November 27, 1998. It carried seven X-ray and gamma-ray instruments [174]. ART-S [175], a large area (4\(\times\) 625 cm\({}^{2}\)) system of four high-pressure xenon multiwire proportional counters (MWPCs) reached \(\sim\) 100 keV with an energy resolution of \(\sim\)10% at 60 keV. The collimators of the telescope could work in rocking mode, ON and OFF the target, which enabled the study of timing and spectral parameters of X-ray sources in the pointing mode. A similar system, ART-P [175], with four identical modules contained position sensitive MWPC together with a coded mask, which enabled imaging with 6\({}^{\prime}\) resolution in the energy range 4-60 keV. In the range 4-100 keV the telescope worked as a spectrometer. The geometric sensitive area of ART-P was 4\(\times\)630 cm\({}^{2}\), the field of view 1.8\({}^{\circ}\times\) 1.8\({}^{\circ}\) FWHM. The other principal instrument, SIGMA [176], provided by France, was primarily a gamma-ray instrument (30-1300 keV), but with hard X-ray sensitivity. It featured a coded mask of 13\({}^{\prime}\) resolution and a position-sensitive scintillation system (PSD, [174]). The coded aperture, located 2.5 m from the PSD was an array of 49\(\times\)53 square elements, whose basic pattern was a 29\(\times\)31 Uniformly Redundant Array (URA) [177]. The opaque 1.5 cm thick tungsten mask elements were bonded to a honeycomb plate that supported and stiffened the mask assembly without hindering the transparency of the open mask elements. 
The dimensions of the URA mask cell (9.4 mm \(\times\) 9.4 mm) imposed the following key properties of the telescope: * the maximum sensitivity rectangular field of view: 4.3\({}^{\circ}\times\) 4.7\({}^{\circ}\), surrounded by a wider field of decreasing sensitivity such that the half-maximum sensitivity boundary was a 10.9\({}^{\circ}\times\) 11.5\({}^{\circ}\) rectangle * the total detection area: the 794 cm\({}^{2}\) central rectangular zone of the PSD whose size matched the basic 29\(\times\)31 mask pattern * the intrinsic angular resolution: 13\({}^{\prime}\) * the point--source location accuracy: less than 2\({}^{\prime}\) taking into account the PSD coding element size (1.175 mm\(\times\)1.175 mm). An all-sky monitor, WATCH, designed at the Danish Space Research Institute [178; 179], consisted of four rotating modulation collimator systems with phoswich detectors for the purpose of monitoring persistent X-ray sources and localizing transient events in a significant part of the sky [179]. The detector consisted of parallel NaI (Tl) and CsI(Tl) scintillator strips. The energy range was from 6 to 180 keV for NaI and from 20 to 180 keV for CsI. The energy resolution of the detector was modest, about 30% FWHM at 60 keV. A substantial gamma-burst capability was also on board, including sensitivity in the X-ray and optical regions. The PHEBUS instrument [180], developed by CESR (Toulouse) to record high energy transient events in the range 100 keV to 100 MeV, consisted of two independent detectors. Each detector consisted of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick, surrounded by a plastic anti-coincidence jacket. The two detectors were arranged on the spacecraft so as to observe 4\(\pi\) steradians. The burst mode was triggered when the count rate in the 0.1-1.5 MeV energy range exceeded the background level by 8 sigma in either 0.25 or 1.0 seconds. There were 116 energy channels. The KONUS-B instrument[181], designed by the Ioffe Physico-Technical Institute in St. Petersburg, consisted of seven detectors distributed around the spacecraft to detect photons from 10 keV to 8 MeV. The detectors consisted of NaI(Tl) scintillator crystals 200 mm in diameter by 50 mm thick behind a Be entrance window. Spectra were taken in two 31-channel pulse height analyzers (PHAs), of which the first eight were measured with 1/16 s time resolution and the remaining with variable time resolutions depending on the count rate. The range of resolutions covered 0.25 to 8 s. The KONUS-B instrument operated from December 11, 1989 until February 20, 1990. Over that period, the 'on' time for the experiment was 27 days. Some 60 solar flares and 19 cosmic gamma-ray bursts were detected. The French TOURNESOL instrument consisted of four proportional counters and two optical detectors. The proportional counters detected photons between 2 keV and 20 MeV in a \(6^{\circ}\times 6^{\circ}\) field of view. The visible detectors had a field of view of \(5^{\circ}\times 5^{\circ}\). The instrument was designed to look for optical counterparts of high-energy burst sources, as well as performing spectral analysis of the high-energy events. Table 23 reports the principal characteristics of the experiments onboard Granat. Over the initial four years of directed observations, Granat observed many galactic and extra-galactic X-ray sources with emphasis on the deep imaging and spectroscopy of the galactic center, broad-band observations of black hole candidates, and X-ray novae. 
After 1994, the observatory was switched to survey mode and carried out a sensitive all-sky survey in the 40 to 200 keV energy band. Some of the highlights included: A very deep imaging of the galactic center region [182]; The discovery of electron-positron annihilation lines from the galactic microquasar 1E1740-294 [183] and the X-ray Nova Muscae [184]; Study of spectra and time variability of black hole candidates. \begin{table} \begin{tabular}{l c c c c|c c c} \hline & & & & & \multicolumn{3}{c}{\(\gamma-\)ray burst experiment} \\ Payload & ART-S & ART-P & SIGMA & WATCH & PHEBUS & KONUS-B & TOURNESOL \\ \hline Bandpass (keV) & 3-100 & 4-60\({}^{a}\) & 30-1300 & 6-180 & & & \\ Bandpass (MeV) & & & & & 0.1-100 & 0.02-8 & 0.002-20 \\ Area (cm\({}^{2}\)) & 2400 & 1250 & 800 & 45 & 100 & 315 & \\ Ang.Resol. & & \(6^{\prime}\) & \(13^{\prime}\) & & & \\ Field of View (FOV) & \(2^{\circ}\times 2^{\circ}\) & \(1.8^{\circ}\times 1.8^{\circ}\) & \(5^{\circ}\times 5^{\circ}\) & All sky & All sky & All sky & \(5^{\circ}\times 5^{\circ}\) \\ Ener.Res. at 5.9 keV & 21\% & 25\% & & & & \\ Ener.Res. at 22.1 keV & 13\% & & & & & \\ Ener.Res. at 59.5 keV & 11\% & 14\% & & 30\% & & \\ Time Resol. (\(\mu\)s) & 200 & & & & & \\ \hline \end{tabular} \({}^{a}\) Notes: for imaging. The energy range for spectroscopy and timing was 4-100 keV \end{table} Table 23: Granat.The payload consisted of seven experiments: ## Irs-P3 IRS-P3 was the sixth satellite in Indian Remote Sensing satellite series, an Earth observation mission launched under the National Natural Resources Management System programme (NNRMS) undertaken by ISRO. The launch of IRS-P3 took place on March 21, 1996, its orbit was initially an elliptical one, but after few weeks it was stabilized in a sun-synchronous circular orbit with an altitude of 817 km, an inclination of 98.7\({}^{\circ}\), a repeat cycle of 24 days, and a period of 101.35 min. The objectives of the mission was the processing and interpretation of data generated by its two instruments, the Wide-Field Sensor (WiFS) and Modular Opto-electric Sensor (MOS), developed by the German Aerospace Center (DLR). It also hosted a scientific instrument, the Indian X-ray Astronomy Experiment (IXAE), for the study of X-ray astronomy. The payload was designed to study periodic and aperiodic intensity and spectral variations in galactic and extra-galactic X-ray sources like pulsars, X-ray binaries, Seyfert galaxies, quasars etc., and to study light curves and spectral evolution of transient phenomena. The instrumentation consisted of a Pointing Proportional Counter (PPC) and an X-ray Sky Monitor (XSM) [185]. The principal characteristic of the X-ray astronomy experiment are summarized in Table 24 ## Conclusions With this chapter, we wanted to summarise the history of X-ray Astronomy from its inception until around the end of the 1990s. We have chosen for this contribution an approach that allows the reader to find the essential data, and references for X-ray astronomy missions (be they rockets, balloons, or satellites). The history of X-ray astronomy has obviously not stopped and the new millennium has seen the launch of new missions that are still active, Chandra, XMM-Newton, and ASTROSAT to name but a few. There are specific chapters on these and other missions in the X-ray section of the Handbook. We would like to conclude this chapter with a tribute to the most significant figure in the history of Astronomy X: Riccardo Giacconi. 
There is no doubt that Bruno Rossi is to be credited with the initial intuition of attempting X-ray observation of the sky. However, it was Riccardo Giacconi who gave substance to that idea from the very first visionary studies of X-ray telescopes, effectively opening up a new Astronomy. It is therefore no coincidence that he was \begin{table} \begin{tabular}{l c c} \hline Experiment & PPC & XSM \\ \hline Energy Range (keV) & 2-20 & 2-8 \\ Area (cm\({}^{2}\)) & 1.1\(\times\)1.1 & 1 \\ Field of View (FOV) & \(2^{\circ}\times 2^{\circ}\)\(90^{\circ}\times 90^{\circ}\) \\ \hline \end{tabular} \end{table} Table 24: IRS-P3 carried on board two instrumentation to conduct X-ray astronomy: a Pointing Proportiona Counter (PPC) and an X-ray Sky Monitor (XSM) awarded the Nobel Prize in 2002 with the motivation "for pioneering contributions to astrophysics, which have led to the discovery of cosmic X-ray sources". Not only did Giacconi open up the vision of the X-ray sky, but as Josh Grindlay noted12 "In fact he not only opened the entire field of X-ray Astronomy/Astrophysics but also opened the then new field of Time domain Astrophysics. The broad phenomenology of Bursts (X-ray, Gamma-Ray, now Radio...) was not part of the Astronomical Landscape until dramatically variable sources like Cyg X-1 and "X-ray Novae" were discovered. Then, GRBs, enormous Flaring from Blazars, BH-LMXB outbursts demanded physical understanding! Exciting new Astrophysics, continuing through today". In the coming years, new missions will be launched such as SVOM, EP, and XRISM. Other missions such as eXTP and ATHENA are at an advanced stage of study. We are therefore confident that this chapter will grow over time, enriched with new stories. Footnote 12: In his seminar at the Memorial Symposium to Honor Riccardo Giacconi, held at the National Academy of Sciences in Washington, DC on May 29-30, 2019.
2306.11007
Protecting IoT Servers Against Flood Attacks with the Quasi Deterministic Transmission Policy
IoT Servers that receive and process packets from IoT devices should meet the QoS needs of incoming packets, and support Attack Detection software that analyzes the incoming traffic to identify and discard packets that may be part of a Cyberattack. Since UDP Flood Attacks can overwhelm IoT Servers by creating congestion that paralyzes their operation and limits their ability to conduct timely Attack Detection, this paper proposes and evaluates a simple architecture to protect a Server that is connected to a Local Area Network, using a Quasi Deterministic Transmission Policy Forwarder (SQF) at its input port. This Forwarder shapes the incoming traffic, sends it to the Server in a manner which does not modify the overall delay of the packets, and avoids congestion inside the Server. The relevant theoretical background is briefly reviewed, and measurements during a UDP Flood Attack are provided to compare the Server performance, with and without the Forwarder. It is seen that during a UDP Flood Attack, the Forwarder protects the Server from congestion allowing it to effectively identify Attack Packets. On the other hand, the resulting Forwarder congestion can also be eliminated at the Forwarder with "drop" commands generated by the Forwarder itself, or sent by the Server to the Forwarder.
Erol Gelenbe, Mohammed Nasereddin
2023-06-19T15:16:14Z
http://arxiv.org/abs/2306.11007v1
# Protecting IoT Servers Against Flood Attacks with the Quasi Deterministic Transmission Policy ###### Abstract IoT Servers that receive and process packets from IoT devices should meet the QoS needs of incoming packets, and support Attack Detection software that analyzes the incoming traffic to identify and discard packets that may be part of a Cyberattack. Since UDP Flood Attacks can overwhelm IoT Servers by creating congestion that paralyzes their operation and limits their ability to conduct timely Attack Detection, this paper proposes and evaluates a simple architecture to protect a Server that is connected to a Local Area Network, using a Quasi-Deterministic Transmission Policy Forwarder (SQF) at its input port. This Forwarder shapes the incoming traffic, sends it to the Server in a manner which does not modify the overall delay of the packets, and avoids congestion inside the Server. The relevant theoretical background is briefly reviewed, and measurements during a UDP Flood Attack are provided to compare the Server performance, with and without the Forwarder. It is seen that during a UDP Flood Attack, the Forwarder protects the Server from congestion, allowing it to effectively identify Attack Packets. On the other hand, the resulting Forwarder congestion can also be eliminated at the Forwarder with "drop" commands generated by the Forwarder itself, or sent by the Server to the Forwarder. Internet of Things (IoT), Cyberattack Detection, Traffic Shaping, Quasi-Deterministic Transmission Policy (QDTP), Quality of Service ## I Introduction With some \(30\) Billion devices on the Internet [1], many types of anomalies have been observed as a result of cyberattacks [2, 3, 4, 5], including Denial of Service (DoS) attacks that disable target systems by flooding them with huge streams of requests [6]. While many such attacks go unreported when they occur, just one Distributed DoS attack in \(2017\) targeting Google compromised \(180,000\) web servers, which flooded Google servers at an overall bitrate of \(2.54\) Terabits/sec [7]. Other attacks aim mainly at the IoT [8, 9, 10, 11], while Botnet attacks [12] are particularly pernicious since they spread by inducing their victims to become attackers [13, 14, 15]. UDP Flood attacks [16] are also exploited by Botnets to create massive congestion that overwhelms network nodes and ports. Using spoofed-source-address UDP packets, they cause their victims to crash due to high traffic volumes, creating denial of service, causing lost data, and resulting in missing and incomplete readings of the data carried by legitimate IoT traffic [17, 18]. ### _Prior Work_ Because of the concern about cybersecurity, there is a large literature on cyberattacks and Attack Detection (AD) methods [6, 19, 20]. These methods are typically evaluated for accuracy using statistical methods [11, 21], and various Machine Learning based AD algorithms are often tested under ideal conditions on general purpose computers [22, 23, 24, 25, 26], where attack traffic is treated as data, and the attack's overload on the processing capacity and performance of the victim node is not taken into account. Various AD test-beds [27] for cyber-physical and IoT networks are presented in [28, 29, 30]. Experiments on windfarms under SYN attacks are discussed in [31], and other experimental IoT security studies can be found in [32, 33, 34]. Data collection and display for flood attacks are discussed in [35], while in [36] real-time data collection for IoT DNS attacks is presented. 
Denial of Service (DoS) attacks against Software Defined Networks (SDN) that support the IoT have also been studied [37]. However, this prior work relates to attack emulation environments that do not include the overload caused by attacks, as recently discussed for autonomous vehicles [38] and IoT servers [39]. ### _Motivation and Research Plan_ Thus the present paper is motivated by the need to: * Experimentally evaluate the effect of IoT Server overload during an ongoing UDP Flood attack, * Understand the attack's impact on the Server's capacity to carry out Attack Detection (AD) and other useful processing functions, * Demonstrate a system architecture, and a traffic shaping policy [40] that was initially proposed to mitigate the IoT's Massive Access Problem (MAP) [41, 42, 43], to guarantee that in the presence of attacks that create large packet flows, the Server can operate seamlessly and accomplish its role for AD and other useful IoT processing functions, * Experimentally demonstrate that mitigation actions can be triggered to rapidly eliminate the long-term effects of such UDP Flood attacks from the system as a whole. Thus in Section II, we provide new measurements on the experimental test-bed shown in Figure 1, to illustrate the effect of a UDP Flood attack emanating from an IoT traffic source against the IoT Server that receives packets from different IoT devices. These measurements show that the Server is significantly impacted during an attack and is impeded from conducting its AD functions in a timely fashion. Based on this observation, Section III proposes and evaluates a novel system architecture shown in Figure 6, where the Server is preceded by a Smart "Quasi-Deterministic Transmission Policy (QDTP)" Forwarder (SQF) that shapes the traffic that is forwarded to the Server. Our results show that if we select the SQF parameters based on mathematical principles [40], then the SQF effectively limits the undesirable effects of an attack against the Server. However, attack packets accumulate at the SQF which protects the Server, and mitigation actions may discard the accumulated attack traffic. ## II Initial Measurement Results As a first step, we have conducted experiments on the Local Area Network (LAN) test-bed shown in Figure 1, in which IoT devices, represented by several Raspberry Pi machines, send UDP traffic to the Server. One of the Pi machines is also programmed to generate attack traffic either at predetermined instants or at random. These Raspberry Pi \(4\) Model B Rev \(1.2\) machines (RPi1 and RPi2) each have a \(1.5\) GHz ARM Cortex-A\(72\) quad-core processor and \(2\) GB LPDDR\(4-3200\) SDRAM, running Raspbian GNU/Linux \(11\) (bullseye), a Debian-based operating system optimized for Raspberry Pi hardware. The normally operating (uncompromised) Raspberry Pis periodically send UDP packets that simply contain measurements of the temperature of the Raspberry Pi to the Server shown in Figure 1. The choice of this particular data is simply in order to provide an example of real data that the Raspberry Pis can send. The Server is an Intel 8-Core i\(7-8705\)G machine running a Linux \(5.15.0-60\)-generic (\(66\)-Ubuntu SMP) operating system. It is equipped with \(16\) GB of RAM, runs at \(3.1\) GHz, and has a \(500\) GB hard drive. It communicates with the Raspberry Pis via the Ethernet Local Area Network (LAN) shown in Figure 1, and receives IoT traffic from them via the UDP protocol. 
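The periodic UDP traffic generated by the uncompromised Raspberry Pis can be reproduced with a few lines of code. The sketch below is only an illustration of such a sender, not the authors' actual script: the Server address, UDP port and reporting period are hypothetical placeholders, and the CPU temperature is read from the standard Raspberry Pi OS thermal interface.

```python
import socket
import time

SERVER_ADDR = ("192.168.1.10", 5005)   # hypothetical IoT Server IP address and UDP port
PERIOD_S = 1.0                          # hypothetical reporting period in seconds

def read_cpu_temperature_celsius() -> float:
    # Standard Raspberry Pi OS interface: the file holds millidegrees Celsius.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def main() -> None:
    # UDP is connectionless: no handshake, no ACKs, no retransmission [44].
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        payload = f"{time.time():.3f},{read_cpu_temperature_celsius():.2f}".encode()
        sock.sendto(payload, SERVER_ADDR)   # fire-and-forget datagram to the Server
        time.sleep(PERIOD_S)

if __name__ == "__main__":
    main()
```

The fire-and-forget nature of `sendto` matches the paper's point about UDP: the device never learns whether the Server is congested, which is precisely why a flood of such datagrams can silently overwhelm the Server's input buffer.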
As shown in Figure 2, the Server supports the UDP protocol with SNMP for incoming packets. It operates the accurate AD algorithm reported in [26], and supports the other normal processing needs of incoming UDP packets. The UDP protocol's simplicity fits the needs of the simple IoT devices that we use, since UDP does not establish a connection before transmitting and does not use ACKs or error recovery for communications [44]. While many datasets can be used to generate attack traffic, including the KDD\(99\) dataset or its improved version NSL-KDD, UNSW-NB15, CICIDS\(2017\), and the Bot-IoT dataset [36], in this work we use MHDDoS [45], containing \(56\) recent real-world DoS attacks with \(56\) different techniques, for attack traffic emulation in this Ethernet based test-bed. In Figure 3 we show measurements of the effect of a \(60\) sec Flood Attack, which overwhelms the 8-Core Server with some \(400,000\) packets that accumulate at its input buffer. The Server's activities, including AD, are paralyzed by the attack, and the packet backlog takes nearly \(300\) minutes to clear. Thus we see that the attack significantly impairs the Server's capability for AD, its ability to discard attack packets, and its ability to subsequently process benign packets. The detailed measurements of the Server's AD processing times per packet, when there is no attack and when a UDP Flood Attack occurs, are reported in Figure 4 and Figure 5. We observe that the Server's AD processing time per packet, when **no attacks** occur, has an average value of \(2.98\) milliseconds (ms). On the other hand, when the Server is targeted by a UDP Flood Attack, we observe a substantial increase in the AD algorithm's average processing time to \(4.82\) ms. Moreover, the AD processing time per packet when the Server is under attack exhibits some very large outliers, as shown in Figure 5. We observe that these "outlier" processing times are close to \(10^{3}\) times larger than the typical values, showing that during a UDP Flood Attack the Server's AD processing of packets is repeatedly paralyzed for a substantial amount of time, as also shown in Figure 3. Fig. 1: The test-bed composed of an Ethernet network with Raspberry Pi machines that generate normal traffic, as well as possible attack traffic. An IoT Server receives the IoT traffic via the same network. Fig. 2: Internal software architecture of the Server, containing an SNMP network protocol manager, an AD system [26] that identifies attack packets, and software for processing the incoming packet contents. ### _Lindley's Equation when the SQF is not Used_ If the SQF module is **not** being used, as shown in Figure 1: * Let \(0=a_{0}\leq a_{1}\leq a_{2}\leq\ldots\) be the successive packet arrival instants at the Server through the Ethernet LAN from any of the IoT devices connected to the LAN. We also define the interarrival time \(A_{n+1}=a_{n+1}-a_{n}\). * Let \(T_{n}\) denote the Server's AD processing time for the \(n\)-th packet, and assume that the Server processes packets in First Come First Served (FCFS) order. Then the total waiting time \(L_{n+1}\) of the \(n+1\)-th incoming packet, between the instant \(a_{n}\) and the start of the AD processing time of the Server, is given by the well known Lindley's equation [46, 47]: \[L_{n+1}=\left[L_{n}+T_{n}-A_{n+1}\right]^{+},\ n\geq 0,\ L_{0}=0\, \tag{1}\] where for a real number \(X\), we use the notation: \[\left[X\right]^{+}=X\ if\ X>0,\ and\ \left[X\right]^{+}=0\ if\ X\leq 0. 
\tag{2}\] Note that \(L_{0}=0\) because the first incoming packet encounters an empty queue infront of the AD. Note also that whenever we have \(T_{n}>A_{n+1}\) then \(L_{n+1}>L_{n}\), i.e. the waiting time increases. During a Flood Attack, the values of \(A_{n}\) and \(T_{n}\) will be modified, as we see from Figure 3, indicating that packet arrival rates have considerably increased so that the values of \(A_{n}\) are much smaller, while Figure 4 shows that the values of \(T_{n}\) are also larger. However the form of (1) does not change. ## III Effect of the Smart QDTP Forwarder (SQF) In Figure 6, we present our proposed modified architecture where the Server, whose role is to process incoming IoT packets - including operating the AD module in order to detect attacks - is "protected" by a Smart QDTP Forwarder (SQF) which is placed between the Ethernet based sources of IoT traffic, and the Server's input port. The role of the SDF is to shape the incoming traffic directed at the Server using the _Quasi-Deterministic Transmission Policy (QDTP)_[48, 40]. QDTP is a simple policy that delays some of the packets it receives, by forwarding them to the Server at time \(t_{n}\geq a_{n}\), where \(a_{n}\) is the \(n\)-th packet's arrival instant to the SQF, and \(t_{n}\) is the instant at which SQF forwards the packet to the Server, and is defined by: \[t_{n+1}=\max\{t_{n}+D,a_{n+1}\},\ t_{0}=a_{0},\ n\geq 0, \tag{3}\] \[so\ that\ t_{n+1}-t_{n}\geq D\, \tag{4}\] where \(D>0\) is a constant parameter of the QDTP algorithm that needs to be fixed. When the \(n-th\) packet is transmitted by the SQF, we assume that it arrives instantaneously at the Server's input queue for AD processing. Here we are in fact assuming that Fig. 4: In the Upper figure, we show the histogram of measurements of the Server’s AD processing time per packet, when there is no attack, exhibiting an average processing time of \(2.98\) ms and variance \(0.0055\ ms^{2}\). In the Lower figure an attack is occurring: the Server’s measured average AD processing time of packets rises substantially to \(4.82\) ms with a variance \(0.51\ ms^{2}\), Fig. 3: Experimentally measured queue length (the \(y\)-axis is in number of packets) over time (the \(x\)-axis is in seconds) at the Server input prior to processing at the AD module, during a \(60\) second UDP Flood Attack launched from one of the Rasberry Pis of Figure 1 against the Server. The backlog of packets at the Server initially rises rapidly to \(40,000\) packets, and without human intervention the congestion at the Server lasts far longer than the attack itself, up to several hours, due to the fact that the Server is paralyzed and stops its AD processing packets for long time intervals. These long interruptions in AD processing time are observed as the large outliers in AD processing times in Figure 5. the physical transmission time from the SQF to the Server, and the network protocol service time inside the Server, are tiny compared to the AD processing duration \(T_{n}\) at the Server. Thus the total delay \(Q_{n}\) experienced by the \(n\)-th packet due to the action of the SQF, that elapses from the arrival of the \(n\)-th packet to the SQF at \(a_{n}\), until its arrival to the AD at the Server at \(t_{n}\), is: \[Q_{0}=t_{0}-a_{0}=0, \tag{5}\] \[Q_{n+1}=t_{n+1}-a_{n+1},\] \[=\max\{t_{n}+D,a_{n+1}\}-a_{n+1},\] \[=0,\ if\ t_{n}+D\leq a_{n+1},\ and\] \[=t_{n}+D-a_{n+1},\ otherwise. 
\tag{6}\] Since \(t_{n}=Q_{n}+a_{n}\), we obtain the recursive expression: \[Q_{n+1}=[t_{n}+D-a_{n+1}]^{+},\] \[=[Q_{n}+D-A_{n+1}]^{+},\ n\geq 0, \tag{7}\] Fig. 5: In the Upper figure, we show successive measurements of the Server’s AD processing time per packet during a UDP Flood Attack (in the absence of the QTPF Forward SQF), showing large outliers that initially become more severe, and gradually become less frequent over time. In the Lower figure, the AD processing time of packets that is measured after the UDP Attack begins, reveals very large outliers in AD processing times, indicating that the AD is intermittently paralyzed or unable to operate. Fig. 6: The figure shows the modified system architecture where a Smart QDTP Forwarder (SQF) constantly acts as a traffic shaping interface between the Ethernet LAN and the Server. The effect of the SQF is to eliminate the analyzing effect of the packet flood at the Server, buffering packets within the SQF and forwarding in a manner which allows the Server to conduct its AD processing and other work in a timely fashion. Fig. 7: The queue length at the Server in the presence of a \(60\) second UDP Flood Attack. The figure Above shows the case **without** the SQF, and we see that the queue length peaks to \(400,000\) packets and condensers slowly over some \(15,000\) seconds. The figure Below compares the queue length in logarithmic scale, **with SQF in Blue** using the parameter \(D=3\ ms\), against the case **without SQF in **Red**, with the same UDP Flood Attack which lasts \(60\) seconds. Note that because the value of \(D\) we use is very close to the average value of \(T_{n}\) measured to be \(2.98\) ms in the absence of an attack, as shown in Figure, the fluctuations in the values of \(T_{n}\) will cause a small queue buildup (in the order of a few packets), as seen in te Blue plot in the figure Below. which is also an instance of Lindley's equation (1). On the other hand, the Server's AD module also acts as a FCFS queue and we can exploit Lindley's equation again to compute \(W_{n},\ n\geq 0\) the waiting time of the \(n\)-th packet that arrives to the Server to be processed for attack detection, which is: \[W_{n+1} = [W_{n}+T_{n}-(t_{n+1}-t_{n})]^{+},\ W_{0}=0, \tag{8}\] \[\leq W_{n}+T_{n}-(t_{n}-t_{n+1}), \tag{9}\] since the \(n\)-th packet's AD service time is \(T_{n}\) and the \(n+1\)-th interarrival interval to the Server's AD queue is \(t_{n+1}-t_{n}\). Therefore using equations (9) and (4) we obtain: \[W_{n+1}\leq W_{n}+T_{n}-D, \tag{10}\] and we have the following key insight into how to choose \(D\): **Result 1.** If we fix the parameter \(D\) in the QDTP policy for the SQF to a value so that \(D>T_{n}\) for all \(n\geq 0\), then the waiting time \(W_{n}\) at the Server will remain at the value \(W_{n}=0\) for all \(n\geq 0\). We now present experiments showing the usefulness of Result 1. Noting from Figure 4 that the measured average Fig. 8: We measure the Server queue length (represented logarithmically) when the Server is targeted by a UDP Flood Attack that lasts \(30-sec\) (Above) and \(10-sec\) (Below). The Red curves show the case without the SQF traffic shaping Forwarder, while the Blue Curves show the effect of the use of the SQF which uses QDTP. We observe the huge difference in queue length. For both the \(30\) and \(10\) second attacks, we have set \(D=3\ ms\). Fig. 10: AD Processing Time at the Server when the SQF with the QDTP Policy is installed and the parameter \(D=3.20\ ms\) is used. 
We observe that the AD processing time \(T_{n}\) has an average value of \(3.00\ ms\) and variance of \(0.0036\ sec^{2}\) in the absence of an attack (Above). In the presence of a UDP Flood Attack (Below) the average processing time of the AD per packet is quasi-identical on average, at \(2.99\ ms\) with a variance of \(0.0067\ sec^{2}\), so that in this case too, the SQF is effective in protecting the Server from paralysis and excessive slowdowns.

Fig. 9: AD Processing Time at the Server when the SQF with the QDTP Policy is installed and the parameter \(D=2.7\ ms\) is used. We observe that the AD processing time \(T_{n}\) has an average value of \(2.97\ ms\) and variance of \(0.0041\ sec^{2}\) in the absence of an attack (Above). In the presence of a UDP Flood Attack (Below) the average processing time of the AD per packet is higher by roughly 10% on average, at \(3.28\ ms\) with a variance of \(0.0023\ sec^{2}\), so that the SQF is effective in protecting the Server from paralysis and excessive slowdown.

value of \(T_{n}\) is \(2.98\ ms\) when there is no attack, we first select \(D=3\ ms\) which is just above that value. Figure 7 compares the case **without SQF** (Above) and **with SQF** (Below) during a \(60\ sec\) UDP Flood Attack. Note that the figure Above represents the Server queue length varying over time, without the SQF. The figure Below is in logarithmic scale for the Server queue length, and compares the cases without SQF (in Red) and with SQF (in Blue) for the Server queue length varying over time. Since \(D=3\ ms\) is very close to the average of \(T_{n}\), the fluctuations in the values of \(T_{n}\) cause a small queue buildup of a few packets, as seen in the Blue plot in the figure shown Below.

Figure 8 shows the results of four experiments where we measure the queue length at the Server when a UDP Flood Attack lasts \(30\) (Above) and \(10\) (Below) seconds, without (Red) and with (Blue) the SQF Forwarder. Without SQF, the Server's AD processing time increases significantly. In the \(30\ sec\) attack, approximately \(470,000\) packets are received at the Server, and without SQF it takes the Server \(44.45\) minutes to process them and return to normal operation, while in the \(10\ sec\) attack \(153,667\) packets are received and it takes the Server roughly \(15\) minutes to process them. Note that in these curves it takes some \(99\) seconds for the compromised RPi to launch the attack.

Figure 9 shows that when we use the SQF based system with \(D=2.7\ ms\), which is smaller than the value recommended by Result 1, this choice of \(D\) has very little effect when there is no attack. However, when a UDP Flood Attack occurs, the Server's AD processing is somewhat slowed down and the average value of \(T_{n}\) increases by roughly 10%. On the other hand, Figure 10 confirms **Result 1** since it shows that, if we take \(D=3.2\ ms\), which guarantees that \(D>T_{n}\) most of the time, then the measured average value of \(T_{n}\) remains at around \(3\ ms\), showing that it has not been slowed down by the attack's overload effect. Of course, the same is seen when no attack occurs.

## IV SQF Queue Buildup and Attack Mitigation

When a Flood Attack occurs, the SQF accumulates packets in its input queue, and forwards them to the Server using the QDTP algorithm with \(D=3\ ms\), so that the Server does not experience any AD slowdown, ensuring that the Server continues to operate as usual.
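The behavior described above, and Result 1 in particular, can also be checked numerically. The following sketch applies Lindley's recursion (1) to the Server without the SQF, and the QDTP recursions (3), (7) and (8) to the SQF/Server tandem, for a synthetic burst of closely spaced arrivals standing in for a Flood Attack; the arrival and processing time parameters are illustrative assumptions rather than the measured test-bed values.

```python
import random

def simulate(d_ms: float, n_normal: int = 2000, n_attack: int = 5000) -> None:
    random.seed(0)
    # Interarrival times A_n (ms): normal traffic, then a dense attack burst, then normal traffic again.
    inter = ([random.expovariate(1 / 50.0) for _ in range(n_normal)]
             + [0.05] * n_attack
             + [random.expovariate(1 / 50.0) for _ in range(n_normal)])
    arrivals, t = [], 0.0
    for a in inter:
        t += a
        arrivals.append(t)
    # AD processing times T_n (ms), fluctuating around 3 ms (illustrative values).
    service = [random.uniform(2.8, 3.2) for _ in arrivals]

    # (a) No SQF: Lindley's recursion (1) for the waiting time L_n at the Server.
    L = L_max = 0.0
    for n in range(1, len(arrivals)):
        L = max(L + service[n - 1] - (arrivals[n] - arrivals[n - 1]), 0.0)
        L_max = max(L_max, L)

    # (b) With SQF: forwarding instants (3), SQF delay (7), and Server waiting time (8).
    prev_fwd = arrivals[0]
    Q_max = W = W_max = 0.0
    for n in range(1, len(arrivals)):
        t_fwd = max(prev_fwd + d_ms, arrivals[n])
        Q_max = max(Q_max, t_fwd - arrivals[n])                # delay absorbed by the SQF
        W = max(W + service[n - 1] - (t_fwd - prev_fwd), 0.0)  # waiting in front of the AD
        W_max = max(W_max, W)
        prev_fwd = t_fwd

    print(f"D = {d_ms} ms: max Server wait without SQF = {L_max:.0f} ms, "
          f"with SQF = {W_max:.2f} ms, max delay inside SQF = {Q_max:.0f} ms")

for d in (2.7, 3.2):   # below and above the typical AD processing time, cf. Figures 9 and 10
    simulate(d)
```

With \(D\) above the largest processing time the Server waiting time stays at zero, in agreement with Result 1, while the burst is absorbed as delay inside the SQF, which is exactly the queue buildup discussed next.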
Figure 11 shows the sudden increase and then slow decrease of the SQF input queue when a UDP Flood attack lasts for \(60\) seconds, and the SQF uses \(D=3\ ms\). Since both the SQF (and the Server) do not drop packets, the attack packets will accumulate at the input queue of the SQF. Thus in this section we test a possible mitigating action that the SQF can take. Since Flood Attacks are characterized by an unusually high packet arrival rate, and this is also one of the attack detection parameters used by the AD used in this work [26], we now test an additional feature, as follows: 1. If the SQF receives more than \(N\) packets in a time interval that is less than or equal to \(D\), it drops all incoming packets for the next \(K.D\) time units. 2. Here \(N\) and \(K\) are parameters of the mitigating action. 3. The action is repeated as long as the condition 1) (above) on \(N\) persists. To illustrate the effect of this simple policy, in a first experiment we set \(N=10\) and \(K=3\) and implement the suggested drop-based mitigation policy. In this experiment, an RPi launches a \(10\) second Flood Attack, and the resulting queue length at the input of the SQF is shown in Figure 12, where we see that the SQF input buffer reaches a small value of \(12\) packets. The attack starts at the \(34\)-th second and lasts \(10\) seconds, but thanks to the mitigation policy there is no accumulation of packets. After the attack ends the SQF can continue to operate normally. Figure 13 displays the queue length of the SQF input buffer in a second experiment, when the attack lasts \(60\) seconds, showing similar results to the first experiment. Both measurements show the importance of having a simple mitigating action to deal with high volume Flood Attacks. However, although this policy appears attractive, it comes at the cost of dropping legitimate (non-attack) packets that come from non-compromised IoT devices. ## V Conclusions This paper has considered the effect of UDP Flood attacks on an IoT Server that processes incoming traffic from IoT devices via a local area network. The Server also incorporates an AD module. We first show that such attacks, even when they last just a few seconds, create overload for the IoT Server, so that its normal operations, including AD, are substantially slowed down. We see, in particular, that a \(60\) second attack may create a backlog of packets at the Server, that may require several hours to clear out. Thus we propose that the Server's input be "protected" by a special SQF front-end that operates the QDTP policy, in order to allow the timely operation of the Server even when Fig. 11: SQF queue length (\(y\)-axis in number of packets) against time (\(x\)-axis in seconds) when a UDP Flood attack lasts for \(60\) seconds. We have used \(D=3\ ms\), and no mitigation action takes place. an attack occurs. This approach requires that an inexpensive lightweight hardware addition, such as an RPi, be installed between the local area network that supports the IoT devices and the Server. Several experiments are used to illustrate the effectiveness of the proposed approach. However, the SQF with its QDTP policy requires that a key timing parameter \(D\) be chosen. Therefore, we provide a theoretical analysis of how \(D\) should be selected: we show that it must be just larger than the AD processing time of the Server under normal, i.e. non-attack, conditions. 
We then validate this observation with several experiments and show that the SQF can preserve the Server from congestion and overload, and allow it to operate normally. However, we note that the congestion that has been eliminated at the Server may now accumulate at the SQF input, although this in itself does not stop the RPi based SQF from continuing its normal operation. Furthermore, when the incoming traffic rate is such that it clearly indicates an attack, or when the Server informs the SQF that an attack is occuring, we can implement a mitigating action at the SQF to drop incoming packets in relatively short successive time intervals. This approach is tested experimentally and shown to be effective. However, the fact that such a policy may also drop incoming legitimate packets implies that there will be circumstances when it cannot be used and a close coupling between AD at the Server and packet drop actions at the SQF will be needed. While this paper has focused on an architecture with multiple sources of IoT traffic represented by several RPi, future work will consider Edge Systems having multiple IoT servers, as well as multiple IoT devices and packet sources, and will study the usage of dynamic policies for AD and traffic routing at the edge for complex IoT Server and SQF gateway architectures. Another important issue that should be addressed in future work is the energy consumption of such edge systems [49], so that dynamic management policies may be used to minimize energy consumption, as well as to optimize Quality of Service and Cybersecurity. ## Acknowledgements The support of the EU H2020 IoTAC Research and Innovation Action under GA No. 952684 is gratefully acknowledged.
2302.04843
Accurate and Interpretable Solution of the Inverse Rig for Realistic Blendshape Models with Quadratic Corrective Terms
We propose a new model-based algorithm solving the inverse rig problem in facial animation retargeting, exhibiting higher accuracy of the fit and sparser, more interpretable weight vector compared to SOTA. The proposed method targets a specific subdomain of human face animation - highly-realistic blendshape models used in the production of movies and video games. In this paper, we formulate an optimization problem that takes into account all the requirements of targeted models. Our objective goes beyond a linear blendshape model and employs the quadratic corrective terms necessary for correctly fitting fine details of the mesh. We show that the solution to the proposed problem yields highly accurate mesh reconstruction even when general-purpose solvers, like SQP, are used. The results obtained using SQP are highly accurate in the mesh space but do not exhibit favorable qualities in terms of weight sparsity and smoothness, and for this reason, we further propose a novel algorithm relying on a MM technique. The algorithm is specifically suited for solving the proposed objective, yielding a high-accuracy mesh fit while respecting the constraints and producing a sparse and smooth set of weights easy to manipulate and interpret by artists. Our algorithm is benchmarked with SOTA approaches, and shows an overall superiority of the results, yielding a smooth animation reconstruction with a relative improvement up to 45 percent in root mean squared mesh error while keeping the cardinality comparable with benchmark methods. This paper gives a comprehensive set of evaluation metrics that cover different aspects of the solution, including mesh accuracy, sparsity of the weights, and smoothness of the animation curves, as well as the appearance of the produced animation, which human experts evaluated.
Stevo Racković, Cláudia Soares, Dušan Jakovetić, Zoranka Desnica
2023-02-09T18:51:04Z
http://arxiv.org/abs/2302.04843v2
Accurate and Interpretable Solution of the Inverse Rig for Realistic Blendshape Models with Quadratic Corrective Terms ###### Abstract We propose a new model-based algorithm for solving the inverse rig problem in facial animation retargeting, exhibiting higher accuracy of the fit and sparser, more interpretable weight vector compared to state-of-the-art methods. The proposed method targets a specific subdomain of human face animation -- highly-realistic blendshape models used in the production of movies and video games. In this paper, we formulate an optimization problem that takes into account all the requirements of targeted models. Unlike the prior solutions, our objective goes beyond a linear blendshape model and employs the quadratic corrective terms necessary for correctly fitting fine details of the mesh. Further, we show that the solution to the proposed problem yields highly accurate mesh reconstruction even when general-purpose solvers, like sequential quadratic programming, are used. The results obtained using general-purpose solvers are highly accurate in the mesh space but do not exhibit favorable qualities in terms of weight sparsity and smoothness, and for this reason, we further propose a novel algorithm relying on a majorization-minimization technique. The algorithm is specifically suited for solving the proposed objective, yielding a high-accuracy mesh fit while respecting the constraints and producing a sparse and smooth set of weights that are easy to manipulate and interpret by animation artists. We show results using both proprietary and open-source animated characters of high quality and level of detail. Our algorithm is benchmarked with state-of-the-art approaches, and shows an overall superiority of the results, yielding a smooth animation reconstruction with a relative improvement up to \(45\%\) in root mean squared mesh error while keeping the cardinality comparable with benchmark methods. This paper gives a comprehensive set of evaluation metrics that cover different aspects of the solution, including mesh accuracy, sparsity of the weights, and smoothness of the animation curves, as well as the appearance of the produced animation, which human experts evaluated. Inverse Rig Quadratic blendshape model Majorization Minimization ## 1 Introduction Facial animation is a growing research topic in academia as well as in industry due to the central role of facial expressions in verbal and non-verbal communication, impacting mainly the arts (animation, video games) and other areas, like marketing (animated advertisement, video chatbots). Although there is a number of models developed to credibly deform a 3D character's face, including the underlying anatomical structure Choe et al. (2001); Sifakis et al. (2005); Wu et al. (2016); Zoss et al. (2018, 2019); Marco et al. (2020), physics-based deformations Hahn et al. (2013); Ichim et al. (2016, 2017), or morphable models Hu et al. (2005); Smet and Gool (2010); Thies et al. (2015); Mengjiao et al. (2020), the most popular approach is the blendshape model Pighin et al. (1998); Choe and Ko (2006); Joshi et al. (2006); Deng et al. (2006); Lewis and Anjyo (2010); Li et al. (2010); Lewis et al. (2014). Even though this approach might suffer from reduced expressivity, it provides intuitive controls and is easy to use and understand Ichim et al. (2017). 
Traditionally, the (delta) blendshape model is presented as a linear mapping \[f_{L}(\mathbf{w})=\mathbf{b}_{0}+\sum_{i=1}^{m}w_{i}\mathbf{b}_{i}=\mathbf{b} _{0}+\mathbf{B}\mathbf{w}, \tag{1}\] where \(\mathbf{b}_{0}\in\mathbb{R}^{3n}\) is a column vector representing the face mesh (with \(n\) vertices) in a resting position and \(\mathbf{b}_{1},...,\mathbf{b}_{m}\in\mathbb{R}^{3n}\) are delta blendshape vectors, i.e., these are topologically identical copies of \(\mathbf{b}_{0}\), but each of them is locally deformed, corresponding to an atomic facial expression. Blendshape vectors span the space of feasible expressions of the 3D character and can be invoked isolated or combined with other blendshapes to build more complex facial expressions. This is achieved by assigning corresponding activation weights \(w_{1},...,w_{m}\) to each blendshape. These weights are scalars, and are often (but not exclusively) restricted to the \([0,1]\) interval. To ease the notation, blendshape weights can be concatenated into a column weight vector \(\mathbf{w}=[w_{1},...,w_{m}]^{T}\), and blendshape vectors can be collected to build columns of a blendshape matrix \(\mathbf{B}=[\mathbf{b}_{1},...,\mathbf{b}_{m}]\), which gives a matrix expression in (1). (More details will be given in Sec. 3.1) Once the blendshape model is built, a character can be animated directly by setting the values of the weights for each animation frame. Since high-quality characters usually have over one hundred blendshapes, manually setting the weights for each frame involves intensive labor, and large production times. The animation can be automated if the reference motion is available in the form of 3D scans or a set of markers obtained via motion capture (MoCap). This process is called animation retargeting, and algorithms are developed to estimate the optimal set of activation weights for each frame so that the character mesh is deformed to closely resemble a reference mesh \(\mathbf{\tilde{b}}\in\mathbb{R}^{3n}\) at a given frame. Figure 1: The reference frame mesh and the estimates of the approach that solves the proposed objective function using our novel algorithm (_MM_), the approach that solves the proposed objective applying a general-purpose solver (_SQP_), and linear approaches proposed by Cetinaslan and Orvalho (2020a) (_Cet_ for the standard case, and _Cet-loc_ for localized approximation) and Seol et al. (2011) (Seol). The top row shows obtained meshes, while the bottom represents corresponding activations of the blendshape weights. Red tones in the meshes indicate a higher error of the fit, according to the color bar on the right. The average weight activation of each solution is indicated with a horizontal line. The average mesh error and cardinality (i.e., the number of non-zero weights) of the weight vector are given in a text box for each method — we aim for the lowest error while keeping the cardinality relatively low. _SQP_ yields the cleanest mesh fit of all, yet the cardinality of the weight vector is too high, hence it might compromise the stability of the solution. On the other side, linear methods _Cet, Cet-loc_ and _Seol_ give visible flaws in mesh reconstruction. _MM_ is the only method that provides both an accurate mesh fit and a stable solution. We call the problem of finding an optimal **w** the **inverse rig problem**. This solution vector needs to satisfy several properties in order to be considered good. 
In the first place, we measure the data fidelity, i.e., the resemblance between a reconstructed and a reference mesh. Another essential feature is the stability of the solution, so that if the few activated weights are slightly adjusted afterwards, the mesh behaves predictably and does not produce artifacts like discontinuities or anatomically impossible positions (Figure 2). This is important because the results of the automatic retargeting are often adjusted manually in production Cetinaslan and Orvalho (2020). For the same reason, it is preferable to have lower cardinality, i.e., the number of non-zero values of the weight vector. If the animation retargeting is performed over a continuous time sequence, there should not be visible discontinuities between the frames. Finally, if the range of weight values is restricted by the animation framework in use (e.g., \(0\leq w_{i}\leq 1\)), the solution needs to respect the constraints. Our main interest in this paper is a solution to the inverse rig problem for realistic human characters (Figure 3). We make a distinction between _data-based_ and _model-based_ approaches for solving the inverse rig -- the former assumes a rig function as a black box and demands long animation sequences that span a whole space of expected expressions in order to train a good regressor, while the latter only requires a well-defined rig function with corresponding basis vectors. The literature offers several model-based methods for solving the inverse rig under the linear blendshape model Choe and Ko (2006); Joshi et al. (2006); Lewis and Anjyo (2010); Seol et al. (2011); Liu et al. (2010); Cetinaslan and Orvalho (2020); however, due to the increasing complexity and level of realism of the avatars in the movie and gaming industry (but also for purposes of communication, education, virtual reality), linear models do not provide a high-enough level of detail. A possible approach is the application of data-based machine learning algorithms, yet this is an expensive alternative as it demands a large amount of data to provide a good fit. In this paper, we propose a model-based algorithm that solves the inverse rig problem for realistic human face blendshape models used in the industry, taking into account the quadratic corrective terms of a blendshape model and the constraints over the weights vector. Our algorithm is benchmarked with state-of-the-art methods Seol et al. (2011); Cetinaslan and Orvalho (2020), and it exhibits a significant advantage in data fidelity of the reconstructed meshes (a relative improvement up to \(45\%\) in root mean squared mesh error, computed as explained in Section 5.2) while at the same time cardinality (i.e., the number of non-zero elements) of the weight vector is comparable to previous solutions, and the frame-to-frame transitions are smooth. ### Contributions The contributions of this paper are two-fold. * We address the inverse rig problem with quadratic corrective terms Seo et al. (2011); Lewis et al. (2014), and show evidence of decreasing mesh error when compared with the commonly used linear blendshape model (1). To the best of our knowledge, this paper presents the first model-based solution to the inverse rig problem under the quadratic blendshape function. We show experimentally that the new objective can be optimized using standard solvers with significantly reduced error in the mesh space compared to the state-of-the-art methods. Specifically, we use an interior-point-based solver within Python library scipy Virtanen et al. 
(2020), that is based on the sequential Quadratic Programming (SQP) applied iteratively with Trust-Region method Byrd et al. (1999); Conn et al. (2000). However, using such a general-purpose solver empirically shows an increased number of activated blendshapes and frame-to-frame discontinuities. This leads us to our second contribution. * This paper formulates an algorithm that is specifically suited for solving the inverse rig problem for a quadratic blendshape model. (Details of the quadratic blendshape model are given in Section 3.2.) In particular, we apply the majorization-minimization technique Sun et al. (2016); Marnissi et al. (2020) and devise a surrogate Figure 2: Examples of dense activation vectors. All the activation weights of this character model are set to a random value within a feasible interval \([0,1]\), and we can see that there are many anatomically incorrect deformations as well as the breaking of the mesh. function that allows for an efficient iterative solution to the proposed optimization problem. The obtained solution has a low mesh error, while simultaneously decreasing the number of activated blendshape weights (cardinality). Frame-to-frame transitions are shown to be smooth, and a user study with animation artists has evidenced the higher quality of our approach. This paper presents the solution from a domain point of view, including an explanation of how the method works, a comprehensive set of experiments on proprietary and open-source state-of-the-art animation characters, a detailed modeling rationale and motivation for introducing and modeling quadratic blendshape models, the intuition behind the method and implementation details. In a companion paper not for review [2022], we present the detailed method derivation from the optimization theory perspective and convergence analysis of the algorithm. The rest of this paper is organized as follows. In Section 2, we cover the prior work and relate our method to the corresponding directions of research. In Section 3, we cover the main concepts of the blendshape model for facial animation and the problem of rig inversion. Section 4 introduces the proposed method for solving the inverse rig, and Section 5 gives an extensive numerical evaluation of the proposed method. Finally, Section 6 concludes the paper. ### Notation Throughout this paper, scalar values will be denoted with lowercase Latin \(a,b,c\), or lowercase Greek \(\alpha,\beta,\gamma\) letters. Vectors are represented with bold lowercase **a,b,c** and are indexed using a subscript, i.e., the \(i^{th}\) element of the vector **a** is \(a_{i}\). If there is a subscript and the letter is still in bold, it is not indexing -- we will use this to differentiate blendshape vectors (\(\textbf{h}_{0},\textbf{b}_{1},...,\textbf{b}_{m}\)) as they have similar properties, or to indicate that a vector a takes specific value at iteration \(i\) of an iterative algorithm, which is denoted by a subscript within the brackets \(\textbf{a}_{(i)}\). We use **0** and **1** to denote vectors of all zeros and all ones, respectively. When we use order relations (\(\geq,\leq,=\)) between two vectors, it is assumed component-wise. All the vectors are assumed to be column vectors, and \([a_{1},...,a_{n}]^{T}\) represents a column vector obtained by stacking \(n\) scalars. 
Matrices are written in bold capital letters **A, B, C** and also indexed using subscripts -- \(\textbf{A}_{i}\) is the \(i^{th}\) row of a matrix **A**, and \(A_{ij}\) is an element of a matrix **A** in a row \(i\) and a column \(j\). If a superscript is given within the brackets \(\textbf{A}^{(i)}\) it denotes a specific matrix corresponding to the (vertex) position \(i\). A notation \(\textbf{A}=[\textbf{a}_{1},...,\textbf{a}_{n}]\) means that a matrix **A** is obtained by stacking vectors \(\textbf{a}_{1},...,\textbf{a}_{n}\) along the columns. Functions are given using lowercase Latin or Greek characters, but always with corresponding parameters in the brackets \(f(\cdot),g(\cdot),\phi(\cdot),\psi(\cdot)\). A set of real numbers and a set of positive integers are given by \(\mathbb{R}\) and \(\mathbb{N}\), respectively. The Euclidean norm is denoted by \(\|\cdot\|\), and the L1 norm by \(\|\cdot\|_{1}\). ## 2 Related Work Blendshape animation has been a research topic for more than two decades Pighin et al. [1998], Choe and Ko [2006], Choe et al. [2001]. The main tasks in terms of intensive manual labor and extension of production time are (1) the creation of the blendshapes, and (2) the actual animation of the blendshape basis (our work). The main components of a blendshape model are a neutral face mesh and a blendshape basis (local deformations of a neutral face), and there is a body of work proposing automated solutions for creating blendshapes. Two main approaches are (1) building a basis from a dense set of captured data Smet and Gool [2010], Neumann et al. [2013], Chaudhuri et al. [2020], Mengjiao et al. [2020] or (2) deforming a generic set of blendshapes to produce personalized blendshape meshes Li et al. [2010, 2013], Ribera et al. [2017], Zhang et al. [2020], Han et al. [2021], Seol et al. [2016]. In this paper, we assume that the blendshapes are already available and that they closely resemble the actor or user. The main focus of our paper is to solve the inverse rig problem to produce animation, i.e., to automatically adjust the activation weights so that the resulting mesh follows a reference motion. Reference motion is a (sparse or dense) set of markers recorded from an actor's face using motion capture (MoCap) systems Deng et al. [2006], Seol and Lewis [2014]. A sparse set of markers is a common approach, particularly if the motion should be retargeted to a fantasy character with a face significantly different from the source actor Sifakis et al. [2005], Kei and Tomoyuki [2012], Seol et al. [2012], Ouzounis et al. [2017], Serra et al. [2018], and it demands special care in positioning the markers on both source and target faces Reverdy et al. [2015]. Although this technique is sufficient for general-purpose MoCap, it fails to capture fine details of the face Furukawa and Ponce [2009], Reverdy et al. [2015]. For this reason, markerless methods are developed to provide high-fidelity performance capture Bradley et al. [2010], Beeler et al. [2011], Thies et al. [2015]. The approaches to solving the inverse rig problem can be divided into data-based (regression models that demand long animated sequences for the training phase) and model-based (that do not demand animation for training, only the rig function with the basis vectors). Data-based solutions are popular due to their ability to provide accurate solutions even for complex rig functions, and commonly apply neural networks Holden et al. [2016], Bailey et al. [2020], Song et al. (2020); Seonghyeon et al. 
(2021); radial basis functions Deng et al. (2006); Song et al. (2011); Seol and Lewis (2014); Holden et al. (2015) or other forms of regression Feng et al. (2008); Yu and Liu (2014); Bouaziz et al. (2013). However, the data acquisition may be too expensive, which is why we consider a model-based approach in this paper. Within model-based approaches, the literature examines only simplified linear blendshape models to fit the acquired mesh, yielding convex optimization problems easy to solve with a closed form Choe and Ko (2006); Sifakis et al. (2005); Cetinaslan (2016); Li et al. (2010); Seol et al. (2011). In contrast, we specifically target realistic facial animation with a high level of detail, and for this reason, we need to go beyond the linear model and include quadratic corrective terms, as studied in Holden et al. (2016); Song et al. (2017); Kim and Singh (2021). In this paper, we are not concerned with real-time execution but aim for a more precise mesh fit, hence we assume models with a large number of vertices (we will consider 4000 vertices for each animated character in our experiments). A different line of work addressing the problem of rig inversion is a direct manipulation of the mesh. It aims for the algorithms and tools that are appended to a sculpted character, allowing an artist to refine the pose by dragging specific vertices of the face directly and producing the desired expression Zhang et al. (2004); Lewis and Anjyo (2010); Seo et al. (2011); Cetinaslan and Orvalho (2020a,b). In this case, it is important that the animator receives feedback during the dragging, hence the optimization takes into account only a sparse set of markers or even a single vertex Lewis and Anjyo (2010); Anjyo et al. (2012). Despite our formulation being parallelizable, we do not focus on distributed models here, and all the experiments in this paper are performed sequentially over a non-segmented face mesh. Nevertheless, our method is parallelizable, yielding a problem that is separable by components, hence the computations might be distributed to the level of blendshapes. Another approach toward distributed inverse rig solvers is via using face segmentation or clustering. It allows different face regions to be observed and processed independently or in parallel. Early works consider a simple split of the face into upper and lower sets of markers Choe and Ko (2006). More recent papers model complex splits, either manually Seol et al. (2011); Liu et al. (2010); semi-automatically Na and Jung (2011); Tena et al. (2011); Marco et al. (2020) or automatically James and Twigg (2005); Joshi et al. (2006); Kei and Tomoyuki (2012); Reverdy et al. (2015); Song et al. Figure 3: A schematic representation of the considered problem. Some reference mesh \(\widehat{\mathbf{b}}\) is taken as an input, and the algorithm needs to estimate the activation weights of the quadratic blendshape model of the character in order to closely reconstruct the reference mesh and respect the structure of the model. The optimization problem might be solved using general-purpose quadratic solvers, like Byrd et al. (1999), or using the algorithm proposed in this paper. Further, the estimated weights are plugged-in into the animation software to give the final reconstructed expression. (2017); Bailey et al. (2020). Clustering based on the underlying deformation model has been considered in Romeo and Schwartzman (2020) and Rackovic et al. 
(2021), where the goal of the former was to add a secondary motion to an animated character, and the latter proposes a segmentation for solving the inverse rig locally in a distributed fashion. All the methods available in the literature solve the inverse rig problem using a linear blendshape function. It turns out it is easy and fast to work with. However, it is of significant interest to work with more complex face models that closely resemble a source actor -- a linear model does not exhibit high-enough accuracy for this purpose. In this paper, we introduce a method that allows the application of a more complex blendshape function, that includes quadratic corrective terms, to produce an accurate and sparse solution to the inverse rig problem. Further details on the principles of blendshape animation can be found in references Lewis et al. (2014) and Cetinaslan (2016). ## 3 Background on Rig Approximation and Inverse Rig In this section, we give a concise presentation on the main principles of blendshape animation. Section 3.1 introduces the linear delta blendshape model. Section 3.2 introduces quadratic corrective terms that are added on top of a linear model in order to increase the mesh fidelity of realistic human characters. Finally, Section 3.3 explains how inverse rig problems have been formulated and solved according to existing literature. ### Linear Blendshape Model Traditionally, a _blendshape model_ consists of a neutral face, that is represented by a column vector \(\mathbf{b}_{0}\in\mathbb{R}^{3n}\), and a set of \(m\) blendshape vectors \(\mathbf{b}_{1},...,\mathbf{b}_{m}\in\mathbb{R}^{3n}\) that correspond to atomic expressions obtained by local deformations over \(\mathbf{b}_{0}\) (Figure 4). Each blendshape \(\mathbf{b}_{i}\) is assigned an activation parameter \(w_{i}\) that usually (but not exclusively) takes values between \(0\) and \(1\)Seol et al. (2012). It is common to use a delta formulation of the blendshape model, where the elements of the blendshape vectors \(\mathbf{b}_{1},...,\mathbf{b}_{m}\) are not the actual coordinates of the deformed face, but the offsets from their corresponding positions in the neutral \(\mathbf{b}_{0}\). (For this reason, some authors use the notation \(\Delta\mathbf{b}_{i}\) instead of \(\mathbf{b}_{i}\); however, we will not use the \(\Delta\) symbol in order to ease the notation.) A _linear delta blendshape function_\(f_{L}(\cdot):\mathbb{R}^{m}\rightarrow\mathbb{R}^{3n}\) maps activation weights \(w_{1},...,w_{m}\) onto the mesh space, and it is defined as \[f_{L}(w_{1},...,w_{m})=\mathbf{b}_{0}+\sum_{i=1}^{m}w_{i}\mathbf{b}_{i}. \tag{2}\] If we collect blendshape vectors into a matrix \(\mathbf{B}=[\mathbf{b}_{1},...,\mathbf{b}_{m}]\), \(\mathbf{B}\in\mathbb{R}^{3n\times m}\), the function can be written in a matrix form as \[f_{L}(\mathbf{w})=\mathbf{b}_{0}+\mathbf{B}\mathbf{w}, \tag{3}\] where \(\mathbf{w}=[w_{1},...,w_{m}]^{T}\) represents a column vector of blendshape weights. Figure 4: Vectorization of meshes. Neutral mesh \(\mathbf{b}_{0}\) on top, and example blendshapes below. Each face vertex \(\mathbf{v}_{i}\) for \(i=1,...,n\) is unraveled into a vector of coordinates \(x,y,z\), and those coordinate vectors are stacked into a single blendshape vector. ### Quadratic Blendshape Model In modern animation, with an increasing level of detail and with avatars that closely resemble an actor (or a user), linear models are too restrictive and fail to span a desired space of motion. 
For this reason, additional _corrective shapes_ (also known as combination shapes) are included Seo et al. (2011); Lewis et al. (2014), and these are usually more numerous than the base vectors. In particular, the quadratic corrective terms are very common, and adding them on top of a linear function (3) significantly improves the accuracy of the representation; hence, we introduce a quadratic blendshape model in the following lines. A pair of blendshapes \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) that deform the same local area can produce mesh artifacts when activated simultaneously, so the additional corrective term \(\mathbf{b}^{\{i,j\}}\in\mathbb{R}^{3n}\) is included to adjust the resulting deformation. It is constructed as \(\mathbf{b}^{\{i,j\}}=\widehat{\mathbf{b}}^{\{i,j\}}-(\mathbf{b}_{0}+\mathbf{ b}_{i}+\mathbf{b}_{j})\), where \(\widehat{\mathbf{b}}^{\{i,j\}}\) represents a desired result of joint activation of deformers \(i\) and \(j\) (an artist sculpts it manually). Now, whenever the blendshapes \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) are activated simultaneously, the corrective blendshape \(\mathbf{b}^{\{i,j\}}\) is activated as well, so that the corrective contribution due to simultaneous activation of \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\) equals \(w_{i}w_{j}\mathbf{b}^{\{i,j\}}\). A _quadratic blendshape function_\(f_{Q}(\cdot):\mathbb{R}^{m}\rightarrow\mathbb{R}^{3n}\) can now be defined as \[f_{Q}(\mathbf{w})=\mathbf{b}_{0}+\mathbf{B}\mathbf{w}+\sum_{(i,j)\in\mathcal{ P}}w_{i}w_{j}\mathbf{b}^{\{i,j\}}, \tag{4}\] where \(\mathcal{P}\) represents a set of tuples \((i,j)\) such that there is a quadratic corrective term between corresponding blendshapes \(\mathbf{b}_{i}\) and \(\mathbf{b}_{j}\). In practice, it is common to use additional levels of corrections, yielding even more complex blendshape models -- all the animated characters that we use for the experiments in this paper have at least one more level of corrections. However, the quadratic terms (first level of correction) already give a considerably more accurate approximation of the rig compared to a simple linear model -- in Figure 5 we compare the error of reconstruction of the ground truth animation frames, under linear and quadratic blendshape model approximations, for the animated character _Omar_ (for a complete description of datasets see Section 5). Regardless of their wide use in practice, the corrective terms of the blendshape model are largely overlooked in the literature. ### Inverse Rig Problem In this section, we describe the concept that is central to this paper -- the inverse rig problem for facial animation. Further, we cover state-of-the-art solutions before introducing our novel approach in the next section. The _inverse rig problem_ in automatic motion retargeting is the problem of finding optimal activation parameters \(\mathbf{w}\) (blendshape weights) to produce a target mesh \(\widehat{\mathbf{b}}\), which is usually given as a 3D scan of an actor or a set of MoCap markers. It is common to pose the problem in the least squares framework: \[\operatorname*{minimize}_{\mathbf{w}}\|f(\mathbf{w})-\widehat{\mathbf{b}}\|^ {2}, \tag{5}\] where \(f(\mathbf{w}):\mathbb{R}^{m}\rightarrow\mathbb{R}^{3n}\) is a rig function, \(\widehat{\mathbf{b}}\in\mathbb{R}^{3n}\) is a target mesh, and additional constraints and regularization terms might be included. 
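As an illustration of how the rig function appearing in (5) is evaluated for the quadratic model, the following NumPy sketch implements (4) directly; the dimensions and random data are toy placeholders, whereas in practice \(\mathbf{b}_{0}\), \(\mathbf{B}\) and the corrective terms would be exported from the character rig.

```python
import numpy as np

def f_quadratic(w, b0, B, correctives):
    """Quadratic blendshape function (4).

    w           : (m,) blendshape activation weights
    b0          : (3n,) neutral face mesh
    B           : (3n, m) delta blendshape matrix
    correctives : dict mapping a pair (i, j) in P to the corrective term b^{i,j} of shape (3n,)
    """
    mesh = b0 + B @ w                       # linear part, eq. (3)
    for (i, j), b_ij in correctives.items():
        mesh += w[i] * w[j] * b_ij          # quadratic corrective contribution
    return mesh

# Toy example with assumed sizes (a production character has n ~ 4000 vertices and 60-150 blendshapes).
rng = np.random.default_rng(0)
n, m = 10, 5
b0, B = rng.normal(size=3 * n), rng.normal(size=(3 * n, m))
correctives = {(0, 1): rng.normal(size=3 * n), (2, 3): rng.normal(size=3 * n)}
w = rng.uniform(0.0, 1.0, size=m)
print(f_quadratic(w, b0, B, correctives).shape)   # (30,)
```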
Regularization terms are added to produce a more stable solution, but might also help to make vector \(\mathbf{w}\) sparser -- this is desirable because animators usually need to alter the solutions by hand, which gets much harder if a large number of blendshapes are already activated Seol et al. (2011). In Joshi et al. (2006), the objective Figure 5: RMSE between ground truth meshes and meshes obtained using different approximations, for the animated character _Omar_. Besides the linear and quadratic blendshape model, an additional _zero_ approximation is included to give a better idea of the error scale — it corresponds to a difference between the original mesh and a neutral expression. function takes exactly the form of (5); in Choe and Ko (2006); Liu et al. (2010) the constraints are invoked to keep the weights in the \([0,1]\) interval; Lewis and Anjyo (2010); Cetinaslan (2016) add a regularization term \(\|\mathbf{w}\|^{2}\), while Ribera et al. (2017) uses a \(\|\mathbf{w}\|_{1}\) regularization, which is known to enhance sparsity of the solution. Importantly, all the mentioned papers assume the blendshape model is linear. We take the formulation given in Cetinaslan and Oravlho (2020), as a state-of-the-art approach using a linear blendshape model: \[\operatorname*{minimize}_{\mathbf{w}}\|\mathbf{B}\mathbf{w}-\widehat{\mathbf{ b}}\|^{2}+\alpha\|\mathbf{w}\|^{2}, \tag{6}\] where \(\alpha\geq 0\). The weight constraints are not explicitly included in the optimization, but the values of the resulting vector that are outside of the feasible set are clipped afterward in order to satisfy the model constraints. Note that a neutral face \(\mathbf{b}_{0}\) is omitted in the above formulation, hence the target \(\widehat{\mathbf{b}}\) is also taken as an offset from the neutral face and not an actual mesh. One adjustment to the above approach, given in the same paper, is using a sparse approximation \(\mathbf{B}^{loc}\) of a matrix \(\mathbf{B}\), instead of an actual blendshape matrix. This excludes irrelevant blendshape effects in the local regions and leads to a sparser solution and lower computational cost; however, it might affect the accuracy of the reconstructed mesh. A different approach is given by Seol et al. (2011), where the problem is solved sequentially, for a single blendshape at a time (_step 1_ below), and the residual mesh \(\widehat{\mathbf{b}}\) is updated after each iteration (_step 2_), before proceeding for the next controller: \[\text{step 1:}\quad w_{i}=\operatorname*{argmin}_{w}\|\mathbf{b}_{i }w-\widehat{\mathbf{b}}\|^{2} \tag{7}\] \[\text{step 2:}\quad\widehat{\mathbf{b}}\leftarrow\widehat{ \mathbf{b}}-\mathbf{b}_{i}w_{i}.\] The order in which weights are optimized is crucial here. The authors suggest sorting them according to the average magnitude of deformation each blendshape produces over a whole face. This method yields a sparse solution and avoids simultaneous activation of mutually exclusive blendshapes Seol et al. (2011); Lewis et al. (2014). To the best of our knowledge, no prior work considered adding the non-linear terms in the objective when solving the inverse rig problem. While working with linear approximation yields a simple and fast solution, it lacks accuracy and fails to closely resemble a source actor. In the next section, we introduce a method that allows the application of the quadratic blendshape function (4) to produce an accurate and sparse solution for the inverse rig problem. 
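For completeness, both benchmark formulations above reduce to a few lines of NumPy under the linear model. The sketch below shows the closed-form solution of (6) followed by the posterior clipping to \([0,1]\) described above, and the sequential per-blendshape fit (7); the descending-norm ordering is our reading of the deformation-magnitude heuristic of Seol et al. (2011), and the clipping of the sequential estimates is an assumption made only for this example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, alpha = 10, 5, 0.1
B = rng.normal(size=(3 * n, m))      # delta blendshape matrix
b_hat = rng.normal(size=3 * n)       # target mesh, expressed as an offset from the neutral face

# Eq. (6): regularized least squares in closed form, with weights clipped to [0, 1] afterwards.
w_cet = np.linalg.solve(B.T @ B + alpha * np.eye(m), B.T @ b_hat)
w_cet = np.clip(w_cet, 0.0, 1.0)

# Eq. (7): one blendshape at a time, updating the residual target after each step.
w_seol = np.zeros(m)
residual = b_hat.copy()
order = sorted(range(m), key=lambda i: -np.linalg.norm(B[:, i]))   # assumed ordering heuristic
for i in order:
    w_i = float(B[:, i] @ residual) / float(B[:, i] @ B[:, i])     # argmin_w ||b_i w - residual||^2
    w_seol[i] = np.clip(w_i, 0.0, 1.0)                             # clipping assumed here
    residual -= w_seol[i] * B[:, i]

print(w_cet.round(3), w_seol.round(3))
```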
## 4 Proposed Method

A linear blendshape function is, in general, convenient for solving the inverse rig problem, especially for cartoonish characters whose face proportions differ significantly from a source actor. However, we propose a more suitable model for a specific subdomain of facial animation -- character face models that are sculpted to closely resemble a source actor, which demands higher accuracy compared to what a linear function offers. We consider the quadratic blendshape function (4). Additionally, we assume that activation weights \(w_{1},...,w_{m}\) must stay within the \([0,1]\) interval.

Figure 6: Graphical representation of the construction of the proposed upper bound function \(\psi(\cdot)\). To simplify the representation, in this figure we use the notation \(f_{Q}\) instead of \(f_{Q}(\mathbf{w}+\mathbf{v})\).

We will first formulate the objective function that should be minimized in order to provide the desired solution and show how it can be solved using standard general-purpose solvers. However, we will show experimentally that such a solution is not entirely satisfactory; hence we further introduce an algorithm that we developed for solving this specific optimization problem, which yields a solution with an optimal trade-off between several metrics important for the domain application.

### Objective Function

Our objective will encode the two most desired properties of the solution to the inverse rig problem: (1) the difference between a reference and a reconstructed mesh should be as low as possible, hence we want to incorporate the quadratic corrective terms instead of using only a linear approximation of the blendshape model; (2) the solution vector should exhibit low cardinality, i.e., a low number of non-zero elements, since that helps both for the posterior manual modification and for ensuring the stability of the solution; and finally, the weights must stay in the feasible interval \([0,1]\). The proposed objective takes into account all the above assumptions: \[\operatorname*{minimize}_{\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}}\|f_{Q}(\mathbf{w})-\widehat{\mathbf{b}}\|^{2}+\alpha\mathbf{1}^{T}\mathbf{w}. \tag{8}\] The regularization term in (8), with \(\alpha\geq 0\), is invoked to encourage a sparse solution -- note that, as \(\mathbf{w}\geq\mathbf{0}\) in the feasible set, \(\mathbf{1}^{T}\mathbf{w}\) equals the \(L1\) norm of \(\mathbf{w}\), which is known to be a sparsity-enhancing regularizer Schmidt (2005). We show that quadratic terms increase quality by using standard solvers and increase performance with a dedicated algorithm.

Compared to the approaches described in Section 3.3, this formulation is a hard-to-solve nonconvex problem and, in particular, does not allow for a closed-form solution. However, it can be readily solved using some standard solvers. Specifically, in this paper, we show the results of applying the method from Byrd et al. (1999), which is a barrier (interior point) method in which the subproblems are solved approximately by a sequential quadratic programming (SQP) iteration with trust regions Conn et al. (2000), Wright et al. (1999). We decided to use Byrd et al. (1999) as it corresponds to a common Python library implementation of a general-purpose solver that is suitable for our problem -- it handles constrained, nonconvex problems -- while utilizing SQP and interior point ideas2. This approach yields a high-accuracy fit of the solution, thus demonstrating the advantage of using quadratic terms.
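A minimal sketch of this general-purpose route is given below: objective (8) is evaluated with the quadratic rig (4) and handed to the scipy trust-constr routine pointed to in Footnote 2 below, with simple bound constraints keeping \(\mathbf{w}\) in \([0,1]\). The toy data, the regularization value, and the use of finite-difference gradients with a BFGS Hessian approximation are assumptions made only for the illustration.

```python
import numpy as np
from scipy.optimize import minimize, Bounds, BFGS

rng = np.random.default_rng(2)
n, m, alpha = 10, 5, 0.05
b0 = rng.normal(size=3 * n)
B = rng.normal(size=(3 * n, m))
correctives = {(0, 1): 0.1 * rng.normal(size=3 * n), (2, 3): 0.1 * rng.normal(size=3 * n)}
b_target = b0 + B @ rng.uniform(0.0, 1.0, size=m)        # synthetic reference mesh

def f_q(w):
    # Quadratic blendshape function (4).
    mesh = b0 + B @ w
    for (i, j), b_ij in correctives.items():
        mesh += w[i] * w[j] * b_ij
    return mesh

def objective(w):
    # Data fidelity under the quadratic rig plus the sparsity-inducing linear term of (8).
    return np.sum((f_q(w) - b_target) ** 2) + alpha * np.sum(w)

res = minimize(objective, x0=np.full(m, 0.5), method="trust-constr",
               jac="2-point", hess=BFGS(), bounds=Bounds(np.zeros(m), np.ones(m)))
print(res.x.round(3), float(res.fun))
```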
Footnote 2: docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html docs.scipy.org/doc/scipy/reference/optimize.minimize-trustconstr.html However, this is a general-purpose algorithm, and as expected, the solution obtained by it exhibits other issues. We will show in our experiments that, even though the data fidelity is exceptionally high, the resulting weight vector activates all the elements, yielding an undesirable solution. On top of that, frame-to-frame transitions are far from being smooth. To address all these issues, we propose a novel algorithm for solving the problem (8). ### Proposed Algorithm The objective in (8) is hard to minimize, hence we resort to the _majorization-minimization_Schifano et al. (2010), Sun et al. (2016) technique to obtain a good-enough approximate solution that clearly improves on the linear model. The idea is to approximate the objective with an iteration over functions \(\psi(\cdot)\), which are easier to solve. In the following lines, we will explain how the appropriate function is constructed (Figure 6). Further, we will see that it is separable by coordinates, hence we estimate each blendshape weight independently, solving the set of simple problems. This approach is iterative and suitable for parallelization since the weights are fitted independently. Further, as we will confirm in the numerical experiments, the obtained solution is accurate while keeping the low cardinality (the number of non-zero weights) and smooth transitions over the frames. We start by rewriting the objective (8) in a _Levenberg-Marquardt_ fashion Ananth (2004), i.e., we assume some initial weight vector \(\mathbf{w}\) is available, and we are looking for an increment vector \(\mathbf{v}\) that would lead to a better solution \(\mathbf{w}\leftarrow\mathbf{w}+\mathbf{v}\). Optimization problem is then reformulated as \[\operatorname*{minimize}_{-\mathbf{w}\leq\mathbf{v}\leq\mathbf{1} \cdot\mathbf{w}}\|f_{Q}(\mathbf{w}+\mathbf{v})-\widehat{\mathbf{b}}\|^{2}+ \alpha\mathbf{I}^{T}\mathbf{v}. \tag{9}\] A quadratic term in function \(f_{Q}\) makes this problem too complex to solve exactly, so we apply a majorization-minimization paradigm that solves an approximate problem. We introduce an upper bound function \(\psi(\mathbf{v};\mathbf{w}):\mathbb{R}^{m}\rightarrow\mathbb{R}\) over the above objective (9), such that \(\|f_{Q}(\mathbf{w}+\mathbf{v})-\widehat{\mathbf{b}}\|^{2}+\alpha\mathbf{I}^{T} \mathbf{v}\leq\psi(\mathbf{v};\mathbf{w})\) holds for \(\mathbf{0}\leq\mathbf{w}+\mathbf{v}\leq\mathbf{1}\). This bound is easier to minimize than the original objective, and we continue with the problem in the form \[\operatorname*{minimize}_{-\mathbf{w}\leq\mathbf{v}\leq\mathbf{1} \cdot\mathbf{w}}\psi(\mathbf{v};\mathbf{w}). \tag{10}\] In other words, for a current solution estimate \(\mathbf{w}\), we search for increment \(\mathbf{v}\) to construct the new solution estimate \(\mathbf{w+v}\) by minimizing a surrogate function \(\psi(\mathbf{v};\mathbf{w})\) in (10) instead of the original cost function in (9). The surrogate function (a global upper bound on the cost function in (9)) is carefully constructed such that it represents a good, \(\mathbf{w}\)-dependent approximation of the cost in (9) around the current point \(\mathbf{w}\), and such that (10) is easy to solve. 
Further, it needs to respect the following two conditions: * for any feasible vector \(\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}\), and any increment vector \(\mathbf{v}\) such that \(\mathbf{0}\leq\mathbf{w}+\mathbf{v}\leq\mathbf{1}\), the original objective is bounded by the surrogate from above, i.e., \[\|f_{Q}(\mathbf{w+v})-\widehat{\mathbf{b}}\|^{2}+\alpha\mathbf{1}^{T}\mathbf{ v}\leq\psi(\mathbf{v};\mathbf{w});\] (11) * at point \(\mathbf{v}=\mathbf{0}\), the values of the objective and of the majorizer function are equal, that is, \[\|f_{Q}(\mathbf{w})-\widehat{\mathbf{b}}\|^{2}=\psi(\mathbf{0};\mathbf{w}).\] (12) In the rest of this paper we will write \(\psi(\mathbf{v})\) instead of \(\psi(\mathbf{v};\mathbf{w})\) for the sake of simplicity. We will consider a surrogate function such that the regularization term in (9) is kept the same, and we only approximate the data fidelity term. Note that the data fidelity term is a sum of squares over the mesh coordinates. Hence, we will look for the bound that can also be written as a sum, and bound the function at each of the coordinates: \[\psi(\mathbf{v})=\sum_{i=1}^{3n}\psi_{i}(\mathbf{v})+\alpha\mathbf{1}^{T} \mathbf{v}, \tag{13}\] where \(\psi_{i}(\cdot):\mathbb{R}^{m}\rightarrow\mathbb{R}\) is constructed to upper-bound the data fidelity term in the mesh coordinate \(i\), i.e., \(([f_{Q}(\mathbf{w+v})]_{i}-\widehat{b}_{i})^{2}\leq\psi_{i}(\mathbf{v})\). The notation \([f_{Q}(\mathbf{w+v})]_{i}\) indicates that we are observing the \(i^{th}\) coordinate of the resulting mesh. We want to rewrite this in a canonical quadratic matrix form, hence we introduce a symmetric matrix \(\mathbf{D}^{(i)}\in\mathbb{R}^{m\times m}\), such that its elements are coordinates of the corrective terms for the corresponding blendshape pairs, \(D^{(i)}_{jk}=\frac{1}{2}b^{\{j,k\}}_{i}\). Now we have \([f_{Q}(\mathbf{w})]_{i}=\mathbf{B}^{T}_{i}\mathbf{w}+\mathbf{w}^{T}\mathbf{D} ^{(i)}\mathbf{w}\), so the coordinate-wise surrogate function should satisfy \[\left(\mathbf{B}^{T}_{i}(\mathbf{w}+\mathbf{v})+(\mathbf{w}+\mathbf{v})^{T} \mathbf{D}^{(i)}(\mathbf{w}+\mathbf{v})-\widehat{b}_{i}\right)^{2}\leq\psi_{ i}(\mathbf{v}). \tag{14}\] Since we are minimizing with respect to the increment vector \(\mathbf{v}\), the vector \(\mathbf{w}\) is a constant; hence the left-hand side can be simplified to \[\left(g_{i}+\mathbf{h}^{T}_{i}\mathbf{v}+\mathbf{v}^{T}\mathbf{D}^{(i)} \mathbf{v}\right)^{2}, \tag{15}\] where \(g_{i}=\mathbf{B}^{T}_{i}\mathbf{w}+\mathbf{w}^{T}\mathbf{D}^{(i)}\mathbf{w}- \widehat{b}_{i}\), and \(\mathbf{h}_{i}=\mathbf{B}_{i}+\mathbf{w}^{T}\mathbf{D}^{(i)}\) are the terms that do not depend on \(\mathbf{v}\). The upper bound function is then derived by bounding the non-linear terms of the above expression separately (a linear term is kept the same). 
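Before stating the bound itself, the small sketch below shows how the per-coordinate quantities can be assembled in practice: for one mesh coordinate \(i\) it builds the symmetric matrix \(\mathbf{D}^{(i)}\) from the corrective terms and forms \(g_{i}\) and \(\mathbf{h}_{i}\), checking that \(\mathbf{B}_{i}^{T}\mathbf{w}+\mathbf{w}^{T}\mathbf{D}^{(i)}\mathbf{w}\) indeed reproduces the \(i\)-th coordinate of the quadratic rig offset; the sizes and data are again toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 5
B = rng.normal(size=(3 * n, m))
correctives = {(0, 1): rng.normal(size=3 * n), (2, 3): rng.normal(size=3 * n)}
w = rng.uniform(0.0, 1.0, size=m)
b_hat = rng.normal(size=3 * n)        # target offsets from the neutral face

i = 7                                  # an arbitrary mesh coordinate
D_i = np.zeros((m, m))
for (j, k), b_jk in correctives.items():
    D_i[j, k] = D_i[k, j] = 0.5 * b_jk[i]      # D^(i)_{jk} = b^{j,k}_i / 2, kept symmetric

g_i = B[i] @ w + w @ D_i @ w - b_hat[i]        # residual of coordinate i at the current w
h_i = B[i] + w @ D_i                           # row vector B_i + w^T D^(i)

# Consistency check: the quadratic rig offset at coordinate i equals B_i^T w + w^T D^(i) w.
direct = B[i] @ w + sum(w[j] * w[k] * b_jk[i] for (j, k), b_jk in correctives.items())
assert np.isclose(B[i] @ w + w @ D_i @ w, direct)
print(round(float(g_i), 4), h_i.round(3))
```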
A coordinate-wise bound that satisfies the above conditions is \[\psi_{i}(\mathbf{v})=g_{i}^{2}+2g_{i}\mathbf{h}^{T}_{i}\mathbf{v}+2g_{i}\lambda_{M}(\mathbf{D}^{(i)},g_{i})\|\mathbf{v}\|^{2}+2\|\mathbf{h}_{i}\|^{2}\|\mathbf{v}\|^{2}+2m\sigma^{2}(\mathbf{D}^{(i)})\sum_{j=1}^{m}v_{j}^{4}, \tag{16}\] where the function \(\lambda_{M}(\cdot,\cdot):\mathbb{R}^{m\times m}\times\mathbb{R}\rightarrow\mathbb{R}\) is defined as \[\lambda_{M}(\mathbf{D}^{(i)},g_{i}):=\begin{cases}\lambda_{\text{min}}(\mathbf{D}^{(i)}),&\text{if }g_{i}<0,\\ \lambda_{\text{max}}(\mathbf{D}^{(i)}),&\text{if }g_{i}\geq 0,\end{cases} \tag{17}\] and \(\lambda_{\text{min}}(\mathbf{D}^{(i)})\), \(\lambda_{\text{max}}(\mathbf{D}^{(i)})\) stand for the minimum and maximum eigenvalues of the matrix \(\mathbf{D}^{(i)}\) respectively, and \(\sigma(\mathbf{D}^{(i)})\) is the largest singular value of the matrix \(\mathbf{D}^{(i)}\). A complete bound is reached by summing all coordinate-wise bounds and adding the regularization term: \[\psi(\mathbf{v})=\sum_{i=1}^{3n}\left(g_{i}^{2}+2g_{i}\sum_{j=1}^{m}h_{ij}v_{j}+2\left(g_{i}\lambda_{M}(\mathbf{D}^{(i)},g_{i})+\|\mathbf{h}_{i}\|^{2}\right)\sum_{j=1}^{m}v_{j}^{2}+2m\sigma^{2}(\mathbf{D}^{(i)})\sum_{j=1}^{m}v_{j}^{4}\right)+\alpha\mathbf{1}^{T}\mathbf{v}. \tag{18}\]

Figure 6 illustrates the above process. One can see that the upper bound (18) is separable by components, and that the solution will be available in closed form. If we consider a single blendshape index \(j\in\{1,...,m\}\) and regroup the coefficients of the bound function as \[q=2\sum_{i=1}^{3n}g_{i}h_{ij}+\alpha,\quad r=2\sum_{i=1}^{3n}\left(g_{i}\lambda_{M}(\textbf{D}^{(i)},g_{i})+\|\textbf{h}_{i}\|^{2}\right),\quad s=2m\sum_{i=1}^{3n}\sigma^{2}(\textbf{D}^{(i)}), \tag{19}\] we can write an objective function in the form of a quartic function without a cubic term: \[\begin{split}\underset{v_{j}}{\mathrm{minimize}}&\quad qv_{j}+rv_{j}^{2}+sv_{j}^{4},\\ &\text{s.t. }0\leq w_{j}+v_{j}\leq 1.\end{split} \tag{20}\]

This procedure is summarized in Algorithm 1 and we refer to it as an _inner iteration_. Algorithm 1 solves the problem (20) for each component to give a full increment vector \(\mathbf{v}\); that is, solving (18) is equivalent to solving (20) for each \(j=1,...,m\) independently. In this sense, our approach is somewhat similar to the solution of Seol et al. (2011), as given in (7); however, we do not update the vector \(\mathbf{w}\) before all the components are optimized, which helps to avoid the issue of choosing the right update order, and additionally opens the possibility for a parallel implementation of the procedure.

```
Require: Blendshape matrix B ∈ R^{3n×m}, corrective blendshape matrices D^(i) ∈ R^{m×m} for i = 1,...,3n,
         target mesh b̂ ∈ R^{3n}, regularization parameter α > 0, and weight vector w ∈ [0,1]^m.
Ensure:  An optimal increment vector v̂ as a solution to (20).
  Compute the coefficients q, r and s from eq. (19):
      r = 2 Σ_{i=1..3n} ( g_i λ_M(D^(i), g_i) + ||h_i||² ),   s = 2m Σ_{i=1..3n} σ²(D^(i))
  for j = 1,...,m do
      q = 2 Σ_{i=1..3n} g_i h_ij + α
      v̂_j = argmin_v  q v + r v² + s v⁴   s.t.  −w_j ≤ v ≤ 1 − w_j
  end for
  return v̂
```
**Algorithm 1** Inner Iteration
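For illustration, one inner iteration could be sketched in NumPy as follows. This is a minimal sketch under the notation of this section, not the authors' implementation: it assembles \(g_{i}\) and \(\mathbf{h}_{i}\) for the current \(\mathbf{w}\), forms the coefficients in (19), and solves each box-constrained quartic (20) by comparing the interval endpoints with the real stationary points of the objective.

```python
import numpy as np

def inner_iteration(B, D, b_hat, w, alpha):
    """One inner iteration: returns the increment vector v (Algorithm 1 style).

    B     -- (3n, m) blendshape matrix
    D     -- (3n, m, m) stack of symmetric corrective matrices D^(i)
    b_hat -- (3n,) target mesh
    w     -- (m,) current weight vector in [0, 1]
    alpha -- regularization parameter
    """
    n3, m = B.shape
    Dw = D @ w                                  # row i holds D^(i) w
    g = B @ w + Dw @ w - b_hat                  # residuals g_i
    H = B + 2.0 * Dw                            # row i holds h_i

    eigs = np.linalg.eigvalsh(D)                # eigenvalues of every D^(i), ascending
    lam_M = np.where(g < 0, eigs[:, 0], eigs[:, -1])
    sigma = np.abs(eigs).max(axis=1)            # largest singular value of D^(i)

    # Coefficients of the separable bound, eq. (19)
    q = 2.0 * H.T @ g + alpha                   # one q_j per blendshape weight
    r = 2.0 * np.sum(g * lam_M + np.sum(H ** 2, axis=1))
    s = 2.0 * m * np.sum(sigma ** 2)

    def minimize_quartic(qj, lo, hi):
        cands = [lo, hi]
        if s > 0.0:                             # stationary points of q v + r v^2 + s v^4
            roots = np.roots([4.0 * s, 0.0, 2.0 * r, qj])
            cands += [t.real for t in roots
                      if abs(t.imag) < 1e-10 and lo <= t.real <= hi]
        elif r != 0.0 and lo <= -qj / (2.0 * r) <= hi:
            cands.append(-qj / (2.0 * r))
        vals = [qj * v + r * v ** 2 + s * v ** 4 for v in cands]
        return cands[int(np.argmin(vals))]

    return np.array([minimize_quartic(q[j], -w[j], 1.0 - w[j]) for j in range(m)])
```

Repeating this step and updating \(\mathbf{w}\leftarrow\mathbf{w}+\mathbf{v}\) gives the outer procedure described in the following paragraph.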
The solution of the inner iteration will depend on the initial weight vector, hence we repeat the procedure in Algorithm 1 multiple times in order to provide an increasingly good estimate \(\mathbf{w}\) of the solution to (9), as explained in Algorithm 2. After each iteration \(t=1,...,T\), we update the weight vector as \(\textbf{w}_{(t+1)}=\textbf{w}_{(t)}+\textbf{v}_{(t)}\). An initial vector can be chosen anywhere within the feasible space \(\textbf{0}\leq\textbf{w}_{(0)}\leq\textbf{1}\), but in Section 5 we mention strategies for initialization based on domain knowledge that lead to faster convergence and yield better results. Additional details on the construction of the surrogate and a discussion of the algorithm convergence are outside of the scope of this work, but we include them in the companion paper (not for review, 2022).

## 5 Evaluation

As mentioned earlier, we consider realistic human characters with a high level of detail and activation weights restricted to lie between 0 and 1. The first three characters that we present in these results are publicly available within the MetaHuman platform -- _Omar_, _Danielle_ and _Myles_ (©unrealengine.com/en-US/eula/mhc), as shown in Figure 7. The additional two datasets that we used to evaluate the method are the property of the animation studio -- _Char 4_ and _Char 5_. All the characters are accompanied by a short animation sequence covering a wide range of facial expressions that were used to evaluate the methods. We exclude inactive vertices and the vertices in the neck and shoulder regions for each character, so after the subsampling, each model has \(n=4000\) vertices. The scale of the head is also similar between all the characters, and the width between the left and right ear is approximately \(18\,cm\). However, the number of blendshapes differs (ranging between \(60\) and \(150\)), and that means that different choices of the regularization parameter \(\alpha\geq 0\) (Eq. (6) and (8)) might be optimal for various models. It is important to note that _Char 5_ has a more complex rig than the other four characters, with a number of deformers that are not based on a blendshape deformation (rotational and joint-like deformers); still, we include it in the experiments to show that our algorithm is robust enough to produce satisfying results even in such a case.

### Benchmark Methods

As mentioned in Section 3.3, a state-of-the-art representative of model-based approaches to solving the inverse rig problem is a method given by Cetinaslan and Orvalho (2020a) in Eq. (6) -- in the rest of this paper we will denote this approach as _Cet_. In the same paper, Cetinaslan and Orvalho (2020a) propose a modification of the solution using the heat kernel of Crane et al. (2013) to transform an original blendshape matrix \(\mathbf{B}\) into a sparse approximation \(\mathbf{B}^{loc}\). The idea is that the vertices of the face should not affect the activation of blendshapes whose main impact is localized in a distant face region. Hence, this method provides localized and more stable results. The optimization problem for that case is posed identically to (6), except that matrix \(\mathbf{B}\) is substituted with \(\mathbf{B}^{loc}\). This approach will be denoted by _Cet-loc_.
A different approach is proposed in Seol et al. (2011), where the weights are optimized sequentially (7), starting from the ones that have a larger overall effect on the face. This method does not include any weight constraints, yet our animated characters are strictly required to have values between \(0\) and \(1\). Hence, when applying this method, we project the estimated weight onto a feasible interval after each iteration. This method will be denoted as _Seol_. Note that in this approach, there is no regularization parameter.

Figure 7: Head models available at the MetaHuman Creator platform (©unrealengine.com).

Our method will be evaluated for two cases, as mentioned earlier. The first approach is to consider the proposed objective function (8), but minimize it by applying a general-purpose solver -- we use the scipy implementation of the interior point solver Byrd et al. (1999), whose inner iterations are solved by applying a sequential quadratic programming technique and a trust-region method (see Footnote 5). We denote this the _SQP_ solution. The second approach solves the problem (8) using the majorization-minimization-based algorithm proposed in Section 4.2, and it will be denoted _MM_. The solutions might be affected by the choice of the initial point. In Section 4, we mentioned that the algorithm might be initialized with any feasible \(\mathbf{w}\); however, random initialization leads to slower convergence, and the results are often poor in terms of both mesh fidelity and sparsity, hence we aim for a more educated guess of the initial vector. A simple choice is \(\textbf{w}=\textbf{0}\), as this would lead to a sparse solution. The results of our method obtained with zero initialization will be denoted _MM-0_. Another possibility is initializing our method with the solution of a problem under the linear blendshape function approximation. In this case, we chose the solution of _Cet_ because it gives a solution in closed form using a pseudoinverse of the matrix \(\mathbf{B}\); hence, we refer to this as _MM-psd_. In the same way, we could use the results of _Seol_ to initialize our method; however, it does not show any advantage over the others, hence we proceed with the above two initialization strategies. (_SQP_ is also an iterative solver, but in our experiments different initialization strategies did not show significant differences in the results. For this approach, all the presented results are initialized with a zero vector.)

Footnote 5: docs.scipy.org/doc/scipy/reference/optimize.minimize-trustconstr.html docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html

### Metrics

In order to evaluate the goodness of the results, we will consider several different metrics. The main metric of interest is the root mean squared error (RMSE), which will serve to measure the data fidelity of the results and is defined as: \[RMSE(\widehat{\mathbf{b}},\tilde{\mathbf{b}})=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(\widehat{b}_{i}-\tilde{b}_{i})^{2}}, \tag{21}\] where \(\widehat{\mathbf{b}}\) is a target face mesh and \(\tilde{\mathbf{b}}\) is a predicted mesh. The meshes \(\tilde{\mathbf{b}}\) are obtained by plugging the predicted weight vectors into Autodesk Maya (Autodesk, Inc., 2019), hence all the methods are evaluated on the same level of rig complexity. One important thing to note here is that RMSE in this standard form might not be the best choice for assessing the results of mesh reconstruction.
The problem is that, when a large number of vertices is only slightly misplaced, the value of the metric might be similar to the case when only a small group of vertices is misplaced, but to a large extent. We need to rate these two scenarios differently, since the latter is what actually produces visible misfits. In order to accomplish this, we will not use the mean value of the squared distances as in (21); instead, we take the \(95^{th}\) percentile of the error. While the mesh error is the metric of primary importance to us, we also want a solution to be as sparse as possible while keeping the mesh fidelity high (i.e., RMSE low). An appropriate metric for this is the cardinality of a predicted weight vector, i.e., the number of non-zero elements of \(\mathbf{w}\). Some estimated weights might have values that are very close to zero and, in practice, negligible, but they will still count when measuring cardinality. For this reason, we include an additional indicator of the sparsity -- the \(L_{1}\) norm of the solution vector. Finally, as mentioned earlier, we would prefer the weights for consecutive frames of the animation sequence to have smooth transitions. If we take a single blendshape weight \(w_{i}\) and observe its values over \(T\) time frames, we get a discrete time series \([w_{i}^{(1)},...,w_{i}^{(T)}]\). To evaluate its smoothness, we will use the second-order differences Green and Silverman (1993); Mehlum and Tarrou (1998) \[\text{Smoothness factor}(w_{i})=\sum_{t=2}^{T-1}(w_{i}^{(t-1)}-2w_{i}^{(t)}+w_{i}^{(t+1)})^{2}. \tag{22}\] Lower values indicate smoother time series, while the minimum value of \(0\) is achieved for a constant vector. Note also that this metric is evaluated per component over time, while the previous ones take a static frame and evaluate over the entire set of weights/vertices.

### Numerical Results

The experiments are executed on a user-level computer with an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz and 8 GB of RAM. Our method, under both _SQP_ and _MM_, as well as the other benchmark models, is tested with a wide range of regularization parameter values \(\alpha\in\{0,0.001,0.01,0.1,1,10,100\}\). We chose this interval of values as it covers both extremely low and extremely high values, hence we expect the optimal value to be somewhere within. However, a reader should be aware that this range might be considerably different depending on the considered animation character. For example, in Cetinaslan and Orvalho (2020), the authors mention that values of \(\alpha>1\) make the regularization term completely dominate the objective, yet in their experiments with direct manipulation, only a few face vertices are included in the objective. On the other side, we use \(n=4000\) vertices for each character in our retargeting experiments; hence, the regularization term becomes dominant only when the value of \(\alpha\) is much closer to \(100\). Besides the number of vertices \(n\), the size of the head model (or the units of measure) and the number of blendshapes \(m\) will also affect the choice of \(\alpha\). The desired solution for the rig inversion should have high data fidelity while keeping the number of activated components low, hence we look at the trade-off curves between RMSE (mesh error) and cardinality of the weight vectors in Figure 8. Additionally, Figure 9 complements the results, showing a trade-off between RMSE and the L1 norm of the weight vectors.
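As a concrete reference for how the per-frame numbers reported below could be obtained, the four metrics described above might be computed roughly as follows. This is a sketch only; the array shapes, the exact aggregation of the 95th-percentile error, and the near-zero threshold are assumptions rather than details taken from the paper.

```python
import numpy as np

def mesh_error_95(b_target, b_pred):
    """95th-percentile variant of the per-vertex error used instead of (21).

    b_target, b_pred -- (n, 3) arrays of vertex positions for one frame.
    """
    sq_dist = np.sum((b_target - b_pred) ** 2, axis=1)   # per-vertex squared distance
    return np.sqrt(np.percentile(sq_dist, 95))

def cardinality(w, tol=0.0):
    """Number of (effectively) non-zero blendshape weights."""
    return int(np.sum(np.abs(w) > tol))

def l1_norm(w):
    return float(np.sum(np.abs(w)))

def smoothness_factor(w_series):
    """Second-order differences (22) for one weight observed over T frames."""
    w_series = np.asarray(w_series, dtype=float)
    second_diff = w_series[:-2] - 2.0 * w_series[1:-1] + w_series[2:]
    return float(np.sum(second_diff ** 2))
```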
Predictions are made over the training sets of 80 animated frames for each character. The training frames are subsampled from the original animation sequence so that there are at least 10 time units between any two frames in the resulting set -- this was done in order to avoid the redundancy of the consecutive frames Tena et al. (2011). The results of _Seol_ are presented by a single point (a gray square) in Figures 8 and 9, since the approach of Seol et al. (2011) does not include any regularization. The other methods are presented for different values of \(\alpha\), indicated by the size of a corresponding marker and connected by a same-color line. The first thing to notice is that _SQP_ (green stars) always activates all the blendshape weights, yielding a trade-off curve that is equal to \(m\) on the \(y\)-axis. On the other side, if we consider the plot of RMSE versus L1 norm, its trade-off is favorable, and actually, _SQP_ curves are always under the other methods. This tells us that, most probably, many of those components are activated by a value that is very close to zero, and if further effort is invested, this method might be adjusted to give low cardinality and high-accuracy reconstructions. Results of _MM_ are presented in round dots (red for _psd_ and orange for 0 initialization). While the zero initialization shows a relatively high mesh error, _MM-psd_ gives a trade-off curve that is always below those of the other methods, indicating its superiority. Results of _Cet_ and _Cet-loc_ (light and dark violet diamonds, respectively) are relatively similar, with the localized version having slightly lower cardinality and higher RMSE.

Figure 8: A trade-off between mesh error (RMSE) and cardinality (i.e. the number of non-zero weights) of the estimated solutions for different methods and varying values of regularization parameter \(\alpha\). Different marker sizes correspond to values of \(\alpha\), as indicated on the far right.

Figure 9: A trade-off between mesh error (RMSE) and L1 norm of the estimated solutions for different methods and varying values of regularization parameter \(\alpha\). Different marker sizes correspond to values of \(\alpha\), as indicated on the far right.

To compare the results in more detail, we pick an optimal value of \(\alpha\), using the elbow technique, for each approach and each character, and evaluate the predictions over the test set. Test sets consist of 500 frames for each character. In this case, we take the animation sequences instead of the isolated frames. This will allow us to compare the smoothness of the results and to produce the animated clips in order to visually inspect the properties of different methods (check the supplementary video materials). Figure 10 gives the values of the four metrics for _Omar_ over the test set, as well as the average computational time per frame. Boxplots show the median values and quartiles (across 500 frames in the case of RMSE, cardinality, and L1; and across \(m=130\) blendshape weights in the case of the smoothness factor), while the mean values are given in Table 1. Table 1 additionally contains the selected optimal \(\alpha\) values (notice that the table contains only _MM-psd_, _SQP_, _Cet_ and _Seol_ as the most significant of the six approaches). Looking at the values of RMSE, _SQP_ clearly outperforms all the other approaches; however, it is inferior in all the other respects.
Cardinality is, without exception, always set to the maximum value (\(m=130\)), and the L1 norm is visibly higher than for the other methods, aside from _Seol_. Values of the smoothness factor are far higher than for the other approaches, comparable only to that of _Seol_, hence it was necessary to zoom in on the region of the bottom-left subfigure in order to compare the other boxes. Finally, the computation time is much longer than that of _MM_, and since _SQP_ is not straightforward to implement in parallel, this cost is irreducible. On the other side, _MM-psd_ has a bit higher RMSE than _SQP_, yet still significantly lower than the previous state-of-the-art methods. At the same time, it keeps the cardinality relatively low, somewhere in between _Cet_ and _Seol_, and the results are relatively smooth. We address the reader to check the supplementary video materials for a complete visual comparison of the animation sequences. The only aspect where other methods exhibit better performance is execution time. However, since this method targets the production of movie and game animation, it is not restricted to real-time computations, and the performance speed in our results (around \(10\,s\) per frame) is feasible. This is especially favorable when contrasted with the solution of a general-purpose solver (_SQP_), which is about 15 times slower. Note that here we implemented the algorithm sequentially, but due to the construction, _inner iterations_ could be implemented in parallel to reduce the execution time further.

Figure 10: Values of the four metrics (mesh RMSE, weights cardinality, weights \(L1\) norm, and temporal smoothness of weight curves) and execution time (in seconds) for _Omar_. Execution time is presented in a log-scale, because of the wide range -- for _Cet-loc_ it takes \(0.0003\) s, and for _SQP_ \(172.7\) s. See Table 1 for numerical details.

In Figure 1, we see an example frame for character _Omar_, with meshes and activation weights. Again we note that, while the mesh yielded by _SQP_ gives a flawless reconstruction of the reference mesh, it exhibits too high a cardinality of the weight vector, and hence might affect the stability of the result and make posterior manual editing impossible. On the other side, the three methods under the linear blendshape approximation (_Cet_, _Cet-loc_ and _Seol_) show visible misfits in the mesh -- the meshes of _Cet_ and _Cet-loc_ are very similar, and both are slightly off in the mouth-corner region; notice that the corners are not as widely spread as in the reference, that the upper lip covers a larger surface of the teeth, and that the shadow under the bottom lip is not as visible. _Seol_ exhibits low cardinality and lower average error than _Cet_, yet it gives a completely different mouth expression. Our method _MM_ is the only one that gives both a good reconstruction of the mesh and low cardinality of the weight vector. Metric values for _Danielle_ are given in Figure 11 and Table 2. The conclusions are somewhat similar to the case of _Omar_. Here, _MM_ actually gives lower RMSE than _SQP_ (and any other approach), and even the cardinality is lower than that of _Seol_. Another notable difference is that _SQP_ contrasts even more with the other methods in terms of smoothness, completely dominating the corresponding subfigure. The example frame with predictions for _MM-psd_, _Cet_, and _Seol_ is presented in Figure 12.
Both _Cet_ and _Seol_ show higher errors in the region around the mouth (as indicated by the red color), with _Cet_ giving slightly less stretched mouth corners compared to the original, while _Seol_ poorly reconstructs the lower lip -- the bottom row of teeth is completely visible, while it is occluded in the reference frame. Our method gives a mesh reconstruction that closely resembles the original, while the cardinality is comparable to the benchmark methods.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} & RMSE & Card. & L1 & \begin{tabular}{c} Smooth. \\ factor \\ \end{tabular} & Time & \(\alpha\) \\ \hline MM & 0.0898 & 85.2 & **9.10** & 0.0034 & 11.29 & 5 \\ SQP & **0.0325** & 130. & 12.4 & 0.0201 & 172.7 & 5 \\ Cet & 0.1671 & 95.4 & 9.38 & **0.0029** & **0.002** & 5 \\ Seol & 0.1625 & **78.9** & 12.7 & 0.0207 & 0.017 & / \\ \end{tabular} \end{table} Table 1: Comparison of the obtained metrics values for the four selected approaches, for _Omar_, where \(MM\) is with \(psd\) initialization. The best value for each metric is given in bold. The last column gives an optimal value of a regularization parameter \(\alpha\).

Figure 11: Values of the four metrics (mesh RMSE, weights cardinality, weights \(L1\) norm, and temporal smoothness of weight curves) and execution time (in seconds) for _Danielle_. Execution time is presented in a log-scale, because of the wide range -- for _Cet-loc_ it takes \(0.0003\) s, and for _SQP_ \(143.2\) s. See Table 2 for numerical details.

\begin{table} \begin{tabular}{c|c|c|c|c|c|c} & RMSE & Card. & L1 & \begin{tabular}{c} Smooth. \\ factor \\ \end{tabular} & Time & \(\alpha\) \\ \hline MM & **0.0756** & **69.3** & 6.72 & 0.0062 & 12.73 & 5 \\ SQP & 0.0894 & 128. & **4.64** & 0.3390 & 143.2 & 5 \\ Cet & 0.1280 & 84.6 & 7.02 & **0.0051** & **0.002** & 5 \\ Seol & 0.1389 & 71.5 & 11.1 & 0.0212 & 0.019 & / \\ \end{tabular} \end{table} Table 2: Comparison of the obtained metrics values for the four selected approaches, for _Danielle_. The best value for each metric is given in bold. The last column gives an optimal value of a regularization parameter \(\alpha\).

For _Myles_, boxplots are presented in Figure 13 (and mean values in Table 3). Our method _MM_ gives the lowest RMSE and a cardinality that is in between that of _Cet_ and _Seol_. In this case, _Cet_ (and _Cet-loc_) gives quite low RMSE -- in the median value, it is even lower than _SQP_, yet it varies much more, giving significantly larger upper quantiles. The example frame for _Myles_ is in Figure 14. While the average RMSE is larger (and more red tones are visible in the mesh) for _Cet_ than for _MM_, neither of the two produces visible flaws, and the bar plots of the weight activations are very similar. On the other side, _Seol_ shows a poor fit in the shapes of the lips and cheeks. _Char 4_ and _Char 5_ are proprietary models, hence we are not showing the meshes; however, the results are summarized in Figures 15 and 16 and in Tables 4 and 5, respectively. As mentioned earlier, the face model of _Char 5_ has a different structure, with many non-blendshape components, while our algorithm targets detailed and accurate blendshape rigs. However, these results show that even with relaxed assumptions about the face model, our method is comparable to state-of-the-art solutions. For _Char 4_ the conclusions are similar to those for _Myles_, while for _Char 5_ the main takeaway is that _MM_ is slightly better than other methods in terms of RMSE, and significantly outperforms the benchmarks in terms of cardinality.
A difference in the smoothness factor between the methods is not so drastic here, with only _Seol_ showing considerably higher values. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} & RMSE & Card. & L1 & \begin{tabular}{c} Smooth. \\ factor \\ \end{tabular} & Time & \(\alpha\) \\ \hline MM & **0.0564** & **28.3** & 2.74 & 0.1638 & 2.985 & 5 \\ SQP & 0.0745 & 62.0 & **1.82** & 0.2189 & 32.24 & 5 \\ Cet & 0.0611 & 39.8 & 3.26 & **0.1550** & **0.002** & 5 \\ Seol & 0.0834 & **38.2** & 5.68 & 0.4908 & 0.008 & / \\ \end{tabular} \end{table} Table 5: Comparison of the obtained metrics values for the four selected approaches, for _Char 5_. The best value for each metric is given in bold. The last column gives an optimal value of a regularization parameter \(\alpha\). Figure 14: Example frame prediction for _Myles_. The top row shows obtained meshes, while the bottom represents corresponding activations of the controller weights. Red tones in the meshes indicate a higher error of the fit, according to the color bar on the right. The average weight activation of each solution is indicated with a horizontal line. The average mesh error and cardinality of the solution are given in a text box. Figure 13: Values of the four metrics (mesh RMSE, weights cardinality, weights \(L1\) norm, and temporal smoothness of weight curves) and execution time (in seconds) for _Myles_. Execution time is presented in a log-scale, because of the wide range — for _Cet-loc_ it takes \(0.0004\) s, and for _SQP_\(164.3\) s. See Table 3 for numerical details. Figure 16: Values of the four metrics (mesh RMSE, weights cardinality, weights \(L1\) norm, and temporal smoothness of the weight curves) and execution time (in seconds) for _Char 5_. Execution time is presented in a log-scale, because of the wide range — for _Cet-loc_ it takes \(0.0002\) s, and for _SQP_\(32.24\) s. See Table 5 for numerical details. Figure 15: Values of the four metrics (mesh RMSE, weights cardinality, weights \(L1\) norm, and temporal smoothness of the weight curves) and execution time (in seconds) for _Char 4_. Execution time is presented in a log-scale, because of the wide range — for _Cet-loc_ it takes \(0.0016\) s, and for _SQP_\(84.5\) s. See Table 4 for numerical details. It is interesting to point out that, even though no temporal prior was included in the objective, _MM_ and _Cet_ yield smooth results for all five datasets, as indicated by the respective smoothness factor and also confirmed by the supplementary video materials. This was not the case for _SQP_ or _Seol_. In Seol et al. (2011), the solution of _Seol_ was modified afterward, using the graph simplification technique, to produce smooth animation curves. However, this was outside of the scope of our paper, hence we did not apply those additional steps. ### Animator Feedback To make the analysis of the results complete, we include here also the feedback from the expert animators. Five animators were asked to rank the methods (_MM, SQP, Cet, Cet-loc, Seol_) from the best performing to the worst. The criterion was the appearance of the animated results compared to the reference motion -- the animators were looking at the flip-tests provided as the supplementary video materials, except that the method names were masked to avoid possible bias. 
The answers of the animators were heterogeneous: sometimes the same animator would give different rankings of the methods for different animated characters (_Omar, Danielle, Myles_), sometimes there would be ties between two or more methods, and in some cases, the animator would only be able to tell the best and the worst of the five results. This leaves us with a set of pairwise comparisons of the results. We will estimate the overall ranking of the five methods using the _Bradley-Terry model_ Bradley and Terry (1952), Chen et al. (2013), as it is a common ranking procedure when the outcomes of pairwise comparisons are available. The Bradley-Terry model is an iterative procedure for estimating the relative strength of the objects (methods in our case) from the set of pairwise comparisons. The outcomes of the comparisons are collected into a square matrix \(\mathbf{S}\in\mathbb{R}^{N\times N}\), as seen in Table 6; the value in row \(i\) and column \(j\) shows how many times the \(i^{th}\) object (method) ranked better than the \(j^{th}\), while the value in row \(j\) and column \(i\) indicates the number of the opposite outcomes. From Table 6, we can already see that the values in the first row are the highest, telling us that _MM_ was most often ranked as the better method compared to the others, while _SQP_ (the second row) is slightly behind. Also, we see that the values of the first column are relatively lower than for the others, confirming that _MM_ rarely lost the pairwise comparisons.

\begin{table} \begin{tabular}{c|c c c c c} & MM & SQP & Cet & Cet-loc & Seol \\ \hline MM & - & 5 & 12 & 12 & 11 \\ SQP & 6 & - & 9 & 9 & 12 \\ Cet & 0 & 3 & - & 3 & 3 \\ Cet-loc & 0 & 2 & 2 & - & 3 \\ Seol & 2 & 3 & 5 & 5 & - \\ \end{tabular} \end{table} Table 6: Matrix \(\mathbf{S}\) of the outcomes of pairwise comparisons between the five methods. A value in row \(i\) and column \(j\) tells how many times the \(i^{th}\) method ranked better than the \(j^{th}\).

Further, the vector of relative strengths of the methods, \(\mathbf{s}\in\mathbb{R}^{N}\), is estimated iteratively, as follows. For each object \(i\), the \(i^{th}\) coordinate of the vector is \[s_{i}\leftarrow\frac{\sum_{j\neq i}S_{ij}}{\sum_{j\neq i}\frac{S_{ij}+S_{ji}}{s_{i}+s_{j}}}. \tag{23}\] The final values of the relative strength vector \(\mathbf{s}\), scaled so that it sums to 1, are presented in Figure 17. _MM_ shows a clear advantage over the other methods, and it is followed by _SQP_. The other methods are far behind these two. While _SQP_ was often showing a lower RMSE than _MM_, the ranking was more in favor of _MM_, and we can interpret this by the fact that _SQP_ produces high cardinality, which led to less stable (or less smooth) transitions between the frames of animation; hence, the estimated animation sequences did not look as organic as those of _MM_, leading to the lower relative score assigned by the animators. Some remarks that animators made, aside from the rankings, are that the differences are easiest to tell in the region of the mouth, especially when the corners are wide open, and that _MM_ and _SQP_ show a closer fit in this region. This is something that we could have expected since the two methods include the corrective terms when fitting, and the majority of the quadratic corrective terms are usually targeting the lips of the characters.
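For reference, the iterative update (23) applied to a win matrix like the one in Table 6 could be sketched as follows; this is a minimal illustration, and the uniform initialization, per-iteration rescaling, and fixed iteration count are assumptions rather than details from the text.

```python
import numpy as np

def bradley_terry_strengths(S, num_iters=100):
    """Estimate relative strengths from a pairwise win matrix S (N x N).

    S[i, j] is the number of times method i ranked better than method j.
    Returns a strength vector rescaled to sum to 1.
    """
    S = np.asarray(S, dtype=float)
    N = S.shape[0]
    s = np.ones(N)                         # assumed uniform initialization
    for _ in range(num_iters):
        s_new = np.empty(N)
        for i in range(N):
            wins = sum(S[i, j] for j in range(N) if j != i)
            denom = sum((S[i, j] + S[j, i]) / (s[i] + s[j])
                        for j in range(N) if j != i)
            s_new[i] = wins / denom        # update (23)
        s = s_new / s_new.sum()            # rescale so the strengths sum to 1
    return s

# Counts from Table 6 (rows/columns: MM, SQP, Cet, Cet-loc, Seol)
S = np.array([[0, 5, 12, 12, 11],
              [6, 0,  9,  9, 12],
              [0, 3,  0,  3,  3],
              [0, 2,  2,  0,  3],
              [2, 3,  5,  5,  0]])
print(bradley_terry_strengths(S))
```

With these counts, the estimated strengths place _MM_ first and _SQP_ second, matching the ordering reported in Figure 17.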
## 6 Conclusion

The method proposed in this paper applies the majorization-minimization paradigm in order to easily achieve the solution to the problem of rig inversion even when the quadratic blendshape terms are utilized. Our method gives a better fit in the details of the face mesh while not increasing the cardinality of the weight vector compared to the state-of-the-art methods proposed in Seol et al. (2011) and Cetinaslan and Orvalho (2020); hence, the method is highly applicable in realistic face animation and targets applications where accuracy is preferable to real-time execution, such as close-shot animations in video-game or movie production. It is further worth mentioning that the construction of the algorithm gives space for a parallel implementation of the inner iterations, and in future work we will address this to further reduce the execution time. Another aspect that we will address in future research is to include an additional step of face segmentation. This might lead to a distributed model and possibly even higher precision in fitting the fine details of the face mesh.

**Video Materials**

Flip-tests for each of the three MetaHumans are available at:

* Omar [https://youtu.be/7RJo9KLaM48](https://youtu.be/7RJo9KLaM48) for the colored version and [https://youtu.be/AVztr9sOBhY](https://youtu.be/AVztr9sOBhY) for the gray version.
* Danielle [https://youtu.be/yjMS8D1He20](https://youtu.be/yjMS8D1He20) colored and [https://youtu.be/mhAHGYeA-fY](https://youtu.be/mhAHGYeA-fY) gray.
* Myles [https://youtu.be/Auy3vE1J8r0](https://youtu.be/Auy3vE1J8r0) colored and [https://youtu.be/r2MraUrXlew](https://youtu.be/r2MraUrXlew) gray.

#### Acknowledgements

The authors would like to thank 3Lateral studio for guidelines in the animation applications domain, and in particular a group of animators, Aleksa Bracic, Nikola Stosic, Dorde Ilic, Igor Eric, and Lazar Damjanov, for the valuable feedback on the animated results. We want to thank Dr. Filipa Valdeira for sharing her expertise in the ranking methods to help us evaluate the animators' feedback.

#### Funding

This work has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 812912, from FCT IP strategic project NOVA LINCS (FCT UIDB/04516/2020) and project DSAIPA/AI/0087/2018. The work has also been supported in part by the Ministry of Education, Science and Technological Development of the Republic of Serbia (Grant No. 451-03-9/2021-14/200125).
2306.07665
Liquid-Crystal-Based Controllable Attenuators Operating in the 1-4 Terahertz Band
Liquid-crystal devices (LCDs) offer a potential route toward adaptive optical components for use in the < 2 THz band of the electromagnetic spectrum. We demonstrate LCDs using a commercially available material (E7), with unbiased birefringence values of 0.14-0.18 in the 0.3-4 THz band. We exploit the linear dichroism of the material to modulate the emission from a 3.4-THz quantum cascade laser by up to 40%, dependent upon both the liquid-crystal layer thickness and the bias voltage applied.
Aniela Dunn, Zhaopeng Zhang, Michael D. Horbury, Eleanor V. Nuttall, Yingjun Han, Mohammed Salih, Lianhe Li, Abigail Bond, Ehab Saleh, Russell Harris, Diego Pardo, Brian N. Ellison, Andrew D. Burnett, Helen F. Gleeson, Alexander Valavanis
2023-06-13T10:16:53Z
http://arxiv.org/abs/2306.07665v1
# Liquid-Crystal-Based Controllable Attenuators Operating in the 1-4 Terahertz Band ###### Abstract Liquid-crystal devices (LCDs) offer a potential route toward adaptive optical components for use in the \(<\)2 THz band of the electromagnetic spectrum. We demonstrate LCDs using a commercially available material (E7), with unbiased birefringence values of 0.14-0.18 in the 0.3-4 THz band. We exploit the linear dichroism of the material to modulate the emission from a 3.4-THz quantum cascade laser by up to 40%, dependent upon both the liquid-crystal layer thickness and the bias voltage applied. Liquid crystals, adaptive optics, variable attenuators, quantum-cascade lasers. ## I Introduction The development of terahertz (THz) technology has been motivated in part by the diverse range of potential applications including atmospheric and space research, biomedical and security imaging, and industrial inspection [1]. However, THz adaptive optics (AO) technology is significantly less mature than in other spectral bands, and this potentially limits THz systems development within a range of practical scenarios. AO systems employ components whose properties can be controlled dynamically to manipulate the wavefront of an optical field. These enable, for example, automated image compensation for atmospheric turbulence [2], laser power stabilization, beam-steering, dynamic beam focusing, polarization control [3], and rapid single-pixel imaging [4]. However, the relatively short wavelengths of THz radiation (\(\sim\)\(100\,\mathrm{\SIUnitSymbolMicro m}\)) introduce extremely challenging machining tolerances for the micro-electromechanical systems, deformable reflectors or micro-mirror arrays commonly used in millimeter-wave systems. Furthermore, many materials used in infrared or visible optics are opaque at THz frequencies, and diffraction limits the use of microlens systems. Nevertheless, previous studies into terahertz-frequency AO systems have both developed new techniques and adapted those used at other wavelengths. Hybrid semiconductor-polymer structures [5], interdigitated \(p\)-\(n\) junctions [6], hybrid split-ring resonators on both silicon and GaAs [7, 8], graphene metastructures [9], and bismuth nanofilms [10] have all been investigated at frequencies below \(2\,\mathrm{THz}\). These systems provide ultra-fast intensity modulation [5, 9, 7], reflection cancellation [6], and real-time control and manipulation of THz radiation [8, 10], but they rely on complex manufacturing and lithography processes. Liquid crystals (LCs), on the other hand, are a widely used technology in infrared and visible AO systems, based on relatively simple manufacturing techniques, and there is scope to extend their use to THz frequencies. Several studies have already shown that liquid-crystal materials exhibit high birefringence at THz frequencies [11, 12, 13, 14, 15, 16], and devices have been manufactured for THz modulation [17, 18], phase control [19, 20, 21, 22, 23], and frequency tuning [24, 25] applications. Aside from a few exceptions [9, 26, 27], though, these studies were restricted to frequencies below \(2\,\mathrm{THz}\). It therefore remains highly desirable to develop LC adaptive optics for use with either broadband THz sources, or high-powered narrowband quantum-cascade lasers (QCLs), which operate in the \(\sim\)2-5 THz range. Key potential applications include electrically-controlled attenuators, and shutters for remote sensing and imaging, or phase-shifters and waveplates for time-domain spectroscopy. 
We have recently outlined preliminary findings that indicate that the commercially-available nematic LC material, E7, provides a large and controllable birefringence in the 2-5 THz band, with scope for power modulation applications [28, 29]. However, at these frequencies, the wavelength of the radiation is comparable to typical LC film thickness, resulting in strong etalon effects. In this work, we provide the first detailed thin-film THz time-domain spectroscopic analysis of the dielectric properties of E7 up to 4 THz, accounting for the complex interplay with the device geometry. We show that controllable attenuation can be achieved across the 2-5 THz band, based on the linear dichroism of the material. We demonstrate power-modulation depths in excess of 40% using a 3.4-THz quantum cascade laser source. ## II Fabrication of liquid-crystal devices The devices fabricated for this study were based on an LC material layer enclosed within planar THz-transmissive cells, a schematic of which is shown in Fig. 1. Although this work is focused on devices with controllable absorption, LC materials with high birefringence \(\Delta n\) are generally desirable for AO components, as this enables large phase-retardation without the need for thick LC layers. For these proof-of-concept devices, a commercially available nematic LC mixture, E7, was selected for this study, based on prior measurements of its large birefringence at visible wavelengths (\(\Delta n=\) 0.21-0.26) [14, 15] and from 0.2-2.0 THz (\(\Delta n=\) 0.13-0.15) [11, 12, 13, 14, 15, 16]. Most materials used in the construction of conventional visible or infrared LCDs have poor transparency at THz frequencies. As such, the devices developed in this study used alternative window and electrode materials. Specifically, glass is typically used as a window material for visible LCDs. However, ionic impurities lead to ionic polarizability higher than that of materials such as silica glass [30], making it a less desirable choice for THz applications. Fused quartz (silica) was used in the first paper describing an LC for THz applications [31], and is widely used in many other applications. As such, fused quartz slides with a nominal thickness of \(1\,\mathrm{mm}\) were chosen for the liquid crystal device (LCD) substrate. Similarly, indium tin oxide layers, which are commonly used as electrodes in conventional LCDs, exhibit high THz absorption [32, 33], and are practically opaque at thicknesses as small as tens of nanometres [33]. Therefore, PEDOT:PSS, a conductive polymer [DMSO-doped poly(3,4-ethylene dioxythiophene) poly-4-styrene sulfonate] [34, 35, 36], was chosen as the electrode layer material, as it provides a transmittance of up to 83.5% at \(1.22\,\mathrm{THz}\)[20], decreasing to approximately 70% at \(2.5\,\mathrm{THz}\)[37]. More recent studies have shown that PEDOT:PSS has a relatively constant transmittance from 1-6 \(\mathrm{THz}\)[38], and a low-frequency conductivity comparable to a sputtered indium tin oxide thin film [20]. The PEDOT:PSS layer was spin-coated onto the fused quartz substrate, resulting in a thickness on the order of 10s of nanometres. Three LC devices, labeled A-C (see Table I), were fabricated for this study, using the materials described above. Device A was manufactured using a thick LC layer (\(320\,\mathrm{\SIUnitSymbolMicro m}\)) and without an electrode layer to allow accurate characterization of THz spectral parameters of the bulk LC material from 0.3-4.0 THz. 
Devices B and C were manufactured using thinner LC layers (\(100\,\mathrm{\SIUnitSymbolMicro m}\) and \(13\,\mathrm{\SIUnitSymbolMicro m}\), respectively) and with electrode layers, to enable characterization of the effect of an applied LC bias voltage on THz transmission. The thicker devices (A & B) present a challenge for fabrication, as conventional surface-alignment techniques cannot readily achieve good LC mono-domain alignment at this scale. Therefore, to promote alignment, a 4% long-chain polyimide (SE-3510) layer was spin-coated onto the electrode layer and then rubbed to orientate the polymer, confining the director along the rubbing direction and parallel to the substrate surface. The LCDs were assembled in a parallel-plate arrangement: two fused quartz/electrode layer (where used)/alignment layer substrates were sandwiched together, with the alignment layers facing, and separated from each other using a spacer to maintain the LC layer thickness. The E7 material was then capillary filled into the empty cell. Wires were connected to the two electrode layers of the LCDs (where used) using both indium soldering and UV-cured glue to provide a robust electrical connection. Reference samples of the quartz slide and polyimide layers were used to determine the refractive indices and absorption coefficients of these materials to aid in building a transfer function for the LCD.

## III Experimental characterization

Terahertz (THz) transmission measurements were performed on each of the LCDs using a broadband THz time-domain spectroscopy (TDS) system, which provides a free-space bandwidth of 0.3-8 \(\mathrm{THz}\) [39] using an ultrafast Ti:sapphire Vitara-HP (Coherent) laser. However, a phonon mode in fused quartz causes an absorption at \(\sim\)5 THz, which effectively limits the measurement bandwidth in these samples to approximately 4 \(\mathrm{THz}\). The 800-nm output (20 fs pulse width, 80 MHz repetition rate) from the laser was separated into two beams, where 90% was used to generate horizontally-polarized THz radiation from a low-temperature-grown GaAs (LT-GaAs) bow-tie-shaped photoconductive emitter on a 2-mm-thick quartz substrate [40], with an electrode spacing of 100 \(\mathrm{\SIUnitSymbolMicro m}\). This was biased using a 7-kHz AC square wave. This modulation signal was also used as a reference frequency for lock-in detection. The THz radiation generated from the emitter was collected in a backwards geometry (_i.e._, from the same surface of the emitter that was excited by the laser), thus avoiding absorption in the undoped LT-GaAs and quartz substrate, and improving the high-frequency components of the THz spectrum.

Fig. 1: Showing the typical composition of the planar liquid-crystal cells used in these measurements. The cells consisted of two fused quartz windows in a parallel-plate arrangement, both with a conductive polymer [DMSO-doped poly(3,4-ethylene dioxythiophene) poly-4-styrene sulfonate] electrode layer and long-chain polyimide (SE-3510) alignment layer, separated from each other by a spacer to maintain the liquid-crystal layer thickness.

The emitted THz radiation was collected and focused onto the sample using a pair of off-axis parabolic mirrors. A second pair of off-axis parabolic mirrors was used to collect the THz radiation transmitted through the sample and focus it onto an LT-GaAs photoconductive detector, identical to the photoconductive emitter, alongside the remaining 10% of the laser as an optical probe. A diagram of this is shown in Fig. 2(a).
The transient current generated in the photoconductive detector by the interaction of the THz and 800-nm beams was amplified using a low-noise pre-amplifier at 50 nA/V and measured using a lock-in amplifier. The system was located within an enclosure that allows the atmosphere to be purged with dry air to a relative humidity of \(<\)2%, minimizing attenuation by water absorption lines in the THz spectra. The LCDs were rotated to orientate the LC director to be either perpendicular or parallel to the polarization direction of the THz radiation, to measure the ordinary and extraordinary refractive indices and absorption coefficients respectively.

### _Material properties at THz frequencies_

Device A (320-\(\mathrm{\SIUnitSymbolMicro m}\) LC layer) was designed to characterize the E7 material at extended THz frequencies, as thicker material layers provide a stronger THz response, increasing the signal-to-noise ratio of the obtained spectral parameters, while omitting electrode layers reduces the complexity of the spectral analysis. The complex refractive indices of the LC material were extracted numerically from the THz-TDS signal by the fitting of a transfer function using the data processing tool Nelly [41, 42]. Examples of the time-domain signals that were acquired from the broadband THz spectroscopy can be seen in Fig. 2(b), which shows a purged-air reference (black line), the THz transmission measured through the 13-\(\mathrm{\SIUnitSymbolMicro m}\) and 100-\(\mathrm{\SIUnitSymbolMicro m}\) liquid-crystal devices (yellow and blue lines respectively) and the THz transmission through an empty liquid-crystal cell with a 100-\(\mathrm{\SIUnitSymbolMicro m}\) air-gap instead of a liquid-crystal layer (green line). In the time-domain traces, shown in Fig. 2, reflections of the THz radiation occurring from each interface within the sample are easily identifiable. The fused quartz window materials act as an etalon, reflecting approximately 5% of the THz signal at each LC/electrode/quartz interface and 10% of the THz signal at the quartz/air interfaces. The LC/electrode/quartz reflections are seen as small peaks in the time-domain signal at 1.3 ps (and subsequent reflections at 2.3 ps and 3.9 ps) and 0.15 ps after the main THz signal for the 100-\(\mathrm{\SIUnitSymbolMicro m}\) and 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCDs respectively. This effect is exaggerated for the empty 100-\(\mathrm{\SIUnitSymbolMicro m}\) LC cell, as the Fresnel coefficients for the reflections of the THz radiation between the two fused quartz windows are increased, owing to the lower refractive index of air compared to the LC material E7. The quartz/air reflections are seen approximately 13 ps after the main signal for both samples (not shown). Typically, time-domain data are windowed to remove reflections from the sample/air interfaces, minimizing the oscillations seen in the frequency domain. However, for a multilayered structure, especially one that contains thin layers such as these LCDs, removal of reflections can be challenging. We also chose to use a free-space air measurement for the reference. However, as noted by others [41, 43, 44], this method can lead to uncertainty in phase unwrapping during the transfer-function fitting, as well as improper cancellation of etalons when fitting, particularly for high-bandwidth measurements. As such, the data were truncated and zero-padded to minimize the influence of etalons caused by reflections between layers in the extracted complex permittivity of the LC layer.
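As an illustration of this kind of pre-processing, a truncation-and-zero-padding step and a simple single-pass estimate of n and alpha from the sample and reference spectra could look roughly like the following. This is a sketch only; the actual analysis used the multilayer transfer-function fitting in Nelly, and the window position, padding length, FFT sign convention and thickness value here are placeholders.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def truncate_and_pad(t_ps, E, t_cut_ps, n_fft=8192):
    """Zero the trace beyond t_cut_ps after the main peak, then zero-pad and FFT."""
    t_peak = t_ps[np.argmax(np.abs(E))]
    E_win = np.where(t_ps <= t_peak + t_cut_ps, E, 0.0)
    freq = np.fft.rfftfreq(n_fft, d=(t_ps[1] - t_ps[0]) * 1e-12)   # Hz
    return np.fft.rfft(E_win, n=n_fft), freq

def single_pass_optical_constants(E_sam, E_ref, freq, d):
    """Crude n and alpha estimate from the complex ratio E_sam / E_ref.

    Ignores interface (Fresnel) losses and internal reflections, unlike a full
    multilayer transfer-function fit; d is the layer thickness in metres.
    Assumes freq > 0 (skip the DC bin) and numpy's forward-FFT sign convention.
    """
    T = E_sam / E_ref
    phase = np.unwrap(np.angle(T))
    n = 1.0 - C * phase / (2.0 * np.pi * freq * d)   # refractive index
    alpha = -2.0 / d * np.log(np.abs(T))             # power absorption coeff. (1/m)
    return n, alpha
```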
Fig. 2: (a) Schematic diagram showing the system used to perform broadband terahertz spectroscopy measurements on the liquid-crystal devices. Time-domain traces acquired using this system are shown in (b), displaying a purged-air terahertz reference (black line), the terahertz response through the 13 \(\mathrm{\SIUnitSymbolMicro m}\) and 100 \(\mathrm{\SIUnitSymbolMicro m}\) liquid-crystal devices (yellow and blue lines respectively), the terahertz response through an empty liquid-crystal cell with a 100 \(\mathrm{\SIUnitSymbolMicro m}\) air-gap instead of a liquid-crystal layer (green line), and the truncation window applied to the data to remove the effects of reflections in the time-domain (dashed orange line).

An example of a truncation window is shown in Fig. 2(b) as a dashed orange line, which removed the etalon effects by restricting the time-domain signal to approximately \(1\,\mathrm{ps}\) after the main THz peak. It is important to note that the 320-\(\mathrm{\SIUnitSymbolMicro m}\) LCD (Device A) did not contain electrode layers and as such did not exhibit these etalon effects, and only the quartz/air reflections approximately \(13\,\mathrm{ps}\) after the main THz peak needed to be accounted for. Initial reference measurements of fused quartz and polyimide-on-quartz were used to determine their complex refractive indices. This information was then used to define a suitable transfer function for Device A. The thicknesses of each layer were then determined by means of a total variance analysis [45] of the transfer-function fitting of the unwindowed time-domain data trace (70-\(\mathrm{ps}\) scan range), _i.e._, containing all interface reflections. Here the measurement of the cell was used as the sample and a purged measurement without the cell was used as a reference. From this, the LC thickness was determined to be \(309\pm 1\,\mathrm{\SIUnitSymbolMicro m}\), compared to the nominal LC thickness of \(320\,\mathrm{\SIUnitSymbolMicro m}\). As can be seen from Fig. 3(a), the ordinary and extraordinary refractive indices are distinct from one another, suggesting reasonable mono-domain alignment of the LC layer, even in this thicker device. The magnitudes of these refractive indices are slightly lower than those previously published [13, 15, 16], lying outside three standard deviations; however, the authors note that there is already a slight variation in the literature values depending on the sample thickness and data-extraction method used. The values measured in this work most likely appear slightly lower than previously published because the total variance analysis used here allows the thickness of the LCD layers to be obtained _in situ_, which the other published works do not include. It is observed that both \(n_{o}\) and \(n_{e}\) remain relatively flat above 2 THz, which is consistent with the behavior observed below 2 THz [13, 15, 16]. A birefringence of \(\Delta n=\) 0.14-0.18 was observed between \(0.3\,\mathrm{THz}\) and \(4.0\,\mathrm{THz}\) by comparing the real extraordinary and ordinary refractive indices of the E7 LC material. These values are comparable to those previously measured for E7 below 2 THz [14, 15, 16], and are shown in Figure 3(b).
The absorption coefficients for E7 are shown in Figure 3(c) and display linear dichroism, with the extraordinary axis displaying a lower absorption coefficient than the ordinary axis, again consistent with previous measurements at lower THz frequencies. This dichroic effect increases with THz frequency, as shown in Figure 3(d). Errors on these measurements were estimated by combining an analysis of the influence of time-domain truncation on the extracted complex permittivity with the uncertainty in the thickness of the LC layer and repeat measurements.

Fig. 3: (a) The ordinary and extraordinary refractive indices of a planar sample of the LC material, E7, extracted from terahertz time-domain measurements of an LCD with a \(320\,\mathrm{\SIUnitSymbolMicro m}\) LC layer thickness (Device A). The calculated birefringence (\(\Delta n=n_{e}-n_{o}\)) is shown in (b). The corresponding absorption coefficients of the ordinary and extraordinary axes are shown in (c) and the linear dichroism (\(\alpha_{o}-\alpha_{e}\)) is shown in (d). The shaded areas indicate the error on the measurements based on the variability across repeat measurements.

Devices B and C, with 100-\(\mathrm{\SIUnitSymbolMicro m}\) and 13-\(\mathrm{\SIUnitSymbolMicro m}\) LC layer thicknesses respectively, were characterized to analyze the effect of applying a 5-kHz sinusoidal voltage from a signal generator to wires connected to the electrode layers. Initially, the devices were investigated using Polarized Optical Microscopy (POM), which confirmed good-quality alignment, with the PEDOT:PSS electrodes working well over the device area, and the E7 material responding uniformly to an applied electric field. THz-TDS measurements were performed for both LC axes at a range of bias voltage amplitudes (\(V_{\text{rms}}\)). To demonstrate the effect of biasing the LC layer, Fig. 4 shows the relative change in THz transmission of Device B (\(100\,\mathrm{\SIUnitSymbolMicro m}\)) upon biasing for both the extraordinary and ordinary axes, relative to the unbiased device in the respective axis. All data presented in Fig. 4(a) have been windowed to remove the etalons associated with the quartz/air interface reflection approximately \(13\,\mathrm{ps}\) after the main peak, as these reflections will be unaffected by the bias voltage applied to the LC layer. However, the etalons associated with the LC/electrode/quartz interface are dependent on the refractive index of the LC layer, which will vary as a bias is applied across the device. As such, analysis was performed for both the untruncated (including LC/electrode/quartz etalon effects) and truncated (etalon effects removed) data. Figure 4(a) shows the change in the extraordinary transmission through the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD (Device B) as a function of applied bias for both the untruncated (solid lines) and truncated (dashed lines) data. As the bias is ramped up to \(2.8\,\mathrm{V_{rms}}\) the LC layer begins to align, with a clear change in the THz transmission through the device. After this point the LC layer has almost completely aligned with the field direction and further increasing the bias has a negligible effect on the THz transmission of the device.
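To picture why these LC/electrode/quartz etalons move as the LC layer is biased, the fringe spacing (free spectral range) and an idealized lossless etalon transmission can be estimated from the layer geometry alone. This is a back-of-the-envelope sketch; the refractive-index and reflection-coefficient values used here are illustrative assumptions, not fitted device parameters.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def free_spectral_range(n, d):
    """Etalon fringe spacing (Hz) for a layer of index n and thickness d (m)."""
    return C / (2.0 * n * d)

def airy_transmission(freq, n, d, r):
    """Idealized lossless Fabry-Perot transmission of a single layer.

    r is the (assumed) field reflection coefficient at each interface.
    """
    delta = 4.0 * np.pi * n * d * freq / C          # round-trip phase
    F = 4.0 * r**2 / (1.0 - r**2) ** 2              # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2.0) ** 2)

# Example: 100-um LC layer with assumed ordinary/extraordinary-like indices
for n in (1.60, 1.75):
    print(f"n = {n}: FSR ~ {free_spectral_range(n, 100e-6) / 1e9:.0f} GHz")
# Both spacings are ~0.9 THz: the fringe positions shift as the effective LC
# index changes with bias, and the spacing is far larger than the ~15 GHz
# QCL tuning range used in the narrowband measurements below.
```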
Strong oscillations in the untruncated data (solid lines) presented in Fig. 4(a) are attributed to the etalon effects of the LC/electrode/quartz interface, with the etalon peaks' centers, widths, and amplitudes all changing as the effective birefringence of the LC changes (which in turn causes a change in the Fresnel coefficients at the interface). By truncating the data before analysis, it is possible to separate the etalon effects from the response of the LC layer itself. To compare with the narrowband measurements discussed later in this work, the amplitude of the calculated change in THz transmission at 3.4 THz is shown in Fig. 4(b) for the ordinary axis (black down-triangles), and the extraordinary axis, both untruncated (yellow circles) and truncated (green up-triangles). It is important to note that the data in Fig. 4(b) are normalized using the transmission through the unbiased LCD along their respective axes, _i.e._, the (extra)ordinary data are normalized using the transmission through an unbiased reference along the (extra)ordinary axis. Minimal change is seen for the ordinary THz transmission as a function of increasing bias voltage, whereas a large change is seen in the transmission on the extraordinary axis as the voltage bias is applied, stabilizing above 2.8 \(\mathrm{V_{rms}}\) for both the truncated and untruncated data. Errors were estimated based on the variability across repeat measurements. The frequency dependence of the change in THz transmission for the truncated data follows that of the linear dichroism shown in Fig. 3(d). As such, it is reasonable to assume that the change in absorption coefficient of the LC material under applied bias is responsible for the change in THz transmission through the LCD, rather than any birefringent effects. Indeed, the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LC film is too thin compared with the wavelength range of the incident radiation (1000-\(75\,\mathrm{\SIUnitSymbolMicro m}\)) for birefringence to introduce significant phase-retardation between the ordinary and extraordinary components. At \(3.4\,\mathrm{THz}\), the linear dichroism is (\(-3.4\pm 0.8)\,\mathrm{mm}^{-1}\), which would give \(\sim\)30% lower absorption along the ordinary axis than the extraordinary axis of the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD. This is comparable to the amplitudes of the truncated extraordinary-axis data shown in Fig. 4(b). For the untruncated data, a further 10% decrease is seen in the THz transmission as a result of biasing the LC layer, highlighting that interference from the change in Fresnel reflection coefficients at the LC/electrode/quartz interface also plays a substantial part in the modulation of THz radiation through the biased LC layer.

Fig. 4: (a) Terahertz transmission through the extraordinary axis of the \(100\,\mathrm{\SIUnitSymbolMicro m}\) LCD (Device B) for untruncated (solid lines) and truncated (dashed lines) data at different LC bias voltages. The THz transmission through the \(100\,\mathrm{\SIUnitSymbolMicro m}\) LCD at 3.4 THz is shown in (b), for both the ordinary (black down-triangles) and extraordinary (untruncated: yellow circles; truncated: green up-triangles) axes.

Although the 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCD was also investigated in this manner, the response was small compared with the uncertainty of the measurement, meaning that no THz transmission data could be determined reliably from the THz-TDS measurements.
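The attenuation estimates quoted above follow directly from the Beer-Lambert relation, as in the short check below; this is a worked sketch using the dichroism and thickness values from this section, and the comparison with the measured modulation depths is indicative only.

```python
import numpy as np

delta_alpha = 3.4                 # |linear dichroism| at 3.4 THz (1/mm), from Fig. 3(d)
for d_mm in (0.100, 0.013):       # 100-um and 13-um LC layers
    modulation = 1.0 - np.exp(-delta_alpha * d_mm)
    print(f"d = {d_mm * 1000:.0f} um: estimated change in transmission "
          f"~ {modulation * 100:.0f}%")
# d = 100 um: ~29%, consistent with the ~30% truncated-data change above
# d = 13 um:  ~4%,  consistent with the (4 +/- 1)% estimate quoted in the
#                   narrowband analysis below
```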
[46], was selected as a narrowband characterization source, with a total \(\sim\)15-GHz tunability across several emission modes, controlled by varying the bias current applied to the QCL. The total QCL bandwidth is much narrower than the free-spectral range of the etalons in the LCD, allowing us to still treat this as a narrowband measurement technique. To determine the effect of the LCD as a THz modulator, the continuous-wave power output from the QCL was collected and collimated using an off-axis parabolic mirror, before being passed through a wire-grid polarizer (Microtech Instruments) to ensure a linearly polarized THz beam. This linearly polarized beam was optically chopped at 167 Hz and passed through the liquid crystal devices perpendicular to their surfaces, before being detected using a helium-cooled Si-bolometer (QMC Instruments). The bolometer signal was monitored using a lock-in amplifier, which was referenced to the chopper frequency. The LCDs were again biased with a 5-kHz sinusoidal voltage and could be rotated freely within the THz beam. This is shown in Fig. 5. As with the THz-TDS measurements, it was observed that the LCD strongly attenuated the THz radiation, even without a bias applied to the liquid crystal layer. Approximately 99.7% and 99.83% of the incident THz signal was lost through the 100- and 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCDs respectively, as a result of both reflection and absorption of the THz radiation. The greatest contributor to this loss in THz transmission through the LCDs is the fused quartz window material; a single 1-mm-thick fused quartz window was seen to attenuate the THz signal in THz-TDS measurements by 34%, with approximately 11% reflection loss at each air/quartz interface, and the remainder lost through absorption within the quartz window itself. As QCLs are relatively powerful sources of THz radiation, the transmitted THz radiation through the LCDs still gave a large enough transmitted intensity to easily allow the effect of biasing the LC layer to be observed. The bolometer signal was monitored as a function of time, as the bias across the LCD was varied. Measurements were taken with the LC director orientated both parallel (extraordinary axis) and perpendicular (ordinary axis) to the polarization direction of the collimated THz beam, and also as a function of the incident THz power, _i.e._, the bias applied to the QCL device. Figure 6 shows the THz transmission through (a) the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD (Device B) and (b) the 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCD (Device C) for the situation in which the LC director was orientated parallel to the polarization of the collimated THz beam _i.e._, the extraordinary axis. Below a certain LC bias-voltage threshold (\(\approx\)1 V\({}_{\mathrm{rms}}\)), no change is observed in the transmitted THz radiation through both of the LCDs. However, above this threshold, the THz transmission through both devices decreases with increasing bias voltage. A maximum change in the THz transmission _i.e._, a modulation of the THz signal, of 40% and 4.5% is seen for the 100-\(\mathrm{\SIUnitSymbolMicro m}\) and 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCDs respectively. The measured THz transmission values shown in Fig. 6(a) for the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD are consistent with the values determined from THz-TDS in Fig. 4. A slight transient increase in the transmission can be seen at some biases for the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD. This is likely explained by the etalon shifts that occur as the LC material responds to the applied bias. It can be seen from Fig. 4(a) that these etalon oscillations can lead to transmission greater than the unbiased value. 
However, the observation of such an effect could be complicated by the switching speed within the LC layer (which is inversely proportional to layer thickness), the result of multiple domains forming within the LC layer, and/or the sampling speed of the measurements themselves, which could reduce the appearance of an effect that occurred more quickly than the smallest time step. Using the linear dichroism values calculated for the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD, it is possible to estimate the maximum attenuation of the THz radiation through the 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCD at 3.4 THz. This gives a value of (4 \(\pm\) 1)%, which is comparable with the values measured through experiment and shown in Fig. 6(b). When the LC director was perpendicular to the polarization direction of the THz radiation (_i.e._, the ordinary axis) there was no change in the THz transmission through the device for increasing bias voltage, consistent with the THz transmission data shown in Fig. 4(b) for the ordinary axis. The speed of this modulation depends on both the bias voltage applied to, and the thickness of, the LC layer with minimum fall times of (0.9 \(\pm\) 0.1) s above a 2.5 V\({}_{\mathrm{rms}}\) bias, and (5.0 \(\pm\) 0.2) s at a 14 V\({}_{\mathrm{rms}}\) bias for the 13-\(\mathrm{\SIUnitSymbolMicro m}\) and 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCDs respectively. Rise times (data not shown) were also observed to increase with the modulation depth, on the order of a few seconds for the 13-\(\mathrm{\SIUnitSymbolMicro m}\) device and upwards of 400 s for the 100-\(\mathrm{\SIUnitSymbolMicro m}\) device at maximum biasing conditions (7 V\({}_{\mathrm{rms}}\) and 14 V\({}_{\mathrm{rms}}\) respectively). The modulation depths and speeds were found to be independent of the THz source power and, as such, the modulation is considered to be independent of frequency over the \(\sim\)15 GHz tuning range of the QCL, as expected from the relatively flat absorption coefficients at this range of frequencies. Fig. 5: Schematic diagram of the experimental setup used for narrowband characterization of the liquid-crystal devices. The output from a 3.4-THz QCL, mounted in a helium-cooled cryostat, was collected and collimated with an off-axis parabolic mirror and passed through a wire-grid polarizer to ensure linearity of the polarized terahertz beam. This linearly polarized beam was optically chopped at 167 Hz and subsequently passed through the liquid-crystal devices that were biased with a 5 kHz sinusoidal voltage. The transmitted terahertz radiation was detected by a helium-cooled Si-bolometer, which was referenced at the chopper frequency. ## IV Conclusions and further work The optical properties of the commercially available liquid crystal material E7 have been characterized up to \(4.0\,\mathrm{THz}\), with a birefringence of 0.14-0.18 for an unbiased device. It was determined that the change in THz transmission through the LC layer could be explained by a combination of linear dichroism and etalon interference from the LC/electrode/quartz interface, both of which have a significant influence on the LCD acting as a THz modulator. These liquid crystal devices have also been used to modulate the output of a THz QCL to depths of up to 40%, determined by the LC layer thickness and the bias applied. The authors note that narrowband measurements at 3.4 THz sit at the bottom of an etalon minimum, strongly enhancing the modulation effect experienced at that frequency. 
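The thickness dependence of these dichroism-driven modulation depths can be checked with a simple Beer-Lambert estimate. The short sketch below is our illustration rather than part of the original analysis; it takes the measured linear dichroism of roughly \(3.4\,\mathrm{mm}^{-1}\) at 3.4 THz and the two cell thicknesses, and reproduces the \(\sim\)30% and \(\sim\)4% absorption-only figures quoted earlier (the extra modulation seen in the untruncated data comes from etalon effects).

```python
import numpy as np

def modulation_depth(delta_alpha_per_mm, thickness_mm):
    """Beer-Lambert estimate of the absorption-only modulation depth when the
    effective absorption coefficient changes by delta_alpha over the cell."""
    return 1.0 - np.exp(-delta_alpha_per_mm * thickness_mm)

delta_alpha = 3.4                 # mm^-1, measured linear dichroism at 3.4 THz
for d_um in (100, 13):            # LC layer thicknesses of Devices B and C
    depth = modulation_depth(delta_alpha, d_um * 1e-3)
    print(f"{d_um:4d} um cell: ~{100 * depth:.0f}% absorption-only modulation")
# -> ~29% for the 100-um cell and ~4% for the 13-um cell
```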
By carefully choosing the LC layer thickness to control the etalon reflection timings, narrowband operation at a frequency of interest could be enhanced, whereas the use of index-matched polymers to reduce the etalon effects could allow for more broadband devices using this LCD technology. Whilst power modulation offers immediate potential, liquid crystals that are birefringent at THz frequencies present opportunities for the development of more advanced THz adaptive optical components including variable wave plates, phase shifters and wavefront modulators. By using a more suitable window material (reduced reflection/absorption at THz-frequencies), anti-reflective coatings, and a liquid crystal material with higher birefringence and linear dichroism (improving both modulation depth and requiring thinner devices, improving modulation speeds), it is expected that significant improvements can be made on these original devices, improving their viability for THz optical components in the future. ## Data availability statement Data underlying the results presented in this paper are openly available from the University of Leeds Data Repository [47]. ## Acknowledgments The authors would like to acknowledge financial support from UK Research & Innovation (Future Leader Fellowship MR/S016929/1), the UK Centre for Earth Observation Instrumentation (Fast Track Contract RP10G0435A03), the UK Space Agency (Pathfinder Contract NSTP3-PF3-078), and the Engineering and Physical Sciences Research Council (EP/P007449/1 and EP/P027687/1). For the purpose of open access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission. ## Author declarations ### _Conflict of interest_ The authors have no conflicts to disclose. ### _Author contributions_ A. Dunn: writing -- original draft (lead); data curation (lead); investigation (lead); formal analysis (equal); methodology (equal); validation (equal); visualization (lead). Z. Zhang: investigation (equal); methodology (equal); resources (equal). M. D. Horbury: investigation (supporting); methodology (supporting). E. V. Nuttall: investigation (supporting). Y. Han: investigation (supporting). M. Salih: investigation (supporting); resources (supporting). L. Li: investigation (supporting); resources (lead). A. Bond: conceptualization (supporting); methodology (supporting). E. Saleh: funding acquisition (supporting); conceptualization (supporting). R. Harris: funding acquisition (supporting); conceptualization (supporting). N. Daghestani: funding acquisition (supporting). D. Pardo: Fig. 6: THz transmission through the extraordinary axis of (a) the 100-\(\mathrm{\SIUnitSymbolMicro m}\) LCD (Device B) and (b) the 13-\(\mathrm{\SIUnitSymbolMicro m}\) LCD (Device C) as a function of the \(V_{\mathrm{rms}}\) voltage applied. The liquid crystal director was orientated parallel to the polarization direction of the collimated THz beam. funding acquisition (supporting). B. N. Ellison: funding acquisition (supporting). A. D. Burnett: investigation (supporting); funding acquisition (supporting); conceptualization (supporting); formal analysis (equal); methodology (equal); validation (equal); supervision (supporting); visualization (supporting); writing -- review and editing (equal). H. F. Gleeson: funding acquisition (supporting); conceptualization (supporting); methodology (equal); supervision (supporting); writing -- review and editing (equal). A. 
Valavanis: funding acquisition (lead); investigation (supporting); conceptualization (lead); methodology (equal); project administration (lead); supervision (lead); writing -- review and editing (equal).
2307.10030
Learned Proximal Operator for Solving Seismic Deconvolution Problem
Seismic deconvolution is an essential step in seismic data processing that aims to extract layer information from noisy observed traces. In general, this is an ill-posed problem with non-unique solutions. Due to the sparse nature of the reflectivity sequence, spike-promoting regularizers such as the $\ell_1$-norm are frequently used. They either require rigorous coefficient tuning or strong assumptions about reflectivity, such as assuming reflectivity as sparse signals with known sparsity levels and zero-mean Gaussian noise with known noise levels. To overcome the limitations of traditional regularizers, learning-based regularizers are proposed in the recent past. This paper proposes a Learned Proximal operator for Seismic Deconvolution (LP4SD), which leverages a neural network to learn the proximal operator of a regularizer. LP4SD is trained in a loop unrolled manner and is capable of learning complicated structures from the training data. It is worth mentioning that the network is trained with synthetic data and evaluated on both synthetic and real data. LP4SD is shown to generate better reconstruction results in terms of three different metrics as compared to learning a direct inverse.
Peimeng Guan, Naveed Iqbal, Mark A. Davenport, Mudassir Masood
2023-07-19T15:15:17Z
http://arxiv.org/abs/2307.10030v1
# Learned Proximal Operator for ###### Abstract Seismic deconvolution is an essential step in seismic data processing that aims to extract layer information from noisy observed traces. In general, this is an ill-posed problem with non-unique solutions. Due to the sparse nature of the reflectivity sequence, spike-promoting regularizers such as the \(\ell_{1}\)-norm are frequently used. They either require rigorous coefficient tuning or strong assumptions about reflectivity, such as assuming reflectivity as sparse signals with known sparsity levels and zero-mean Gaussian noise with known noise levels. To overcome the limitations of traditional regularizers, learning-based regularizers are proposed in the recent past. This paper proposes a Learned Proximal operator for Seismic Deconvolution (LP4SD), which leverages a neural network to learn the proximal operator of a regularizer. LP4SD is trained in a loop unrolled manner and is capable of learning complicated structures from the training data. It is worth mentioning that the network is trained with synthetic data and evaluated on both synthetic and real data. LP4SD is shown to generate better reconstruction results in terms of three different metrics as compared to learning a direct inverse. ## 1 Introduction Extracting subsurface layer reflectivities from acquired data is a key goal in seismic data processing. In a geophysical survey, a source wave is generated on the earth's surface using a vibroseis truck and the reflected signals are collected by geophones after the wave reflects back from the boundaries of the layers of the earth. The collected data is known as the _seismic trace_ or _trace_. Reflectivity measures the ratio of impedance changes between neighboring earth layers, and the reflectivity series is a list of coefficients collected at varying depths in the earth. This list is ideally a series consisting mostly of zeros, except for depth locations at the layer boundaries. Therefore, the reflectivity coefficients are modeled as a _sparse_ signal, with the number of non-zero values denoted as the sequence's _sparsity_[1]. Depending on the layer depth and the sampling frequency, the sparsity level might be different from case to case and is usually unknown in advance. This paper is concerned with estimating the reflectivity series through a process known as deconvolution. In attempting to estimate the reflectivity sequence, the source is modeled as a wavelet, and the trace is modeled as the convolution of the source wavelet with the reflectivity sequence [1, 2]. Thus reflectivity sequence can be estimated through the deconvolution process. Deconvolution is an important step in the exploration of seismology [3, 4], as it increases the vertical/depth resolution. However, deconvolution is an ill-posed problem, with non-unique solutions. This behavior stems from the fact that 1) the noise in the measurement process is usually unknown, and 2) the wavelet acts as a band-pass filter to the reflectivity sequence thus losing important high-frequency components in observed traces. Therefore, the peaks in an observed trace might come from a superposition of multiple closely-spaced boundaries. The goal of this paper is to solve the seismic trace deconvolution problem using an amalgamation of optimization- and data-driven methods. ### Previous Works Classical optimization approaches like Iterative Soft-thresholding Algorithm (ISTA) [5] and Fast Soft-thresholding Algorithm (FISTA) [6] have been used for reflectivity estimation. 
These algorithms update the estimate of reflectivity by iteratively applying a proximal operator for \(\ell_{1}\) regularization with a specific sparsity level. This always requires knowing the sparsity level in advance or parameter tuning, and it usually takes hundreds of iterations until the optimization problem converges. By knowing the sparsity level, ISTA can find the location of reflectivities, however, inaccurately reflects the magnitude. Machine learning has shown its success in learning unknown functions from training data. Works like [7, 8, 9, 10] use neural networks to learn the direct inverse that maps from the observed trace to the reflectivity sequence by knowing that the reflectivity is sparse. Sparse-promoting neural networks [11] were also developed in this context and can be applied to data with multiple traces. In particular, U-Net promotes sparsity by finding a compressed representation of traces. However, these methods do not use the physical forward operators, but learn a black-box inverse mapping, which is lack of interpretability. In addition, training a robust network to unseen sparsity requires a large amount of data with various sparsity levels. To increase interpretability, Loop Unrolling (LU) algorithms [12] unfold the optimization steps into a sequence of weight-tying neural networks. Such algorithms have shown success in wide applications because they utilize the forward operator as part of the recovery process [12, 13]. Some of these applications include medical imaging reconstruction [14, 15, 16], image deblurring [14, 17, 18], compressive sensing [14, 19, 20], etc. Many other variations of LU algorithms use more advanced network architectures for learned regularizers that make the overall network more powerful. For example, a transformer is trained in [21] as the regularizer to improve the performance for Magnetic Resonance Imaging reconstruction, and [14] extends the LU iterations to a potentially infinite number of layers until converging to a fixed point solution, so one can control the output quality based on needs in the evaluation stage. Many of them achieve state-of-the-art results in different tasks. The authors in [22, 23, 24] adopt the idea of LU for sparse recovery problems with specific designs for the proximal operators. Considering the reflectivity sequence as a sparse signal, [25] unfolds the iterative soft-thresholding algorithm and replaces the proximal operator with a minimax-concave penalty to retrieve layer reflectivities. However, this setup limits its direct application on 2D data. This network is trained only on 1D synthetic data, and is evaluated trace-by-trace to obtain 2D recovery. Hence useful information from neighboring traces is not utilized. In addition, similar to ISTA and FISTA, [25] recognizes the location of non-zeros but inaccurately retrieves the magnitudes due to the restricted network setup for the proximal operator. ### Our Contribution This work proposes a novel approach named Learned Proximal operator for Seismic Deconvolution (LP4SD), a loop-unrolled architecture for learning regularizers in seismic deconvolution. Unlike networks that learn Figure 1: Convolutional model of seismic deconvolution the direct inverse (i.e. U-Net), the proposed network breaks down the task of learning deconvolution into \(K\) smaller/easier tasks of learning proximal operators, using the knowledge of the forward operator. 
Unlike classical optimization approaches that impose potentially erroneous predetermined regularizers in the objective function, LP4SD learns the proximal operator from data to avoid making incorrect assumptions of the reflectivity and tedious hyperparameter tuning. This fully data-driven proximal operator also allows reconstruction of multiple traces simultaneously. This paper employs LP4SD with a known sampling frequency and source wavelet. Several constraints are applied to stabilize the process: 1) the trainable step-size in loop unrolled iterations is wrapped in a range function to avoid explicit hyper-parameter tuning and to ensure stable convergence, 2) the measurement is passed as a direct input to the network together with intermediate reconstruction to efficiently correct the noise and artifacts. To our knowledge, this is the first paper introducing LU with a generic data-driven regularizer in solving seismic deconvolution problems. In the experiments, the network is trained on synthetic data and evaluated in both synthetic and real data. We show that LP4SD outperforms the sparsity-promoting U-Net with less number of layers in both single trace and multiple traces cases, and is more robust to unknown sparsity and noise levels. The rest of the paper is organized as follows. Section 2 states the seismic deconvolution problem and its mathematical model. Section 3 overviews the optimization program for recovering reflectivity, then introduces the optimization-guided neural network and how to train this network. Section 4 presents the experiment results on synthetic data, baseline model (Marmousi2), and real data. Finally, Section 5 concludes the paper. ## 2 Problem Description The objective of seismic deconvolution is to reconstruct the reflectivity sequence from noisy traces. The received trace \(\mathbf{y}\in\mathbb{R}^{n}\) (\(n\) denotes the number of time samples) is modeled as \[\mathbf{y}=\mathbf{w}\ast\mathbf{x}+\mathbf{\epsilon}, \tag{1}\] where \(\mathbf{w}\in\mathbb{R}^{d}\) represents the source signal wavelet, \(\mathbf{x}\in\mathbb{R}^{n}\) is the reflectivity sequence and \(\ast\) denotes the convolution operation. Further, \(\mathbf{\epsilon}\in\mathbb{R}^{n}\) represents the unknown noise. Generally, \(d\ll n\) and \(\mathbf{w}\) are assumed to be known. The convolutional model of (1) is illustrated in Fig. 1 for clarity. Note that convolution is a linear operation and thus (1) can be written as \[\mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{\epsilon}, \tag{2}\] where \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the convolutional matrix and follows the Toeplitz structure formed using \(\mathbf{w}\). The columns of \(\mathbf{A}\) can be formed sequentially by shifting the vector \(\mathbf{w}\) by one time-step at a time. Figure 2 illustrates this process, where the entries in white are zero-paddings. \(\mathbf{A}\) is also called the forward operator of the seismic deconvolution problem. An estimate of the reflectivity sequence \(\hat{\mathbf{x}}\) can be obtained by solving the following optimization problem, \[\hat{\mathbf{x}}=\min_{\mathbf{x}}\frac{1}{2}\|\mathbf{y}-A\mathbf{x}\|_{2}^{2}+\gamma\,r(\mathbf{ x}). \tag{3}\] Figure 2: Generating convolutional matrix \(\mathbf{A}\) from wavelet \(\mathbf{w}\), shaded entries represent nonzero values. Here, \(\|\mathbf{y}-A\mathbf{x}\|_{2}^{2}\) penalizes the data misfit using current estimate of \(\mathbf{x}\). Furthermore, \(r:\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}\) is a regularization function. 
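As a concrete illustration of the forward model in (1)-(2), the short sketch below assembles the Toeplitz operator \(\mathbf{A}\) column by column as in Fig. 2 and synthesizes a noisy trace. It is our sketch, not code from the paper: the 40 Hz Ricker wavelet, 500 Hz sampling, and 352-sample trace length are borrowed from the synthetic setup described later in Section 4.2, while the toy sparse reflectivity, the wavelet support, and the centre-aligned column convention are assumptions.

```python
import numpy as np

def ricker_wavelet(f0, dt, half_len=40):
    """Ricker (Mexican-hat) wavelet with peak frequency f0 [Hz]."""
    t = np.arange(-half_len, half_len + 1) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def convolution_matrix(w, n):
    """Toeplitz forward operator A (n x n): column i holds the wavelet
    centred on sample i, truncated at the trace edges (cf. Fig. 2)."""
    A = np.zeros((n, n))
    c = len(w) // 2
    for i in range(n):
        lo, hi = i - c, i - c + len(w)
        A[max(0, lo):min(n, hi), i] = w[max(0, -lo):len(w) - max(0, hi - n)]
    return A

rng = np.random.default_rng(0)
n, dt = 352, 1.0 / 500.0                    # 352 samples at 500 Hz
w = ricker_wavelet(40.0, dt)                # 40 Hz Ricker wavelet
A = convolution_matrix(w, n)

x = np.zeros(n)                             # toy sparse reflectivity
spikes = rng.choice(n, size=12, replace=False)
x[spikes] = rng.uniform(-1.0, 1.0, size=12)

clean = A @ x                               # noiseless trace
noise = rng.normal(size=n)
snr_db = 20.0                               # target signal-to-noise ratio
noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
y = clean + noise                           # observed trace y = Ax + eps, Eq. (2)
```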
The choice of the \(r\) depends on the prior beliefs of the underlying signal \(\mathbf{x}\) and the computational feasibility. For example, \(\ell_{2}\)-norm encourages minimum norm solutions, while \(\ell_{0}\)-norm encourages sparse solutions [26]. In addition, \(\gamma\) is the regularization hyper-parameter that is usually well-tuned to balance the data misfit and the regularization term. The problem in (3) can be solved via iterative optimization methods, such as gradient descent. The gradient descent method iteratively updates the estimate by taking a step in the direction of the negative gradient of the objective function. For (3), this update takes the form \[\hat{\mathbf{x}}_{k+1}=\hat{\mathbf{x}}_{k}+\eta\mathbf{A}^{\top}(\mathbf{y}-\mathbf{A}\hat{\mathbf{x} }_{k})-\eta\nabla r(\hat{\mathbf{x}}_{k}), \tag{4}\] for \(k=1,2,3,...\), where \(\eta>0\) represents a constant step-size for all \(k\). Note that the step-size can vary for each \(k\), but this work only focuses on the constant step-size scenario. While Gradient Descent is an attractive algorithm, it may diverge when the regularization function is non-differentiable. An alternative to overcome this is the proximal gradient method, where in each iteration, the proximal operator of \(r\) is applied to the gradient update of the data misfit term, as shown below, \[\hat{\mathbf{x}}_{k+1}=pro\alpha_{\eta,r}(\hat{\mathbf{x}}_{k}+\eta\mathbf{A}^{\top}(\mathbf{y }-\mathbf{A}\hat{\mathbf{x}}_{k})). \tag{5}\] The proximal operator enforces the structure that the regularization function \(r\) attempts to encourage. For simple regularizers like \(\ell_{1}\) and \(\ell_{2}\), the proximal operators have closed-form solutions but for general regularizers, closed-form solutions may not exist. Although in seismic deconvolution problems, the reflectivity sequence \(\mathbf{x}\) is always treated as a sparse signal, its sparsity, determined by the earth structure as well as the sampling frequency, is unknown in advance. In particular, a high sampling frequency of traces is synonymous to acquiring a higher number of samples along the vertical direction. A direct consequence of this is to have a higher number of zeros in the estimated reflectivity and hence a higher sparsity. Similarly, low sampling rates are translated into a less sparse reflectivity. Together with the unknown measurement noise \(\mathbf{\epsilon}\), we want a general regularizer that can not only capture complicated structures of \(\mathbf{x}\) but its proximal operator can also be easily solved. ## 3 Optimization-guided Neural Networks This section will first overview important neural network concepts and introduce the proposed optimization-guided neural network that can (i) relax the prior assumption about the reflectivity sequences and noise, (ii) learn the complicated structures of \(\mathbf{x}\) from training examples, and (iii) use the knowledge of the forward operator. Figure 4: Loop Unrolling Gradient Descent Network, where \(-\eta\nabla r(\cdot)\) is replaced with a neural network Figure 3: Architecture of a Convolutional Neural Network ### Neural Network Overview Neural networks can learn complex structures from examples. It is composed of a sequence of linear and non-linear layers with trainable parameters. A loss is calculated between the ground-truth data and the output of the neural network, and the network parameters are updated by calculating the Jacobian loss with respect to the input. 
Convolutional Neural Networks (CNN) [27] are commonly used in image processing tasks, due to their effectiveness in obtaining local correlation from neighboring pixels. There are many variations of CNN, but a classic architecture contains a sequence of convolutional layers followed by normalization and nonlinear activation layers. In each convolutional layer, a convolutional kernel is applied to compute the spatial correlation between the kernel and the input. Training CNN will update the parameters in kernels. BatchNorm (BN) and GroupNorm (GN) are commonly used as normalization layers, which normalize the output from the previous layer to alleviate the internal covariate shift [28], thus allowing stable training of deeper neural networks. Activation layers introduce non-linearity to the neural network, where some commonly used activation functions are ReLU, LeakyReLU, Sigmoid and etc. Figure 3 shows the architecture of a simple CNN. A larger kernel size (\(\kappa\)) of a convolutional kernel can capture information from a broader range, but it means more parameters to train. Thus the kernel size is usually kept small due to memory constraints. This paper also examines the effect of kernel size in seismic deconvolution tasks. ### Learned Proximal Operator for Seismic Deconvolution Although in classical deconvolution algorithms, the regularizers always use \(\ell_{0}\)- and \(\ell_{1}\)-norm, they oversimplify the model for reflectivity sequence with the presents of unknown noise and sparsity level. To avoid simple assumptions on \(r\), we can instead learn the regularization update using a neural network. In particular, the gradient update of \(r\) in (4) and the proximal operator in (5) can be replaced by neural networks, and the entire optimization iterations in Gradient Descent (4) and Proximal Gradient Descent (5) can be unfolded into a sequence of weight-tying neural network blocks, which refers to Loop Unrolling (LU) [12]. One optimization iteration in the networks for gradient descent and proximal gradient descent is illustrated respectively in Figures 4 and 5. Notice that \(\hat{\mathbf{x}}_{k}+\eta\mathbf{A}^{\top}(\mathbf{y}-\mathbf{A}\hat{\mathbf{x}}_{k})\) incorporate the knowledge of the forward operator \(\mathbf{A}\), but is independent of the regularizer. This paper only focuses on the proximal gradient method for seismic deconvolution. Because the underlying reflectivity sequence is compressible meaning most reflectivity coefficients are close to zero, \(\ell_{1}\)-norm is used as the regularization function. In the gradient descent version of LU, the network tries to learn the gradient of \(\ell_{1}\)-norm which is discontinuous. Learning discontinuity is, in general, a hard task for a feedforward neural network such as CNN, hence, the proximal gradient method is the primary focus. The gradient of the least-squares term in (3) incorporates the forward operator and indicates the update direction to match the noisy trace, and the proximal operator is a correction term that learns complicated structures of \(\mathbf{x}\). Thus the proximal operator can also be viewed as a denoising process. Different network architectures can serve as a denoiser, but this paper illustrates the idea using only Convolutional Neural Networks. We call it the Learned Proximal operator for Seismic Deconvolution (LP4SD) which is illustrated in Figure 6, where a 5-layer CNN with 64 kernels in hidden layers is used. The detailed layer structures are listed in Appendix. 
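The learned proximal block itself can be written in a few lines of PyTorch. The snippet below is a minimal sketch mirroring the five-convolution, 64-channel design with GroupNorm and ReLU summarized in Table 1; the two-channel input (gradient-step estimate concatenated with the trace \(\mathbf{y}\), as in Fig. 6) and the GroupNorm group count are our assumptions where the text leaves details implicit.

```python
import torch
import torch.nn as nn

class ProxCNN(nn.Module):
    """Sketch of the learned proximal operator: five 1-D convolutions,
    with GroupNorm + ReLU on the hidden layers (cf. Table 1)."""
    def __init__(self, kernel_size=5, hidden=64, groups=8):
        super().__init__()
        pad = kernel_size // 2
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size, stride=1, padding=pad),
                nn.GroupNorm(groups if c_out >= groups else 1, c_out),
                nn.ReLU(),
            )
        self.net = nn.Sequential(
            block(2, hidden),                 # input: [gradient-step estimate, y]
            block(hidden, hidden),
            block(hidden, hidden),
            block(hidden, 1),
            nn.Conv1d(1, 1, kernel_size=1),   # final 1x1 convolution (layer L5)
        )

    def forward(self, grad_step, y):
        # grad_step, y: (batch, n) -> stack into a 2-channel signal
        return self.net(torch.stack([grad_step, y], dim=1)).squeeze(1)
```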
At each loop unrolling iteration \(k=1,...,K\), the estimate \(\hat{\mathbf{x}}_{k+1}\) is updated as \[\hat{\mathbf{x}}_{k+1}=CNN(\hat{\mathbf{x}}_{k}+\eta\mathbf{A}^{\top}(\mathbf{y}-\mathbf{A}\hat{ \mathbf{x}}_{k}),\mathbf{y}), \tag{6}\] Notice that the measurement \(\mathbf{y}\) is fed to the CNN to stabilize training together with the gradient step in the data domain. Since LP4SD is trained end-to-end (will discuss more detail in Section 3.3), there is no Figure 5: Loop Unrolling Proximal Gradient Descent Network, where \(prox(\cdot)\) is replaced with a Neural Network explicit constraint on intermediate results \(\hat{\mathbf{x}}_{k}\), thus \(\mathbf{y}\) can efficiently correct the reconstruction errors at the intermediate stages. ### Training Process Given the forward operator \(\mathbf{A}\) and a pair of ground truth reflectivity and corrupted trace \((\mathbf{x},\mathbf{y})\), we initialize \(\hat{\mathbf{x}}_{0}\) with a signal in the same domain as \(\mathbf{x}\). Some common choices of \(\hat{\mathbf{x}}_{0}\) are \(\mathbf{A}^{\top}\mathbf{y}\) when \(\mathbf{x}\) and \(\mathbf{y}\) are from different domains, and \(\mathbf{y}\) when they are from the same domain. Then after \(K\) loop unrolled iterations, compute the Mean-Squared Error (MSE) between \(\hat{\mathbf{x}}_{K}\) and \(\mathbf{x}\) and backpropagate using the MSE loss, which is referred as end-to-end training. Notice that the choice of step-size \(\eta\) affects the convergence in the proximal gradient method. In LP4SD, the step-size \(\eta\) is registered as a network parameter along training to avoid explicit hyper-parameter tuning for \(\eta\), but imposing bounds on \(\eta\) is essential. Intuitively, a small step-size results in slow convergence, and a large step-size diverges the process. The range of the step-size that leads to a proper convergence is well-studied for convex optimization [], which always imposes further assumptions on the objective function, such as strong convexity and smoothness. The range of the step-size always depends on the parameters of those assumptions. In LP4SD, since the proximal operator is an arbitrary non-convex neural network, the classical theorems do not apply anymore, but idea of ensuring a range is still necessary. Therefore, based on the sensitivity analysis, we heuristically assign an initial value to \(\eta\) and constrain the network to update \(\eta\) within a range of \(0\) and \(0.15\) by wrapping \(\eta\) with a Sigmoid function. So the constrained step-size \(s\) becomes \[s(\eta)=0.15\,\frac{1}{1+e^{-\eta}}. \tag{7}\] It empirically ensures each proximal gradient step updates properly and the input at each loop unrolling iteration to update in a reasonable range. Figure 6: Overall Architecture of LU4SD, where the outer box in gray is an update step at iteration \(k\), and the inner box in yellow is the learned proximal operator by a 5-layer CNN. The input to the CNN is a concatenation of \(\hat{\mathbf{x}}_{k}+\eta A^{\top}(\mathbf{y}-A\hat{\mathbf{x}}_{k})\) and \(\mathbf{y}\). Figure 7: 2D synthetic reconstruction of selected trace using U-Net, LP4SD with kernel size of 5 and 7. The first row compares the reconstructed reflectivity sequences, \(\hat{\mathbf{x}}\), and the second row shows the residual, \(\hat{\mathbf{x}}-\mathbf{x}\). Experiments and Discussions In this work, the experiments to assess the efficacy of the proposed approach for the estimation of reflectivity sequence in various scenarios are presented. 
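Before turning to the experimental details, the unrolled update of Eqs. (6)-(7) can be summarized in a short sketch. It is ours, not the authors' code: the initialization with \(\mathbf{y}\), the iteration count, and the pairing with a proximal network such as the one sketched above are assumptions consistent with, but not dictated by, Section 3.3.

```python
import torch
import torch.nn as nn

class LP4SDSketch(nn.Module):
    """Unrolled proximal-gradient sketch of Eqs. (6)-(7): K weight-tied
    iterations of a gradient step on ||y - Ax||^2 followed by a learned
    proximal CNN, with the step size kept in (0, 0.15) via a sigmoid."""
    def __init__(self, A, prox_cnn, num_iters=10, eta_init=0.0):
        super().__init__()
        self.register_buffer("A", A)               # (n, n) forward operator
        self.prox = prox_cnn                       # shared across all iterations
        self.eta = nn.Parameter(torch.tensor(eta_init))
        self.num_iters = num_iters

    def step_size(self):
        return 0.15 * torch.sigmoid(self.eta)      # Eq. (7)

    def forward(self, y):
        x = y.clone()                              # x0 = y (trace and reflectivity share a domain)
        s = self.step_size()
        for _ in range(self.num_iters):
            residual = y - x @ self.A.T            # y - A x          (batch, n)
            grad_step = x + s * (residual @ self.A)  # x + s A^T (y - A x)
            x = self.prox(grad_step, y)            # learned proximal correction
        return x
```

Training then compares \(\hat{\mathbf{x}}_{K}\) with the ground-truth reflectivity under an MSE loss and backpropagates end to end, as described above.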
The experiments implement the proposed network with kernel sizes of 5 and 7 in the CNN, denoted as LP4SD (\(\kappa=5\)) and LP4SD (\(\kappa=7\)) respectively. Table 1 shows the layer details for CNN in LP4SD with a kernel size of 5. We denote \(C_{in}\) as the input channels, \(C_{out}\) as the output channels, \(\kappa\) as the kernel size, \(s\) as the stride, and \(pad\) as the padding number. GroupNorm (GN) and ReLU activation are appended after each hidden convolutional layer. Padding = 3 when \(\kappa=7\) to match the dimension. Then, the proposed networks are compared to the sparse-promoting U-Net. U-Net is a deep convolutional network with a narrow "neck" in the middle, where the input features are transformed onto a lower dimensional space thus helps to promote sparsity of the output. The detailed structure of U-Net is presented in Appendix. We consider both single-trace (1D) and multiple-trace (2D) reconstruction scenarios in this work. In 1D, a trace \(\mathbf{y}\in\mathbb{R}^{n}\) follows the convolutional model in Eq (1). In 2D, the received trace \(\mathbf{Y}\in\mathbb{R}^{n\times m}\) and the reflectivity to reconstruct \(\mathbf{X}\in\mathbb{R}^{n\times m}\) are both matrices, where \(n\) is the number of time samples and \(m\) is the number of traces collected. Thus, Eq (1) becomes \[\mathbf{Y}=\mathbf{A}\mathbf{X}+\mathbf{\epsilon} \tag{8}\] and the proximal update rule becomes \[\mathbf{\hat{X}}_{k+1}=prox_{\eta,r}(\mathbf{\hat{X}}_{k}+\eta\mathbf{A}^{\top}(\mathbf{Y}- \mathbf{A}\mathbf{\hat{X}}_{k})) \tag{9}\] The convolutional layers in LP4SD are extended to 2D in multiple-trace scenarios. For example, conv1d(\(C_{in}\):64, \(C_{out}\):64, \(\kappa\):7, \(s\):1, \(pad\):3) becomes conv2d(\(C_{in}\):64, \(C_{out}\):64, \(\kappa\):(7,7), \(s\):(1,1), \(pad\):(3,3)). Due to the narrow-neck design of the U-Net, for 2D input with higher input dimension, a deeper network is required to narrow the "neck" and to further promote sparsity. The first three columns in Table 2 summarize the methods to compare in various scenarios. All networks are trained using synthetic data, then evaluated using the benchmark synthetic Marmousi2 model, and finally evaluated using real data. ### Evaluation Criteria The performance of the proposed method is evaluated using three different metrics. These metrics measure the similarities between the true and estimated reflectivities. The evaluation metrics and their formulas are listed below. **Mean-squared error (MSE)** \[MSE(\hat{\mathbf{x}},\mathbf{x})=\|\hat{\mathbf{x}}-\mathbf{x}\|_{2}^{2}, \tag{10}\] **Correlation coefficient** \[\gamma(\hat{\mathbf{x}},\mathbf{x})=\frac{\hat{\mathbf{x}}^{\top}\mathbf{x}}{\|\hat{\mathbf{x}} \|_{2}\|\mathbf{x}\|_{2}}, \tag{11}\] \begin{table} \begin{tabular}{c c} Layer & Details \\ \hline L1 & conv(\(C_{in}\):64, \(K_{out}\):64, \(ks\):5, \(s\):1, \(pad\):2) + GN + ReLU \\ L2 & conv(\(C_{in}\):64, \(C_{out}\):64, \(ks\):5, \(s\):1, \(pad\):2) + GN + ReLU \\ L3 & conv(\(C_{in}\):64, \(C_{out}\):64, \(ks\):5, \(s\):1, \(pad\):2) + GN + ReLU \\ L4 & conv(\(C_{in}\):64, \(C_{out}\):1, \(ks\):5, \(s\):1, \(pad\):2) + GN + ReLU \\ L5 & conv(\(C_{in}\):1, \(C_{out}\):1, \(ks\):1, \(s\):1, \(pad\):0) \\ \end{tabular} \end{table} Table 1: Architecture of LP4SD \(\kappa=5\). When \(\kappa=7\), change padding to 3 to match the dimension. 
**Reconstruction quality** \[Q(\hat{\mathbf{x}},\mathbf{x})=10\log_{10}\left(\frac{||\mathbf{x}||_{2}^{2}}{\left\|\mathbf{x}- \hat{\mathbf{x}}(\hat{\mathbf{x}}^{\top}\mathbf{x})/||\hat{\mathbf{x}}||_{2}^{2}\right\|_{2}^{2 }}\right). \tag{12}\] ### Training on Synthetic Data All networks are trained on the same set of synthetic data, which is generated following the procedure in [29]. 2D data is generated at \(m=352\) traces per shot, collecting \(n=352\) data points per trace along the Earth's depth at a sampling frequency of 500 Hz. Notice that \(m\) and \(n\) do not need to be the same. We use 40 Hz Ricker Wavelet to generate the observed trace \(\mathbf{y}\) by following the model in (1). Additive white Gaussian noise corresponding to various signal-to-noise ratios (SNR) is used to generate the data. SNR is defined as \[\text{SNR}=10\log_{10}\left(\frac{\|\mathbf{w}*\mathbf{x}\|_{2}^{2}}{\|\mathbf{\epsilon} \|_{2}^{2}}\right), \tag{13}\] where \(\|\mathbf{w}*\mathbf{x}\|_{2}^{2}\) is the energy of the clean signal and \(\|\mathbf{\epsilon}\|_{2}^{2}\) is the energy of noise. 1D data are selected from a random trace in a shot from 2D data. In the noiseless case, 8,000 samples are generated for training and 1,000 for testing, for other noise levels, 20,000 samples are generated for training and 2,000 for testing. All traces are normalized \(\mathbf{y}/\max(|\mathbf{y}|)\) such that \(\max|\mathbf{y}|=1\). Notice that the mean of \(\mathbf{y}\) is not subtracted to preserve the magnitude of zeros in reflectivity. The magnification factor, \(\max(|\mathbf{y}|)\), is recorded to bring the recovered reflectivity to its correct level. The same procedure is applied in the testing phases. ### Testing on Synthetic Data #### 4.3.1 Noiseless Case First, the LP4SD methodology is validated for recovering reflectivity in the absence of noise. Numerical results for 1D and 2D synthetic data are highlighted in Table 2. In both cases, the LP4SD algorithm outperforms the U-Net with fewer layers. Furthermore, since 2D data contains additional information from neighboring traces, it achieves better reconstruction quality. In 1D, a larger kernel size improves the quality in all metrics by a noticeable amount, while in 2D, a large kernel size still leads the results in all metrics but the advantage is minimal. A major takeaway is that even without prior assumption to reflectivity \(\hat{\mathbf{x}}\), LP4SD can recover it more accurately. Figure 7 shows the reconstruction in blue and the reconstruction error (residual) in black of 2D data using U-Net and LP4SD with kernel sizes of 5 and 7. LP4SD significantly reduces the error. Figure 8 shows the 2D image of the reflectivity sequences, where both variants of LP4SD restore more detailed layer structures, which can be viewed in the enlarged boxes. #### 4.3.2 Noisy Case The reconstructions at various noise levels are also analyzed and compared. The networks are trained using SNRs of 15, 20, 25, 30, and 35 dB. Figure 9 illustrates the testing MSE for 2D data. LP4SD outperforms U-Net in all cases and the average MSE gap is significant. This figure also shows that the kernel size in LP4SD is not a determinant factor in 2D reconstruction. 
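For reference, the evaluation metrics of Eqs. (10)-(12) and the SNR definition of Eq. (13) translate directly into NumPy; the sketch below simply restates those formulas.

```python
import numpy as np

def mse(x_hat, x):
    """Mean-squared error, Eq. (10)."""
    return np.sum((x_hat - x) ** 2)

def correlation(x_hat, x):
    """Correlation coefficient, Eq. (11)."""
    return np.dot(x_hat, x) / (np.linalg.norm(x_hat) * np.linalg.norm(x))

def quality(x_hat, x):
    """Reconstruction quality Q in dB, Eq. (12)."""
    scale = np.dot(x_hat, x) / np.dot(x_hat, x_hat)
    return 10 * np.log10(np.dot(x, x) / np.sum((x - scale * x_hat) ** 2))

def snr_db(clean, noise):
    """Signal-to-noise ratio of a noisy trace, Eq. (13)."""
    return 10 * np.log10(np.dot(clean, clean) / np.dot(noise, noise))
```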
In general, CNN with a smaller kernel size runs \begin{table} \begin{tabular}{c c c c c c} & Methods & \# Layers & MSE & \(\gamma\) & Q \\ \hline \multirow{2}{*}{1D} & U-Net & 18 & 0.00174 & 0.926 & 9.12 \\ & LP4SD (\(\kappa=5\)) & 5 & 0.00122 & 0.948 & 10.67 \\ & LP4SD (\(\kappa=7\)) & 5 & **0.00081** & **0.966** & **12.38** \\ \hline \multirow{2}{*}{2D} & U-Net & 22 & 0.000625 & 0.890 & 7.40 \\ & LP4SD (\(\kappa=5\)) & 5 & 0.000040 & 0.993 & 20.22 \\ \cline{1-1} & LP4SD (\(\kappa=7\)) & 5 & **0.000037** & **0.994** & **20.86** \\ \end{tabular} \end{table} Table 2: Synthetic 1D and 2D testing results, where the best performances for each metric are in bold. Figure 8: The first row shows the observed traces and the true reflectivity for 2D synthetic data. The bottom row shows the reconstructed reflectivities, where the selected regions are enlarged. Two variants of LP4SD restore more detailed layer structures than U-Net. Results are better viewed electronically. Figure 9: Average 2D testing MSE with additive noise at different SNRs, where LP4SD shows significant MSE reduction at all noise levels. longer because the convolutional kernels take longer to scan over the input, whereas CNN with a larger kernel size has a tradeoff of more network parameters to train. One has the flexibility of choosing the kernel size according to their needs. ### Testing on Marmousi2 Model Marmousi2 model [30] reflects synthetic elastic data, where the survey covered 17 km along the surface and 3.5 km depth. It is a widely-used benchmark dataset for complex seismic processing tasks. The impedance model is obtained from element-wise multiplication between the density and velocity model, and the true reflectivity is then computed from the change of impedance in the vertical direction. The observed trace is obtained by convolving a 40 Hz Ricker wavelet with the reflectivity profile plus random noise. Marmousi2 is a densely sampled model and we also downsample the velocity and density model to resemble more compact reflectivity sequences, as can be seen in the dotted red lines in Figure 10. While synthetic data is used to mimic the real trace and reflectivity, it is generally impossible to cover all possible cases in real distribution. However, it is observed that LP4SD generalizes better for reflectivities with unseen sparsity. Figure 10 compares the residuals of reconstruction \(\mathbf{\hat{x}}-\mathbf{x}\) of the U-Net and LP4SD with kernel size 7 in different sparsity levels. When the sampling frequencies used during evaluation differ from those used while training (visualized/reflected as sparsity in the reflectivity), U-Net tends to produce more error. On the other hand, LP4SD is more robust to unseen sparsity. The numerical results are displayed in Table 3 and the sampled 2D results can be found in Figure 11. This is because LP4SD breaks down Figure 10: Residual of reconstruction, \(\mathbf{\hat{x}}-\mathbf{x}\), on Marmousi2 data (trace 5) using 1D U-Net and LP4SD with kernel size 7. The dotted red line shows the true reflectivity and the solid black line shows the residual. the inverse problem into \(K\) proximal updates using the forward operator. The learning-free gradient step gives a better approximation of \(\mathbf{x}\) by minimizing \(||y-Ax||^{2}\), so that each proximal step (CNN) only learns a denoising process. 
U-Net, in contrast, learns the direct mapping from the seen distribution of seismic traces \(\mathbf{y}\) to reflectivities \(\mathbf{x}\), so it generalizes poorly when the trace is out of distribution, i.e., has unseen sparsity levels. ### Testing on Real Data Finally, LP4SD is evaluated on the real dataset, where ground-truth reflectivity is unknown. A 2D landline from east Texas, USA [1] is used in this work. The survey contains 18 shots and 594 traces in the line, with each trace having 1501 samples and a time sampling interval of 2 ms. The Common mid-point (CMP) gather is extracted from the raw data, which is then divided and padded into patches of a size that can be fed to the network. Then Automatic Gain Control (AGC) is applied to correct the magnitude. Note that the missing traces are muted (filled with zeros). Figure 12 (a), (b) and (c) depict the observed traces, concatenated reconstructions obtained directly from the network output, and recovered reflectivity after applying AGC, respectively. The layers are emphasized and more structural details are revealed in the reconstruction. ## 5 Conclusion In this work, a Learned Proximal operator for Seismic Deconvolution (LP4SD) is proposed. The network architecture is both model-driven and data-driven. Hence, the proposed approach takes into account the strengths of model- and data-driven methods. LP4SD unfolds the optimization iterations into a sequence of proximal gradient steps (model-driven) and replaces the proximal operator with a 5-layer CNN (data-driven). The experiments revealed the following advantages of LP4SD: 1) it does not rely on predetermined assumptions about the reflectivity and thus avoids prior-specific parameter tuning, 2) it breaks down the extremely challenging task of learning a direct inverse into smaller, manageable, easy-to-learn tasks of learning proximal operators, 3) it depends on the forward operator and is thus more robust to unseen sparsity, and 4) it can handle multiple traces (in the 2D setup) simultaneously, which is shown to give better reconstruction quality. In the experiments, the above advantages are observed, and LP4SD is shown to outperform U-Net, which learns the direct inverse, across different noise levels. 
\begin{table} \begin{tabular}{c c c c c} \hline 1D Marmousi2 & Methods & MSE & \(\gamma\) & Q \\ \hline \multirow{3}{*}{Full sample} & U-Net & 0.002627 & 0.909 & 9.95 \\ & LP4SD (\(\kappa=5\)) & **0.000318** & 0.961 & **17.48** \\ & LP4SD (\(\kappa=7\)) & 0.000390 & **0.963** & 17.18 \\ \hline \multirow{3}{*}{Downsampled by 2} & U-Net & 0.002214 & 0.916 & 9.52 \\ & LP4SD (\(\kappa=5\)) & 0.000382 & 0.975 & 15.44 \\ & LP4SD (\(\kappa=7\)) & **0.000334** & **0.977** & **16.29** \\ \hline \multirow{3}{*}{Downsampled by 3} & U-Net & 0.002149 & 0.912 & 8.78 \\ & LP4SD (\(\kappa=5\)) & **0.000449** & 0.975 & 14.10 \\ & LP4SD (\(\kappa=7\)) & 0.000486 & **0.976** & **15.07** \\ \hline \hline 2D Marmousi2 & methods & MSE & \(\gamma\) & Q \\ \hline \multirow{3}{*}{Full sample} & U-Net & 0.001448 & 0.852 & 6.06 \\ & LP4SD (\(\kappa=5\)) & **0.000053** & **0.991** & **18.80** \\ & LP4SD (\(\kappa=7\)) & 0.000369 & 0.977 & 15.01 \\ \hline \multirow{3}{*}{Downsampled by 2} & U-Net & 0.002378 & 0.840 & 5.59 \\ & LP4SD (\(\kappa=5\)) & 0.000709 & 0.952 & 11.67 \\ & LP4SD (\(\kappa=7\)) & **0.000327** & **0.975** & **15.67** \\ \hline \multirow{3}{*}{Downsampled by 3} & U-Net & 0.002843 & 0.812 & 4.88 \\ & LP4SD (\(\kappa=5\)) & 0.000911 & 0.931 & 9.79 \\ \cline{1-1} & LP4SD (\(\kappa=7\)) & **0.000137** & **0.980** & **16.74** \\ \hline \end{tabular} \end{table} Table 3: Marmousi2 1D and 2D testing result, where the best metrics in each experiment is marked in bold. Figure 11: Evaluation results on 2D Larmanusi2 data using LP4SD Figure 12: Evaluation results on 2D real data, (a) observed traces, (b) reconstructed reflectivity and (c) reconstructed reflectivity after AGC ### Detailed Architecture of U-Net Figure 13 shows the U-Net for 1D input, whereas the U-Net for 2D input is similar to 1D, but adds two convolutional layers and two Transposed Convolutional layers in the middle of the network.
2305.09318
Conditional Rate-Distortion-Perception Trade-Off
Recent advances in machine learning-aided lossy compression are incorporating perceptual fidelity into the rate-distortion theory. In this paper, we study the rate-distortion-perception trade-off when the perceptual quality is measured by the total variation distance between the empirical and product distributions of the discrete memoryless source and its reconstruction. We consider the general setting, where two types of resources are available at both the encoder and decoder: a common side information sequence, correlated with the source sequence, and common randomness. We consider both the strong perceptual constraint and the weaker empirical perceptual constraint. The required communication rate for achieving the distortion and empirical perceptual constraint is the minimum conditional mutual information, and a similar result holds for the strong perceptual constraint when sufficient common randomness is provided and the output along with the side information is constrained to an independent and identically distributed sequence.
Xueyan Niu, Deniz Gündüz, Bo Bai, Wei Han
2023-05-16T09:50:06Z
http://arxiv.org/abs/2305.09318v2
# Conditional Rate-Distortion-Perception Trade-Off ###### Abstract Recent advances in machine learning-aided lossy compression are incorporating perceptual fidelity into the rate-distortion theory. In this paper, we study the rate-distortion-perception trade-off when the perceptual quality is measured by the total variation distance between the empirical and product distributions of the discrete memoryless source and its reconstruction. We consider the general setting, where two types of resources are available at both the encoder and decoder: a common side information sequence, correlated with the source sequence, and common randomness. We consider both the strong perceptual constraint and the weaker empirical perceptual constraint. The required communication rate for achieving the distortion and empirical perceptual constraint is the minimum conditional mutual information, and similar result holds for strong perceptual constraint when sufficient common randomness is provided and the output along with the side information is constraint to an independent and identically distributed sequence. ## I Introduction The practice of lossy compression is rich with recent success stories, where machine learning driven methods outperform traditional codecs in image, video, and audio compression. In data-driven approaches, training often relies on minimizing discrepancies between the source and reconstruction distributions (e.g., cross-entropy, Wasserstein distance) to capture the quality of the reconstructed data perceived by humans [1]. Given a source distribution and a distortion measure, the rate-distortion theory studies the minimum rate required to achieve the target distortion level measured by an additive distortion measure, and it has been the theoretical framework for the design and evaluation of lossy compression codecs. However, it has been shown that the mean-squared error does not reflect the perceptual quality of reconstructions [2, 3, 4]. In [5], Blau and Michaeli propose the information rate-distortion-perception function to characterize the three-way tradeoff between the rate, distortion, and perceptual quality by imposing a constraint on the distribution of the reconstruction. The additional perceptual constraint is intimately related to the theory of coordination. In [6], two notions of coordination, _empirical coordination_ and _strong coordination_, are distinguished. Both require the output distribution to be close to a target distribution in terms of total variation. As the names suggest, empirical coordination concerns the empirical distribution (type), while strong coordination deals with the joint distribution over the block of symbols. In this paper, we also study two notions of perceptual quality corresponding to the empirical and strong coordination requirements. In machine learning scenarios, the empirical perceptual constraint can be interpreted using generative modeling, such as generative adversarial networks (GANs) [7], where the goal of the generator is to minimize a certain divergence between the data distribution and the empirical distribution of the synthetic samples. The typicality of empirical distributions has been studied by [8] and [9] in different contexts. The strong perceptual constraint demands that the order of the samples is also preserved, which is related to natural language processing problems [10], where grammar and word order are essential in the reconstruction. 
Unlike the _average_ distortion measures, the perceptual fidelity compares two probability distributions. One of the insights developed in [11] is that the \(n\)-letter perception fidelity evaluated by total variation between product distributions precludes single-letterization of the rate region, and that a channel resolvability code may be necessary. Accordingly, much work has focused on the so-called _perfect realism_ condition, which requires the coding to be distribution-preserving, i.e., the distribution of the output approximates arbitrarily well the source distribution in total variation. Saldi, Linder, and Yuksel [12] connect distribution-constrained lossy coding with distributed channel synthesis. In their setup, apart from the distortion constraint, the output sequence is restricted to follow a given distribution of an independent and identically distributed (i.i.d.) sequence. Thus, the _perfect realism_ setting in [13] can be viewed as a special case of [12]. The output distribution constraint requires vanishing total variation error, whereas in our setting, the perceptual constraint allows a bounded total variation. The extension is not straightforward, as in the achievability proof, an idealized distribution which approximates the source-reproduction joint distribution is used to facilitate the analysis of the distortion and perception performances. Another challenge arises from the common side information, for which we tailor the local channel synthesis argument [14] to our setup. Recent research suggests that common randomness plays an important role in achieving the rate-distortion-perception function [13, 15, 16, 17]. As noted by these works, the common randomness, though being a scarce resource, can be realized by agreeing upon some seeds for the pseudo-random number generator in advance, and the seeds can also be encoded as messages. Nevertheless, as shown in [6, 17], in the setting of empirical coordination, common randomness is not necessary. We consider a general situation where the sender and the receiver possess two types of resources: common side information and common randomness. The common side information is a random process that may be correlated with the source, providing additional information for the reconstruction; while the common randomness is a random variable observed by both the sender and the receiver, independent of the source signal, and it is usually assumed to be uniformly distributed on a finite set. We present rate regions when the amount of common randomness is constrained. The conditional rate-distortion-perception function is derived for empirical perceptual constraint, and for the strong perceptual constraint when sufficient common randomness is available and the output along with the side information is limited to an i.i.d. sequence. ## II Problem Setup and Main Result Let \(\{X_{i}\}_{i=1}^{\infty}\), \(\{Z_{i}\}_{i=1}^{\infty}\) be memoryless sources drawn from finite alphabets \(\mathcal{X}\) and \(\mathcal{Z}\) according to a joint distribution \(P_{XZ}(x,z)\). We drop the arguments of the distribution when it does not cause ambiguity. The \(n\)-sequence \((X_{1},X_{2},\ldots,X_{n})\) is denoted by \(X^{n},\) and \((X_{t},X_{t+1},\ldots,X_{n})\) is denoted by \(X_{t}^{n}\). So \(P_{X^{n}Z^{n}}=\prod_{i=1}^{n}P_{XZ}.\) We use \([m]\) to denote the set \(\{1,2,\ldots,[m]\}\) for \(m>0,\) and \(\mathcal{C}\) to denote the codebook. 
For sequences \((x^{n},y^{n})\in\mathcal{X}^{n}\times\mathcal{Y}^{m},\) the empirical distribution is defined as \[\hat{P}_{x^{n},y^{n}}(x,y)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{(x_{i},y_{i} )=(x,y)\}. \tag{1}\] Extensions to multiple arguments can be defined similarly. Consider the model depicted in Fig. 1, in which the encoder and the decoder have access to a shared source of randomness \(K\) uniformly distributed over \([2^{nR_{0}}].\) The encoder observes source \(X^{n}\) and side information \(Z^{n},\) and selects a message \(M.\) The decoder observes the message, and it also has access to the common randomness \(K\) and side information \(Z^{n},\) and tries to recover input \(X^{n}.\) **Definition 1**: _An \((n,2^{nR},2^{nR_{0}})\) code with common randomness and common side information consists of an encoding function_ \[f_{n}:\mathcal{X}^{n}\times\mathcal{Z}^{n}\times[2^{nR_{0}}]\mapsto[2^{nR}] \quad\text{(possibly stochastic)}\] _and a decoding function_ \[g_{n}:[2^{nR}]\times\mathcal{Z}^{n}\times[2^{nR_{0}}]\mapsto\mathcal{X}^{n} \quad\text{(possibly stochastic)}.\] As usual, the degree of distortion between the source sequence and the reconstruction is the average of a per-letter distance. **Definition 2**: _Given a per-letter distortion measure \(D:\mathcal{X}\times\mathcal{X}\mapsto[0,d_{max}]\) with \(d_{max}<\infty,\) the (average) distortion between two sequences \(x^{n}\) and \(y^{n}\) is defined as_ \[D(x^{n},y^{n}):=\frac{1}{n}\sum_{i=1}^{n}D(x_{i},y_{i}).\] _Much work has been focused on total variation distance as a measure of coordination [6, 8, 11, 13, 14, 18], and we also use total variation to evaluate the perceptual quality._ **Definition 3**: _The total variation between two distributions \(P_{X}\) and \(P_{Y}\) defined on the same \(\sigma\)-algebra \((\mathcal{X},\mathcal{F})\) is_ \[d_{\mathrm{TV}}(P_{X},P_{Y})=\sup_{A\subseteq\mathcal{X}}|P_{X}(A)-P_{Y}(A)|.\] Inspired by the theory of coordination [6], we define two notions of achievability with respect to empirical and strong perceptual constraints. In the case of the empirical one, permutation of the \(n\)-sequence does not affect the perception, whereas for the strong perceptual constraint, the order of the pairs \((X_{i},Y_{i})\) matters. **Definition 4** (Achievability): _The tuple \((R,R_{0},\Delta,\Pi)\) is achievable with respect to empirical and strong perceptual constraints if for any \(\epsilon>0,\) there exists a sequence of \((n,2^{n(R+\epsilon)},2^{n(R_{0}+\epsilon)})\) codes \((f_{n},g_{n})\) such that_ \[\mathds{E}_{P}[D(X^{n},Y^{n})]\leq\Delta+\epsilon,\] _where \(Y^{n}=g_{n}(f_{n}(X^{n},Z^{n},K),Z^{n},K),\) and one of the following corresponding constraints hold:_ \[\mathds{E}_{\mathcal{C}^{(n)}}[d_{\mathrm{TV}}(\hat{P}_{X^{n}}, \hat{P}_{Y^{n}})] \leq\Pi+\epsilon\qquad\text{(empirical perception)}\] \[d_{\mathrm{TV}}(P_{X^{n}},P_{Y^{n}}) \leq\Pi+\epsilon.\qquad\text{(strong perception)}\] _Note that for empirical perception, the total variation is between two empirical distributions as defined in Eq (1), while for strong perception, the total variation is between the entire coding block. In fact, as can be seen from Property 2 in Sec. III, we have_ \[\mathds{E}_{\mathcal{C}^{(n)}}[d_{\mathrm{TV}}(\hat{P}_{X^{n}},\hat{P}_{Y^{n} })]\leq\max_{i}d_{\mathrm{TV}}(P_{X_{i}},P_{Y_{i}})\leq d_{\mathrm{TV}}(P_{X^ {n}},P_{Y^{n}}).\] We define the following rate regions. 
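As a small numerical aside (ours, not part of the formal development), the difference between the two perception measures in Definition 4 can be seen directly in code: the empirical measure compares types as in Eq. (1) and is therefore invariant to reordering a block, whereas the strong measure compares the distributions of entire blocks and is sensitive to ordering.

```python
import numpy as np
from collections import Counter

def empirical_distribution(seq, alphabet):
    """Empirical distribution (type) of a sequence, Eq. (1)."""
    counts = Counter(seq)
    return np.array([counts[a] / len(seq) for a in alphabet])

def total_variation(p, q):
    """Total variation between two pmfs on a finite alphabet,
    i.e., half the L1 distance (equivalent to Definition 3)."""
    return 0.5 * np.sum(np.abs(np.asarray(p) - np.asarray(q)))

alphabet = [0, 1]
x_block = [0, 1, 1, 0, 1, 0, 0, 1]   # source block
y_block = [1, 0, 1, 0, 0, 1, 1, 0]   # reconstruction: a permutation of x_block
tv_emp = total_variation(empirical_distribution(x_block, alphabet),
                         empirical_distribution(y_block, alphabet))
print(tv_emp)  # 0.0 -- the empirical perception measure ignores ordering
```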
**Definition 5**: _For arbitrarily small \(\gamma>0,\) the \(\gamma\)-rate-distortion-perception regions with common randomness and common side information are_ \[\mathcal{R}_{\gamma}^{(s)-}=\{(R,R_{0},\Delta,\Pi): \exists\ (U,Y)\ \text{s.t.}\ \Pi<f(\Delta),R_{0}>0\] \[X\perp Y\ |\ U,Z\] \[R \geq I(X;U|Z)+\gamma\] \[R+R_{0} \geq I(Y;U|Z)+\gamma\] \[\Delta \geq\mathds{E}_{P}[D(X,Y)]\] \[\Pi \geq\limsup_{n\rightarrow\infty}d_{\mathrm{TV}}(\prod_{i=1}^{n}P_{X},\prod_{i=1}^{n}P_{Y})\}\] \[\mathcal{R}_{\gamma}^{(s)+}=\{(R,R_{0},\Delta,\Pi): \exists\ Y\ \text{s.t.}\ \Pi\geq f(\Delta),R_{0}\geq 0\] \[R \geq I(X;Y|Z)+\gamma\] \[\Delta \geq\mathds{E}_{P}[D(X,Y)]\}\] _for strong perception and_ \[\mathcal{R}_{\gamma}^{(e)}=\{(R,R_{0},\Delta,\Pi): \exists\ Y\ \text{s.t.}\ R_{0}\geq 0\] \[R \geq I(X;Y|Z)+\gamma\] \[\Delta \geq\mathds{E}_{P}[D(X,Y)]\] \[\Pi \geq d_{\mathrm{TV}}(P_{X},P_{Y})\}\] _for empirical perception, where \(f(\Delta):=g(R(\Delta),\Delta)\) with_ \[g(R,\Delta):=\inf_{\begin{subarray}{c}P_{Y|XZ}:I(X;Y|Z)\leq R,\\ \mathds{E}_{P}[D(X,Y)]\leq\Delta\end{subarray}}\limsup_{n\to\infty}d_{\mathrm{TV}}(\prod_{i=1}^{n}P_{X},\prod_{i=1}^{n}P_{Y})\] \[R(\Delta):=\inf_{P_{Y|XZ}:\mathds{E}_{P}[D(X,Y)]\leq\Delta}I(X;Y|Z).\] Fig. 1: Rate-distortion-perception model with common randomness and common side information. 
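The infima defining \(R(\Delta)\) above (and, with an additional total-variation constraint, the function appearing in Theorem II.3 below) can be explored numerically for small alphabets. The snippet below is a minimal Monte Carlo sketch and not a method from the paper: the binary alphabets, the joint source \(P_{XZ}\), Hamming distortion, and the tolerance values are all illustrative assumptions, and a crude random search over channels \(P_{Y|XZ}\) only yields an upper estimate of the infimum.

```python
# Illustrative sketch: random-search upper estimate of the constrained infimum
# inf I(X;Y|Z) subject to E[D] <= Delta and d_TV(P_X, P_Y) <= Pi.
# Assumptions: binary alphabets, an invented P_{XZ}, Hamming distortion.
import numpy as np

rng = np.random.default_rng(0)
P_xz = np.array([[0.4, 0.1],
                 [0.1, 0.4]])              # P_{XZ}(x, z), rows x, columns z

def cond_mutual_info(P_xyz):
    """I(X;Y|Z) in bits for a joint array P[x, y, z]."""
    I = 0.0
    P_z = P_xyz.sum(axis=(0, 1))
    P_xz_ = P_xyz.sum(axis=1)               # P(x, z)
    P_yz = P_xyz.sum(axis=0)                # P(y, z)
    for x in range(P_xyz.shape[0]):
        for y in range(P_xyz.shape[1]):
            for z in range(P_xyz.shape[2]):
                p = P_xyz[x, y, z]
                if p > 0:
                    I += p * np.log2(p * P_z[z] / (P_xz_[x, z] * P_yz[y, z]))
    return I

def evaluate(P_y_given_xz):
    """Rate, distortion and marginal total variation induced by a channel P[y | x, z]."""
    P_xyz = np.einsum('xz,yxz->xyz', P_xz, P_y_given_xz)
    rate = cond_mutual_info(P_xyz)
    distortion = sum(P_xyz[x, y, :].sum()
                     for x in range(2) for y in range(2) if x != y)
    P_x, P_y = P_xyz.sum(axis=(1, 2)), P_xyz.sum(axis=(0, 2))
    tv = 0.5 * np.abs(P_x - P_y).sum()
    return rate, distortion, tv

Delta, Pi, best = 0.1, 0.1, np.inf
for _ in range(50000):
    channel = rng.dirichlet(np.ones(2), size=(2, 2))   # random P[y | x, z]
    channel = np.moveaxis(channel, -1, 0)              # reorder axes to (y, x, z)
    rate, d, tv = evaluate(channel)
    if d <= Delta and tv <= Pi:
        best = min(best, rate)
print("random-search upper estimate of the constrained infimum:", best)
```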
**Theorem II.3**: _The rate-distortion-perception function with respect to empirical perceptual constraint is_ \[R^{(e)}(\Delta,\Pi)=\inf_{\begin{subarray}{c}P_{Y|XZ}:\mathds{E}_{P}[D(X,Y)]\leq\Delta\\ d_{\mathrm{TV}}(P_{X},P_{Y})\leq\Pi\end{subarray}}I(X;Y|Z).\] When provided with sufficient common randomness, a similar rate-distortion-perception function for the strong perceptual constraint can be obtained. **Theorem II.4**: _When \(R_{0}=\infty\), the rate-distortion-perception function with strong perceptual constraint has the following bound_ \[R^{(s)}(\Delta,\Pi)\leq\inf_{\begin{subarray}{c}P_{Y|XZ}:\mathds{E}_{P}[D(X,Y)]\leq\Delta\\ \limsup_{n\to\infty}d_{\mathrm{TV}}(\prod_{i=1}^{n}P_{X},\prod_{i=1}^{n}P_{Y})\leq\Pi\end{subarray}}I(X;Y|Z),\] _and the bound is tight when the output sequence \(Y^{n}=g_{n}(f_{n}(X^{n},Z^{n},K),Z^{n},K)\) and the side information \(Z^{n}\) are jointly i.i.d._ We remark that a similar result for strong perception is obtained in [17] in the absence of correlated side information, assuming that the perception measure is tensorizable, which implicitly results in the optimality of an i.i.d. output sequence. However, since the total variation distance does not tensorize, we had to impose the i.i.d. output sequence requirement explicitly. ## III Preliminary Results The total variation measure has the following properties that will be useful in the proofs. These properties can be found in, for example, [19]. Let \(P,Q,R\) be probability measures on the same \(\sigma\)-algebra \((\mathcal{X}^{2},\mathcal{F})\). \[d_{\mathrm{TV}}(P_{X}P_{Y|X},Q_{X}P_{Y|X})=d_{\mathrm{TV}}(P_{X},Q_{X}).\qquad\text{(Property 1)}\] \[d_{\mathrm{TV}}(P_{X},Q_{X})\leq d_{\mathrm{TV}}(P_{XY},Q_{XY}).\qquad\text{(Property 2)}\] \[d_{\mathrm{TV}}(P,Q)\leq d_{\mathrm{TV}}(P,R)+d_{\mathrm{TV}}(R,Q).\qquad\text{(Property 3)}\] The total variation is convex; that is, for \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\geq 0\) with \(\sum_{i=1}^{n}\lambda_{i}=1,\) \[d_{\mathrm{TV}}(P,\sum_{i=1}^{n}\lambda_{i}Q_{i})\leq\sum_{i=1}^{n}\lambda_{i}d_{\mathrm{TV}}(P,Q_{i}).\qquad\text{(Property 4)}\] The proof of the achievability relies on the following local channel synthesis result given by Corollary VII.6 of [14]. **Lemma 2** ([14], Corollary VII.6): _Given a joint distribution \(P_{UVW}\), let \(\mathcal{C}^{(n)}=\{u^{n}(w^{n},j)\sim\prod_{i=1}^{n}P_{U|W}(u_{i}|w_{i})\}\) be a randomly generated collection of channel inputs indexed by \(j\in[2^{nR}]\) for any \(w^{n}\in\mathcal{W}^{n}\). Denote by \(\mathbf{Q}_{V^{n}}\) the output distribution of applying a uniformly randomly selected codeword \(U^{n}(w^{n},J)\) through the memoryless channel \(P_{V|UW}\). Let \(\widetilde{P}\) denote the joint distribution \(P_{UV|W}\hat{P}_{w^{n}}\). Given an arbitrary small \(\gamma>0\), if \(R\geq I_{\widetilde{P}}(U;V|W)+\gamma\), then_ \[\lim_{n\to\infty}\mathds{E}_{\mathcal{C}^{(n)}}[d_{\mathrm{TV}}(P_{V^{n}|W^{n}=w^{n}},\mathbf{Q}_{V^{n}|W^{n}=w^{n}})]=0.\] ## IV Achievability Next, we give the proof of Theorem II.1. We focus on the strong perceptual constraint, as the line of argument carries over to the empirical perceptual constraint easily. 
Given \((R,R_{0},\Delta,\Pi)\in\mathcal{R}^{(s)}\) and an arbitrary small \(\gamma>0\), for any \(\epsilon>0\), we can find a joint distribution \(\bar{P}_{XYZU}\) such that \[(X\perp Y\ |\ U,Z)_{\bar{P}}\] \[R+\epsilon \geq I_{\bar{P}}(X;U|Z)+\gamma\] \[R+R_{0}+\epsilon \geq I_{\bar{P}}(Y;U|Z)+\gamma\] \[\Delta+\epsilon \geq\mathds{E}_{\bar{P}}[D(X,Y)]\] \[\Pi+\epsilon \geq d_{\mathrm{TV}}(\bar{P}_{X^{n}},\bar{P}_{Y^{n}})\] Note that we can assume that the inputs follow the same process, i.e., \(\bar{P}_{XZ}=P_{XZ}\). We denote the product distribution by \(\bar{P}_{X^{n}Y^{n}Z^{n}U^{n}}(x^{n},y^{n},z^{n},u^{n}):=\prod_{i=1}^{n}\bar{P}_{XYZU}(x_{i},y_{i},z_{i},u_{i})\). We construct a random codebook \[\mathcal{C}^{(n)}=\{u^{n}(z^{n},m,m_{0})\sim\prod_{i=1}^{n}\bar{P}_{U|Z}(u_{i}|z_{i})\}\] for \(z^{n}\in\mathcal{Z}^{n}\) and \((m,m_{0})\in[2^{nR}]\times[2^{nR_{0}}]\). We take advantage of the following likelihood encoder. _Encoder:_ Given source sequence \((x^{n},z^{n})\) and a realization \(m_{0}\in[2^{nR_{0}}]\) of common randomness, the encoder selects a message \(m\in[2^{nR}]\) with probability proportional to \(\bar{P}_{X^{n}|Z^{n}U^{n}}(x^{n}|z^{n},u^{n}(z^{n},m,m_{0}))\), i.e., the behavior of the encoder can be represented by a distribution \[\boldsymbol{F}_{M|X^{n}Z^{n}K}(m|x^{n},z^{n},m_{0})=\frac{\bar{P}_{X^{n}|Z^{n}U^{n}}(x^{n}|z^{n},u^{n}(z^{n},m,m_{0}))}{\sum_{m^{\prime}\in[2^{nR}]}\bar{P}_{X^{n}|Z^{n}U^{n}}(x^{n}|z^{n},u^{n}(z^{n},m^{\prime},m_{0}))},\] where \(u^{n}(z^{n},m,m_{0})\) are codewords specified by the codebook \(\mathcal{C}^{(n)}\). Let the common randomness be represented by a random variable \(K\) that is uniformly distributed over \([2^{nR_{0}}]\) and independent of \(X\) and \(Z\). Also, let the encoded message be denoted by the random variable \(M\). _Decoder:_ The decoder receives message \(m\), and it also has access to \(z^{n}\) and common randomness \(m_{0}\). It then generates \(y^{n}\) according to the distribution \[\boldsymbol{G}_{Y^{n}|Z^{n}MK}(y^{n}\ |\ z^{n},m,m_{0})=\bar{P}_{Y^{n}|Z^{n}U^{n}}(y^{n}|z^{n},u^{n}(z^{n},m,m_{0}))\,.\] The induced joint distribution according to the encoder and the decoder is \[\boldsymbol{P}_{X^{n}Y^{n}Z^{n}MK}=\frac{1}{[2^{nR_{0}}]}P_{X^{n}Z^{n}}\boldsymbol{F}_{M|X^{n}Z^{n}K}\boldsymbol{G}_{Y^{n}|Z^{n}MK}\,.\] This induced distribution is itself random because the codebook is random. _Analysis:_ We will consider an auxiliary distribution \(\boldsymbol{Q}\), such that \(\boldsymbol{Q}\) approximates both \(\boldsymbol{P}\) and \(\bar{P}\). We construct \(\boldsymbol{Q}\) using the same codebook \(\mathcal{C}^{(n)},\) such that \[\boldsymbol{Q}_{MK}(m,m_{0})=\frac{1}{[2^{n(R+R_{0})}]},\ \forall(m,m_{0})\in[2^{nR}]\times[2^{nR_{0}}]\] and \[\boldsymbol{Q}_{X^{n}Y^{n}Z^{n}U^{n}MK}(x^{n},y^{n},z^{n},u^{n},m,m_{0})\] \[= \boldsymbol{Q}_{MK}(m,m_{0})P_{Z^{n}}(z^{n})\boldsymbol{Q}_{U^{n}|Z^{n}MK}(u^{n}|z^{n},m,m_{0})\] \[\bar{P}_{X^{n}|Z^{n}U^{n}}(x^{n}|z^{n},u^{n}(z^{n},m,m_{0}))\] \[\bar{P}_{Y^{n}|Z^{n}U^{n}}(y^{n}|z^{n},u^{n}(z^{n},m,m_{0}))\] \[= \frac{1}{[2^{n(R+R_{0})}]}P_{Z^{n}}(z^{n})\boldsymbol{1}\{u^{n}=U^{n}(z^{n},m,m_{0})\}\] \[\prod_{i=1}^{n}\bar{P}_{X|ZU}(x_{i}|z_{i},u_{i})\prod_{i=1}^{n}\bar{P}_{Y|ZU}(y_{i}|z_{i},u_{i})\] where \(U^{n}(z^{n},m,m_{0})\sim\prod_{i=1}^{n}\bar{P}_{U|Z}(u_{i}|z_{i})\). Note that \(\bar{P}_{X^{n}Y^{n}|Z^{n}U^{n}}=\bar{P}_{X^{n}|Z^{n}U^{n}}\bar{P}_{Y^{n}|Z^{n}U^{n}}\) according to the assumption \((X\perp Y|U,Z)_{\bar{P}}\). 
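A toy numerical rendering of the codebook and likelihood encoder just constructed may help fix ideas before the analysis continues. The snippet below is only a sketch: the binary alphabets, the invented factorisation of \(\bar{P}\) (chosen so that \(X\) and \(Y\) are independent given \((U,Z)\)), the tiny block length, and the rates are all illustrative assumptions, not the paper's construction at scale.

```python
# Toy sketch of the random codebook and likelihood encoder above.
# Assumptions: binary alphabets, an invented \bar{P} with X independent of Y
# given (U, Z), tiny block length; purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, R, R0 = 8, 0.5, 0.25
nM, nK = 2 ** int(n * R), 2 ** int(n * R0)     # message / common-randomness sizes

def p_u_given_z(z):            # \bar{P}_{U|Z}(. | z): U = Z with probability 0.8
    return np.where(np.arange(2) == z, 0.8, 0.2)

def p_x_given_u(u):            # \bar{P}_{X|ZU}(. | z, u): depends on u only, X = U w.p. 0.9
    return np.where(np.arange(2) == u, 0.9, 0.1)

p_y_given_u = p_x_given_u      # \bar{P}_{Y|ZU}: same toy channel

def sample_from(prob_rows):
    return np.array([rng.choice(2, p=p) for p in prob_rows])

# Source and side information.
z = rng.integers(0, 2, size=n)
u_true = sample_from([p_u_given_z(zi) for zi in z])
x = sample_from([p_x_given_u(ui) for ui in u_true])

# Codebook: u^n(z^n, m, m0) drawn i.i.d. from \bar{P}_{U|Z}(. | z_i).
codebook = np.array([[sample_from([p_u_given_z(zi) for zi in z])
                      for _ in range(nM)] for _ in range(nK)])   # shape (nK, nM, n)

def likelihood_encoder(x, m0):
    """Pick M with probability proportional to \\bar{P}_{X^n|Z^nU^n}(x^n | z^n, u^n(z^n, m, m0))."""
    liks = np.array([np.prod([p_x_given_u(u_i)[x_i]
                              for x_i, u_i in zip(x, codebook[m0, m])])
                     for m in range(nM)])
    return rng.choice(nM, p=liks / liks.sum())

k = rng.integers(nK)                                   # common randomness
m = likelihood_encoder(x, k)                           # transmitted message
y = sample_from([p_y_given_u(ui) for ui in codebook[k, m]])   # decoder output
print("x:", x, "\ny:", y, "\nHamming distortion:", np.mean(x != y))
```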
We make the following key observations: \[\boldsymbol{Q}_{M|X^{n}Z^{n}K} =\boldsymbol{F}_{M|X^{n}Z^{n}K},\] \[\boldsymbol{Q}_{Y^{n}|Z^{n}MK} =\boldsymbol{G}_{Y^{n}|Z^{n}MK}.\] The distribution \(\boldsymbol{Q}\) matches the system encoder and decoder. For any \(z^{n}\in\mathcal{Z}^{n},\) let \(\widetilde{P}=\bar{P}_{XYU|Z}\hat{P}_{z^{n}}\). By continuity of the mutual information, \(I_{\widetilde{P}}(X;U|Z)\to I_{\bar{P}}(X;U|Z)\) and \(I_{\widetilde{P}}(Y;U|Z)\to I_{\bar{P}}(Y;U|Z)\) as \(n\to\infty\). Applying Lemma 2, when \(R+R_{0}+2\epsilon\geq I_{\bar{P}}(Y;U|Z)+\gamma+\epsilon,\) we have \[\mathds{E}_{\mathcal{C}^{(n)}}[d_{\mathrm{TV}}(\bar{P}_{Y^{n}|Z^{n}=z^{n}},\boldsymbol{Q}_{Y^{n}|Z^{n}=z^{n}})]<\epsilon_{n}.\] Since \(z^{n}\) are i.i.d. according to \(\bar{P}_{Z}=\boldsymbol{Q}_{Z},\) we have \[\mathds{E}_{\mathcal{C}^{(n)}}[d_{\mathrm{TV}}(\bar{P}_{Y^{n}Z^{n}},\boldsymbol{Q}_{Y^{n}Z^{n}})]<\epsilon_{n}. \tag{2}\] For any fixed \(m^{\prime}_{0},\) we consider the sub-codebook \(\{u^{n}(z^{n},m,m^{\prime}_{0})\}\) and recall the local channel synthesis lemma. Applying Lemma 2 (cf. Eq (60) of [14]), when \(R+2\epsilon\geq I_{\bar{P}}(X;U|Z)+\gamma+\epsilon,\) we have \[\mathds{E}_{\mathcal{C}^{(n)}}[d_{\mathrm{TV}}(\bar{P}_{X^{n}|Z^{n}=z^{n},K=m^{\prime}_{0}},\boldsymbol{Q}_{X^{n}|Z^{n}=z^{n},K=m^{\prime}_{0}})]<\epsilon_{n}.\] Since \(z^{n}\) are drawn i.i.d., and by Eq (61) of [14], \[d_{\mathrm{TV}}(\boldsymbol{P}_{X^{n}Z^{n}},\boldsymbol{Q}_{X^{n}Z^{n}})\leq d_{\mathrm{TV}}(\boldsymbol{P}_{X^{n}Z^{n}},\bar{P}_{X^{n}Z^{n}})+d_{\mathrm{TV}}(\bar{P}_{X^{n}Z^{n}},\boldsymbol{Q}_{X^{n}Z^{n}})\leq\epsilon_{n}.\] Therefore, \[d_{\mathrm{TV}}(\boldsymbol{P}_{X^{n}Y^{n}},\boldsymbol{Q}_{X^{n}Y^{n}})\leq d_{\mathrm{TV}}(\boldsymbol{P}_{X^{n}Y^{n}Z^{n}MK},\boldsymbol{Q}_{X^{n}Y^{n}Z^{n}MK})=d_{\mathrm{TV}}(\boldsymbol{P}_{X^{n}Z^{n}K},\boldsymbol{Q}_{X^{n}Z^{n}K})\leq\epsilon_{n},\] where the inequality follows from Property 2 and the equality follows from Property 1 applied twice together with the key observations above. Combined with Eq. (2), Property 3, and the boundedness of the distortion measure, this shows that the distortion and perception constraints are satisfied up to vanishing terms on average over the random codebook, so that a suitable codebook exists. The above proof is for strong perception. In the case of empirical perception, it follows from the line of argument in Theorem 2 of [6] that common randomness is not necessary. So the inequality \(R+R_{0}\geq I(Y;U|Z)\) can be removed. ## V Converse We use the time mixing technique for the proof. Proof: We first focus on the strong perceptual constraint. Suppose the tuple \((R,R_{0},\Delta,\Pi)\) is achievable; then, given \(\epsilon>0\), there exists a sequence of \((n,2^{n(R+\epsilon)},2^{n(R_{0}+\epsilon)})\) codes with encoding and decoding functions \((f_{n},g_{n})\) such that \[\mathds{E}_{P}[D(X^{n},Y^{n})] \leq\Delta+\epsilon\] \[d_{\mathrm{TV}}(P_{X^{n}},P_{Y^{n}}) \leq\Pi+\epsilon.\] For fixed \(n\), let \(M\) and \(K\) denote the message and the common randomness, i.e., \[M =f_{n}(X^{n},Z^{n},K)\] \[Y^{n} =g_{n}(M,Z^{n},K)=g_{n}(f_{n}(X^{n},Z^{n},K),Z^{n},K).\] Let \(T\sim\mathcal{U}([n])\) be a uniform random variable over \([n]\). Also, let \(U:=(T,M,K)\). Notice that \(K\perp(X^{n},Z^{n})\) and \(X^{n}\perp Y^{n}\ |\ U,Z^{n}\). The rest of the proof follows closely that of the converse part of Theorem 2 in [13]. We have \[n(R+\epsilon) \geq H(M)\] \[\geq H(M|Z^{n},K)\] \[\geq I(X^{n};M|Z^{n},K)\] \[=I(X^{n};M,K|Z^{n})\] \[=\sum_{t=1}^{n}I(X_{t};M,K|Z^{n},X^{t-1})\] \[=\sum_{t=1}^{n}I(X_{t};M,K,X^{t-1},Z^{t-1},Z^{n}_{t+1}|Z_{t})\] \[\geq\sum_{t=1}^{n}I(X_{t};M,K|Z_{t})\] \[=nI(X_{T};M,K|Z_{T},T)\] \[=nI(X_{T};M,K,T|Z_{T})\] \[=nI(X_{T};U|Z_{T}). \tag{5}\] By the assumption of Theorem II.2 under the strong perceptual constraint, \(Y^{n}\) and \(Z^{n}\) are jointly i.i.d. 
sequences, so \[n(R+R_{0}+\epsilon) \geq H(M,K)\] \[\geq H(M,K|Z^{n})\] \[\geq I(Y^{n};M,K|Z^{n})\] \[=\sum_{t=1}^{n}I(Y_{t};M,K|Z^{n},Y^{t-1})\] \[=\sum_{t=1}^{n}I(Y_{t};M,K,Y^{t-1},Z^{t-1},Z^{n}_{t+1}|Z_{t})\] \[\geq\sum_{t=1}^{n}I(Y_{t};M,K|Z_{t})\] \[=nI(Y_{T};M,K|Z_{T},T)\] \[=nI(Y_{T};M,K,T|Z_{T})\] \[=nI(Y_{T};U|Z_{T}).\]
2306.08556
On Darboux theorems for geometric structures induced by closed forms
This work reviews the classical Darboux theorem for symplectic, presymplectic, and cosymplectic manifolds (which are used to describe regular and singular mechanical systems), and certain cases of multisymplectic manifolds, and extends it in new ways to k-symplectic and k-cosymplectic manifolds (all these structures appear in the geometric formulation of first-order classical field theories). Moreover, we discuss the existence of Darboux theorems for classes of precosymplectic, k-presymplectic, k-precosymplectic, and premultisymplectic manifolds, which are the geometrical structures underlying some kinds of singular field theories. Approaches to Darboux theorems based on flat connections associated with geometric structures are given, while new results on polarisations for (k-)(pre)(co)symplectic structures arise.
Xavier Gràcia, Javier de Lucas, Xavier Rivas, Narciso Román-Roy
2023-06-14T15:01:57Z
http://arxiv.org/abs/2306.08556v2
# On Darboux theorems for geometric structures ###### Abstract This work reviews the classical Darboux theorem for symplectic, presymplectic, and cosymplectic manifolds (which are used to describe regular and singular mechanical systems), and certain cases of multisymplectic manifolds, and extends it in new ways to \(k\)-symplectic and \(k\)-cosymplectic manifolds (all these structures appear in the geometric formulation of first-order classical field theories). Moreover, we discuss the existence of Darboux theorems for classes of precosymplectic, \(k\)-presymplectic, \(k\)-precosymplectic, and premultisymplectic manifolds, which are the geometrical structures underlying some kinds of singular field theories. Approaches to Darboux theorems based on flat connections associated with geometric structures are given, while new results on polarisations for (\(k\)-)(pre)(co)symplectic structures arise. **Keywords:** Darboux theorem, flat connection, \(k\)-cosymplectic manifold, \(k\)-precosymplectic manifold, \(k\)-presymplectic manifold, \(k\)-symplectic manifold. **MSC 2020:** Primary: 53C15, 53C12. Secondary: 53D05, 53C10. ###### Contents * 1 Introduction * 2 Darboux theorems, flat connections, and symplectic-like structures * 2.1 Symplectic and presymplectic manifolds * 2.2 Cosymplectic and precosymplectic manifolds * 3 \(k\)-symplectic and \(k\)-presymplectic manifolds * 4 \(k\)-cosymplectic and \(k\)-precosymplectic manifolds * 5 Multisymplectic and premultisymplectic structures * 6 Conclusions and outlook * 7 ## 1 Introduction Since its very origins, differential geometry has been applied to many branches of mathematical physics to study different kinds of physical systems, and it has led to many developments. Symplectic geometry, namely the study of closed non-degenerate two-forms, the so-called _symplectic forms_, was one of the first areas of differential geometry to be introduced. Symplectic geometry has its origins in the study of celestial mechanics [45], it has a relevant role in classical mechanics [2, 10], and it has inspired the development of many other useful geometric structures with relevant applications [1, 6, 27, 41, 42, 44]. One of the fundamental results in symplectic geometry is the _Darboux theorem_, which describes the local structure of finite-dimensional symplectic manifolds [18]. Roughly speaking, the Darboux theorem states that a symplectic form can be locally written as a differential form with constant coefficients of a particular type, namely as the canonical symplectic form of a cotangent bundle in adapted coordinates, \(\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\)[1, 43]. There exist several types of infinite-dimensional symplectic manifolds, and some of them do not admit a Darboux theorem [46]. Hereafter, we focus on finite-dimensional manifolds, unless otherwise stated. The Darboux theorem can be proved in different ways [1, 9], and its proof can be extended to presymplectic forms, namely closed two-forms of constant rank [17]. It is well-known that symplectic and presymplectic forms describe the phase spaces for autonomous regular and singular dynamical systems in mechanics. For non-autonomous mechanical systems, the suitable structures are the so-called _cosymplectic_ and _precosymplectic_ manifolds [16, 20]. As a preliminary goal, this paper reviews the theory of Darboux theorems for symplectic and presymplectic manifolds, and it analyses their relation to the so-called flat compatible symplectic and presymplectic connections [7, 34, 61]. 
Connections are hereafter assumed to be linear and torsionless; the second condition is usual in the literature and is key to describing certain features of the differential forms and integrable distributions studied in this work. We also provide proofs of the Darboux theorems for cosymplectic and precosymplectic manifolds. The Darboux theorem for precosymplectic structures is assumed in many references, but its proof seems to be absent in the literature [16]. To achieve a geometrical covariant description of (first-order) classical field theories, the above-mentioned structures have been generalised in several ways. The simplest ones are the so-called _\(k\)-symplectic manifolds_, introduced by A. Awane [4, 6] and used later by M. de Leon _et al._[22, 24, 25] and L.K. Norris [48, 51] for describing first-order classical field theories. They coincide with the _polysymplectic manifolds_ described by C. Gunther [39] (although these last ones are different from those introduced by G. Sardanashvily et al. [35, 56] and I.V. Kanatchikov [40], which are also called _polysymplectic_). This structure is used to give a geometric description of regular field theories whose Lagrangian and/or Hamiltonian functions, in a local description, do not depend on the space-time coordinates (or the analogous ones). In the degenerate case we use _\(k\)-presymplectic structures_, which allow us to describe the corresponding field theories given by singular Lagrangian functions [37]. It is worth stressing that there exist several ways of defining \(k\)-presymplectic manifolds, some of which are apparently proposed and studied in the present work for the first time. A natural extension of \(k\)-symplectic manifolds is given by _\(k\)-cosymplectic manifolds_, which enable one to generalise the cosymplectic description of non-autonomous mechanical systems to regular field theories whose Lagrangian and/or Hamiltonian functions, in a local description, depend on the space-time coordinates (or the analogous ones) [21, 23]. As previously, the singular case of these theories leads to the introduction of _\(k\)-precosymplectic manifolds_, which can be defined in different manners, as shown in this paper and studied in previous works [38]. The Darboux theorem was generalised and proved for \(k\)-symplectic manifolds in [4, 25, 29] and for \(k\)-cosymplectic manifolds in [21]. The Darboux theorem plays a relevant role in these theories since, for instance, it significantly simplifies the proofs of many results [52]. In this work, we provide a (as far as we know) new Darboux theorem for \(k\)-symplectic and \(k\)-cosymplectic linear spaces. We also provide new proofs for the Darboux theorems for \(k\)-symplectic and \(k\)-cosymplectic manifolds. Our proofs reveal new properties of such types of manifolds concerning their Lagrangian submanifolds. In particular, new details about the existence of the hereafter called polarisations for \(k\)-symplectic and \(k\)-cosymplectic manifolds are obtained. Moreover, classical proofs of the \(k\)-symplectic Darboux theorem rely on coordinates and special, rather lengthy calculations [6]. Others are focused on connections and give indirect proofs [29]. Meanwhile, our proof of the \(k\)-symplectic Darboux theorem is intrinsic and short. 
Moreover, our proof could have been made shorter by relying on known results, but we decided to give a full explanation of all the structures and results involved, which made it longer than strictly needed to prove the canonical form of \(k\)-symplectic manifolds. Darboux theorems for \(k\)-symplectic manifolds are closely related to the notion of polarisation [4]. This means that we search for coordinates where the \(k\)-(co)symplectic structures take a form with constant coefficients of a particular type. Nevertheless, one could find new coordinates where the \(k\)-(co)symplectic forms would take constant coefficients of a different type. This would potentially lead to Darboux coordinates of other types. Moreover, one may try to find coordinates to put the differential forms of a \(k\)-symplectic structure in a canonical manner. This leads to the existence of a certain type of associated distribution. Notwithstanding, Darboux coordinates can be defined so as to additionally put a basis of the distribution in a particular form. It is worth noting that, in the case of \(k\)-symplectic manifolds, the conditions to obtain Darboux coordinates putting the associated differential forms into canonical form ensure that there exists a canonical basis of the distribution too. Meanwhile, our \(k\)-cosymplectic Darboux theorem shows that the conditions needed to put the differential forms associated with a \(k\)-cosymplectic manifold into canonical form are different from those needed to also ensure a canonical form for a basis of the associated distribution. Moreover, our analysis also sheds some light on the existence of Darboux coordinates for \(k\)-cosymplectic manifolds, and it complements the results given in previous works [32, 49]. In particular, it is worth noting that Theorem II.4 and Theorem 5.2.1 in [21, 49] can be slightly misleading, as part of the assumptions needed to prove such theorems are only described in Remark 2.5 and Note 5.2.1, after them, respectively. Then, we study \(k\)-presymplectic manifolds. These structures appear as a side problem in \(k\)-symplectic or multisymplectic theories [29]. We here prove that the very definition of a \(k\)-presymplectic manifold can be set in different ways, depending on the features that we want it to have, e.g. to fit the analysis of the systems we are dealing with. Some of these notions of \(k\)-presymplectic manifold do not admit a Darboux theorem of the initially expected form, even in the linear \(k\)-presymplectic case. Then, we study some different possible definitions of \(k\)-presymplectic manifolds, and we provide some counterexamples showing that a Darboux theorem does not need to exist for them. This is quite unexpected, as it was previously assumed that Darboux theorems must hold for them. It is worth noting that the authors in [37] remark that the existence of a Darboux theorem for \(k\)-presymplectic manifolds is an open problem, although they circumvent this issue by giving intrinsic proofs of their results. The same happens when we consider \(k\)-precosymplectic manifolds [38] in order to deal with non-autonomous field theories described by singular Lagrangian functions. As in the \(k\)-presymplectic case, the same type of problems appears, and similar solutions are given. A Darboux theorem for precosymplectic manifolds has been provided. Although this result has been used in the literature [16, 26], it seems that a proof was missing. More generally, we have provided definitions of \(k\)-pre(co)symplectic manifolds admitting Darboux theorems. 
This gives an alternative approach to previous point-wise and local Darboux theorems in [29] for the \(k\)-presymplectic case. Moreover, our point-wise and local \(k\)-precosymplectic Darboux theorems seem to be new. Note also that \(k\)-precosymplectic manifolds do not have canonically defined Reeb vector fields. As a consequence, the corresponding Darboux theorems may not provide a basis of Reeb vector fields in a canonical form. Moreover, the distribution defined to put the differential forms of \(k\)-precosymplectic manifolds in a canonical manner does not admit a canonical basis unless additional conditions are given. Finally, we have the _multisymplectic manifolds_, first introduced by J. Kijowski, W.M. Tulczyjew, and other authors [31, 36, 41, 42], which constitute one of the most generic structures for studying the behaviour of Lagrangian and Hamiltonian field theories (see [54] and references therein). Nevertheless, although there are some partial results [47], a Darboux-type theorem for multisymplectic manifolds in general is not known. In particular, a class of multisymplectic manifolds with a local structure defined by Darboux-type coordinates was characterised in [13], and certain kinds of multisymplectic manifolds admitting Darboux coordinates have been described in [19], giving a sufficient condition that guarantees the existence of Darboux charts. While studying the different geometric structures, we analyse the existence of linear connections compatible with them. Some of our results are known; see for instance symplectic connections [7, 34], \(k\)-symplectic connections [8], \(k\)-cosymplectic connections [49], and multisymplectic and polysymplectic connections [30]. On the other hand, some of the connections compatible with these and other structures are proposed here. Moreover, we review the subject here, and this work serves as a reference point for further research. The structure of the paper goes as follows. Section 2 reviews the Darboux theorems for symplectic, presymplectic, cosymplectic, and precosymplectic structures and their relation to flat compatible connections. In Section 3, we provide a new proof of the Darboux theorem for \(k\)-symplectic manifolds, which is simpler than previous proofs [4, 6, 53]. We also discuss the existence of Darboux coordinates in \(k\)-presymplectic manifolds and show that, in order to ensure their existence, some very restrictive hypotheses are required. Section 4 is devoted to studying the existence of Darboux coordinates in \(k\)-(pre)cosymplectic manifolds. We give a new proof of the Darboux theorem for \(k\)-cosymplectic manifolds [21]. We also see that it is not possible to ensure the existence of Darboux coordinates unless some additional conditions are imposed. In Section 5 we review the existing results on Darboux coordinates for (pre)multisymplectic structures. Some new results on this topic are presented. Finally, Section 6 summarises our results and gives some hints on future work. It is worth noting that we explain how flat connections with torsion may be used to study geometric structures related to differential forms that are not closed, such as contact ones. This will be the topic of another paper. ## 2 Darboux theorems, flat connections, and symplectic-like structures Let us set some general assumptions to be used throughout this work. It is hereafter assumed that all structures are smooth. Manifolds are real, Hausdorff, connected, second countable, and finite-dimensional. 
Differential forms are assumed to have constant rank, unless otherwise stated. Sum over crossed repeated indices is understood. Sometimes, the summation sign, \(\Sigma\), will be used to make clear the range of the indexes we are summing over. All our considerations are local, to avoid technical problems concerning the global existence of quotient manifolds and similar issues. Hereafter, \(M\) and \(Q\) are assumed to be manifolds, and \(\mathfrak{X}(M)\) and \(\Omega^{k}(M)\) stand for the \(\mathscr{C}^{\infty}(M)\)-modules of vector fields and differential \(k\)-forms on \(M\). Moreover, connections are assumed to be linear and torsion-free. More particularly, this section reviews symplectic, presymplectic, cosymplectic, and precosymplectic manifolds and gives the corresponding Darboux theorems. It also analyses the relation of Darboux theorems with compatible flat connections. We will also introduce the concept of characteristic distribution, as it will play an important role when generalising, in Sections 3 and 4, the results of this section. ### Symplectic and presymplectic manifolds This section reviews the definition of symplectic and presymplectic manifolds, and it also analyses their corresponding Darboux theorems. In the context of presymplectic manifolds, we recall the definition of their characteristic distributions. For symplectic and presymplectic manifolds, the relation between compatible connections and Darboux coordinates is studied. **Definition 2.1**.: A _symplectic manifold_ is a pair \((M,\omega)\), where \(M\) is a manifold and \(\omega\) is a closed differential two-form on \(M\) that is _non-degenerate_, i.e. the contraction \(\iota_{X}\,\omega=0\), for a vector field \(X\) on \(M\), if and only if \(X=0\). The canonical model for symplectic manifolds is the cotangent bundle of a manifold \(Q\), namely \((\mathrm{T}^{*}Q,\omega_{Q})\), where \(\omega_{Q}\in\Omega^{2}(\mathrm{T}^{*}Q)\) is the canonical _symplectic two-form_ in \(\mathrm{T}^{*}Q\), whose local expression in adapted coordinates \(\{q^{i},p_{i}\}\) of \(\mathrm{T}^{*}Q\) on their associated coordinated open subset of \(\mathrm{T}^{*}Q\) is \(\omega_{Q}=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\). A symplectic manifold \((M,\omega)\) gives rise to the _musical_ (vector bundle) isomorphism \(\flat:\mathrm{T}M\to\mathrm{T}^{*}M\) and its inverse \(\sharp:\mathrm{T}^{*}M\to\mathrm{T}M\), naturally induced by the \(\mathscr{C}^{\infty}(M)\)-module isomorphisms \[\flat\colon\mathfrak{X}(M) \longrightarrow\Omega^{1}(M)\] \[X \longmapsto\iota_{X}\,\omega\] and \(\sharp=\flat^{-1}\). Note that a vector bundle morphism \(\flat\) can be defined for every two-form \(\omega\), but \(\sharp\) only exists when \(\flat\) is invertible or, equivalently, \(\omega\) is non-degenerate. **Definition 2.2**.: Let \((M,\omega)\) be a symplectic manifold. Given a distribution \(D\subset\mathrm{T}M\), the _symplectic orthogonal_ of \(D\) is defined by \(D^{\perp}=\coprod_{x\in M}D^{\perp}_{x}\), where \[D^{\perp}_{x}=\{v\in\mathrm{T}_{x}M\mid\omega_{x}(v,u)=0,\,\forall u\in D_{x}\}\,,\] and \(\coprod_{x\in M}D^{\perp}_{x}\) stands for the disjoint sum of all \(D^{\perp}_{x}\) over \(x\in M\). Symplectic orthogonals allow us to introduce several types of submanifolds of symplectic manifolds. **Definition 2.3**.: Let \((M,\omega)\) be a symplectic manifold and consider a submanifold \(N\subset M\). Then, * the submanifold \(N\) is said to be _isotropic_, if \(\mathrm{T}N\subset\mathrm{T}N^{\perp}\). 
* the submanifold \(N\) is _coisotropic_, if \(\mathrm{T}N^{\perp}\subset\mathrm{T}N\). * the submanifold \(N\) is _Lagrangian_ if it is isotropic and coisotropic, namely if \(\mathrm{T}N^{\perp}=\mathrm{T}N\). Lagrangian submanifolds are also called _maximally isotropic_ and then \(2\dim N=\dim M\). **Definition 2.4**.: Two symplectic manifolds \((M_{1},\omega_{1})\) and \((M_{2},\omega_{2})\) are _symplectomorphic_ if there exists a diffeomorphism \(\phi:M_{1}\to M_{2}\) such that \(\phi^{*}\omega_{2}=\omega_{1}\). The classical _Darboux theorem_ states that every symplectic manifold is locally _symplectomorphic_ to a cotangent bundle endowed with its canonical symplectic structure [1, 9]. The Darboux theorem was initially proved by Darboux [18], but its modern standard proof relies on the so-called Moser's trick [43, Theorem 22.13]. The statement of the Darboux theorem for symplectic manifolds goes as follows. **Theorem 2.5**.: _Let \((M,\omega)\) be a symplectic manifold. Then, for every \(x\in M\), there exist local coordinates \(\{q^{i},p_{i}\}\) around \(x\) where \(\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\)._ Note that the Darboux theorem amounts to saying that there exist, on a neighbourhood of any \(x\in M\), two foliations by transversal Lagrangian submanifolds. In infinite-dimensional manifolds, one can still define a symplectic form, but the induced musical morphism \(\flat:\mathfrak{X}(M)\to\Omega^{1}(M)\) is, in general, only injective. This gives rise to the so-called _weak symplectic manifolds_. Meanwhile, if \(\flat:\mathfrak{X}(M)\to\Omega^{1}(M)\) is an isomorphism, then the symplectic manifold is said to be a _strong symplectic manifold_. There exists no Darboux theorem for general weak symplectic manifolds [46]. Nevertheless, by requiring appropriate additional conditions, an analogue can be derived [59]. Let us introduce the notion of symplectic connection [7, 57]. Note that the torsion-free assumption is common in the literature, and it is a key for certain results to be developed. Indeed, we will show in Section 6 that skipping it leads to a more general theory, but more involved, inappropriate, and unnecessary for our present work. Apart from that last comment to be given in the conclusions of this work, all connections are assumed to be torsion-free, unless otherwise stated. **Definition 2.6**.: A _symplectic connection_ on a symplectic manifold \((M,\omega)\) is a connection \(\nabla\) on \(M\) such that \(\nabla\omega=0\). The symplectic form \(\omega\) is said to be _parallel_ relative to \(\nabla\). Every symplectic manifold admits local symplectic connections. Indeed, as a consequence of the Darboux theorem, one can construct a local symplectic flat connection for \((M,\omega)\) around every \(x\in M\) by assuming that its Christoffel symbols vanish in some Darboux coordinates defined around \(x\). Such a connection is flat and torsion-free. In general, flat symplectic connections cannot be globally defined, as it is known that the curvature of a connection is linked to the topology of the manifold where it is defined on. Milnor proved and surveyed in [50] several results on the existence of connections on the tangent bundle to a manifold. For instance, the tangent bundle to a closed and oriented surface of genus \(g\) has no flat connection if \(|2-2g|\geq g\). The sphere \(\mathbb{S}^{2}\) has zero genus. 
Hence, there is no flat connection on \(\mathbb{S}^{2}\), which admits a natural symplectic structure \(\omega_{\mathbb{S}^{2}}\) that can be defined by considering \(\mathbb{S}^{2}\) embedded in \(\mathbb{R}^{3}\) and setting \[(\omega_{\mathbb{S}^{2}})_{x}(v_{x},v^{\prime}_{x})=\langle x,v_{x}\times v^{\prime}_{x}\rangle\,,\qquad\forall x\in\mathbb{S}^{2}\,,\quad\forall v_{x},v^{\prime}_{x}\in\mathrm{T}_{x}\mathbb{S}^{2}\subset\mathrm{T}_{x}\mathbb{R}^{3}\simeq\mathbb{R}^{3}\,,\] where tangent vectors at \(x\in\mathbb{S}^{2}\) are naturally understood as vectors in \(\mathbb{R}^{3}\) and, hence, their vector products are defined. Meanwhile, \(\langle\cdot,\cdot\rangle\) stands for the natural scalar product in \(\mathbb{R}^{3}\). Let us recall that, if a connection is flat, the parallel transport of a tangent vector along a path contained in a small open set \(U\) does not depend on the path. Thus, a basis \(\{e_{1},\ldots,e_{n}\}\) of \(\mathrm{T}_{x}M\) gives rise, by parallel transport, to a family of vector fields \(X_{1},\ldots,X_{n}\) on \(U\subset M\), such that \(X_{i}(x)=e_{i}\) for \(i=1,\ldots,n\). Then, \(\nabla_{X_{i}}X_{j}=0\) and, since \(\nabla\) is torsion-free, one has that \[0=T(X_{i},X_{j})=\nabla_{X_{i}}X_{j}-\nabla_{X_{j}}X_{i}-[X_{i},X_{j}]=-[X_{i},X_{j}]\,,\qquad\forall i,j=1,\ldots,n\,.\] Hence, there exist coordinates \(\{x^{1},\ldots,x^{n}\}\) on a neighbourhood of \(x\) such that \(X_{i}=\partial/\partial x^{i}\) for \(i=1,\ldots,n\). Moreover, the Christoffel symbols of the connection vanish on \(U\). Using the above result and assuming the local existence of a flat torsion-free connection compatible with a symplectic form, one may prove the Darboux theorem in a very easy manner. In fact, if \(\dim M=2n\), every symplectic form \(\omega\) on \(M\) can be put into canonical form at any point \(x\in M\) for a certain basis \(\{e^{1},\ldots,e^{2n}\}\) of \(\mathrm{T}_{x}^{*}M\), i.e. \[\omega_{x}=\sum_{i=1}^{n}e^{2i-1}\wedge e^{2i}\,.\] Recall that, on a neighbourhood of every point \(x\in M\), one can define a coordinate system \(\{x^{1},\ldots,x^{2n}\}\) around \(x\) so that there exist vector fields \(X_{i}=\partial/\partial x^{i}\), with \(i=1,\ldots,2n\), such that \[\nabla_{X_{i}}X_{j}=0\,,\qquad\forall i,j=1,\ldots,2n\,,\] and \(X_{i}(x)=e_{i}\) for \(i=1,\dots,2n\), where \(\{e_{1},\ldots,e_{2n}\}\) is the basis of \(\mathrm{T}_{x}M\) dual to \(\{e^{1},\ldots,e^{2n}\}\). Since \(\nabla\) is a compatible symplectic connection for \(\omega\), one has that \[\nabla_{X_{i}}\big(\omega(X_{j},X_{k})\big)=0\,,\quad\forall i,j,k=1,\dots,2n\,.\] Hence, one has \[\omega=\sum_{i=1}^{n}\mathrm{d}x^{2i-1}\wedge\mathrm{d}x^{2i}\,.\] In a similar way, but weakening the conditions in Definition 2.1, we can introduce the concept of presymplectic manifold. Recall that we assume differential forms to have constant rank. **Definition 2.7**.: A _presymplectic form_ on \(M\) is a closed two-form \(\omega\in\Omega^{2}(M)\) of constant rank. The pair \((M,\omega)\) is called a _presymplectic manifold_. Let us construct a prototypical example of presymplectic manifold. Let \((M,\omega)\) be a symplectic manifold, and let \(N\) be a submanifold of \(M\). Consider the canonical embedding denoted by \(\jmath_{N}\colon N\hookrightarrow M\), and endow \(N\) with the induced two-form \(\omega_{N}=\jmath_{N}^{\star}\omega\), which is closed. Then, \((N,\omega_{N})\) is a presymplectic manifold provided the rank of \(\omega_{N}\) is constant. 
To see that the condition on the rank of \(\jmath_{N}^{\star}\omega\) is necessary, let us consider the counterexample given by the canonical two-form, \(\omega=\mathrm{d}x\wedge\mathrm{d}p_{x}+\mathrm{d}y\wedge\mathrm{d}p_{y}\), on \(\mathrm{T}^{\ast}\mathbb{R}^{2}\) and the immersed submanifold given by \(\jmath_{\mathrm{T}^{\ast}\mathbb{R}}:(x,p_{x})\in\mathrm{T}^{\ast}\mathbb{R}\mapsto(x,p_{x}^{2}/2,0,p_{x})\in\mathrm{T}^{\ast}\mathbb{R}\times\mathrm{T}^{\ast}\mathbb{R}\simeq\mathrm{T}^{\ast}\mathbb{R}^{2}\). Then, \(\jmath_{\mathrm{T}^{\ast}\mathbb{R}}^{*}\omega=p_{x}\mathrm{d}x\wedge\mathrm{d}p_{x}\), which is not symplectic at the zero section of \(\mathrm{T}^{\ast}\mathbb{R}\). Before introducing the characteristic distribution associated with a presymplectic manifold, let us fix some terminology about distributions. A (generalised) _distribution_ on \(M\) is a subset \(D\subset\mathrm{T}M\) such that \(D\cap\mathrm{T}_{x}M\) is a vector subspace of \(\mathrm{T}_{x}M\) for every \(x\in M\). A distribution \(D\) on \(M\) is said to be _smooth_ if, for every \(x\in M\), there exists a neighbourhood \(U_{x}\) of \(x\) and (smooth) vector fields \(X_{1},\dots,X_{k}\) on \(U_{x}\) so that \(D_{y}=\langle X_{1}(y),\dots,X_{k}(y)\rangle\) for every \(y\in U_{x}\). A generalised distribution \(D\) is _regular_ if it is smooth and has constant rank. A (generalised) _codistribution_ on \(M\) is a subset \(C\subset\mathrm{T}^{\ast}M\) such that \(C_{x}=C\cap\mathrm{T}_{x}^{\ast}M\) is a vector subspace of \(\mathrm{T}_{x}^{\ast}M\) for every \(x\in M\). The smooth and/or regular notions introduced for distributions also apply to codistributions. **Definition 2.8**.: Given a presymplectic manifold \((M,\omega)\), its _characteristic distribution_ is the distribution \[\mathcal{C}_{\omega}=\ker\omega=\{v\in\mathrm{T}M\mid\omega(v,\cdot)=0\}\,.\] A vector field \(X\in\mathfrak{X}(M)\) belonging to \(\mathcal{C}_{\omega}\), i.e. such that \(\iota_{X}\,\omega=0\), is called a _characteristic vector field_ of \((M,\omega)\). Note that \(\mathcal{C}_{\omega}=\ker\flat\). In the case of symplectic manifolds, \(\flat\) is a vector bundle isomorphism, and thus \(\mathcal{C}_{\omega}=\{0\}\). Moreover, \(\mathcal{C}_{\omega}\) is a distribution because \(\omega\) has constant rank. But the kernel of a general closed two-form does not need to be a smooth generalised distribution. For example, \(\omega_{P}=(x^{2}+y^{2})\mathrm{d}x\wedge\mathrm{d}y\) is a closed two-form on \(\mathbb{R}^{2}\), but it is not presymplectic, as its rank is not constant. Moreover, \(\ker\omega_{P}\) is a generalised distribution with \((\ker\omega_{P})_{(0,0)}=\mathrm{T}_{(0,0)}\mathbb{R}^{2}\), while \((\ker\omega_{P})_{(x,y)}=0\) for every \((x,y)\in\mathbb{R}^{2}\) different from \((0,0)\). Indeed, \(\ker\omega_{P}\) is not even a smooth generalised distribution. **Proposition 2.9**.: _The characteristic distribution \(\mathcal{C}_{\omega}\) of a presymplectic manifold \((M,\omega)\) is integrable._ Proof.: The integrability of \(\mathcal{C}_{\omega}\) follows from the closedness of the presymplectic form \(\omega\), the constancy of its rank, and the Frobenius theorem. If \(\omega\) is a presymplectic form on \(M\), its characteristic distribution is integrable. 
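The counterexample above is easy to verify symbolically. The following SymPy sketch is illustrative only; the coordinate ordering \((x,p_{x},y,p_{y})\) on \(\mathrm{T}^{\ast}\mathbb{R}\times\mathrm{T}^{\ast}\mathbb{R}\) is an assumption made to write the matrix of \(\omega\). It recovers \(\jmath^{*}\omega=p_{x}\,\mathrm{d}x\wedge\mathrm{d}p_{x}\) and shows the rank drop at \(p_{x}=0\), i.e. on the zero section, illustrating why constant rank is required in the definition of a presymplectic manifold.

```python
# Symbolic check of the counterexample above (a sketch; SymPy assumed available).
# Coordinates on T*R x T*R are ordered (x, p_x, y, p_y); the embedding is
# (x, p_x) |-> (x, p_x**2/2, 0, p_x), and omega = dx ^ dp_x + dy ^ dp_y.
import sympy as sp

x, px = sp.symbols('x p_x')

# Matrix of omega in the ordering (x, p_x, y, p_y): entry (i, j) = omega(e_i, e_j).
Omega = sp.Matrix([[0, 1, 0, 0],
                   [-1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, -1, 0]])

# Embedding and its Jacobian with respect to the parameters (x, p_x).
emb = sp.Matrix([x, px**2 / 2, 0, px])
J = emb.jacobian([x, px])

pullback = sp.simplify(J.T * Omega * J)        # matrix of j*omega in (x, p_x)
print(pullback)                                 # [[0, p_x], [-p_x, 0]]  ==  p_x dx ^ dp_x
print("rank at p_x != 0:", pullback.subs(px, 1).rank())   # 2: symplectic away from p_x = 0
print("rank at p_x  = 0:", pullback.subs(px, 0).rank())   # 0: degenerates on the zero section
```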
Moreover, around every \(x\in M\), there exists an open neighbourhood \(U\) of \(x\) such that the space of integral leaves of \(\mathcal{C}_{\omega}\), let us say \(U/\mathcal{C}_{\omega}\), admits a natural manifold structure and the projection \(\pi:U\to U/\mathcal{C}_{\omega}\) is a submersion. Let us prove this fact. Since \(\mathcal{C}_{\omega}\) is integrable, the Frobenius theorem ensures that, for every \(x\in M\), there exists a local basis of vector fields, \(\{\partial/\partial x^{1},\dots,\partial/\partial x^{k}\}\), spanning \(\mathcal{C}_{\omega}\) on a coordinated neighbourhood \(U_{x}\) of \(x\) with coordinates \(\{x^{1},\dots,x^{n}\}\). In a small enough open subset \(U\) of \(U_{x}\) containing \(x\), one can assume that \(x^{1},\dots,x^{n}\) take values in an open ball of \(\mathbb{R}^{n}\). Then, the space of leaves of \(U/\mathcal{C}_{\omega}\) is a manifold of dimension \(n-k\) and the mapping \(\pi:U\to\mathbb{R}^{n-k}\) is an open submersion. We will then say that \(\mathcal{C}_{\omega}\) is _simple_ on \(U\). Since \(\omega\) is invariant relative to the elements of its characteristic distribution, and it vanishes on them, there exists a unique two-form \(\widetilde{\omega}\) on \(U/\mathcal{C}_{\omega}\) such that \(\pi^{*}\widetilde{\omega}=\omega\). In this way, \(\widetilde{\omega}\) is closed and nondegenerate because if \(\iota_{X_{\mathcal{C}}}\,\widetilde{\omega}=0\), then there exists a vector field \(X\) on \(U\) such that \(\pi_{*}X=X_{\mathcal{C}}\), and then \(\iota_{X}\,\omega=0\). Since \(\mathcal{C}_{\omega}=\ker\omega=\ker\mathrm{T}\pi\), then \(X_{\mathcal{C}}=0\). With this in mind, we are ready to state the Darboux theorem for presymplectic forms. Note that this theorem can be stated since presymplectic forms are assumed to have constant rank. Otherwise, it would be difficult to establish a series of canonical forms for closed two-forms even in the most simple cases, e.g. on \(\mathbb{R}^{2}\). **Theorem 2.10** (Darboux theorem for presymplectic manifolds).: _Consider a presymplectic manifold \((M,\omega)\). Around every point \(x\in M\), there exist local coordinates \(\{q^{i},p_{i},z^{j}\}\), where \(i=1,\dots,r\) and \(j=1,\dots,d\), such that_ \[\omega=\sum_{i=1}^{r}\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\,,\] _where \(2r\) is the rank of \(\omega\). In particular, if \(r=0\), then \(\omega=0\) and \(d=\dim M\). If \(2r=\dim M\), then \(d=0\) and \(\{q^{i},p_{i}\}\) give a local coordinate system of \(M\)._ Proof.: Consider an open neighbourhood \(V\) of \(x\in M\) where the integral foliation \(\mathcal{F}\) defined by the distribution \(\mathcal{C}_{\omega}=\ker\omega\) is simple. Let \(P\) be the manifold of leaves of \(\mathcal{F}|_{V}\) and let \(\pi\colon V\to P\) be the canonical projection. There exists a symplectic form \(\bar{\omega}\) on \(P\) given by \(\omega=\pi^{*}\bar{\omega}\). The Darboux theorem for symplectic manifolds ensures that there exists an open coordinate neighbourhood \(\bar{U}\subset P\) of \(\pi(x)\) with local coordinates \(\{\bar{q}^{1},\dots,\bar{q}^{r},\bar{p}_{1},\dots,\bar{p}_{r}\}\) such that \(\bar{\omega}=\sum_{i=1}^{r}\mathrm{d}\bar{q}^{i}\wedge\mathrm{d}\bar{p}_{i}\) on \(\bar{U}\). Define \(q^{i}=\bar{q}^{i}\circ\pi\) and \(p_{i}=\bar{p}_{i}\circ\pi\) for \(i=1,\dots,r\) and we choose \(d=\dim M-2r\) other functions \(z^{1},\dots,z^{d}\), functionally independent relative to the previous ones. 
This gives rise to a local coordinate system \(\{q^{1},\dots,q^{r},p_{1},\dots,p_{r},z^{1},\dots,z^{d}\}\) around \(x\). This chart satisfies the conditions of the theorem. The definition of a presymplectic connection is a straightforward generalisation of Definition 2.6 to the presymplectic realm (see [61]). **Definition 2.11**.: A _presymplectic connection_ relative to a presymplectic manifold \((M,\omega)\) is a connection \(\nabla\) on \(M\) such that \(\nabla\omega=0\). The Darboux theorem for presymplectic forms implies that there exists, locally, a flat presymplectic connection. Conversely, the existence of a flat presymplectic connection for a presymplectic manifold \((M,\omega)\) allows us to prove the Darboux theorem as in the case of symplectic forms. In particular, we have proved the following. **Lemma 2.12**.: _Every presymplectic manifold \((M,\omega)\) admits locally defined flat presymplectic connections \(\nabla\), i.e. \(\nabla\omega=0\)._ At this point, it becomes clear that if a differential form admits a compatible flat torsion-less connection, it must be closed. Hence, no flat torsion-less compatible connection exists for contact forms, locally conformally symplectic forms, and other differential forms that are not closed [33, 60]. We have stressed the words "torsion-less", even though every connection in this work is assumed to be so, because the conclusions of this work will show that removing this condition may allow one to deal with non-closed differential forms. ### Cosymplectic and precosymplectic manifolds Let us review the definition of cosymplectic [3, 11, 26] and precosymplectic [16] manifolds, their corresponding Darboux theorems, and their relations to flat cosymplectic and precosymplectic connections. **Definition 2.13**.: A _cosymplectic structure_ in \(M\) is a pair \((\omega,\eta)\), where \(\omega\in\Omega^{2}(M)\) and \(\eta\in\Omega^{1}(M)\) are closed differential forms such that \(\eta\) does not vanish and \(\ker\eta\oplus\ker\omega=\mathrm{T}M\). The triple \((M,\omega,\eta)\) is said to be a _cosymplectic manifold_. Note that a cosymplectic structure on \(M\) implies that \(M\) is odd-dimensional. The fact that \(\eta\) is non-vanishing implies that \(\langle\eta\rangle\oplus\mathrm{Im}\,\omega=\mathrm{T}^{*}M\) and \(\dim M=2n+1\) for \(n\geq 0\). Then, \((M,\omega,\eta)\) is a cosymplectic manifold if and only if \(\eta\wedge\omega^{n}\) is a volume form on \(M\), where we assume that \(\omega^{0}=1\). In particular, a cosymplectic manifold \((M,\omega,\eta)\) yields a presymplectic manifold \((M,\omega)\). Note that the case \(\dim M=1\) may give rise to a cosymplectic manifold according to our definition [58]. The _characteristic distribution_ of a cosymplectic manifold \((M,\omega,\eta)\) is the rank one distribution given by \(\mathcal{C}_{\omega}=\ker\omega\), and it is often called the _Reeb distribution_. The following proposition states that \((M,\omega,\eta)\) induces a unique distinguished vector field, called _Reeb vector field_, taking values in \(\ker\omega\). **Proposition 2.14**.: _Given a cosymplectic manifold \((M,\omega,\eta)\), there exists a unique vector field \(R\in\mathfrak{X}(M)\) that satisfies_ \[\iota_{R}\,\eta=1\,,\quad\iota_{R}\,\omega=0\,.\] A cosymplectic manifold \((M,\omega,\eta)\) induces a \(\mathscr{C}^{\infty}(M)\)-module isomorphism \(\flat\colon\mathfrak{X}(M)\to\Omega^{1}(M)\) given by \(\flat(X)=\iota_{X}\,\omega+(\iota_{X}\,\eta)\eta\), whose inverse map is denoted by \(\sharp=\flat^{-1}\). 
Then, the Reeb vector field \(R\) reads \[R=\sharp\eta\,.\] Consider the product manifold \(\mathbb{R}\times\mathrm{T}^{*}Q\) and the projections \(\pi_{1}\colon\mathbb{R}\times\mathrm{T}^{*}Q\to\mathbb{R}\) and \(\pi_{2}\colon\mathbb{R}\times\mathrm{T}^{*}Q\to\mathrm{T}^{*}Q\) onto the first and second manifolds in \(\mathbb{R}\times\mathrm{T}^{*}Q\). If \(t\) is the natural coordinate in \(\mathbb{R}\) and \(\omega_{Q}\) is the canonical symplectic form on \(\mathrm{T}^{*}Q\), then the triple \[(\mathbb{R}\times\mathrm{T}^{*}Q,\pi_{2}^{*}\omega_{Q},\pi_{1}^{*}\mathrm{d}t) \tag{1}\] is a cosymplectic manifold. Let us consider the pull-back of \(t\) to \(\mathbb{R}\times\mathrm{T}^{*}Q\) via \(\pi_{1}\), and the pull-back of some Darboux coordinates \(\{q^{i},p_{i}\}\) for \(\omega_{Q}\) to \(\mathbb{R}\times\mathrm{T}^{*}Q\) via \(\pi_{2}\). Let us denote such pull-backs in the same way as the original coordinates to simplify the notation. Then, in the coordinates \(\{t,q^{i},p_{i}\}\), the Reeb vector field of \((\mathbb{R}\times\mathrm{T}^{*}Q,\pi_{2}^{*}\omega_{Q},\pi_{1}^{*}\mathrm{d}t)\) read \(\partial/\partial t\). Locally, \(\pi_{2}^{*}\omega_{Q}=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\) and \(\pi_{1}^{*}\mathrm{d}t=\mathrm{d}t\). **Theorem 2.15** (Cosymplectic Darboux theorem [26]).: _Given a cosymplectic manifold \((M,\omega,\eta)\), there exists, around each point \(x\in M\), local coordinates \(\{t,q^{i},p_{i}\}\), where \(1\leq i\leq n\), such that_ \[\eta=\mathrm{d}t\,,\quad\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\,.\] Proof.: Since \((M,\omega)\) is a presymplectic manifold and \(\omega\) has corank one, there exist for any point \(x\in M\) a neighbourhood \(U\) of \(x\) with coordinates \(\{s,q^{i},p_{i}\}\), with \(i=1,\dots,n\), so that \(\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\). Consider now a potential function of \(\eta\), which exists because \(\eta\) is closed, and denote it by \(t\). Since \(\eta\wedge\omega^{n}\) is a volume form, \(\{t,q^{i},p_{i}\}\) is a coordinate system around \(x\) and \(\eta=\mathrm{d}t\) and \(\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\). The Darboux theorem for cosymplectic manifolds states that every cosymplectic manifold is locally diffeomorphic to the canonical model (1) (see [11, 26]). In Darboux coordinates, the Reeb vector field \(R\) for a cosymplectic manifold \((M,\omega,\eta)\) is written as \(R=\frac{\partial}{\partial t}\). The Darboux theorem for cosymplectic structures implies that there exists, around each point, a flat connection \(\nabla\) such that \(\nabla\eta=0\) and \(\nabla\omega=0\). Indeed, \(\nabla\) can be chosen to be the connection with zero Christoffel symbols relative to some Darboux coordinates. This justifies the following definition. **Definition 2.16**.: A _cosymplectic connection_ relative to \((M,\omega,\eta)\) is a connection on \(M\) such that \(\nabla\eta=0\) and \(\nabla\omega=0\). Let us show that the existence of flat cosymplectic connections allows us to prove the Darboux theorem for \((M,\omega,\eta)\). At a point \(x\in M\), the fact that \(\ker\eta_{x}\oplus\ker\omega_{x}=\mathrm{T}_{x}M\) implies that there exists a basis of \(\mathrm{T}_{x}M\) of the form \(\{e_{1},\dots,e_{2n+1}\}\) so that \(\eta_{x}=e^{2n+1}\) and \(\omega_{x}=\sum_{i=1}^{n}e^{2i-1}\wedge e^{2i}\) relative to the dual basis \(\{e^{1},\dots,e^{2n+1}\}\) in \(\mathrm{T}_{x}^{*}M\). 
Due to the fact that \(\nabla\) is flat, there exists a family of commuting parallel vector fields \(X_{1},\dots,X_{2n+1}\) such that \(X_{i}(x)=e_{i}\) for \(i=1,\dots,2n+1\). Since \[\nabla_{X_{i}}[\eta(X_{j})]=0\,,\qquad\nabla_{X_{i}}[\omega(X_{j},X_{k})]=0\,, \qquad i,j,k=1,\dots,2n+1\,,\] the dual basis of differential one-forms \(\tau^{1},\dots,\tau^{2n+1}\) to \(X_{1},\dots,X_{2n+1}\) is such that \[\eta=\tau^{2n+1},\qquad\omega=\sum_{i=1}^{n}\tau^{2i-1}\wedge\tau^{2i}\,.\] Since \(X_{1},\dots,X_{2n+1}\) admit a coordinate system so that \(X_{i}=\partial/\partial x_{i}\), with \(i=1,\dots,2n+1\), then \(\tau^{i}=\mathrm{d}x^{i}\) for \(i=1,\dots,2n+1\), and the Darboux theorem for cosymplectic manifolds follows. Note that this is due to the fact that the connection is assumed to be torsion-free. Cosymplectic manifolds can be generalised by assuming that \(\eta\in\Omega^{1}(M)\) and \(\omega\in\Omega^{2}(M)\) are closed forms on \(M\), but \(\ker\eta\cap\ker\omega\) is a distribution of fixed rank that is not necessarily zero. This implies that \(\omega\) is a presymplectic form on \(M\). This gives rise to the definition of a precosymplectic manifold. When \(\ker\eta\cap\ker\omega=\{0\}\), one retrieves the definition of a cosymplectic manifold. **Definition 2.17**.: A _precosymplectic structure_ in \(M\) is a pair \((\omega,\eta)\), where \(\omega\in\Omega^{2}(M)\) and \(\eta\in\Omega^{1}(M)\) are closed differential forms such that \(\ker\eta\cap\ker\omega\) is a regular distribution strictly included in \(\ker\omega\) at every \(x\in M\). If \(\operatorname{rank}\omega=2r<\dim M\), the triple \((M,\omega,\eta)\) is said to be a _precosymplectic manifold_ of rank \(2r\). It is worth stressing that the fact that \(\ker\eta\cap\ker\omega\) is a regular distribution strictly contained in \(\ker\omega\) implies that \(\eta\wedge\omega^{r}\) is a non-vanishing form and \(\omega^{r+1}=0\) for a certain fixed \(r\), and conversely. Therefore, \(\omega\) has constant rank \(2r\), with \(2r<\dim M\). **Remark 2.18**.: Let \((P,\omega)\) be a presymplectic manifold with Darboux coordinates \(\{q^{i},p_{i},z^{j}\}\). Consider the manifold \(\mathbb{R}\times P\) with the induced coordinates \(\{t,q^{i},p_{i},z^{j}\}\) obtained as usual, namely, \(q^{i},p_{i},z^{j}\) are the pull-back to \(\mathbb{R}\times P\) of the chosen variables in \(P\). Then, \((\mathbb{R}\times P,\pi_{2}^{*}\omega,\pi_{1}^{*}\mathrm{d}t)\) is a precosymplectic manifold. In the obtained local coordinates, \(\pi_{2}^{*}\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\) while \(\pi_{1}^{*}\mathrm{d}t\) is denoted by \(\mathrm{d}t\) to simplify the notation. **Remark 2.19**.: Consider the regular distribution \(D=\ker\omega\cap\ker\eta\) of a precosymplectic manifold \((M,\omega,\eta)\). Then, \(D\) is involutive because \(\ker\omega\) and \(\ker\eta\) are so. The foliation associated with \(D\) defines a local projection \[\pi\colon M\to\widetilde{M}=M/(\ker\omega\cap\ker\eta)\,,\] where \(\widetilde{M}\) is the quotient manifold of the leaves of \(D\). Recall that we are assuming that \(\widetilde{M}\) is a manifold for simplicity. Indeed, one of the general assumptions of our paper is that manifold structures and other existing mathematical local structures are defined globally. In reality, one can only ensure that for every \(x\in M\) and a local neighbourhood \(U_{x}\) of \(x\), the space \(M/(\ker\omega\cap\ker\eta)\) is a manifold. 
Hence, by our general assumptions, there exists a unique cosymplectic structure \((\widetilde{\omega},\widetilde{\eta})\) on \(\widetilde{M}\) such that \(\pi^{*}\widetilde{\omega}=\omega\) and \(\pi^{*}\widetilde{\eta}=\eta\). As in the case of cosymplectic manifolds, we can define special types of vector fields for precosymplectic manifolds. **Definition 2.20**.: Given a precosymplectic manifold \((M,\omega,\eta)\), a vector field \(X\in\mathfrak{X}(M)\) satisfying \[\iota_{X}\,\omega=0\,,\quad\iota_{X}\,\eta=1\,,\] is called a _Reeb vector field_. The space generated by Reeb vector fields, namely \(\ker\omega\), is called the _Reeb distribution_ of \((M,\omega,\eta)\). Note that, if \(R\in\mathfrak{X}(M)\) is a Reeb vector field, then \(R+Y\) is also a Reeb vector field for every \(Y\in\ker\omega\cap\ker\eta\). In other words, Reeb vector fields for precosymplectic manifolds need not be univocally defined. Finally, let us state the Darboux theorem for precosymplectic manifolds, whose proof seems, as far as we know, to be absent in the literature. Nevertheless, it is always implicitly assumed that it holds [16, 26] and it is quite straightforward. **Theorem 2.21** (Darboux Theorem for precosymplectic manifolds).: _Let \((M,\omega,\eta)\) be a precosymplectic manifold with \(\operatorname{rank}\omega=2r\leq\dim M-1\). For every \(x\in M\), there exist local coordinates \(\{t,q^{i},p_{i},z^{j}\}\) around \(x\), where \(1\leq i\leq r\) and \(1\leq j\leq\dim M-1-2r\), such that_ \[\eta=\mathrm{d}t\,,\quad\omega=\sum_{i=1}^{r}\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\,. \tag{2}\] Proof.: Since \(\omega\) is a presymplectic form, there exist coordinates \(\{q^{i},p_{i},z^{\prime}_{i}\}\) on a neighbourhood \(U\) of \(x\) such that \(\omega=\sum_{i=1}^{r}\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\). Since \((\ker\eta\cap\ker\omega)^{\circ}=\langle\eta\rangle\oplus\operatorname{Im}\omega\) and \(\eta\) does not vanish, one has that \(\operatorname{rank}\omega\leq\dim M-1\). On the other hand, \(\eta\) is closed and, therefore, there exists a function \(t\) on \(U\), where \(U\) can be chosen smaller if necessary, such that \(\eta=\mathrm{d}t\) and \(\omega=\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}\). Since \(\eta\wedge\omega^{r}\) does not vanish, \(\{t,q^{i},p_{i}\}\) are functionally independent functions. Finally, one can choose additional coordinates \(z^{1},\ldots,z^{\dim M-1-2r}\), functionally independent with respect to \(\{t,q^{i},p_{i}\}\), and (2) will hold. As in the previous cases, there exists a locally defined flat connection \(\nabla\) whose Christoffel symbols vanish on the chosen Darboux coordinates. Then, \(\eta\) and \(\omega\) become parallel differential forms relative to \(\nabla\). This motivates the following natural definition. **Definition 2.22**.: A _precosymplectic connection_ relative to a precosymplectic manifold \((M,\omega,\eta)\) is a connection on \(M\) such that \(\nabla\eta=0\) and \(\nabla\omega=0\). Note that, as previously, the existence of a flat precosymplectic connection allows one to provide a brief proof of the Darboux theorem for precosymplectic manifolds.
## 3 \(k\)-symplectic and \(k\)-presymplectic manifolds
Let us introduce and provide Darboux theorems for \(k\)-symplectic manifolds. This will give a new approach, complementary to the classical results [4, 14] and to some more modern approaches [29]. Moreover, we will discuss the existence of Darboux theorems for \(k\)-presymplectic manifolds. 
Furthermore, this will be done by providing new, simpler, shorter, and more geometrical proofs of Darboux theorems for \(k\)-symplectic manifolds while giving more details and, as far as we know, a new Darboux theorem for linear spaces [4]. Additionally, we will give a new proof of the existence of a complement to a polarisation that is isotropic relative to the differential two-forms of a \(k\)-symplectic structure. On the other hand, Darboux theorems give rise to the hereafter called _flat \(k\)-symplectic and \(k\)-presymplectic connections_, which, in turn, lead to other proofs of the respective Darboux theorems. It is worth noting that an alternative, somewhat different, development of these ideas for the \(k\)-symplectic case can be found in [14]. Moreover, some new structures will arise in our approach, and our results concerning \(k\)-presymplectic manifolds seem to be entirely new. **Definition 3.1**.: Let \(M\) be an \(n(k{+}1)\)-dimensional manifold. A \(k\)_-symplectic structure_ on \(M\) is a family \((\omega^{1},\ldots,\omega^{k},V)\), where \(V\) is an integrable distribution on \(M\) of rank \(nk\), and \(\omega^{1},\ldots,\omega^{k}\) are closed differential 2-forms on \(M\) satisfying that 1. \(\left.\omega^{\alpha}\right|_{V\times V}=0\), for \(1\leq\alpha\leq k\), 2. \(\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}=\{0\}\). Under the above hypotheses, \((M,\omega^{1},\ldots,\omega^{k},V)\) is called a \(k\)_-symplectic manifold_. We call \(V\) a _polarisation_ of the \(k\)-symplectic manifold. Our notion of \(k\)-symplectic manifold matches the one given by A. Awane [4, 6]. Moreover, it is equivalent to the concepts of _standard polysymplectic structure_ of C. Gunther [39] and _integrable \(p\)-almost cotangent structure_ introduced by M. de Leon _et al._ [24, 25]. In the case \(k=1\), Awane's definition reduces to the notion of _polarised symplectic manifold_, that is, a symplectic manifold with a Lagrangian foliation. We will illustrate in forthcoming examples that the distribution \(V\) is needed to ensure the existence of a particular type of Darboux coordinates. In fact, Gunther calls polysymplectic manifolds the differential geometric structures obtained from our definition by removing the existence of the distribution \(V\). Meanwhile, a _standard_ polysymplectic manifold in Gunther's paper is a polysymplectic manifold admitting an atlas of Darboux coordinates. Note that a polysymplectic manifold may have an atlas of Darboux coordinates without admitting a distribution \(V\). As a particular case, if we think of a symplectic manifold as a one-symplectic manifold, then it is clear that it has local Darboux coordinates, but the standard symplectic structure on the sphere does not admit a polarisation [5]. Then, Gunther's definition is more general than ours, while it is equivalent to our definition if the compatibility of two charts of Darboux coordinates \(\{y^{i},p_{i}^{\alpha}\}\) and \(\{x^{i},\pi_{i}^{\alpha}\}\) requires that \(x=x(y)\) and that the momenta are transformed accordingly, namely \(\pi^{\alpha}=\pi^{\alpha}(y,p)\) are the momenta associated with the \(\{x^{i}\}\). Otherwise, the equivalence is only local. Let us provide a Darboux theorem at the tangent space of a point of a \(k\)-symplectic manifold. Since every \(k\)-symplectic manifold \((M,\omega^{1},\ldots,\omega^{k},V)\) induces at every \(\mathrm{T}_{x}M\), for \(x\in M\), a so-called \(k\)-_symplectic vector space_, Theorem 3.2 can be understood as a Darboux theorem for \(k\)-symplectic vector spaces. 
**Theorem 3.2**.: **(\(k\)-symplectic linear Darboux theorem)** _Assume that \((M,\omega^{1},\ldots,\omega^{k},V)\) is a \(k\)-symplectic manifold. For every \(x\in M\), there exists a basis \(\{e^{1},\ldots,e^{n};e_{1}^{\beta},\ldots,e_{n}^{\beta}\}_{\beta=1,\ldots,k}\) of \(\mathrm{T}_{x}^{*}M\) such that_ \[\omega^{\beta}=\sum_{i=1}^{n}e^{i}\wedge e_{i}^{\beta}\,,\qquad V=\bigoplus_{\alpha=1}^{k}V_{\alpha},\qquad V_{\beta}=\langle e^{1}_{\beta},\ldots,e^{n}_{\beta}\rangle,\qquad\beta=1,\ldots,k\,.\] _Note that \(\{e_{1},\ldots,e_{n},e^{1}_{\beta},\ldots,e^{n}_{\beta}\}\) is the dual basis in \(\mathrm{T}_{x}M\)._ Proof.: The result amounts to the Darboux theorem for symplectic linear spaces for \(k=1\). Hence, let us assume \(k>1\). Since \(\{0\}=\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}\), one has that \[\mathrm{T}_{x}^{*}M=\mathrm{Im}\,\omega_{x}^{1}+\cdots+\mathrm{Im}\,\omega_{x}^{k}\,,\qquad\forall x\in M\,. \tag{3}\] Although all posterior structures in this proof refer to the point \(x\), the point will be omitted to simplify the notation. Since \(\omega^{\beta}|_{V\times V}=0\), one has that \(\omega^{\beta}(V)\subset V^{\circ}\) for \(\beta=1,\ldots,k\). If \(W\) is a regular distribution supplementary to \(V\), then \(\mathrm{rank}\,W=n\), and \(\mathrm{rank}\,\omega^{\beta}\left(W\right)\leq n\). Note that \[\omega^{1}(V)+\cdots+\omega^{k}(V)\subset V^{\circ}\,.\] Due to (3) and the above discussion, one has that \[\omega^{1}(W)+\cdots+\omega^{k}(W)\] is a distribution of rank \(nk\), at least. This implies that \(\mathrm{rank}\,\omega^{\beta}(W)=n\) and \[\omega^{1}(W)\oplus\cdots\oplus\omega^{k}(W)\oplus V^{\circ}=\mathrm{T}^{*}M\,,\qquad V^{\circ}=\omega^{1}(V)+\ldots+\omega^{k}(V)\,.\] If \(\omega^{\alpha}(v+w)=0\), where \(v\in V\) and \(w\in W\), then \(\omega^{\alpha}(v)=-\omega^{\alpha}(w)\). Since \(\omega^{\alpha}(W)\cap\omega^{\alpha}(V)=0\) and \(\mathrm{rank}\,\omega^{\alpha}|_{W}=n\), then \(\omega^{\alpha}(v)=0\) and \(\omega^{\alpha}(w)=0\), which implies that \(w=0\) and \(v\in\ker\omega^{\alpha}\). Hence, \(\ker\omega^{\alpha}\subset V\). We can consider the distributions \[V_{\beta}=\bigcap_{\begin{subarray}{c}1\leq\alpha\leq k\\ \alpha\neq\beta\end{subarray}}\ker\omega^{\alpha}\,,\qquad\beta=1,\ldots,k\quad(k\neq 1)\,,\qquad\text{or}\qquad V_{1}=V\ \ (k=1)\,.\] Note that \(\omega^{\beta}(V_{\alpha})=0\) for every \(\alpha\neq\beta\) with \(\alpha,\beta=1,\ldots,k\). Let \(\{w_{1},\ldots,w_{n}\}\) be a basis of \(W\). Since \(\omega^{\alpha}(W)\) has rank \(n\) and its elements do not belong to \(V^{\circ}\), then the restrictions of \(\omega^{\alpha}(w_{1}),\ldots,\omega^{\alpha}(w_{n})\) to \(V\) are linearly independent and there exist \(v_{1},\ldots,v_{n}\) in \(V\) such that \(\omega^{\alpha}(v_{1}),\ldots,\omega^{\alpha}(v_{n})\) are linearly independent on \(W\), e.g. \(\omega^{\alpha}(w_{i},v_{j})=\delta_{ij}\) for \(i,j=1,\ldots,n\). Hence, \(\mathrm{rank}\,\omega^{\alpha}(V)\geq n\) and, since \(\omega^{\alpha}(V)\subset V^{\circ}\), one gets \(\mathrm{rank}\,\omega^{\alpha}(V)=n\) for every \(\alpha=1,\ldots,k\). Since \(\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}=0\) and \(\omega^{\alpha}(V)\subset V^{\circ}\) for \(\alpha=1,\ldots,k\), it follows that \(\phi:v\in V\mapsto(\omega^{1}(v),\ldots,\omega^{k}(v))\in V^{\circ}\oplus\stackrel{(k)}{\cdots}\oplus V^{\circ}\), where \(\oplus\) stands for a Whitney sum of vector bundles in the natural way, is injective. 
Hence, \(\phi\) becomes an isomorphism and \(V\simeq\bigoplus_{\alpha=1}^{k}V_{\alpha}\). Indeed, \(v=\sum_{\alpha=1}^{k}\phi^{-1}(\mathrm{pr}_{\alpha}(\phi(v)))\), where \(\mathrm{pr}_{\alpha}:(w_{1},\ldots,w_{k})\in V^{\circ}\oplus\ldots\oplus V^{\circ}\mapsto(0,\ldots,0,w_{\alpha},0,\ldots,0)\in V^{\circ}\oplus\ldots\oplus V^{\circ}\) is the corresponding decomposition. Since \(V=\bigoplus_{\alpha=1}^{k}V_{\alpha}\) and \(\omega^{\beta}(V_{\beta})\subset V^{\circ}\) has the same rank as \(V_{\beta}\), it follows that \(\mathrm{rank}\,V_{\beta}=n\). Hence, one can consider a basis \(\{e^{1},\ldots,e^{n}\}\) of \(V^{\circ}\). There exists a basis \(f^{1}_{\beta},\ldots,f^{n}_{\beta}\) of each \(V_{\beta}\) such that \(\omega^{\alpha}(f^{i}_{\beta})=-e^{i}\delta^{\beta}_{\alpha}\) for \(i=1,\ldots,n\) and \(\alpha,\beta=1,\ldots,k\). Considering a dual basis \(\{f^{\beta}_{i},e^{i}\}\) of \(\mathrm{T}_{x}^{*}M\), one has that \[\omega^{\beta}=e^{i}\wedge f^{\beta}_{i}+c^{\beta}_{ij}e^{i}\wedge e^{j}\,,\qquad\beta=1,\ldots,k\,. \tag{4}\] If \(e^{\beta}_{i}=f^{\beta}_{i}+c^{\beta}_{ij}e^{j}\), then \[\omega^{\beta}=\sum_{i=1}^{n}e^{i}\wedge e^{\beta}_{i}\,,\qquad\beta=1,\ldots,k\,.\] Note that the change on the covectors \(e^{\beta}_{i}\) implies that, in the dual bases to the bases \(\{e^{i},e^{\alpha}_{i}\}\) and \(\{e^{i},f^{\alpha}_{i}\}\) in \(\mathrm{T}_{x}^{*}M\), one has that \(f^{i}_{\alpha}=e^{i}_{\alpha}\) for \(\alpha=1,\ldots,k\) and \(i=1,\ldots,n\). Hence, \(V_{\alpha}=\langle f^{1}_{\alpha},\ldots,f^{n}_{\alpha}\rangle=\langle e^{1}_{\alpha},\ldots,e^{n}_{\alpha}\rangle\) for \(\alpha=1,\ldots,k\). It stems from Theorem 3.2 that \(\omega^{1},\ldots,\omega^{k}\) have constant rank. This fact comes from the definition of \(k\)-symplectic structure, the dimension of \(M\), and the rank of \(V\). Note also that the last paragraph in the proof of Theorem 3.2 can be almost straightforwardly changed to put a symplectic linear form and a Lagrangian subspace into a canonical form. **Definition 3.3**.: Given a \(k\)-symplectic manifold \((M,\omega^{1},\ldots,\omega^{k},V)\) with \(k\neq 1\), we set \[V_{\beta}=\bigcap_{\begin{subarray}{c}\alpha=1\\ \alpha\neq\beta\end{subarray}}^{k}\ker\omega^{\alpha}\,,\qquad\beta=1,\ldots,k.\] **Lemma 3.4**.: _For a \(k\)-symplectic manifold \((M,\omega^{1},\ldots,\omega^{k},V)\), the distributions \(V_{1},\ldots,V_{k}\) satisfy that every \(x\in M\) admits a coordinate system \(\{y^{1},\ldots,y^{n};y_{1}^{\alpha},\ldots,y_{n}^{\alpha}\}\) on a neighbourhood of \(x\) so that_ \[V_{\alpha}=\left\langle\frac{\partial}{\partial y_{1}^{\alpha}},\ldots,\frac{\partial}{\partial y_{n}^{\alpha}}\right\rangle,\qquad\alpha=1,\ldots,k.\] Proof.: Let \(y^{1},\ldots,y^{n}\) be common functionally independent first-integrals for all vector fields taking values in \(V\). If \(k=1\), the result follows trivially, so we assume \(k>1\). Given different \(\alpha_{1},\ldots,\alpha_{k-1}\in\{1,\ldots,k\}\), one has that \[V_{\alpha_{1}}\oplus\cdots\oplus V_{\alpha_{k-1}}=\ker\omega^{\beta},\] where \(\beta\) is the only number in \(\{1,\ldots,k\}\) not included in \(\{\alpha_{1},\ldots,\alpha_{k-1}\}\). 
Hence, the distribution \(V_{\alpha_{1}}\oplus\ldots\oplus V_{\alpha_{k-1}}\) has rank \(n(k-1)\), it is integrable because \(\omega^{\beta}\) is closed, and the vector fields taking values in it have \(n\) common local first-integrals \(y_{1}^{\beta},\ldots,y_{n}^{\beta}\) such that \(\mathrm{d}y_{1}^{\beta}\wedge\ldots\wedge\mathrm{d}y_{n}^{\beta}\wedge \mathrm{d}y^{1}\wedge\ldots\wedge\mathrm{d}y^{n}\neq 0\). By construction, \(\{y_{1}^{1},\ldots,y_{n}^{1},\ldots,y_{1}^{k},\ldots,y_{n}^{k},y^{1},\ldots,y ^{n}\}\) becomes a local coordinate system on \(M\) and \[V_{\alpha}=\left(\bigcap_{i=1}^{n}\ker\mathrm{d}y^{i}\right)\cap\left(\bigcap _{\begin{subarray}{c}\beta\neq\alpha\\ i=1,\ldots,n\end{subarray}}\ker\mathrm{d}y_{i}^{\beta}\right).\] Moreover, \(\frac{\partial}{\partial y_{1}^{\beta}},\ldots,\frac{\partial}{\partial y_{n} ^{\beta}}\) vanish on all coordinates \(y_{1}^{\alpha},\ldots,y_{n}^{\alpha}\) with \(\alpha\neq\beta\). Hence, \[V_{\beta}=\left\langle\frac{\partial}{\partial y_{1}^{\beta}},\ldots,\frac{ \partial}{\partial y_{n}^{\beta}}\right\rangle=\bigcap_{\alpha\neq\beta=1}^{ k}\ker\omega^{\alpha},\qquad\beta=1,\ldots,k.\] **Theorem 3.5** (**Darboux theorem for \(k\)-symplectic manifolds)**.: _Let \((M,\omega^{1},\ldots,\omega^{k},V)\) be a \(k\)-symplectic manifold. Around every point \(x\in M\), there exist local coordinates \(\{q^{i},p_{i}^{\alpha}\}\), with \(1\leq i\leq n\) and \(1\leq\alpha\leq k\), such that_ \[\omega^{\alpha}=\sum_{i=1}^{n}\mathrm{d}q^{i}\wedge\mathrm{d}p_{i}^{\alpha} \,,\quad V=\left\langle\frac{\partial}{\partial p_{i}^{\alpha}}\right\rangle_ {\begin{subarray}{c}i=1,\ldots,n,\\ \alpha=1,\ldots,k\end{subarray}}. \tag{5}\] Proof.: By our Darboux theorem for \(k\)-symplectic vector spaces, namely Theorem 3.2, there exists a basis \(\{e^{1},\ldots,e^{n};e_{1}^{\alpha},\ldots,e_{n}^{\alpha}\}_{\alpha=1,\ldots,k}\) of \(\mathrm{T}_{x}^{*}M\) such that \(\omega_{x}^{\alpha}=\sum_{i=1}^{n}e^{i}\wedge e_{i}^{\alpha}\) for \(\alpha=1,\ldots,k\). The basis is chosen so that the dual basis \(\{e_{1},\ldots,e_{n},e_{\alpha}^{1},\ldots,e_{\alpha}^{n}\}\), with \(\alpha=1,\ldots,k\), is such that \(V=\langle e_{\alpha}^{i}\rangle_{\begin{subarray}{c}\alpha=1,\ldots,k\\ i=1,\ldots,n\end{subarray}}\). Recall that the subspaces in \(\mathrm{T}_{x}M\) of the form \[V_{\beta x}=\bigcap_{\begin{subarray}{c}\alpha=1\\ \alpha\neq\beta\end{subarray}}^{k}\ker\omega_{x}^{\alpha}=\left\langle e_{1}^{ \beta},\ldots,e_{n}^{\beta}\right\rangle,\quad\beta=1,\ldots,k\,,\] satisfy that \(V_{x}=\bigoplus_{\beta=1}^{k}V_{\beta x}\). By Lemma 3.4, there exist variables \(\{y^{j},y_{j}^{\beta}\}\), with \(j=1,\ldots,n\) and \(\beta=1,\ldots,k\), such that, locally, \(V^{\beta}=\left\langle\frac{\partial}{\partial y_{1}^{\beta}},\ldots,\frac{ \partial}{\partial y_{n}^{\beta}}\right\rangle\), with \(\beta=1,\ldots,k\). Moreover, \(\ker\omega^{\beta}=\bigoplus_{\alpha\neq\beta}V^{\alpha}\). Using previous results and since \(\omega^{\beta}\big{|}_{V\times V}=0\), we have \(\omega^{\beta}=f_{i}^{j\beta}\mathrm{d}y^{i}\wedge\mathrm{d}y_{j}^{\beta}+g_{ ij}\mathrm{d}y^{i}\wedge\mathrm{d}y^{j}\) for certain functions \(g_{ij},f_{i}^{j\beta}\), with \(i,j=1,\ldots,n\) and \(\beta=1,\ldots,k\). Since \(\mathrm{d}\omega^{\beta}=0\), it follows that \(f_{i}^{j\beta}=f_{i}^{j\beta}(y^{l},y_{l}^{\beta})\) and \(g_{ij}=g_{ij}(y^{l},y_{l}^{\beta})\) for \(i,j,l=1,\ldots,n\). Therefore, each \(\omega^{\beta}\) can be considered as a differential two-form on \(\mathbb{R}^{2n}\). 
Moreover, each \(V_{\beta}\) can then be considered as a Lagrangian distribution of the symplectic two-form \(\omega^{\beta}\), when the latter is considered as a differential two-form on \(\mathbb{R}^{2n}\). Consequently, for a fixed \(\beta\), one has \[0=-\frac{\partial}{\partial y_{j}^{\beta}}y^{i}=\iota_{X_{y^{i}}^{\beta}}\,\iota_{\partial/\partial y_{j}^{\beta}}\,\omega^{\beta},\qquad i,j=1,\ldots,n\quad\Longrightarrow\quad X_{y^{i}}^{\beta}\in(V_{\beta})^{\perp}=V_{\beta}\,,\quad i=1,\ldots,n\,.\] Note that the orthogonal is relative to the restriction of \(\omega^{\beta}\) to \(\mathbb{R}^{2n}\). In any case, by additionally considering \(\iota_{X_{y^{i}}^{\beta}}\omega^{\alpha}=0\) for \(\alpha\neq\beta\), one can also see that \(X_{y^{i}}^{\beta}\) becomes a vector field taking values in \(V_{\beta}\). What follows is an adaptation of the Liouville-Mineur-Arnold theorem (see also [15]). Since \(V_{\alpha}\) is integrable, we can consider a leaf \(F\) of \(V_{\alpha}\) and its canonical inclusion \(j_{F}:F\hookrightarrow M\). Let us define the map \(\zeta\colon x\in M\mapsto(y^{1}(x),\ldots,y^{n}(x))\in\mathbb{R}^{n}\). Consider a regular point \(x^{\prime}\in M\) of \(\zeta\). Since the map \(\zeta\) is regular in an open neighbourhood of \(x^{\prime}\), there exist vector fields \(Y_{1},\ldots,Y_{n}\) on a neighbourhood of \(x^{\prime}\) such that \(Y_{i}\) and \(\frac{\partial}{\partial y^{i}}\) on \(\mathbb{R}^{n}\) are \(\zeta\)-related for \(i=1,\ldots,n\). Consider the inner contractions \(\Theta_{i}^{\alpha}=\iota_{Y_{i}}\,\omega^{\alpha}\) for \(i=1,\ldots,n\) on a neighbourhood of \(x^{\prime}\) in \(M\) and the vector fields \(X_{y^{i}}^{\alpha}\), which take values in \(V_{\alpha}\). Then, \[\iota_{X_{y^{i}}^{\alpha}}\,\Theta_{j}^{\alpha}=\iota_{X_{y^{i}}^{\alpha}}\,\iota_{Y_{j}}\,\omega^{\alpha}=\omega^{\alpha}(Y_{j},X_{y^{i}}^{\alpha})=-\omega^{\alpha}(X_{y^{i}}^{\alpha},Y_{j})=-Y_{j}y^{i}=-\delta_{j}^{i},\qquad i,j=1,\ldots,n.\] Hence, given two vector fields \(X_{y^{i}}^{\alpha},X_{y^{j}}^{\alpha}\), one has \[(\mathrm{d}\Theta_{\ell}^{\alpha})(X_{y^{i}}^{\alpha},X_{y^{j}}^{\alpha})=X_{y^{i}}^{\alpha}\Theta_{\ell}^{\alpha}(X_{y^{j}}^{\alpha})-X_{y^{j}}^{\alpha}\Theta_{\ell}^{\alpha}(X_{y^{i}}^{\alpha})-\Theta_{\ell}^{\alpha}([X_{y^{i}}^{\alpha},X_{y^{j}}^{\alpha}])=0\,.\] The latter is due to the fact that \([X_{y^{i}}^{\alpha},X_{y^{j}}^{\alpha}]\) is the Hamiltonian vector field of \(\{y^{i},y^{j}\}=X_{y^{j}}^{\alpha}y^{i}=0\) because \(X_{y^{j}}^{\alpha}\) takes values in \(V_{\alpha}\). Thus, \(j_{F}^{*}\Theta_{i}^{\alpha}\) is closed and there exists a potential function \(p_{i}^{\alpha}\) such that \(j_{F}^{*}\Theta_{i}^{\alpha}=\mathrm{d}p_{i}^{\alpha}\). Recalling that \(\omega^{\alpha}|_{V_{\alpha}\times V_{\alpha}}=0\), it follows that \(\omega^{\alpha}=\mathrm{d}y^{i}\wedge\mathrm{d}p_{i}^{\alpha}\). Moreover, it follows that \[V_{\alpha}=\left\langle\frac{\partial}{\partial p_{i}^{\alpha}}\right\rangle,\qquad\alpha=1,\ldots,k,\] and \(V\) takes the proposed form. Let us recall that the above proof could have been cut in half by referring straightforwardly to the Liouville-Mineur-Arnold theorem: since \(\{y^{i},y^{j}\}=0\), with \(i,j=1,\ldots,n\), that theorem implies that there are functions \(p_{1}^{\beta},\ldots,p_{n}^{\beta}\) with \(\beta=1,\ldots,k\) such that \(\omega^{\beta}=\mathrm{d}y^{i}\wedge\mathrm{d}p_{i}^{\beta}\) for each \(\beta=1,\ldots,k\). Instead, we decided to give a complete, self-contained proof. 
Without this full explanation, the proof of Theorem 3.5 would probably be the shortest direct proof of the Darboux theorem for \(k\)-symplectic manifolds in the literature. Although Theorem 3.5 relies on Lemma 3.4 and the \(k\)-symplectic linear Darboux theorem, Lemma 3.4 is a rather straightforward geometric result, which was described carefully to verify all the details, and only the fact that \(V=\bigoplus_{\alpha=1}^{k}V_{\alpha}\) is needed from the \(k\)-symplectic linear Darboux theorem to prove our full \(k\)-symplectic Darboux theorem. Moreover, note that one could have assumed that Darboux coordinates are only concerned with the canonical expressions of \(\omega^{1},\ldots,\omega^{k}\). It turns out that, given the conditions on the distribution, once we put \(\omega^{1},\ldots,\omega^{k}\) in a canonical manner, we also put a basis of \(V\) in the desired form. We will see in the next section that this is not the case for Darboux coordinates for other structures. **Definition 3.6**.: Given a \(k\)-symplectic manifold \((M,\omega^{1},\ldots,\omega^{k},V)\), we call \(k\)_-symplectic Darboux coordinates_ the coordinates allowing us to write \(\omega^{1},\ldots,\omega^{k}\) and \(V\) in the form (5). The \(k\)-symplectic Darboux coordinates will be called just Darboux coordinates when this does not lead to any misunderstanding. Note that the proof of Theorem 3.5 shows that \(k\)-symplectic Darboux coordinates induce the existence of a distribution \(V^{\prime}=\langle\partial/\partial y^{1},\ldots,\partial/\partial y^{n}\rangle\) that allows us to state the following result. **Corollary 3.7**.: _Every \(k\)-symplectic manifold \((M,\omega^{1},\ldots,\omega^{k},V)\) admits, locally, a supplementary integrable distribution \(V^{\prime}\) on \(M\) such that \(V\oplus V^{\prime}=\mathrm{T}M\) and \(\omega^{\alpha}|_{V^{\prime}\times V^{\prime}}=0\) for \(\alpha=1,\ldots,k\)._ The canonical model of a \(k\)-symplectic manifold is the cotangent bundle of \(k^{1}\)-covelocities, namely \(\bigoplus^{k}\mathrm{T}^{*}Q=\mathrm{T}^{*}Q\oplus\overset{k}{\dots}\oplus\mathrm{T}^{*}Q\) (the Whitney sum of \(k\) copies of the cotangent bundle of a manifold \(Q\)), equipped with the distribution \(V=\ker\mathrm{T}\pi\), where \(\pi^{\alpha}\colon\bigoplus^{k}\mathrm{T}^{*}Q\to\mathrm{T}^{*}Q\) and \(\pi:\bigoplus^{k}\mathrm{T}^{*}Q\to Q\) are the canonical projections onto the \(\alpha\)-th component and \(Q\), respectively, and the canonical presymplectic two-forms \(\omega^{\alpha}=(\pi^{\alpha})^{*}\omega\) with \(\alpha=1,\ldots,k\), where \(\omega\) stands for the canonical symplectic two-form in \(\mathrm{T}^{*}Q\). In this model, natural coordinates are Darboux coordinates, and the \(k\)-symplectic Darboux theorem states that \(k\)-symplectic manifolds are locally diffeomorphic to a cotangent bundle of \(k^{1}\)-covelocities. Meanwhile, the distribution \(V^{\prime}\) is a distribution in \(\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\) whose leaves project diffeomorphically onto \(Q\). As in the previous sections, one can introduce the notion of a connection compatible with a \(k\)-symplectic manifold [8, 14]. **Definition 3.8**.: A \(k\)_-symplectic connection_ on a \(k\)-symplectic manifold \((M,\omega^{\alpha},V)\) is a connection \(\nabla\) on \(M\) such that \(\nabla\omega^{\alpha}=0\) for every \(\alpha=1,\ldots,k\). Again, Darboux coordinates allow us to define, locally, a connection \(\nabla\) such that \(\nabla\omega^{\alpha}=0\) for \(\alpha=1,\ldots,k\). 
Vice versa, the \(k\)-symplectic linear Darboux theorem allows us to put \(\omega^{1},\ldots,\omega^{k}\) and the distribution \(V\) into a canonical form on the tangent space at a point, and a flat connection compatible with the \(k\)-symplectic manifold enables us to extend this canonical form to an open neighbourhood of the initial point, where \(\omega^{1},\ldots,\omega^{k}\) and \(V\) take the form (5). It is worth recalling the interesting work [14], where connections compatible with \(k\)-symplectic structures are studied. These connections depend on the existence of certain foliations and are canonical once such foliations are given. By using such foliations and distributions, the Darboux theorem can be proved. We find that our approach here is more direct than that in [14], and the Darboux theorem is proved more geometrically in our work. Note that a \(k\)-symplectic Darboux theorem also appears as a particular case of the multisymplectic theory in [29]. Now, let us study Darboux theorems for \(k\)-presymplectic manifolds (see [30, 37] for some previous results on this case). This case poses several fundamental problems. First, there exist several possible definitions of \(k\)-presymplectic manifolds depending on their possible applications or representative cases. Some possible definitions of \(k\)-presymplectic manifolds can be found in [30]. Meanwhile, [37] defines a \(k\)-presymplectic manifold as a manifold equipped with \(k\) closed two-forms. It is clear that we will not have Darboux coordinates with such a general definition. As shown next, a direct analogue of the Darboux coordinates is not available in some of the possible definitions of \(k\)-presymplectic structure, while cases that admit Darboux coordinates may not be of physical interest. Let us give a brief analysis of this matter. **Definition 3.9**.: Let \(\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\) be endowed with its canonical \(k\)-symplectic structure \(\omega^{1},\ldots,\omega^{k}\) and let \(\pi\colon\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\to Q\) be the canonical projection onto \(Q\). A _canonical foliated \(k\)-presymplectic manifold_ is a tuple \((S,\omega^{1}_{S},\ldots,\omega^{k}_{S})\) given by a submanifold \(S\subset\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\) such that \(\pi|_{S}\colon S\to Q\) is a fibre bundle and \(S\) is endowed, for \(j_{S}\colon S\to\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\) being the canonical inclusion, with the \(k\) differential two-forms \(\omega_{S}^{\alpha}=j_{S}^{*}\omega^{\alpha}\), for \(\alpha=1,\dots,k\). The rank of the fibration \(\pi|_{S}:S\to Q\) is called the rank of \((S,\omega_{S}^{1},\dots,\omega_{S}^{k})\), while \(\omega_{S}^{1},\dots,\omega_{S}^{k}\) are called a _canonical foliated \(k\)-presymplectic structure_. More generally, the above gives rise to the following definition. **Definition 3.10**.: A _foliated \(k\)-presymplectic manifold_ is a tuple \((M,\omega^{1},\dots,\omega^{k})\) such that there exists a canonical foliated \(k\)-presymplectic manifold \((S,\omega_{S}^{1},\dots,\omega_{S}^{k})\) and a global diffeomorphism \(\phi:M\to S\) such that \(\phi^{*}\omega_{S}^{\alpha}=\omega^{\alpha}\) for \(\alpha=1,\dots,k\). A foliated \(k\)-presymplectic manifold \((M,\omega^{1},\dots,\omega^{k})\) is _exact_ if \(\omega^{1},\dots,\omega^{k}\) are exact. It is worth noting that the previous definition also makes sense for \(\phi\) being only a local diffeomorphism. 
In that case, the main results to be displayed afterward remain valid, but many more technical details are to be considered to prove them. To keep our presentation simple and highlight the main ideas about Darboux coordinates, which are generically local, we have defined \(\phi\) to be a global diffeomorphism. Definition 3.9 implies that \(\omega_{S}^{1},\dots,\omega_{S}^{k}\) admit a natural distribution \(V=\ker\mathrm{T}\pi\cap\mathrm{T}S\) of rank \(\dim S-\dim Q\) such that \(\omega_{S}^{\alpha}|_{V\times V}=0\) for \(\alpha=1,\dots,k\). If \(S=\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\), then \(V=\ker\mathrm{T}\pi\) and \(S\) gives rise to a \(k\)-symplectic structure admitting Darboux coordinates. Let us illustrate by means of a simple example why a Darboux \(k\)-presymplectic theorem does not exist for general foliated \(k\)-presymplectic manifolds. It is worth noting that Darboux coordinates for families of closed differential forms are, in the last instance, a way of writing them in a coordinate system so that their coefficients are constant. The following theorem shows that this is impossible for general \(k\)-presymplectic manifolds. **Theorem 3.11**.: _Every rank-zero exact canonical foliated \(k\)-presymplectic structure is equivalent to \(k\) exact differential two-forms on \(Q\)._ Proof.: An exact canonical foliated \(k\)-presymplectic manifold \((S\subset\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q,\omega_{S}^{1},\dots,\omega_{S}^{k})\) gives rise, as \(S\) is diffeomorphic to \(Q\) via \(\pi|_{S}:S\subset\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\to Q\), to a unique family of exact differential two-forms, \(\omega_{Q}^{1},\dots,\omega_{Q}^{k}\), on \(Q\) satisfying that \(\pi|_{S}^{*}\omega_{Q}^{1}=\omega_{S}^{1},\dots,\pi|_{S}^{*}\omega_{Q}^{k}=\omega_{S}^{k}\). Conversely, \(k\) exact presymplectic two-forms \(\omega_{Q}^{1},\dots,\omega_{Q}^{k}\) on \(Q\) with potentials \(\theta^{1},\dots,\theta^{k}\) give rise to a section \(S=\{(q,\theta^{1}(q),\dots,\theta^{k}(q))\mid q\in Q\}\) of \(\pi\colon\bigoplus_{\alpha=1}^{k}\mathrm{T}^{*}Q\to Q\). Note that \[j_{S}^{*}\omega^{\alpha}=-j_{S}^{*}\mathrm{d}(p_{i}^{\alpha}\mathrm{d}y^{i})=-\mathrm{d}\theta^{\alpha}|_{S}=\pi|_{S}^{*}\omega_{Q}^{\alpha}\,,\qquad\alpha=1,\dots,k\,.\] Then, \(\omega_{Q}^{1},\dots,\omega_{Q}^{k}\) are exact and equivalent to a rank-zero canonical foliated \(k\)-presymplectic structure. Since there is no way to put \(k\) arbitrary closed differential two-forms on \(Q\) into a coordinate system so that all of them will have constant coefficients, there will be no general Darboux theorem for foliated \(k\)-presymplectic manifolds, and thus there is no Darboux theorem for \(k\)-presymplectic manifolds in general. Theorem 3.11 can be considered as describing an extreme case of canonical foliated \(k\)-presymplectic manifolds. For the case of a fibration \(\pi|_{S}:S\to Q\) of rank one, it is simple to find new examples where there will be no Darboux coordinates. Assume the simple case of a fibration of rank one given by a submanifold \(S\subset\bigoplus_{\alpha=1}^{2}\mathrm{T}^{*}\mathbb{R}^{2}\) onto \(\mathbb{R}^{2}\). Since \(S\) has dimension three, the two differential forms \(\omega_{S}^{1},\omega_{S}^{2}\) can be assumed to have rank two and non-trivial common intersection of their kernels. In such a case, they are proportional. One of them can always be put into canonical form for certain variables, because it is presymplectic. 
Since they are proportional and due to the closedness condition, they depend only on two variables. Hence, to put them in canonical form with some Darboux variables amounts to putting two different volume forms on \(\mathbb{R}^{2}\) in canonical form for the same Darboux variables, which is impossible. **Example 3.12**.: _Let us describe in more detail a more complex example of a foliated 2-presymplectic manifold that does not admit Darboux coordinates. Consider \(\bigoplus_{\alpha=1}^{2}\mathrm{T}^{*}\mathbb{R}^{2}\) and the fibration of the submanifold \(S\) onto \(\mathbb{R}^{2}\) with rank one of the form_ \[S=\{(p_{1}^{(1)}(\lambda,y^{1},y^{2})\mathrm{d}y^{1}+p_{2}^{(1)}(\lambda,y^{1},y^{2})\mathrm{d}y^{2},p_{1}^{(2)}(\lambda,y^{1},y^{2})\mathrm{d}y^{1}+p_{2}^{(2)}(\lambda,y^{1},y^{2})\mathrm{d}y^{2}):\lambda,y^{1},y^{2}\in\mathbb{R}\}.\] _In particular, consider_ \[p_{1}^{(1)}=\lambda\,,\qquad p_{2}^{(1)}=0\,,\qquad p_{1}^{(2)}=f(\lambda,y^{1})\,,\qquad p_{2}^{(2)}=0\,,\] _for a certain function \(f(\lambda,y^{1})\) such that \(\partial f/\partial\lambda\) is different from the constant functions zero and one. Hence, \(\omega_{S}^{1}=\mathrm{d}\lambda\wedge\mathrm{d}y^{1}\) and \(\omega_{S}^{2}=\frac{\partial f}{\partial\lambda}(\lambda,y^{1})\mathrm{d}\lambda\wedge\mathrm{d}y^{1}\), which are closed, proportional, have rank-one kernel and cannot be put into a canonical form for canonical coordinates because \(\omega_{S}^{1},\omega_{S}^{2}\) amount to two different volume forms on \(\mathbb{R}^{2}\)._ There are several manners of defining a \(k\)-presymplectic manifold. The following one offers a possibility. **Definition 3.13**.: Let \(M\) be an \((n(k+1)-m)\)-dimensional manifold, with \(0\leq m\leq nk\). A \(k\)_-presymplectic structure_ on \(M\) is a family \((\omega^{1},\ldots,\omega^{k},V)\), where \(V\) is an \(r\)-dimensional integrable distribution and \(\omega^{1},\ldots,\omega^{k}\) are closed differential two-forms on \(M\) with \(\mathrm{rank}\,\omega^{\alpha}=2r_{\alpha}\) and \(r=\sum_{\alpha=1}^{k}r_{\alpha}\), where \(1\leq r_{\alpha}\leq n\), satisfying that \[\omega^{\alpha}|_{V\times V}=0\,,\qquad\alpha=1,\ldots,k\,.\] A manifold \(M\) endowed with a \(k\)-presymplectic structure is called a \(k\)_-presymplectic manifold_. We would expect to obtain for every \(k\)-presymplectic structure \((M,\omega^{1},\ldots,\omega^{k},V)\), where \(\mathrm{rank}\,\omega^{\alpha}=2r_{\alpha}\), with \(1\leq r_{\alpha}\leq n\), and every \(x\in M\) a local coordinate system \(\{y^{i},p_{i}^{\alpha}\}\) around \(x\) so that \[\omega^{\alpha}=\mathrm{d}y^{i_{j}^{\alpha}}\wedge\mathrm{d}p_{i_{j}^{\alpha}}^{\alpha}\,,\qquad\alpha=1,\ldots,k\,,\] for certain \(i_{j}^{\alpha}\in\{1,\ldots,n\}\) for \(j=1,\ldots,r_{\alpha}\) for every \(\alpha=1,\ldots,k\). Nevertheless, Example 3.12 provides a counterexample to the existence of a Darboux coordinate system for \(k\)-presymplectic structures. Contrary to previous examples, we will give conditions ensuring that a \(k\)-presymplectic manifold admits Darboux coordinates. Indeed, the manifold \(S\) is three-dimensional, while \(k=2\). The associated presymplectic forms have rank two. The distribution \(V\) is then two-dimensional and generated, for instance, by the vector fields \(\langle\partial/\partial\lambda,\partial/\partial y^{2}\rangle\). Then, \(n\) and \(m\) can be fixed to be two and three. 
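For concreteness, one may spell out Example 3.12 for one admissible choice of \(f\); the particular function below is only an illustrative choice and is not fixed by the example itself. Taking, for instance, \(f(\lambda,y^{1})=\tfrac{1}{2}\lambda^{2}\), so that \(\partial f/\partial\lambda=\lambda\) is different from the constant functions zero and one, the two forms read \[\omega_{S}^{1}=\mathrm{d}\lambda\wedge\mathrm{d}y^{1}\,,\qquad\omega_{S}^{2}=\lambda\,\mathrm{d}\lambda\wedge\mathrm{d}y^{1}=\lambda\,\omega_{S}^{1}\,.\] If both forms had constant coefficients in one and the same chart on \(S\), then, since \(\omega_{S}^{1}\) does not vanish, the proportionality function \(\lambda\) would be a quotient of constant coefficients and hence constant, a contradiction. This makes explicit the obstruction described above: \(\omega_{S}^{1}\) and \(\omega_{S}^{2}\) behave as two different volume forms on the plane with coordinates \(\{\lambda,y^{1}\}\). 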
It is worth noting that for a \(k\)-presymplectic structure on \(M\), any Riemannian metric \(g\) on \(M\) allows one to obtain a decomposition of a subspace \(E\subset\mathrm{T}_{x}M\) as a direct sum of subspaces \[E^{\kappa_{1},\ldots,\kappa_{k}}=E\cap\left(\bigcap_{\alpha=1}^{k}(\ker\omega_{x}^{\alpha})^{\kappa_{\alpha}}\right)\,, \tag{6}\] where \(\kappa_{\alpha}\in\{0,1\}\), while \((\ker\omega_{x}^{\alpha})^{0}=\ker\omega_{x}^{\alpha}\) and \((\ker\omega_{x}^{\alpha})^{1}=(\ker\omega_{x}^{\alpha})^{\perp_{g}}\), where \(\perp_{g}\) is the orthogonal relative to the introduced metric \(g\). The main aim of this decomposition is to divide \(\mathrm{T}_{x}M\) into two subspaces, \(V,S\), given by direct sums of the subspaces in (6) in such a manner that \(\omega^{\alpha}(V)\) and \(\omega^{\alpha}(S)\) have rank \(r_{\alpha}\), while \(\omega^{\alpha}(V)\cap\omega^{\alpha}(S)=0\) for \(\alpha=1,\ldots,k\). As in the case of \(k\)-symplectic linear spaces, one can now prove a \(k\)-presymplectic linear space Darboux theorem. **Lemma 3.14**.: **(\(k\)-presymplectic linear Darboux theorem)** _Let \((\omega^{1},\ldots,\omega^{k},V)\) be a \(k\)-presymplectic structure on \(M\), where \(\operatorname{rank}\omega^{\alpha}=2r_{\alpha}\) for \(\alpha=1,\ldots,k\). Let \(D=\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}\) have rank \(d\) and let \(\operatorname{rank}V=r+d=\sum_{\alpha=1}^{k}r_{\alpha}+d\) be so that_ \[\dim V_{\alpha}=r_{\alpha},\quad V=D\oplus\bigoplus_{\beta=1}^{k}V_{\beta}\,,\quad D+V_{\alpha}=V\cap\left(\bigcap_{\beta\neq\alpha}\ker\omega^{\beta}\right)\,,\ (k\neq 1)\,,\quad\alpha=1,\ldots,k\,, \tag{7}\] _and \(\dim M=n+r+d\). Then, at every \(\mathrm{T}_{x}^{*}M\), for \(x\in M\), one can set a basis of the form \(\{e^{1},\ldots,e^{n};e^{\alpha}_{\mu_{j}^{\alpha}},v^{1},\ldots,v^{d}\}\), with \(\mu_{j}^{\alpha}\in I_{\alpha}\subset\{1,\ldots,n\}\), \(|I_{\alpha}|=r_{\alpha}\), and \(\alpha=1,\ldots,k\), such that_ \[\omega_{x}^{\alpha}=\sum_{j=1}^{r_{\alpha}}e^{\mu_{j}^{\alpha}}\wedge e^{\alpha}_{\mu_{j}^{\alpha}}\,,\qquad\alpha=1,\ldots,k\,,\qquad D_{x}=\langle v_{1},\ldots,v_{d}\rangle,\qquad V_{\alpha x}=\left\langle e^{\mu_{j}^{\alpha}}_{\alpha}\right\rangle.\] Proof.: Note that \(\dim M=n+r+d\). Since \(D=\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}\) has rank \(d\), one has that \[D_{x}^{\circ}=\operatorname{Im}\omega_{x}^{1}+\ldots+\operatorname{Im}\omega_{x}^{k}\,,\qquad\forall x\in M\,, \tag{8}\] is such that \(\operatorname{rank}D^{\circ}=n+r\). Since \(\omega^{\beta}|_{V\times V}=0\), it follows that \(\omega^{\beta}(V)\subset V^{\circ}\) for every \(\beta\). We have \[\omega^{1}(V)+\ldots+\omega^{k}(V)\subset V^{\circ}.\] Note that \(\operatorname{rank}V^{\circ}=n\). From the second and third conditions in (7), it follows that \(V_{\alpha}\cap\ker\omega^{\alpha}=0\). Moreover, one has that \(\operatorname{rank}\omega^{\alpha}(V_{\alpha})=r_{\alpha}=\operatorname{rank}\omega^{\alpha}(V)\) for \(\alpha=1,\ldots,k\). Consider the supplementary \(S=V^{\perp_{g}}\) to \(V\). Then, \(\operatorname{rank}V^{\perp_{g}}=\dim M-r-d=n\) and \(\operatorname{rank}\omega_{x}^{\beta}(S_{x})\leq n\) for every \(x\in M\). Due to (8) and the above, one has that \[\omega^{1}(V^{\perp_{g}})+\ldots+\omega^{k}(V^{\perp_{g}})\] is a distribution of rank \(r\) at least. 
By our decomposition, every \(\alpha\) allows us to divide \(V^{\perp_{g}}\) into two spaces in the form \(V^{\perp_{g}}=\Upsilon_{\alpha}\oplus\left(\ker\omega^{\alpha}\cap V^{\perp_{g}}\right)\), where \(\Upsilon_{\alpha}\) has rank \(r_{\alpha}\) because \(\operatorname{rank}\ker\omega^{\alpha}=n+r+d-2r_{\alpha}\) and \(\operatorname{rank}(\ker\omega^{\alpha}\cap V)=r+d-r_{\alpha}\). Then, \(\omega^{\alpha}(V^{\perp_{g}})\) is equal to the image of a subspace of rank \(r_{\alpha}\) of \(V^{\perp_{g}}\) and it therefore has rank \(r_{\alpha}\) and \(\omega^{\alpha}(S)\cap\omega^{\alpha}(V)=0\). Then, \[\operatorname{rank}(\omega^{1}(V^{\perp_{g}})+\ldots+\omega^{k}(V^{\perp_{g}}))=r\,,\quad\omega^{1}(V)+\ldots+\omega^{k}(V)=V^{\circ}\,. \tag{9}\] Note that \(\operatorname{rank}\ker\omega^{\alpha}=n+d+r-2r_{\alpha}\) and \(\omega^{\alpha}(V)=\omega^{\alpha}(V_{\alpha}\oplus(\ker\omega^{\alpha}\cap V))\). Due to the second expression in (9), the sum of the codistributions \(S_{*}^{\alpha}=\omega^{\alpha}(V)\) of \(\mathrm{T}^{*}M\) for \(\alpha=1,\ldots,k\) has rank \(n\), but they do not need to be in direct sum. A non-degenerate contravariant symmetric tensor \(g^{*}\) on \(S_{*}=S_{*}^{1}+\ldots+S_{*}^{k}\) can be used to give a decomposition of it into subspaces in direct sum of the form \[S_{*}^{\kappa_{1},\ldots,\kappa_{k}}=\bigcap_{\alpha=1}^{k}S_{\alpha*}^{\kappa_{\alpha}},\] where \(\kappa_{\alpha}\in\{0,1\}\) for \(\alpha=1,\ldots,k\), while \(S_{\alpha*}^{1}=S_{*}^{\alpha}\) and \(S_{\alpha*}^{0}=(S_{*}^{\alpha})^{\perp_{g^{*}}}\), namely the orthogonal in \(S_{*}\) of \(S_{*}^{\alpha}\) relative to \(g^{*}\). Take a basis of \(S_{*}\) associated with our decomposition. For the elements of such a basis spanning \(S_{*}^{\alpha}\), there will be unique elements in \(V_{\alpha}\) whose images under \(\omega^{\alpha}\) give minus the corresponding basis elements in \(S_{*}^{\alpha}\). Take a supplementary to \(S_{*}\) in \(\mathrm{T}^{*}M\), of dimension \(d+r\), dual to a basis adapted to the decomposition of \(V\) and vanishing on \(V^{\perp_{g}}\). It is worth noting that we have a decomposition \[\mathrm{T}M=\left[\Upsilon_{\alpha}\oplus(\ker\omega^{\alpha}\cap V^{\perp_{g}})\right]\oplus\left[V_{\alpha}\oplus\left(\bigoplus_{\beta\neq\alpha}V_{\beta}\right)\oplus D\right]\] and a dual one in \(S_{*}\oplus(V^{\perp_{g}})^{\circ}\). In such a basis, the form of \(\omega^{\alpha}\) goes back to (4) and the same technique as in Theorem 3.2 gives the canonical form for every \(\omega^{\alpha}\) with \(\alpha=1,\ldots,k\). Finally, if \(w_{1},\ldots,w_{d}\) is a basis of \(D\) dual to the one chosen in \(\mathrm{T}_{x}^{*}M\), one has that \[\omega^{\beta}=\sum_{j=1}^{r_{\beta}}e^{\mu_{j}^{\beta}}\wedge e^{\beta}_{\mu_{j}^{\beta}}\,,\qquad V_{\beta}=\left\langle e^{\mu_{j}^{\beta}}_{\beta}\right\rangle,\qquad\beta=1,\ldots,k.\] As proved above, depending on their exact definition, \(k\)-presymplectic manifolds need not have a Darboux theorem (whatever this means, because we can have different ways of defining such an object). That is why we hereafter give a definition of \(k\)-presymplectic manifold ensuring the existence of a particular case of a \(k\)-presymplectic Darboux theorem. This is done by assuming the existence of a certain integrable distribution with particular properties. 
**Definition 3.15**.: A _\(k\)-presymplectic manifold_ \((M,\omega^{1},\ldots,\omega^{k},V)\) is a \(k\)-presymplectic manifold such that \(\dim M=n+r+d\), where \(d=\operatorname{rank}\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}\) and \(\operatorname{rank}\omega^{\alpha}=2r_{\alpha}\), the distribution \(V\) is integrable, has rank \(r+d\), and satisfies \(\omega^{\alpha}|_{V\times V}=0\) for \(\alpha=1,\ldots,k\), and there are integrable distributions \(\bigoplus_{\alpha=1}^{k}V_{\alpha}\), \(V_{1},\ldots,V_{k},D\) so that \[V=\bigoplus_{\alpha=1}^{k}V_{\alpha}\oplus D,\qquad D=\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha},\qquad D+V_{\beta}=\Bigl(\bigcap_{\alpha\neq\beta}\ker\omega^{\alpha}\Bigr)\cap V\ \ (k\neq 1)\,,\qquad\beta=1,\ldots,k\,.\] **Lemma 3.16**.: _Given a \(k\)-presymplectic manifold \((M,\omega^{1},\ldots,\omega^{k},V)\), the distributions \(V_{\alpha}\), with \(\alpha=1,\ldots,k\), satisfy that every \(x\in M\) admits a coordinate system_ \[\{y^{1},\ldots,y^{n},z^{1},\ldots,z^{d},y_{1}^{\alpha},\ldots,y_{r_{\alpha}}^{\alpha}\}\,,\qquad\alpha=1,\ldots,k\,,\] _on a neighbourhood of \(x\) so that_ \[V_{\alpha}=\left\langle\frac{\partial}{\partial y_{1}^{\alpha}},\ldots,\frac{\partial}{\partial y_{r_{\alpha}}^{\alpha}}\right\rangle\,,\qquad\alpha=1,\ldots,k\,,\qquad D=\left\langle\frac{\partial}{\partial z^{1}},\ldots,\frac{\partial}{\partial z^{d}}\right\rangle\,.\] Proof.: Let \(y^{1},\ldots,y^{n}\) be common functionally independent first-integrals for all vector fields taking values in \(V\). Since \(D\) is a regular distribution of rank \(d\) given by the intersection of the kernels of the closed forms \(\omega^{1},\ldots,\omega^{k}\), it is integrable. It is assumed that \(\bigoplus_{\alpha=1}^{k}V_{\alpha}\) is integrable. Hence, \(V_{1}\oplus\ldots\oplus V_{k}\) has common first-integrals \(z^{1},\ldots,z^{d}\) such that \(\mathrm{d}z^{1}\wedge\ldots\wedge\mathrm{d}z^{d}\wedge\mathrm{d}y^{1}\wedge\ldots\wedge\mathrm{d}y^{n}\neq 0\). If \(k=1\), the result of our lemma easily follows. Assume that \(k>1\). Given different integers \(\alpha_{1},\ldots,\alpha_{k-1}\in\{1,\ldots,k\}\), one has that \[V_{\alpha_{1}}\oplus\cdots\oplus V_{\alpha_{k-1}}\oplus D=\ker\omega^{\beta}\cap V\,,\] where \(\beta\) is the only number in \(\{1,\ldots,k\}\) not included in \(\{\alpha_{1},\ldots,\alpha_{k-1}\}\). Hence, the distribution \(V_{\alpha_{1}}\oplus\ldots\oplus V_{\alpha_{k-1}}\oplus D\) has corank \(r_{\beta}\) in \(V\), it is integrable, and the vector fields taking values in it have \(r_{\beta}\) common local first-integrals \(y_{1}^{\beta},\ldots,y_{r_{\beta}}^{\beta}\) such that \[\mathrm{d}z^{1}\wedge\ldots\wedge\mathrm{d}z^{d}\wedge\mathrm{d}y_{1}^{\beta}\wedge\ldots\wedge\mathrm{d}y_{r_{\beta}}^{\beta}\wedge\mathrm{d}y^{1}\wedge\ldots\wedge\mathrm{d}y^{n}\neq 0.\] By construction, \(\{y_{1}^{1},\ldots,y_{r_{1}}^{1},\ldots,y_{1}^{k},\ldots,y_{r_{k}}^{k},z^{1},\ldots,z^{d},y^{1},\ldots,y^{n}\}\) becomes a local coordinate system on \(M\) and \[V_{\alpha}=\left(\bigcap_{i=1}^{d}\ker\mathrm{d}z^{i}\right)\cap\left(\bigcap_{i=1}^{n}\ker\mathrm{d}y^{i}\right)\cap\bigcap_{\begin{subarray}{c}\beta\neq\alpha\\ i=1,\ldots,r_{\beta}\end{subarray}}\ker\mathrm{d}y_{i}^{\beta}\,.\] Moreover, \(\dfrac{\partial}{\partial y_{i}^{\beta}}\) with \(i=1,\dots,r_{\beta}\) vanish on all coordinates \(y_{j}^{\alpha}\) with \(\alpha\neq\beta\) and \(j=1,\dots,r_{\alpha}\). 
Hence, \[\left\langle\dfrac{\partial}{\partial y_{1}^{\beta}},\dots,\dfrac{\partial}{\partial y_{r_{\beta}}^{\beta}}\right\rangle=V_{\beta}\,,\qquad\beta=1,\dots,k\,,\] and \[\left\langle\dfrac{\partial}{\partial z^{1}},\dots,\dfrac{\partial}{\partial z^{d}}\right\rangle=D\,.\] Once the above is proved, the following theorem is immediate. One only has to slightly adapt Theorem 3.5 by considering that \(\operatorname{rank}V_{\alpha}=r_{\alpha}\) for \(\alpha=1,\dots,k\) and to restrict \(\omega^{\alpha}\) to the integral submanifolds of \(V_{\alpha}\oplus\Upsilon_{\alpha}\), which have dimension \(2r_{\alpha}\), where \(\omega^{\alpha}\) becomes symplectic. **Theorem 3.17** (\(k\)-presymplectic Darboux theorem).: _Let \((M,\omega^{1},\dots,\omega^{k},V)\) be a \(k\)-presymplectic manifold such that \(\operatorname{rank}\omega^{\alpha}=2r_{\alpha}\), with \(1\leq r_{\alpha}\leq n\). The dimension of \(M\) is \(n+r+d\). For every point \(x\in M\), there exist local coordinates \(\{y^{i},y_{\mu_{j}^{\alpha}}^{\alpha},z^{j}\}\), with \(1\leq i\leq n\), \(\mu_{j}^{\alpha}\in I_{\alpha}\subseteq\{1,\dots,n\}\), \(|I_{\alpha}|=r_{\alpha}\), \(1\leq j\leq r_{\alpha}\) and \(1\leq\alpha\leq k\), such that_ \[\omega^{\alpha}=\sum_{j=1}^{r_{\alpha}}\mathrm{d}y^{\mu_{j}^{\alpha}}\wedge\mathrm{d}y_{\mu_{j}^{\alpha}}^{\alpha},\quad V_{\alpha}=\left\langle\dfrac{\partial}{\partial y_{\mu_{j}^{\alpha}}^{\alpha}}\right\rangle\,,\quad\alpha=1,\dots,k\,,\quad\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}=\left\langle\dfrac{\partial}{\partial z^{j}}\right\rangle\,.\]
## 4 \(k\)-cosymplectic and \(k\)-precosymplectic manifolds
Similarly to previous sections, let us study \(k\)-cosymplectic and \(k\)-precosymplectic manifolds. Our investigation will introduce relevant technical issues to be addressed that were not present in previous sections. One of its main differences with respect to previous Darboux theorems lies in the fact that Reeb vector fields are not uniquely defined in the case of \(k\)-precosymplectic manifolds. This suggests that Darboux coordinates for \(k\)-precosymplectic manifolds should not assume a canonical form for the Reeb vector fields. Moreover, additional conditions will need to be assumed so as to obtain canonical bases for the distributions after having the corresponding differential forms written in a canonical manner. **Definition 4.1**.: Let \(M\) be an \((n(k+1)+k)\)-dimensional manifold. A \(k\)_-cosymplectic structure_ in \(M\) is a family \((\eta^{\alpha},\omega^{\alpha},V)\), with \(1\leq\alpha\leq k\), where \(\eta^{1},\dots,\eta^{k}\) are closed one-forms on \(M\), while \(\omega^{1},\dots,\omega^{k}\) are closed two-forms in \(M\), and \(V\) is an integrable \(nk\)-dimensional distribution in \(M\) satisfying that 1. \(\eta^{1}\wedge\dots\wedge\eta^{k}\neq 0\,,\quad\eta^{\alpha}|_{V}=0\,,\quad\omega^{\alpha}|_{V\times V}=0\,,\) 2. \(\bigcap_{\alpha=1}^{k}\left(\ker\eta^{\alpha}\cap\ker\omega^{\alpha}\right)=\{0\}\,,\quad\operatorname{rank}\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}=k\,.\) A manifold \(M\) endowed with a \(k\)-cosymplectic structure is said to be a \(k\)_-cosymplectic manifold_. Every \(k\)-cosymplectic structure \((\eta^{\alpha},\omega^{\alpha},V)\) in \(M\) admits a unique family of vector fields \(R_{1},\dots,R_{k}\) on \(M\), called _Reeb vector fields_, such that \[\iota_{R_{\alpha}}\,\eta^{\beta}=\delta_{\alpha}^{\beta}\,,\qquad\iota_{R_{\alpha}}\,\omega^{\beta}=0\,,\qquad\alpha,\beta=1,\dots,k\,. 
\tag{10}\] Note that the existence of Reeb vector fields is independent of the existence or not of the distribution \(V\). Given a one-cosymplectic manifold \((M,\eta,\omega,V)\), the pair \((\eta,\omega)\) is a special type of cosymplectic structure in \(M\) that additionally admits the distribution \(V\). Not every cosymplectic structure admits such a \(V\). In fact, consider \((M=\mathbb{R}\times\mathbb{S}^{2},\eta,\omega)\), where \(\eta\) is the one-form on \(M\) obtained by pulling-back the one form \(\mathrm{d}t\) on \(\mathbb{R}\), and \(\omega\) is the pull-back to \(M\) of the standard symplectic form on \(\mathbb{S}^{2}\). Then, \((M=\mathbb{R}\times\mathbb{S}^{2},\eta,\omega)\) is not a one-cosymplectic manifold because the standard symplectic form on \(\mathbb{S}^{2}\) does not admit a distribution as commented previously in this paper. **Theorem 4.2** (\(k\)-cosymplectic Darboux theorem).: _Given a \(k\)-cosymplectic manifold of the form \((M,\eta^{1},\ldots,\eta^{k},\omega^{1},\ldots,\omega^{k},V)\), every point \(x\in M\) admits a neighbourhood with local coordinates \(\{x^{\alpha},y^{i},y_{i}^{\alpha}\}\), with \(1\leq\alpha\leq k\), \(1\leq i\leq n\), such that_ \[\eta^{\alpha}=\mathrm{d}x^{\alpha}\,,\qquad\omega^{\alpha}=\sum_{i=1}^{n} \mathrm{d}y^{i}\wedge\mathrm{d}y_{i}^{\alpha}\,,\qquad\alpha=1,\ldots,k.\] _In these coordinates, \(R_{\alpha}=\dfrac{\partial}{\partial x^{\alpha}}\) for \(\alpha=1,\ldots,k\). If \(k\neq 1\), then \(V=\left\langle\dfrac{\partial}{\partial y_{i}^{\alpha}}\right\rangle\) where \(\alpha=1,\ldots,k\) and \(i=1,\ldots,n\). If \(k=1\) and \([\ker\omega,V]\subset\ker\omega\oplus V\), then \(V=\left\langle\dfrac{\partial}{\partial y_{i}^{\alpha}}\right\rangle\)._ Proof.: Since \(\eta^{1},\ldots,\eta^{k}\) are closed and \(\eta^{1}\wedge\ldots\wedge\eta^{k}\) does not vanish at any point of \(M\), one has that \(H=\bigcap_{\alpha=1}^{k}\ker\eta^{\alpha}\) is an integrable distribution of corank \(k\). Moreover, \(V\) is contained in \(H\) by the definition of \(k\)-cosymplectic manifolds. Consider one of the integral leaves, \(\mathcal{S}\), of \(H\), and the natural local immersion \(j_{\mathcal{S}}:\mathcal{S}\hookrightarrow M\). The \(j_{\mathcal{S}}^{*}\omega^{\alpha}\) along with the restriction of \(V\) to \(\mathcal{S}\) give rise to a \(k\)-symplectic manifold since a vector field taking values in \(H\) that is orthogonal to \(H\) relative to \(\omega^{1},\ldots,\omega^{k}\) belongs to \(\bigcap_{\alpha=1}^{k}(\ker\eta^{\alpha}\cap\ker\omega^{\alpha})=0\). Hence, \(j_{\mathcal{S}}^{*}\omega^{1},\ldots,j_{\mathcal{S}}^{*}\omega^{k}\) admit \(k\)-symplectic Darboux coordinates. Doing the same along different leaves of \(H\) and gluing the results, we obtain that \(\omega^{1},\ldots,\omega^{k},\eta^{1},\ldots,\eta^{k}\) have their canonical form. Let us explain this in detail. The differential forms \(\omega^{1},\ldots,\omega^{k},\eta^{1},\ldots,\eta^{k}\) are invariant relative to the Reeb vector fields of the \(k\)-cosymplectic manifold and their value in \(M\) can be understood as the extension to \(M\) obtained from their value on \(\mathcal{S}\) by the extension by one-parametric groups of diffeomorphisms of the vector fields \(R_{1},\ldots,R_{k}\). Consider coordinates \(x^{1},\ldots,x^{k}\) rectifying simultaneously the vector fields \(R_{1},\ldots,R_{k}\). 
If one considers the coordinate system in \(M\) given by the coordinates \(x^{\alpha},y^{i},y_{i}^{\alpha}\) on \(M\), where \(y^{i},y_{i}^{\alpha}\) are invariant under the flows of \(R_{1},\ldots,R_{k}\) and match the \(k\)-symplectic Darboux coordinates on \(\mathcal{S}\), one gets that the \(x^{\alpha}\) are functionally independent of the \(y^{i},y_{i}^{\alpha}\). Moreover, since \(R_{1},\ldots,R_{k}\) are in the kernels of \(\omega^{1},\ldots,\omega^{k}\) and they are invariant relative to \(R_{1},\ldots,R_{k}\), it follows that their form on \(M\) is the same as in \(\mathcal{S}\). Meanwhile, \(\eta^{\alpha}=\mathrm{d}x^{\alpha}\) for \(\alpha=1,\ldots,k\), and the forms \(\omega^{1},\ldots,\omega^{k},\eta^{1},\ldots,\eta^{k}\) on \(M\) take a canonical form. To obtain a canonical basis of the distribution \(V\), additional conditions must be added for \(k=1\). On the other hand, if \(k>1\), then each distribution \(V_{\alpha}\) is the intersection of the kernels of the \(\omega^{\beta}\), for \(\beta\neq\alpha\), with \(\bigcap_{\beta=1}^{k}\ker\eta^{\beta}\). They are therefore invariant relative to the Reeb vector fields. So, they can be put in canonical form on \(\mathcal{S}\) and extended as previously from \(\mathcal{S}\) to vector fields on \(M\) with a canonical form. On the other hand, if \(k=1\), one has that \(V\) may not be the kernel of a closed form invariant relative to the associated Reeb vector field and the previous method fails. To remedy this, one has to assume \([\ker\omega,V]\subset\ker\omega\oplus V\). Equivalently, \([R,V]\subset V\) for the unique Reeb vector field of the one-cosymplectic manifold. The conditions given in [49, Lemma 5.1.1] and [21] for the Darboux theorem for \(k\)-cosymplectic manifolds may be a little bit misleading since a necessary condition in the case \(k=1\), namely \(V\) must be invariant relative to the action of the Reeb vector field, is not given in [21, Theorem II.4] and [49, Lemma 5.1.1], but in [21, Remark II.5] or [49, Note 5.2.1], respectively, after them. Moreover, the above-mentioned condition in [21] and [49, Lemma 5.1.1], namely \([R_{\alpha},V]\subset V\) with \(\alpha=1,\dots,k\), can be rewritten by saying that the distributions \(\ker\omega\) and \(V\) are integrable and their direct sum is integrable. This is also commented in [32]. As shown in the previous theorem, the condition \([\ker\omega,V]\subset V\oplus\ker\omega\) is necessary in order to ensure a canonical form for the elements of a basis of \(V\). Notwithstanding, if one is mainly concerned with the canonical form of the \(\eta^{1},\dots,\eta^{k},\omega^{1},\dots,\omega^{k}\), this condition can be avoided. This is the reason why we skipped \([\ker\omega,V]\subset V\oplus\ker\omega\) in our definition of \(k\)-cosymplectic manifolds. **Example 4.3**.: _Let \(\{x^{1},\dots,x^{k}\}\) be a linear coordinate system on \(\mathbb{R}^{k}\). Given the canonical projections \(\bar{\pi}_{1}\colon\mathbb{R}^{k}\times(\mathrm{T}^{1}_{k})^{*}Q\to\mathbb{R}^{k}\), \(\bar{\pi}_{2}\colon\mathbb{R}^{k}\times(\mathrm{T}^{1}_{k})^{*}Q\to(\mathrm{T}^{1}_{k})^{*}Q\), and \(\bar{\pi}_{0}\colon\mathbb{R}^{k}\times(\mathrm{T}^{1}_{k})^{*}Q\to\mathbb{R}^{k}\times Q\), 
the canonical model for \(k\)-cosymplectic structures is_ \[(\mathbb{R}^{k}\times(\mathrm{T}^{1}_{k})^{*}Q,(\bar{\pi}_{1})^{*}\mathrm{d}x^{\alpha},(\bar{\pi}_{2})^{*}\omega^{\alpha},V=\ker(\bar{\pi}_{0})_{*})\,,\] _where \(\omega^{1},\dots,\omega^{k}\) are the two-forms of the canonical \(k\)-symplectic structure on \((\mathrm{T}^{1}_{k})^{*}Q\)._ More generally, one has the following construction. **Example 4.4**.: _Let \((N,\varpi^{\alpha},\mathcal{V})\) be an arbitrary \(k\)-symplectic manifold. Given the canonical projections_ \[\pi_{\mathbb{R}^{k}}\colon\mathbb{R}^{k}\times N\longrightarrow\mathbb{R}^{k}\,,\qquad\pi_{N}\colon\mathbb{R}^{k}\times N\longrightarrow N\,,\] _define the differential forms_ \[\eta^{\alpha}=\pi_{\mathbb{R}^{k}}^{*}(\mathrm{d}x^{\alpha})\,,\quad\omega^{\alpha}=\pi_{N}^{*}\varpi^{\alpha}\,,\qquad\alpha=1,\dots,k\,.\] _The distribution \(\mathcal{V}\) in \(N\) defines a distribution \(V\) in \(M=\mathbb{R}^{k}\times N\) by considering the vector fields on \(N\) as vector fields in \(M\) in the natural way via the isomorphism \(\mathrm{T}M=\mathbb{R}^{k}\oplus\mathrm{T}N\). All conditions given in Definition 4.1 are verified, and hence \((M=\mathbb{R}^{k}\times N,\eta^{\alpha},\omega^{\alpha},V)\) is a \(k\)-cosymplectic manifold._ As in the case of \(k\)-presymplectic manifolds, there are many ways of defining a \(k\)-precosymplectic structure. Note that in the \(k\)-precosymplectic case, one cannot, in general, extend the notion of Reeb vector fields to give an object that is uniquely defined. Hence, one may wonder about the necessity of putting them into a canonical form in Darboux coordinates, since they are not unique. Taking this into account, let us give one of the possible definitions for \(k\)-precosymplectic manifolds. No condition for the determination of the canonical form of the Reeb vector fields will be assumed. **Definition 4.5**.: Let \(M\) be a manifold of dimension \(n(k+1)+k-m\), with \(0\leq m\leq nk\). A \(k\)_-precosymplectic structure_ in \(M\) is a family \((\eta^{\alpha},\omega^{\alpha},V)\), with \(1\leq\alpha\leq k\), where \(\eta^{\alpha}\) are closed one-forms in \(M\), while \(\omega^{\alpha}\) are closed two-forms in \(M\) such that \(\mathrm{rank}\,\omega^{\alpha}=2r_{\alpha}\), with \(1\leq r_{\alpha}\leq n\), and \(V\) is an integrable distribution in \(M\) of corank \(n+k\) satisfying that 1. \(\eta^{1}\wedge\dots\wedge\eta^{k}\neq 0\,,\quad\eta^{\alpha}|_{V}=0\,,\quad\omega^{\alpha}|_{V\times V}=0\,,\qquad\alpha=1,\dots,k\), 2. \(\mathrm{rank}\bigcap_{\alpha=1}^{k}\ker\omega^{\alpha}=k+d\,,\) 3. \(\mathrm{rank}\bigcap_{\alpha=1}^{k}\big(\ker\omega^{\alpha}\cap\ker\eta^{\alpha}\big)=d\,,\) 4. the distribution \(V\) admits a decomposition into integrable distributions \(V=\bigoplus_{\alpha=1}^{k}V_{\alpha}\oplus D\) such that \(D+V_{\beta}=\left(\bigcap_{\alpha\neq\beta}\ker\omega^{\alpha}\right)\cap V\), with \(\dim V_{\beta}=r_{\beta}\), for \(\beta=1,\ldots,k\) and \(k\neq 1\). A manifold \(M\) endowed with a \(k\)-precosymplectic structure is called a _\(k\)-precosymplectic manifold_. We hereafter define \(r=\sum_{\alpha=1}^{k}r_{\alpha}\). **Example 4.6**.: _Consider a \(k\)-presymplectic manifold \((P,\varpi^{\alpha},V)\). Let us construct a \(k\)-precosymplectic structure on \(\mathbb{R}^{k}\times P\). 
First, consider the canonical projections_ \[\mathbb{R}^{k}\times P\stackrel{\pi}{\longrightarrow}P\,,\qquad\mathbb{R}^{k}\times P\stackrel{\tau}{\longrightarrow}\mathbb{R}^{k}\,.\] _Then, define \(\eta^{\alpha}=\tau^{*}\mathrm{d}x^{\alpha}\), where \(x^{1},\ldots,x^{k}\) are linear coordinates in \(\mathbb{R}^{k}\), and \(\omega^{\alpha}=\pi^{*}\varpi^{\alpha}\) for \(\alpha=1,\ldots,k\). Then, \((\mathbb{R}^{k}\times P,\eta^{\alpha},\omega^{\alpha},V)\) is a \(k\)-precosymplectic manifold, where \(V\) is the natural extension to \(\mathbb{R}^{k}\times P\) of the distribution \(V\) of the \(k\)-presymplectic manifold \(P\), as in Example 4.4._ Let us prove a technical result that is necessary to assess the role played by the distribution \(\bigcap_{\alpha=1}^{k}\ker\eta^{\alpha}\) in \(k\)-precosymplectic manifolds. **Lemma 4.7**.: _Given a \(k\)-precosymplectic manifold \((M,\eta^{1},\ldots,\eta^{k},\omega^{1},\ldots,\omega^{k},V)\), every \(x\in M\) admits a coordinate neighbourhood with coordinates_ \[\{x^{1},\ldots,x^{k},y^{1},\ldots,y^{n},z^{1},\ldots,z^{d},y_{1}^{\alpha},\ldots,y_{r_{\alpha}}^{\alpha}\}\,,\qquad\alpha=1,\ldots,k\,,\] _such that_ \[V_{\alpha}=\left\langle\frac{\partial}{\partial y_{1}^{\alpha}},\ldots,\frac{\partial}{\partial y_{r_{\alpha}}^{\alpha}}\right\rangle\,,\qquad\alpha=1,\ldots,k\,,\qquad D=\left\langle\frac{\partial}{\partial z^{1}},\ldots,\frac{\partial}{\partial z^{d}}\right\rangle\,.\] Proof.: Since \(\eta^{1},\ldots,\eta^{k}\) are closed, they admit potentials \(x^{1},\ldots,x^{k}\), respectively. Let \(y^{1},\ldots,y^{n}\) be common functionally independent first integrals for all vector fields taking values in the integrable distribution \(V\) such that \[\mathrm{d}x^{1}\wedge\ldots\wedge\mathrm{d}x^{k}\wedge\mathrm{d}y^{1}\wedge\ldots\wedge\mathrm{d}y^{n}\neq 0.\] It is assumed that \(\bigoplus_{\alpha=1}^{k}V_{\alpha}\) is integrable. Hence, \(V_{1}\oplus\ldots\oplus V_{k}\) has common first integrals \(z^{1},\ldots,z^{d}\) such that \[\mu=\mathrm{d}x^{1}\wedge\ldots\wedge\mathrm{d}x^{k}\wedge\mathrm{d}z^{1}\wedge\ldots\wedge\mathrm{d}z^{d}\wedge\mathrm{d}y^{1}\wedge\ldots\wedge\mathrm{d}y^{n}\neq 0.\] Given different integers \(\alpha_{1},\ldots,\alpha_{k-1}\in\{1,\ldots,k\}\) and \(k>1\), one has that \[V_{\alpha_{1}}\oplus\cdots\oplus V_{\alpha_{k-1}}\oplus D=\ker\omega^{\beta}\cap V\,,\] where \(\beta\) is the only number in \(\{1,\ldots,k\}\) not included in \(\{\alpha_{1},\ldots,\alpha_{k-1}\}\). Hence, the distribution \(V_{\alpha_{1}}\oplus\ldots\oplus V_{\alpha_{k-1}}\oplus D\) has corank \(r_{\beta}\) in \(V\), it is integrable, and the vector fields taking values in it have \(r_{\beta}\) common local first integrals \(y_{1}^{\beta},\ldots,y_{r_{\beta}}^{\beta}\) such that \(\mathrm{d}y_{1}^{\beta}\wedge\ldots\wedge\mathrm{d}y_{r_{\beta}}^{\beta}\wedge\mu\neq 0\). Note that, if \(k=1\), a similar result can be obtained by considering \(V=V_{1}\oplus D\) and some \(r_{1}\) functionally independent integrals of \(D\). By construction, \(\{x^{1},\ldots,x^{k},y_{1}^{1},\ldots,y_{r_{1}}^{1},\ldots,y_{1}^{k},\ldots,y_{r_{k}}^{k},z^{1},\ldots,z^{d},y^{1},\ldots,y^{n}\}\) becomes a local coordinate system on \(M\) and \[V_{\alpha}=\left(\bigcap_{i=1}^{d}\ker\mathrm{d}z^{i}\right)\cap\left(\bigcap_{i=1}^{n}\ker\mathrm{d}y^{i}\right)\cap\Bigg{(}\bigcap_{\begin{subarray}{c}\beta\neq\alpha\\ i=1,\ldots,r_{\beta}\end{subarray}}\ker\mathrm{d}y_{i}^{\beta}\Bigg{)}\cap\Bigg{(}\bigcap_{\beta=1}^{k}\ker\mathrm{d}x^{\beta}\Bigg{)}\] for \(k>1\). For \(k=1\), a similar expression is obtained by skipping the kernels of the \(\mathrm{d}y_{i}^{1}\). 
Moreover, the vector fields \(\frac{\partial}{\partial y_{i}^{\beta}}\), with \(i=1,\ldots,r_{\beta}\), vanish on all the coordinates \(y_{j}^{\alpha},y^{i}\), with \(\alpha\neq\beta\) and \(j=1,\ldots,r_{\alpha}\), and on the \(z^{1},\ldots,z^{d}\). Hence, \[\left\langle\frac{\partial}{\partial y_{1}^{\beta}},\ldots,\frac{\partial}{\partial y_{r_{\beta}}^{\beta}}\right\rangle=V_{\beta}\,,\qquad\beta=1,\ldots,k\,,\] and \[\left\langle\frac{\partial}{\partial z^{1}},\ldots,\frac{\partial}{\partial z^{d}}\right\rangle=D\,.\] The corresponding Darboux theorem for \(k\)-precosymplectic manifolds reads as follows. **Theorem 4.8** (\(k\)-precosymplectic Darboux Theorem).: _Let \(M\) be a \(k\)-precosymplectic manifold such that \(\dim M=n+d+r+k\), while \(\operatorname{rank}\omega^{\alpha}=2r_{\alpha}\), with \(1\leq r_{\alpha}\leq n\). Let us assume the existence of \(k\) Reeb vector fields \(R_{1},\ldots,R_{k}\) spanning an integrable \(k\)-dimensional distribution and commuting among themselves. For every \(x\in M\), there exists a local chart of coordinates_ \[\left\{x^{\alpha},y^{i},y^{\alpha}_{\mu_{\alpha}},z^{j}\right\},\quad 1\leq\alpha\leq k\,,\quad 1\leq i\leq n\,,\quad\mu_{\alpha}\in I_{\alpha}\subseteq\{1,\ldots,n\}\,,\quad|I_{\alpha}|=r_{\alpha}\,,\quad 1\leq j\leq d\,,\] _such that_ \[\eta^{\alpha}=\mathrm{d}x^{\alpha}\,,\quad\omega^{\alpha}=\sum_{\mu_{\alpha}\in I_{\alpha}}\mathrm{d}y^{\mu_{\alpha}}\wedge\mathrm{d}y^{\alpha}_{\mu_{\alpha}}\,,\qquad\alpha=1,\ldots,k\,.\] _If additionally \([R_{\alpha},V]\subset V\) for \(\alpha=1,\ldots,k\), then_ \[V=\left\langle\frac{\partial}{\partial y^{\alpha}_{\mu_{\alpha}}},\frac{\partial}{\partial z^{j}}\right\rangle\,,\qquad\bigcap_{\alpha=1}^{k}(\ker\eta^{\alpha}\cap\ker\omega^{\alpha})=\left\langle\frac{\partial}{\partial z^{j}}\right\rangle\,.\] Proof.: Consider the distribution \(\Upsilon=\bigcap_{\alpha=1}^{k}\ker\eta^{\alpha}\), which is integrable and of rank \(n+d+r\). Consider a leaf \(S_{\lambda}\) of \(\Upsilon\) and the corresponding immersion \(\jmath_{\lambda}\colon S_{\lambda}\hookrightarrow M\). Since \(V\) is included in \(\Upsilon\), the pull-backs \(\jmath_{\lambda}^{*}\omega^{1},\ldots,\jmath_{\lambda}^{*}\omega^{k}\) allow us to define a \(k\)-presymplectic manifold with Darboux coordinates for the \(\eta^{\alpha}\) and the \(\omega^{\alpha}\) which depend smoothly on \(\lambda\). Note that the \(\omega^{\alpha},\eta^{\alpha}\) are invariant relative to some Reeb vector fields \(R_{1},\ldots,R_{k}\) spanning an involutive distribution and commuting among themselves. Using this fact and proceeding as in the proof of the Darboux theorem for \(k\)-cosymplectic manifolds (Theorem 4.2), we obtain our Darboux coordinates for the \(\eta^{\alpha}\) and the \(\omega^{\alpha}\). Note that the same applies to the canonical basis for \(\bigcap_{\alpha=1}^{k}(\ker\eta^{\alpha}\cap\ker\omega^{\alpha})\), even for \(k=1\). Notwithstanding, the form of the basis for the distribution \(V\) requires the additional condition about its invariance relative to \(R_{1},\ldots,R_{k}\). Then, gluing together as in Theorem 4.2, the result follows. **Remark 4.9**.: Note that \(k\)-precosymplectic manifolds admit Reeb vector fields, but they are not uniquely defined by conditions (10). One must impose some additional condition on \(M\) to determine them uniquely. For instance, let us restrict ourselves to a \(k\)-precosymplectic structure on \(\mathbb{R}^{k}\times M\), where \(M\) is a \(k\)-presymplectic manifold. 
Then, if we ask the Reeb vector fields to be vertical with respect to the projection \(\mathbb{R}^{k}\times M\to\mathbb{R}^{k}\), the system of equations (10) univocally determines the Reeb vector fields. An equivalent way of obtaining this same family is to take the vector fields \(\left\{\frac{\partial}{\partial x^{\alpha}}\right\}\) on \(\mathbb{R}^{k}\) and to lift them to \(\mathbb{R}^{k}\times M\) with the trivial connection \(\mathrm{d}x^{\alpha}\otimes\frac{\partial}{\partial x^{\alpha}}\). Obviously, in Darboux coordinates these vector fields read \(R_{\alpha}=\frac{\partial}{\partial x^{\alpha}}\). Note that every \(k\)-presymplectic structure in this case will also satisfy the conditions established in our Darboux theorem. ## 5 Multisymplectic and premultisymplectic structures Let us now comment on certain results on Darboux coordinates for multisymplectic forms [13, 29, 55]. First, let us detail some results on (pre)multisymplectic geometry (see [12, 13, 28] for further references). In the context of (pre)multisymplectic geometry, the standard kernel of a differential form is called the _one-kernel_. **Definition 5.1**.: Let \(M\) be an \(n\)-dimensional differentiable manifold. A closed form \(\Omega\in\Omega^{k}(M)\) whose one-kernel is a distribution of constant rank is called a _premultisymplectic form_. Additionally, if \(\iota_{X}\,\Omega=0\) for a vector field \(X\in\mathfrak{X}(M)\) implies that \(X=0\), then \(\Omega\) is said to be _one-nondegenerate_ and it becomes a _multisymplectic form_. The pair \((M,\Omega)\) is said to be a _premultisymplectic_ or a _multisymplectic manifold_ of _degree_ \(k\) if \(\Omega\) is one-degenerate or one-nondegenerate, respectively. First examples of multisymplectic manifolds are _symplectic manifolds_, i.e. multisymplectic manifolds of degree \(2\), and _orientable manifolds_, namely multisymplectic manifolds with a _volume form_. The following is a linear analogue of (pre)multisymplectic manifolds. **Definition 5.2**.: A \(k\)-covector \(\Omega\) on \(\mathbb{R}^{n}\) is called a _premultisymplectic linear form_. If \(\iota_{v}\,\Omega=0\) for \(v\in\mathbb{R}^{n}\) implies that \(v=0\), then \(\Omega\) is said to be _one-nondegenerate_ and it becomes a _multisymplectic linear form_ or \(k\)_-plectic linear form_. The pair \((\mathbb{R}^{n},\Omega)\) is said to be a _premultisymplectic linear space_ or a _multisymplectic linear space_ of _degree_ \(k\), respectively. Multisymplectic linear spaces given by a \(k\)-covector are also called \(k\)_-plectic vector spaces_. Other typical examples of multisymplectic manifolds are given by the so-called _bundles of forms_, which, in addition, are the canonical models of multisymplectic manifolds. These canonical models are constructed as follows. * Let \(Q\) be a manifold. Consider the bundle \(\rho\colon\Lambda^{k}(\mathrm{T}^{*}Q)\to Q\), i.e. the _bundle of \(k\)-forms_ in \(Q\) (also called the \(k\)_-multicotangent bundle_ of \(Q\)). This bundle is endowed with a canonical structure called the _tautological_ or _canonical form_ \(\Theta_{Q}\in\Omega^{k}(\Lambda^{k}(\mathrm{T}^{*}Q))\), given by \[(\Theta_{Q})_{\widehat{\mu}}(V_{1},\ldots,V_{k})=\iota(\rho_{*}V_{1}\wedge\ldots\wedge\rho_{*}V_{k})\widehat{\mu},\] for every \(\widehat{\mu}\in\Lambda^{k}(\mathrm{T}^{*}Q)\) and \(V_{1},\ldots,V_{k}\in\mathrm{T}_{\widehat{\mu}}(\Lambda^{k}(\mathrm{T}^{*}Q))\). 
Then, \(\Omega_{Q}=\mathrm{d}\Theta_{Q}\in\Omega^{k+1}(\Lambda^{k}(\mathrm{T}^{*}Q))\) is a one-nondegenerate form and hence \((\Lambda^{k}(\mathrm{T}^{*}Q),\Omega_{Q})\) is a multisymplectic manifold of degree \(k+1\). Furthermore, denoting by \(\{x^{i},p_{i_{1}\ldots i_{k}}\}\) the charts of natural coordinates in \(\Lambda^{k}(\mathrm{T}^{*}Q)\), these canonical forms read locally as \[\Theta_{Q}=p_{i_{1}\ldots i_{k}}\mathrm{d}x^{i_{1}}\wedge\ldots\wedge\mathrm{d}x^{i_{k}}\,,\quad\Omega_{Q}=\mathrm{d}p_{i_{1}\ldots i_{k}}\wedge\mathrm{d}x^{i_{1}}\wedge\ldots\wedge\mathrm{d}x^{i_{k}}\,.\] Such coordinates are _Darboux coordinates_ in \(\Lambda^{k}(\mathrm{T}^{*}Q)\). * If \(\pi\colon Q\to M\) is a fibre bundle, consider the subbundle \(\rho_{r}\colon\Lambda^{k}_{r}(\mathrm{T}^{*}Q)\to Q\) of \(\Lambda^{k}(\mathrm{T}^{*}Q)\) made of the \(r\)-_horizontal \(k\)-forms_ in \(Q\) with respect to the projection \(\pi\), namely the \(k\)-forms in \(Q\) vanishing when applied to \(r\) vector fields in \(Q\) that are \(\pi\)-vertical. If \(\rho_{r}^{k}\colon\Lambda^{k}_{r}(\mathrm{T}^{*}Q)\to\Lambda^{k}(\mathrm{T}^{*}Q)\) is the canonical injection, then \(\Theta^{r}_{Q}=(\rho_{r}^{k})^{*}\Theta_{Q}\in\Omega^{k}(\Lambda^{k}_{r}(\mathrm{T}^{*}Q))\) is the tautological \(k\)-form in \(\Lambda^{k}_{r}(\mathrm{T}^{*}Q)\), and then, taking \(\Omega^{r}_{Q}=\mathrm{d}\Theta^{r}_{Q}\in\Omega^{k+1}(\Lambda^{k}_{r}(\mathrm{T}^{*}Q))\), we have that \((\Lambda^{k}_{r}(\mathrm{T}^{*}Q),\Omega^{r}_{Q})\) is a multisymplectic manifold of degree \(k+1\). As above, the charts of natural coordinates in \(\Lambda^{k}_{r}(\mathrm{T}^{*}Q)\) are also charts of Darboux coordinates, on which these canonical forms have local expressions similar to the above ones. Nevertheless, in general, multisymplectic manifolds are not (locally) diffeomorphic to these canonical models. Note that a multisymplectic form with Darboux coordinates admits a local flat connection compatible with it. Furthermore, if the multisymplectic form has a compatible flat connection, it admits coordinates in which the multisymplectic form has constant coefficients, although it does not need to be of the previous form. In particular, if a multisymplectic form has kernels of higher order than those of \(\Omega_{Q}\), then there is no Darboux theorem in the above senses. This is a typical issue for Darboux coordinates: differential forms can be put into a form with constant coefficients in many manners, and Darboux theorems usually stress one particular form over the others, although other forms may be of interest too. In general, multisymplectic manifolds do not need to have a coordinate system in which the multisymplectic form has constant coefficients, which is the most basic condition for the existence of a Darboux theorem. Indeed, multisymplectic manifolds admitting such a coordinate system are called _flat_ in the literature [55]. The exact definition is given next. **Definition 5.3**.: A multisymplectic manifold \((M,\omega)\) is called _flat near_ \(x\in M\) if there exists a mapping \(\phi\colon U\subset M\to\mathrm{T}_{x}M\) such that \(\phi(x)=0\) and \(\phi^{*}\omega_{x}=\omega\), where \(\omega_{x}\) is a constant-coefficient non-degenerate multilinear form on \(\mathrm{T}_{x}M\). 
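To keep a concrete instance of the canonical models in mind, one can specialise the local expressions above; the following particular case is included here only as an illustration. For \(Q=\mathbb{R}^{3}\) and \(k=2\), the bundle \(\Lambda^{2}(\mathrm{T}^{*}\mathbb{R}^{3})\) is six-dimensional, with natural coordinates \(\{x^{1},x^{2},x^{3},p_{12},p_{13},p_{23}\}\), and \[\Theta_{Q}=p_{12}\,\mathrm{d}x^{1}\wedge\mathrm{d}x^{2}+p_{13}\,\mathrm{d}x^{1}\wedge\mathrm{d}x^{3}+p_{23}\,\mathrm{d}x^{2}\wedge\mathrm{d}x^{3}\,,\qquad\Omega_{Q}=\mathrm{d}p_{12}\wedge\mathrm{d}x^{1}\wedge\mathrm{d}x^{2}+\mathrm{d}p_{13}\wedge\mathrm{d}x^{1}\wedge\mathrm{d}x^{3}+\mathrm{d}p_{23}\wedge\mathrm{d}x^{2}\wedge\mathrm{d}x^{3}\,,\] so \((\Lambda^{2}(\mathrm{T}^{*}\mathbb{R}^{3}),\Omega_{Q})\) is a multisymplectic manifold of degree three whose natural coordinates are already Darboux coordinates; in particular, since \(\Omega_{Q}\) has constant coefficients in these coordinates, this manifold is flat near every point in the sense of Definition 5.3.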
**Definition 5.4**.: An \((n+1)\)-plectic vector space \((V,\omega)\) is called _standard_ if there exists a linear subspace \(W\subset V\) such that \(\iota_{u\wedge v}\,\omega=0\) for all \(u,v\in W\), and the map \[\omega^{\sharp}\colon w\in W\mapsto\omega^{\sharp}(w)\in\Lambda^{n}(V/W)^{*}\,,\] defined by \(\omega^{\sharp}(w)(v_{1}+W,\dots,v_{n}+W)=\omega(w,v_{1},\dots,v_{n})\) for every \(v_{1},\dots,v_{n}\in V\), is an isomorphism. In the above situation, \(W\) is unique if \(n\geq 2\) and is then often denoted by \(W_{\omega}\). From [13, 47, 55], the following result can easily be derived. **Theorem 5.5**.: _Let \(n\geq 2\) and let \((M,\omega)\) be a standard \((n+1)\)-plectic manifold, i.e. \((M,\omega)\) has as constant linear type a fixed standard \((n+1)\)-plectic vector space. Then, \(W_{\omega}=\bigsqcup_{x\in M}W_{\omega_{x}}\subset\mathrm{T}M\) is a smooth distribution. Furthermore, \((M,\omega)\) is flat if and only if \(W_{\omega}\) is integrable._ Let us just recall that what we call an \((n+1)\)-plectic manifold is sometimes called an \(n\)-plectic manifold in the literature [55]. Let us now turn to a type of multisymplectic manifold for which we will obtain Darboux coordinates. **Definition 5.6**.: A _special multisymplectic manifold_ is a multisymplectic manifold \((M,\Omega)\) of degree \(k\) such that \(\Omega=\mathrm{d}\Theta\), for some \(\Theta\in\Omega^{k-1}(M)\), and there is a diffeomorphism \(\phi\colon M\to\Lambda^{k-1}(\mathrm{T}^{*}Q)\), with \(\dim Q=n\geq k-1\) (or \(\phi\colon M\to\Lambda^{k-1}_{r}(\mathrm{T}^{*}Q)\)), and a fibration \(\pi\colon M\to Q\) such that \(\rho\circ\phi=\pi\) (resp. \(\rho_{r}\circ\phi=\pi\)) and \(\phi^{*}\Theta_{Q}=\Theta\) (resp. \(\phi^{*}\Theta^{r}_{Q}=\Theta\)). As a result of the above discussion, we can state the following result. **Theorem 5.7** (Restricted multisymplectic Darboux Theorem).: _Special multisymplectic manifolds \((M,\Omega)\) are multisymplectomorphic to bundles of forms. Therefore, there is a local chart of Darboux coordinates around every point \(x\in M\)._ As in the \(k\)-symplectic and \(k\)-cosymplectic cases, some additional properties are needed to ensure the existence of Darboux-type coordinates [47] and thus to have multisymplectic manifolds that locally behave as the canonical models. To state these additional conditions, we need to introduce some generalisations of concepts of symplectic geometry. So, if \((M,\Omega)\) is a multisymplectic manifold of degree \(k\) and \(\mathcal{W}\) a distribution on \(M\), we define [13, 19] the _\(r\)-orthogonal multisymplectic vector space_ at \(x\in M\) of \(\mathcal{W}\) as \[\mathcal{W}_{x}^{\perp,r}=\left\{v\in\mathrm{T}_{x}M\mid\iota(v\wedge w_{1}\wedge\ldots\wedge w_{r})\Omega_{x}=0\,,\ \forall w_{1},\ldots,w_{r}\in\mathcal{W}_{x}\right\}.\] Then, the _\(r\)-orthogonal multisymplectic complement_ of \(\mathcal{W}\) is the distribution \(\mathcal{W}^{\perp,r}=\bigsqcup_{x\in M}\mathcal{W}_{x}^{\perp,r}\), and we say that \(\mathcal{W}\) is an _\(r\)-coisotropic_ or an _\(r\)-isotropic distribution_ if \(\mathcal{W}^{\perp,r}\subset\mathcal{W}\) or \(\mathcal{W}\subset\mathcal{W}^{\perp,r}\), respectively (if \(\mathcal{W}=\mathcal{W}^{\perp,r}\), then \(\mathcal{W}\) is an _\(r\)-Lagrangian distribution_). Let us use the previous notions. **Definition 5.8**.: Let \((M,\Omega)\) be a multisymplectic manifold of degree \(k\), and let \(\mathcal{W}\) be a regular one-isotropic involutive distribution in \((M,\Omega)\). 1. 
A _multisymplectic manifold of type \((k,0)\)_ is a triple \((M,\Omega,\mathcal{W})\) such that, for every \(x\in M\), 1. \(\dim\mathcal{W}(x)=\dim\Lambda^{k-1}(\mathrm{T}_{x}M/\mathcal{W}(x))^{*}\). 2. \(\dim(\mathrm{T}_{x}M/\mathcal{W}(x))>k-1\). 2. A _multisymplectic manifold of type \((k,r)\)_(\(1\leq r\leq k-1\)) is a quadruple \((M,\Omega,\mathcal{W},\mathcal{E})\), where \(\mathcal{E}\) is a distribution on \(M\) such that, for every \(x\in M\), one has that \(\mathcal{E}(x)\) is a vector subspace of \(\mathrm{T}_{x}M/\mathcal{W}(x)\) satisfying the following properties: 1. If \(\pi_{x}\colon\mathrm{T}_{x}M\to\mathrm{T}_{x}M/\mathcal{W}(x)\) is the canonical projection, then \(\iota(v_{1}\wedge\ldots\wedge v_{r})\Omega_{x}=0\), for every \(v_{i}\in\mathrm{T}_{x}M\) such that \(\pi_{x}(v_{i})\in\mathcal{E}(x)\) (\(i=1,\ldots,r\)). 2. \(\dim\mathcal{W}(x)=\dim\Lambda_{r}^{k-1}(\mathrm{T}_{x}M/\mathcal{W}(x))^{*}\), where the horizontal forms are considered with respect to the subspace \(\mathcal{E}(x)\). 3. \(\dim(\mathrm{T}_{x}M/\mathcal{W}(x))>k-1\). Then, the fundamental result is the following [19, Corollary 3.31]. **Theorem 5.9** (Generalised multisymplectic Darboux Theorem).: _Every multisymplectic manifold \((M,\Omega)\) of type \((k,0)\) (resp. of type \((k,r)\)) is locally multisymplectomorphic to a bundle of \((k-1)\)-forms \(\Lambda^{k-1}(\mathrm{T}^{*}Q)\) (resp. \(\Lambda_{r}^{k-1}(\mathrm{T}^{*}Q)\)), for some manifold \(Q\); that is, to a canonical multisymplectic manifold. Therefore, there is a local chart of Darboux coordinates around every point \(x\in M\)._ **Definition 5.10**.: Multisymplectic manifolds that are locally multisymplectomorphic to bundles of forms are called _locally special multisymplectic manifolds_. As a relevant example, if \(\pi\colon E\to M\) is a fiber bundle (where \(M\) is an \(m\)-dimensional oriented manifold), \(J^{1}\pi\) is the corresponding first-order jet bundle, and \(\mathcal{L}\) is a first-order regular or hyperregular Lagrangian density, then the Poincare-Cartan form \(\Omega_{\mathcal{L}}\in\Omega^{m+1}(J^{1}\pi)\) is a multisymplectic form and \((J^{1}\pi,\Omega_{\mathcal{L}})\) is a (locally) special multisymplectic manifold. If \(\mathcal{L}\) is a singular Lagrangian, then \((J^{1}\pi,\Omega_{\mathcal{L}})\) is a premultisymplectic manifold. **Definition 5.11**.: A _special premultisymplectic manifold_ is a premultisymplectic manifold \((M,\Omega)\) of degree \(k\) such that \(M/\ker\Omega\) is a manifold and the unique multisymplectic form \(\Omega^{\prime}\) on \(M/\ker\Omega\) such that \(\pi^{*}\Omega^{\prime}=\Omega\) is a special multisymplectic form. The following naturally follows. **Definition 5.12**.: Let \((M,\Omega)\) be a premultisymplectic manifold of degree \(k\), and \(\mathcal{W}\) a regular one-isotropic involutive distribution in \((M,\Omega)\) such that \(\ker\Omega\subset\mathcal{W}\) and \(d=\dim\ker\Omega\). 1. A _premultisymplectic manifold of type \((d,k,0)\)_ is a triple \((M,\Omega,\mathcal{W})\) such that, for every \(x\in M\), 1. \(\dim\mathcal{W}(x)-d=\dim\Lambda^{k-1}(\mathrm{T}_{x}M/\mathcal{W}(x))^{*}\). 2. \(\dim(\mathrm{T}_{x}M/\mathcal{W}(x))>k-1\). 2. A _premultisymplectic manifold of type \((d,k,r)\)_\((1\leq r\leq k-1)\) is a quadruple \((M,\Omega,\mathcal{W},\mathcal{E})\), where \(\mathcal{E}\) is a distribution on \(M\) such that, for every \(x\in M\), the space \(\mathcal{E}(x)\) is a vector subspace of \(\mathrm{T}_{x}M/\mathcal{W}(x)\) with the following properties: 1. 
If \(\pi_{x}\colon\mathrm{T}_{x}M\to\mathrm{T}_{x}M/\mathcal{W}(x)\) is the canonical projection, then \(\iota(v_{1}\wedge\ldots\wedge v_{r})\Omega_{p}=0\), for every \(v_{i}\in\mathrm{T}_{x}M\) such that \(\pi_{x}(v_{i})\in\mathcal{E}(x)\), \(i=1,\ldots,r\). 2. \(\dim\mathcal{W}(x)-d=\dim\Lambda_{r}^{k-1}(\mathrm{T}_{x}M/\mathcal{W}(x))^{*}\), where the horizontal forms are considered with respect to the subspace \(\mathcal{E}(x)\). 3. \(\dim(\mathrm{T}_{x}M/\mathcal{W}(x))>k-1\). **Theorem 5.13** (Generalised premultisymplectic Darboux Theorem).: _Every premultisymplectic manifold \((M,\Omega)\) of type \((d,k,0)\) (resp. of type \((d,k,r)\)) is locally premultisymplectomorphic to a canonical premultisymplectic manifold of type \((d,k,0)\) (resp. of type \((d,k,r)\)). Therefore, there is a local chart of Darboux coordinates around every point \(x\in M\)._ As in previous structures, analogous claims can be done concerning the existence of connected compatible connections with premultisymplectic manifolds. ## 6 Conclusions and outlook The focus of this research is the exploration of Darboux-type theorems concerning geometric structures defined by closed differential forms. The initial section of this study entails an examination of the Darboux theorem for symplectic, presymplectic, and cosymplectic manifolds. By imposing minimal regularity conditions, we have successfully established a proof for a Darboux theorem applicable to precosymplectic manifolds. Within the realm of geometric mechanics, these manifolds serve as the phase spaces for both regular and singular autonomous and non-autonomous dynamical systems. We have presented novel proofs for the Darboux theorem concerning \(k\)-symplectic and \(k\)-cosymplectic manifolds. These proofs appear to be simpler compared to the previously known ones. Additionally, we have introduced and demonstrated new Darboux theorems for specific families of \(k\)-presymplectic and \(k\)-precosymplectic manifolds. Furthermore, we have provided a counterexample illustrating that a general Darboux-type theorem does not hold for \(k\)-presymplectic manifolds. We have conducted a thorough review of previous findings regarding the existence of Darboux coordinates for certain types of multisymplectic manifolds. Lastly, we have presented fresh results that establish the existence of Darboux coordinates for particular cases of premultisymplectic manifolds. All of these structures play a vital role in the geometric representation of both regular and singular classical field theories. The relations of Darboux theorems with flat connections have been studied, which provides new viewpoints and gathers previous scattered results in the literature. The ideas of this paper can be extended to other geometric structures related with closed one or two-forms of different types. Notwithstanding, the formalism on flat compatible connections does not apply to geometric structures related to families of different forms that do not allow for a locally constant form and therefore closed, e.g. for contact and precontact structures, and their extensions (which appear, for instance, in the geometric description of dissipative and action-dependent systems in physics). It would be interesting to find an analogue of our formalism for such theories. In particular, note that non-closed differential forms may have flat compatible connections provided torsion different from zero is allowed. 
For instance, consider the manifold \(M=\mathbb{R}^{3}\) with natural coordinates \(\{t,x,p\}\), the one-form \(\eta=\mathrm{d}t-p\,\mathrm{d}x\), and the connection \(\nabla\) whose only non-vanishing Christoffel symbol is \(\Gamma^{t}_{px}=-1\). It is easy to check that \(\eta\) is a contact one-form on \(M\) and that it is parallel relative to the connection \(\nabla\), namely \(\nabla\eta=0\). However, the connection \(\nabla\) is not torsion-free: its torsion has the local expression \(T=\mathrm{d}x\otimes\mathrm{d}p\otimes\frac{\partial}{\partial t}-\mathrm{d}p\otimes\mathrm{d}x\otimes\frac{\partial}{\partial t}\). This torsion accounts for the non-integrability of the contact distribution \(D=\ker\eta\). Meanwhile, \(\nabla\) is flat. The relation between the integrability of a geometric structure and the torsion of compatible connections will be investigated in a future work. Moreover, this work has studied conditions for Darboux theorems of various types. We believe that there is still room to provide more types of Darboux coordinates, and that more research on necessary and sufficient conditions for their existence is needed. In particular, this especially applies to \(k\)-pre(co)symplectic manifolds. ### Acknowledgments We thank M. de Leon and J. Gaset for fruitful discussions and comments. We acknowledge partial financial support from the _Spanish Ministry of Science and Innovation_, grants PID2021-125515NB-C21, PID2021-125515NB-C22, and RED2022-134301-T of AEI, and from the _Ministry of Research and Universities of the Catalan Government_, project 2021 SGR 00603 Geometry of Manifolds and Applications, GEOMVAP. J. de Lucas and X. Rivas acknowledge partial financial support from project IDUB with number PSP: 501-D111-20-2004310. X. Rivas would like to thank the cordiality shown during his stays at the Faculty of Physics of the University of Warsaw, financed from the above-mentioned IDUB project.
2310.15587
ScanDL: A Diffusion Model for Generating Synthetic Scanpaths on Texts
Eye movements in reading play a crucial role in psycholinguistic research studying the cognitive mechanisms underlying human language processing. More recently, the tight coupling between eye movements and cognition has also been leveraged for language-related machine learning tasks such as the interpretability, enhancement, and pre-training of language models, as well as the inference of reader- and text-specific properties. However, scarcity of eye movement data and its unavailability at application time poses a major challenge for this line of research. Initially, this problem was tackled by resorting to cognitive models for synthesizing eye movement data. However, for the sole purpose of generating human-like scanpaths, purely data-driven machine-learning-based methods have proven to be more suitable. Following recent advances in adapting diffusion processes to discrete data, we propose ScanDL, a novel discrete sequence-to-sequence diffusion model that generates synthetic scanpaths on texts. By leveraging pre-trained word representations and jointly embedding both the stimulus text and the fixation sequence, our model captures multi-modal interactions between the two inputs. We evaluate ScanDL within- and across-dataset and demonstrate that it significantly outperforms state-of-the-art scanpath generation methods. Finally, we provide an extensive psycholinguistic analysis that underlines the model's ability to exhibit human-like reading behavior. Our implementation is made available at https://github.com/DiLi-Lab/ScanDL.
Lena S. Bolliger, David R. Reich, Patrick Haller, Deborah N. Jakobi, Paul Prasse, Lena A. Jäger
2023-10-24T07:52:19Z
http://arxiv.org/abs/2310.15587v1
# ScanDL: A Diffusion Model for Generating Synthetic Scanpaths on Texts ###### Abstract Eye movements in reading play a crucial role in psycholinguistic research studying the cognitive mechanisms underlying human language processing. More recently, the tight coupling between eye movements and cognition has also been leveraged for language-related machine learning tasks such as the interpretability, enhancement, and pre-training of language models, as well as the inference of reader- and text-specific properties. However, scarcity of eye movement data and its unavailability at application time poses a major challenge for this line of research. Initially, this problem was tackled by resorting to cognitive models for synthesizing eye movement data. However, for the sole purpose of generating human-like scanpaths, purely data-driven machine-learning-based methods have proven to be more suitable. Following recent advances in adapting diffusion processes to discrete data, we propose ScanDL, a novel discrete sequence-to-sequence diffusion model that generates synthetic scanpaths on texts. By leveraging pre-trained word representations and jointly embedding both the stimulus text and the fixation sequence, our model captures multi-modal interactions between the two inputs. We evaluate ScanDL within- and across-dataset and demonstrate that it significantly outperforms state-of-the-art scanpath generation methods. Finally, we provide an extensive psycholinguistic analysis that underlines the model's ability to exhibit human-like reading behavior. Our implementation is made available at [https://github.com/Dii-Lab/ScanDL](https://github.com/Dii-Lab/ScanDL). ## 1 Introduction As human eye movements during reading provide both insight into the cognitive mechanisms involved in language processing Rayner (1998) and information about the key properties of the text Rayner (2009), they have been attracting increasing attention from across fields, including cognitive psychology, experimental and computational psycholinguistics, and computer science. As a result, computational models of eye movements in reading experienced an upsurge over the past two decades. The earlier models are explicit computational cognitive models designed for fundamental research with the aim of shedding light on i) the mechanisms underlying human language comprehension at different linguistic levels and ii) the broader question of how the human language processing system interacts with domain-general cognitive mechanisms and capacities, such as working memory or visual attention Reichle et al. (2003); Engbert et al. (2005); Engelmann et al. (2013). More recently, traditional and neural machine learning (ML) approaches have been adopted for the prediction of human eye movements Nilsson and Nivre (2009, 2011); Hahn and Keller (2016); Wang et al. (2019); Deng et al. (2023) which, in contrast to cognitive models, do not implement any psychological or linguistic theory of eye movement control in reading. ML models, in turn, exhibit the flexibility to learn from and adapt to any kind of reading pattern on any kind of text. Within the field of ML, researchers have not only begun to _create_ synthetic scanpaths but also, especially within NLP, to _leverage_ them for different use cases: the interpretability of language models (LMs) Sood et al. (2020); Hollenstein et al. (2021, 2022); Merkx and Frank (2021), enhancing the performance of LMs on downstream tasks Barrett et al. (2018); Hollenstein and Zhang (2019); Sood et al. 
(2020); Deng et al. (2023), and pre-training models for all kinds of inference tasks concerning reader- and text-specific properties (e.g., assessing reading comprehension skills or L2 proficiency, detecting dyslexia, or judging text readability) [1, 1, 14, 15, 16, 17, 18, 19, 20]. For these NLP use cases, the opportunity to generate large amounts of synthetic scanpaths is crucial for two reasons: first, real human eye movement data is scarce, and its collection is resource intensive. Second, relying on real human scanpaths entails the problem of not being able to generalize beyond the respective dataset, as gaze recordings are typically not available at inference time. Synthetic scanpaths resolve both issues. However, generating synthetic scanpaths is not a trivial task, as it is a sequence-to-sequence problem that requires the alignment of two different input sequences: the word sequence (order of words in the sentence) and the scanpath (chronological order of fixations on the sentence). In this paper, we present a novel discrete sequence-to-sequence diffusion model for the generation of synthetic human scanpaths on a given stimulus text: ScanDL, **Scan**path **D**iffusion conditioned on **L**anguage input (see Figure 1). ScanDL leverages pre-trained word representations for the text to guide the model's predictions of the location and the order of the fixations. Moreover, it aligns the different sequences and modalities (text and eye gaze) by jointly embedding them in the same continuous space, thereby capturing dependencies and interactions between the two input sequences. The contributions of this work are manifold: We (i) develop ScanDL, the first diffusion model for simulating scanpaths in reading, which outperforms all previous state-of-the-art methods; we (ii) demonstrate ScanDL's ability to exhibit human-like reading behavior, by means of a Bayesian psycholinguistic analysis; and we (iii) conduct an extensive ablation study to investigate the model's different components, evaluate its predictive capabilities with respect to scanpath characteristics, and provide a qualitative analysis of the model's decoding process. Figure 1: Human scanpath vs. ScanDL. ## 2 Related Work **Models of eye movements in reading.** Two computational cognitive models of eye movement control in reading have been dominant in the field during the past two decades: the E-Z reader model [12] and the SWIFT model [1]. Both models predict fixation location and duration on a textual stimulus guided by linguistic variables such as lexical frequency and predictability. While these explicit cognitive models implement theories of reading and are designed to explain empirically observed psycholinguistic phenomena, a second line of research adopts a purely data-driven approach aiming solely at the accurate prediction of eye movement patterns. Nilsson and Nivre (2009) trained a logistic regression on manually engineered features extracted from a reader's eye movements and, in an extension, also the stimulus text to predict the next fixation [10]. More recent research draws inspiration from NLP sequence labeling tasks. For instance, Hahn and Keller (2016) proposed an unsupervised sequence-to-sequence architecture, adopting a labeling network to determine whether the next word is fixated. Wang et al. (2019) proposed a combination of CNNs, LSTMs, and a CRF to predict the fixation probability of each word in a sentence. 
A crucial limitation of these models is that they reduce the dual-sequence input to a single-sequence problem, not accounting for the chronological order in which the words are fixated, and are thus unable to predict important aspects of eye movement behavior, such as regressions and re-fixations. To overcome this limitation, Deng et al. (2023) proposed _Eyettention_, a dual-sequence encoder-encoder architecture, consisting of two LSTM encoders that combine the word sequence and the fixation sequence by means of a cross-attention mechanism; their model predicts next fixations in an auto-regressive (AR) manner. **Diffusion models for discrete input.** Approaches to apply diffusion processes to discrete input (mainly text) comprise discrete and continuous diffusion. Reid et al. (2022) proposed an edit-based diffusion model for machine translation and summarization whose corruption process happens in discrete space. Li et al. (2022) and Gong et al. (2023) both proposed continuous diffusion models for conditional text generation; whereas the former adopted classifiers to impose constraints on the generated sentences, the latter conditioned on the entire source sentence. All of these approaches consist of uni-modal input being mapped again to the same modality. ## 3 Problem Setting Consider a scanpath \(\mathbf{f}_{\mathbf{w}}^{r}=\langle f_{1},\ldots,f_{N}\rangle\), which represents a sequence of \(N\) fixations generated by reader \(r\) while reading sentence \(\mathbf{w}\). Here, \(f_{j}\) denotes the location of the \(j\)-th fixation, represented by the linear position of the fixated word within the sentence \(\mathbf{w}\) (word index). The goal is to find a model that predicts a scanpath \(\mathbf{f}\) given sentence \(\mathbf{w}\). We evaluate the model by computing the mean Normalized Levenshtein Distance (NLD) (Levenshtein, 1965) between the predicted and the ground truth human scanpaths. Note that several readers \(r\) can read the same sentence \(\mathbf{w}\). In the following, we will denote the scanpath by \(\mathbf{f}\) instead of \(\mathbf{f}_{\mathbf{w}}^{r}\) if the reader or sentence is unambiguous or the (predicted) scanpath is not dependent on the reader. ## 4 ScanDL Inspired by continuous diffusion models for text generation (Gong et al., 2023; Li et al., 2022), we propose ScanDL, a diffusion model that synthesizes scanpaths conditioned on a stimulus sentence. ### 4.1 Discrete Input Representation ScanDL uses a discrete input representation for both the stimulus sentence and the scanpath. First, we subword-tokenize the stimulus sentence \(\mathbf{w}=\langle w_{1},\ldots,w_{M}\rangle\) using the pre-trained BERT WordPiece tokenizer (Devlin et al., 2019; Song et al., 2021). We prepend special \(\mathsf{CLS}\) and append special \(\mathsf{SEP}\) tokens to both the sentence and the scanpath in order to obtain the stimulus sequence \(\mathbf{x}^{\mathbf{w}}=\langle w_{CLS},w_{1},\ldots,w_{M},w_{SEP}\rangle\) and a corresponding fixation sequence \(\mathbf{x}^{\mathbf{f}}=\langle f_{CLS},f_{1},\ldots,f_{N},f_{SEP}\rangle\). We introduce these special tokens in order to separate the two sequences and to align their beginning and ending. The two sequences are concatenated along the sequence dimension into \(\mathbf{x}=\mathbf{x}^{\mathbf{w}}\oplus\mathbf{x}^{\mathbf{f}}\). An example of the discrete input \(\mathbf{x}\) is depicted in Figure 2 (blue row). We utilize three features to provide a discrete representation for every element in the sequence \(\mathbf{x}\). 
The word indices \(\mathbf{x}_{idx}\) align fixations in \(\mathbf{x}^{\mathbf{f}}\) with words in \(\mathbf{x}^{\mathbf{w}}\), and align subwords in \(\mathbf{x}^{\mathbf{w}}\) originating from the same word (yellow row in Figure 2). Second, the BERT input IDs \(\mathbf{x}_{bert}\), derived from the BERT tokenizer (Devlin et al., 2019), refer to the tokenized subwords of the stimulus sentence for \(\mathbf{x}^{\mathbf{w}}\), while consisting merely of \(\mathsf{PAD}\) tokens for the scanpath \(\mathbf{x}^{\mathbf{f}}\), as no mapping between fixations and subwords is available (orange row in Figure 2). Finally, position indices \(\mathbf{x}_{pos}\) capture the order of words within the sentence and the order of fixations within the scanpath, respectively (red row in Figure 2). Figure 2: Discrete input representation of the concatenation \(\mathbf{x}\) of sentence \(\mathbf{x}^{\mathbf{w}}\) and scanpath \(\mathbf{x}^{\mathbf{f}}\). Each element of the sequence \(\mathbf{x}\) is represented by a triple of word index \(\mathbf{x}_{idx}\), BERT input ID \(\mathbf{x}_{bert}\), and position index \(\mathbf{x}_{pos}\). ### 4.2 Diffusion Model A diffusion model (Sohl-Dickstein et al., 2015) is a latent variable model consisting of a forward and a reverse process. In the forward process, we sample \(\mathbf{z}_{0}\) from a real-world data distribution and gradually corrupt the data sample into standard Gaussian noise \(\mathbf{z}_{\tilde{T}}\sim\mathcal{N}(0,\mathbb{I})\), where \(\tilde{T}\) is the maximal number of diffusion steps. The latents \(\mathbf{z}_{1},\ldots,\mathbf{z}_{\tilde{T}}\) are modeled as a first-order Markov chain, where the Gaussian corruption at each intermediate noising step \(t\in[1,\ldots,\tilde{T}]\) is given by \(\mathbf{z}_{t}\sim q(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{z}_{t-1},\beta_{t}\mathbb{I})\), where \(\beta_{t}\in(0,1)\) is a hyperparameter dependent on \(t\). The reverse distribution \(p(\mathbf{z}_{t-1}|\mathbf{z}_{t})\) gradually removes noise to reconstruct the original data sample \(\mathbf{z}_{0}\) and is approximated by \(p_{\boldsymbol{\theta}}\). ### 4.3 Forward and Backward Processes In the following, we describe ScanDL's projection of the discrete input into continuous space, its forward noising process \(q\) and reverse denoising process \(p_{\boldsymbol{\theta}}\) (all depicted in Figure 3), as well as architectural details and diffusion-related methods. #### 4.3.1 Forward Process: Embedding of Discrete Input in Continuous Space Following Gong et al. (2023), our forward process deploys an embedding function \(\textsc{Emb}(\cdot)\colon\mathbb{N}^{M+N+4}\rightarrow\mathbb{R}^{(M+N+4)\times d}\) that maps from the discrete input representation into continuous space, where \(N\) and \(M\) denote the number of fixations and words, respectively, and \(d\) is the size of the hidden dimension. This embedding learns a joint representation of the subword-tokenized sentence \(\mathbf{x}^{\mathbf{w}}\) and the fixation sequence \(\mathbf{x}^{\mathbf{f}}\). More precisely, the embedding function \(\textsc{Emb}(\mathbf{x}):=\textsc{Emb}_{idx}(\mathbf{x}_{idx})+\textsc{Emb}_{bert}(\mathbf{x}_{bert})+\textsc{Emb}_{pos}(\mathbf{x}_{pos})\) is the sum of three independent embedding layers (see Figure 7 in Appendix C). While the word index embedding \(\textsc{Emb}_{idx}\) and the position embedding \(\textsc{Emb}_{pos}\) are learned during training, the pre-trained BERT model embedding \(\textsc{Emb}_{bert}\) is frozen. 
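The sum of the three embedding layers can be written down compactly; the following PyTorch sketch is only an illustration of this joint embedding under our own naming (the checkpoint `bert-base-uncased` is an assumption, not taken from the paper), not the authors' released implementation.

```python
import torch.nn as nn
from transformers import BertModel

class JointEmbedding(nn.Module):
    """Sketch of Emb(x) = Emb_idx(x_idx) + Emb_bert(x_bert) + Emb_pos(x_pos)."""

    def __init__(self, num_word_indices, max_positions, hidden_dim=256):
        super().__init__()
        # learned embeddings for word indices and positions
        self.emb_idx = nn.Embedding(num_word_indices, hidden_dim)
        self.emb_pos = nn.Embedding(max_positions, hidden_dim)
        # frozen pre-trained BERT subword embeddings (checkpoint name assumed),
        # projected to the hidden dimension d by an extra linear layer
        self.emb_bert = BertModel.from_pretrained("bert-base-uncased").get_input_embeddings()
        self.emb_bert.weight.requires_grad = False
        self.proj = nn.Linear(self.emb_bert.embedding_dim, hidden_dim)

    def forward(self, x_idx, x_bert, x_pos):
        # x_idx, x_bert, x_pos: integer tensors of shape (batch, M + N + 4)
        return self.emb_idx(x_idx) + self.proj(self.emb_bert(x_bert)) + self.emb_pos(x_pos)
```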
It maps the input IDs to pre-trained BERT embeddings (Devlin et al., 2019) and adds semantic meaning to the sentence. Only the word index embedding \(\textsc{Emb}_{idx}\) is corrupted by noise during the forward process; this embedding is what the model has to learn. The other two embeddings remain unnoised. Embedding the discrete input \(\mathbf{x}\) into continuous space using our embedding function \(\textsc{Emb}(\cdot)\) allows for a new transition \(\mathbf{z}_{0}\sim q_{\phi}(\mathbf{z}_{0}|\mathbf{x})=\mathcal{N}(\textsc{ Emb}(\mathbf{x}),\beta_{0}\mathbb{I})\) to extend the original forward Markov chain, where \(\mathbf{z}_{0}\) is the first latent variable at noising step \(t=0\) and \(q_{\phi}\) is the parametrized embedding step of the forward process. Note that this initial latent variable \(\mathbf{z}_{0}\) is not only a projection of the discrete data \(\mathbf{x}\) into continuous space, i.e. \(\textsc{Emb}(\mathbf{x})\), but is actually sampled from a normal distribution that is centered around \(\textsc{Emb}(\mathbf{x})\). #### 4.3.2 Forward Process: Partial Noising Let \(\mathbf{z}_{0}\coloneqq\mathbf{z}_{0}^{\mathbf{w}}\oplus\mathbf{z}_{0}^{ \mathbf{f}}\) be an initial latent variable, where \(\mathbf{z}_{0}^{\mathbf{w}}\) refers to the sentence subsequence and \(\mathbf{z}_{0}^{\mathbf{f}}\) to the fixation subsequence. Each subsequent latent \(\mathbf{z}_{t}\in[\mathbf{z}_{1},\dots,\mathbf{z}_{T}]\) is given by \(\mathbf{z}_{t}\coloneqq\mathbf{z}_{t}^{\mathbf{w}}\oplus\mathbf{z}_{t}^{ \mathbf{f}}\), where \(\mathbf{z}_{t}^{\mathbf{w}}\) remains unchanged, i.e., \(\mathbf{z}_{t}^{\mathbf{w}}=\mathbf{z}_{0}^{\mathbf{w}}\), and \(\mathbf{z}_{t}^{\mathbf{f}}\) is noised, i.e., \(\mathbf{z}_{t}^{\mathbf{f}}\sim q(\mathbf{z}_{t}^{\mathbf{f}}\mid\mathbf{z}_ {t-1}^{\mathbf{f}})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{z}_{t-1}^{\mathbf{f}},\beta_{t}\mathbb{I})\), with \(1\leq t\leq T\), where \(T\) is the amount of noise, and \(1\leq T\leq\tilde{T}\), where \(\tilde{T}\) is the maximal number of diffusion steps.1 This _partial noising_(Gong et al., 2023) is crucial, as the sentence on which a scanpath is conditioned must remain uncorrupted. Footnote 1: Note that \(T\) can be \(0\) (see Section 4.5); if \(T=0\), the model learns to reconstruct \(\mathbf{x}\) from \(\mathbf{z}_{0}\sim\mathcal{N}(\textsc{Emb}(\mathbf{x}),\beta_{0}\mathbb{I})\). #### 4.3.3 Backward Process: Denoising During the denoising process, a parametrized model \(f_{\boldsymbol{\theta}}\) learns to step-wise reconstruct \(\mathbf{z}_{0}\) by denoising \(\mathbf{z}_{T}\). Due to the first-order Markov property, the joint probability of all latents can be factorized as \(p_{\boldsymbol{\theta}}(\mathbf{z}_{0:T})=p(\mathbf{z}_{T})\prod_{t=1}^{T}p_{ \boldsymbol{\theta}}(\mathbf{z}_{t-1}|\mathbf{z}_{t})\). This denoising process is modeled as \(p_{\boldsymbol{\theta}}(\mathbf{z}_{t-1}|\mathbf{z}_{t})\sim\mathcal{N}(\mu_{ \boldsymbol{\theta}}(\mathbf{z}_{t},t),\Sigma_{\boldsymbol{\theta}}(\mathbf{z} _{t},t))\), where \(\mu_{\boldsymbol{\theta}}(\cdot)\) and \(\Sigma_{\boldsymbol{\theta}}(\cdot)\) are the model's \(f_{\boldsymbol{\theta}}\) predicted mean and variance of the true posterior distribution \(q(\mathbf{z}_{t-1}|\mathbf{z}_{t})\). The optimization criterion of the diffusion model is to maximize the marginal log-likelihood of the data \(\log p(\mathbf{z}_{0})\). 
Since directly computing and maximizing \(\log p(\mathbf{z}_{0})\) would require access to the true posterior distribution \(q(\mathbf{z}_{t-1}|\mathbf{z}_{t})\), we maximize the variational lower bound (VLB) of \(\log p(\mathbf{z}_{0})\) as a proxy objective, defined in Equation 1. However, as ScanDL involves mapping the discrete input into continuous space and back, the training objective becomes the minimization of \(\mathcal{L}_{\textsc{ScanDL}}\), a joint loss comprising three components, inspired by Gong et al. (2023). \(\mathcal{L}_{\textsc{ScanDL}}\) is defined in Equation 2, where \(f_{\boldsymbol{\theta}}\) is the parametrized neural network trained to reconstruct \(\mathbf{z}_{0}\) from \(\mathbf{z}_{T}\) (see Section 4.5). Figure 3: The embedding layer, the forward process (noising) and the reverse process (denoising) of ScanDL. \[\text{VLB}\coloneqq\mathbb{E}_{q(\mathbf{z}_{t},T\mid\mathbf{z}_{0})}\left[\log \frac{p\left(\mathbf{z}_{T}\right)p\boldsymbol{\theta}\left(\mathbf{z}_{0}\mid \mathbf{z}_{1}\right)}{q\left(\mathbf{z}_{T}\mid\mathbf{z}_{0}\right)}+\sum_{t =2}^{T}\log\frac{p\theta\left(\mathbf{z}_{t-1}\mid\mathbf{z}_{t}\right)}{q \left(\mathbf{z}_{t-1}\mid\mathbf{z}_{t},\mathbf{z}_{0}\right)}\right] \tag{1}\] \[\underset{\boldsymbol{\theta}}{\operatorname{argmin}}\,\mathcal{L}_{\text{ ScanDL}}=\underset{\boldsymbol{\theta}}{\operatorname{argmin}}\left[\underbrace{\sum_{t=2}^{T} \left\|f_{\boldsymbol{\theta}}\left(\mathbf{z}_{t},t\right)-\mathbf{z}_{0} \right\|_{2}^{2}}_{\mathcal{L}_{\text{VLB}}}+\underbrace{\left\|\text{ Emb}(\mathbf{x})-f_{\boldsymbol{\theta}}(\mathbf{z}_{1},1)\right\|_{2}^{2}}_{ \mathcal{L}_{\text{Gmin}}}-\underbrace{\log p_{\boldsymbol{\theta}}(\mathbf{ x}|\mathbf{z}_{0})}_{\mathcal{L}_{\text{Gmin}}}\right] \tag{2}\] The first component \(\mathcal{L}_{\text{VLB}}\) is derived from the VLB (see Appendix G for a detailed derivation), and aims at minimizing the difference between ground-truth \(\mathbf{z}_{0}\) and the model prediction \(f_{\boldsymbol{\theta}}\left(\mathbf{z}_{t},t\right)\). The second component \(\mathcal{L}_{\text{EMB}}\) measures the difference between the model prediction \(f_{\boldsymbol{\theta}}\left(\mathbf{z}_{t},t\right)\) and the embedded input.2 The last component \(\mathcal{L}_{\text{round}}\) corresponds to the reverse embedding, or rounding operation, which pipes the continuous model prediction through a reverse embedding layer to obtain the discrete representation. Footnote 2: Recall that \(\text{Emb}(\mathbf{x})\neq\mathbf{z}_{0}\), as \(\mathbf{z}_{0}\sim\mathcal{N}(\text{EMB}(\mathbf{x}),\beta_{0}\mathbb{I})\). ### Inference At inference time, the model \(f_{\boldsymbol{\theta}}\) needs to construct a scanpath on a specific sentence \(\mathbf{w}\). Specifically, to synthesize the scanpath and condition it on the embedded sentence \(\text{Emb}(\mathbf{x}^{\mathbf{w}})\), we replace the word index embedding of the scanpath \(\text{Emb}_{idx}(\mathbf{x}^{\mathbf{f}}{}_{idx})\) with Gaussian noise, initializing it as \(\widehat{\mathbf{x}}_{idx}^{\mathbf{f}}\sim\mathcal{N}(0,\mathbb{I})\). We then concatenate the new embedding \(\widehat{\text{EMB}}(\mathbf{x}^{\mathbf{f}})=\widehat{\mathbf{x}}_{idx}^{ \mathbf{f}}+\text{Emb}_{bert}(\mathbf{x}^{\mathbf{f}}{}_{bert})+\text{Emb}_{ pos}(\mathbf{x}^{\mathbf{f}}{}_{pos})\) with \(\text{Emb}(\mathbf{x}^{\mathbf{w}})\) to obtain the model input \(\mathbf{z}_{\widetilde{T}}\). 
The model \(f_{\boldsymbol{\theta}}\) then iteratively denoises \(\mathbf{z}_{\widetilde{T}}\) into \(\mathbf{z}_{0}\). At each denoising step \(t\), an anchoring function is applied to \(\mathbf{z}_{t}\) that serves two different purposes. First, it performs rounding on \(\mathbf{z}_{t}\)(Li et al., 2022), which entails mapping it into discrete space and then projecting it back into continuous space so as to enforce intermediate steps to commit to a specific discrete representation. Second, it replaces the part in \(\mathbf{z}_{t-1}\) that corresponds to the condition \(\mathbf{x}^{\mathbf{w}}\) with the original \(\text{Emb}(\mathbf{x}^{\mathbf{w}})\)(Gong et al., 2023) to prevent the condition from being corrupted by being recovered by the model \(f_{\boldsymbol{\theta}}\). After denoising \(\mathbf{z}_{\widetilde{T}}\) into \(\mathbf{z}_{0}\), \(\mathbf{z}_{0}\) is piped through the inverse embedding layer to obtain the predicted scanpath \(\widehat{\mathbf{x}}^{\mathbf{f}}\). ### Model and Diffusion Parameters Our parametrized model \(f_{\boldsymbol{\theta}}\) consists of an encoder-only Transformer (Vaswani et al., 2017) comprising 12 blocks, with 8 attention heads and hidden dimension \(d=256\). An extra linear layer projects the pre-trained BERT embedding \(\text{Emb}_{bert}\) to the hidden dimension \(d\). The maximum sequence length is 128, and the number of diffusion steps is \(\tilde{T}=2000\). We use a _sqrt_ noise schedule (Li et al., 2022) to sample \(\beta_{t}=1-\sqrt{\frac{t}{\tilde{T}+s}}\), where \(s=0.0001\) is a small constant corresponding to the initial noise level. To sample the noising step \(T\in\left[0,1,\ldots,\tilde{T}\right]\), we employ importance sampling as defined by Nichol and Dhariwal (2021). ## 5 Experiments To evaluate the performance of ScanDL against both cognitive and neural scanpath generation methods, we perform a within- and an across-dataset evaluation. All models were implemented in PyTorch (Paszke et al., 2019), and trained for 80,000 steps on four NVIDIA GeForce RTX 3090 GPUs. For more details on the training, see Appendix A. Our code is publicly available at [https://github.com/Dii-Lab/ScanDL](https://github.com/Dii-Lab/ScanDL). ### Datasets We use two eye-tracking-while-reading corpora to train and/or evaluate our model. The _Corpus of Eye Movements in L1 and L2 English Reading_(CELER, Berzak et al., 2022) is an English sentence corpus including data from native (L1) and non-native (L2) English speakers, of which we only use the L1 data (CELER L1). The _Zurich Cognitive Language Processing Corpus_(ZuCo, Hollenstein et al., 2018) is an English sentence corpus comprising both "task-specific" and "natural" reading, of which we only include the natural reading (ZuCo NR). Descriptive statistics for the two corpora including the distribution of reading measures and participant demographics can be found in Section B of the Appendix. ### Reference Methods We compare ScanDL with other state-of-the-art approaches to generate synthetic scanpaths including two well-established cognitive models, the E-Z reader model (Reichle et al., 2003) and the SWIFT model (Engbert et al., 2005), and one machine-learning-based model, Eyettention (Deng et al., 2023). Moreover, we include a human baseline, henceforth referred to as _Human_, that measures the inter-reader scanpath similarity for the same sentence. Finally, we compare the model with two trivial baselines. 
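For illustration, the inference procedure of Section 4.4 can be sketched in a few lines. This is a schematic re-implementation under our own naming and with a simplified re-noising rule (predict \(\mathbf{z}_{0}\), then re-noise it to step \(t-1\)); it is not the authors' released sampler.

```python
import torch

@torch.no_grad()
def sample_scanpath(model, cond_emb, scan_static_emb, betas, round_to_nearest_embedding):
    """Schematic version of the ScanDL denoising loop (Section 4.4).

    cond_emb:        Emb(x^w), the embedded (and uncorrupted) sentence condition.
    scan_static_emb: Emb_bert + Emb_pos for the scanpath positions; their word-index
                     part is initialized with Gaussian noise below.
    """
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    noise_idx = torch.randn_like(scan_static_emb)      # \hat{x}^f_idx ~ N(0, I)
    z = torch.cat([cond_emb, noise_idx + scan_static_emb], dim=1)
    n_cond = cond_emb.shape[1]
    for t in reversed(range(len(betas))):
        z0_hat = model(z, t)                           # f_theta reconstructs z_0
        z0_hat = round_to_nearest_embedding(z0_hat)    # anchoring step 1: rounding
        z0_hat[:, :n_cond] = cond_emb                  # anchoring step 2: restore the condition
        if t > 0:
            eps = torch.randn_like(z0_hat)
            z = torch.sqrt(alpha_bars[t - 1]) * z0_hat + torch.sqrt(1.0 - alpha_bars[t - 1]) * eps
        else:
            z = z0_hat
    return z[:, n_cond:]                               # denoised scanpath embeddings
```

The final tensor is mapped back to word indices by the reverse embedding layer, as described above.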
One is the Uniform model, which simply predicts fixations \(\mathrm{iid}\) over the sentence, and the other one, referred to as Train-label-dist model, samples the saccade range from the training label distribution (Deng et al., 2023). ### Evaluation Metric To assess the model performance, we compute the Normalized Levenshtein Distance (NLD) between the predicted and the human scanpaths. The Levenshtein Distance (LD, Levenshtein, 1965) is a similarity-based metric quantifying the minimal number of additions, deletions and substitutions required to transform a word-index sequence \(S\) of a true scanpath into a word-index sequence \(\hat{S}\) of the model-predicted scanpath. Formally, the NLD is defined as \(\mathrm{NLD}(S,\hat{S})=\mathrm{LD}/\max(|S|,|\hat{S}|)\). ### Hyperparameter Tuning To find the best model-specific and training-specific hyperparameters of ScanDL, we perform triple cross-validation on the _New Reader/New Sentence_ setting (for the search space, see Appendix A). ### Within-Dataset Evaluation For the within-dataset evaluation, we perform 5-fold cross-validation on CELER L1 (Berzak et al., 2022) and evaluate the model in 3 different settings. The results for all settings are provided in Table 1. _New Sentence_ setting.We investigate the model's ability to generalize to novel sentences read by known readers; i.e., the sentences in the test set have not been seen during training, but the readers appear both in the training and test set. _Results._ ScanDL not only outperforms the previous state-of-the-art Eyettention (Deng et al., 2023) as well as all other reference methods by a significant margin, but even exceeds the similarity that is reached by the Human baseline. _New Reader_ setting.We test the model's ability to generalize to novel readers; the test set consists of scanpaths from readers that have not been seen during training, although the sentences appear both in training and test set. _Results._ Again, our model both improves over the previous state-of-the-art, the cognitive models, as well as over the Human baseline. Even more, the model achieves a greater similarity on novel readers as compared to novel sentences in the previous setting. _New Reader/New Sentence_ setting.The test set comprises only sentences and readers that the model has not seen during training to assess the model's ability to simultaneously generalize to novel sentences and novel readers. Of the within-dataset settings, this setting exhibits the most out-of-distribution qualities. _Results._ Again, ScanDL significantly outperforms the previous state-of-the-art. Again, in contrast to previous approaches and in line with the other settings, ScanDL attains higher similarity as measured in NLD than the Human baseline. ### Across-Dataset Evaluation To evaluate the generalization capabilities of our model, we train it on CELER L1 (Berzak et al., 2022), while testing it across-dataset on ZuCo NR (Hollenstein et al., 2018). Although the model has to generalize to unseen readers and sentences in the _New Reader/New Sentence_ setting, the hardware setup and the presentation style including stimulus layout of the test data are the same, and the readers stem from the same population. In the _Across-Dataset_ evaluation, therefore, we not only examine the model's ability to generalize to novel readers and sentences, but also to unfamiliar hardware and presentation style. _Results._ The results for this setting can also be found in Table 1. 
ScanDL outperforms all reference models, and achieves similarity with a true scanpath on par with the Human baseline. ### 5.7 Ablation Study In this section, we investigate the effect of omitting central parts of the model: ScanDL without the position embedding \(\texttt{Emb}_{pos}\) and the pre-trained BERT embedding \(\texttt{Emb}_{bert}\), and ScanDL without the sentence condition (unconditional scanpath generation). Additionally, we also consider the two previously prevalent noise schedules, the linear (Ho et al., 2020) and the cosine (Nichol and Dhariwal, 2021) noise schedules (for their definition, see Appendix C.2). All ablation cases are conducted in the _New Reader/New Sentence_ setting. _Results._ As shown in Table 2, both omitting all embeddings except for the word index embedding and training ScanDL on unconditional scanpath generation result in a significant performance drop. Moreover, changing the _sqrt_ to a linear noise schedule does not enhance performance, while there is a slight increase in performance for the cosine noise schedule. However, this performance difference between the _sqrt_ and the cosine schedule is not statistically significant.3 Footnote 3: \(p=0.68\) in a paired \(t\)-test. ## 6 Investigation of Model Behavior ### 6.1 Psycholinguistic Analysis We further assess ScanDL's ability to exhibit human-like gaze behavior by investigating psycholinguistic phenomena observed in humans. We compare the effect estimates of three well-established psycholinguistic predictors -- _word length_, _surprisal_ and _lexical frequency_ -- on a range of commonly analyzed reading measures -- first-pass regression rate (fpr), skipping rate (sr), first-pass fixation counts (ffc) and total fixation counts (tfc) -- between human scanpaths on the one hand and synthetic scanpaths generated by ScanDL and our reference methods on the other hand.4 Effect sizes are estimated using Bayesian generalized linear-mixed models with reading measures as target variables and psycholinguistic features as predictors; logistic models for fpr and sr, Poisson models for ffc and tfc. For the human data, we fit random intercepts.5 We compute posterior distributions over all effect sizes using brms (Burkner, 2017), running 4 chains with 4000 iterations including 1000 warm-up iterations. Footnote 4: Full results plots, including the (psycholinguistically irrelevant) Uniform and Train-label-dist baselines, can be found in Fig. 8 of the Appendix. Footnote 5: For more details on the reading measures, computation of the predictors and model specification, see Appendix D. _Results._ We present the posterior distributions of the effect estimates obtained for the four reading measures in the _New Reader/New Sentence_ setting in Figure 4 and in Table 7 of the Appendix. We observe that across all reading measures and psycholinguistic predictor variables, ScanDL exhibits effects that are most consistent with the human data. On the one hand, the qualitative pattern, i.e., the sign of the estimates, for ScanDL-scanpaths is identical to the pattern observed in the human data, outperforming not only previous state-of-the-art (SOTA) models such as Eyettention, but even the two arguably most important cognitive models of eye movements in reading, E-Z reader and SWIFT. On the other hand, also the quantitative pattern, i.e., the estimated _size_ of each of the effects exhibited by ScanDL, is most similar to the one observed in humans. 
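The NLD metric from Section 5.3, which underlies Tables 1 and 2 below, is straightforward to implement. The following sketch is our own illustration (not the authors' evaluation code) of the distance between a true and a predicted word-index sequence.

```python
def levenshtein_distance(s, s_hat):
    """Minimal number of insertions, deletions and substitutions turning s into s_hat."""
    m, n = len(s), len(s_hat)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == s_hat[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def normalized_levenshtein_distance(s, s_hat):
    """NLD(S, S_hat) = LD / max(|S|, |S_hat|), as defined in Section 5.3."""
    return levenshtein_distance(s, s_hat) / max(len(s), len(s_hat))

# e.g. normalized_levenshtein_distance([0, 1, 2, 1, 3], [0, 1, 2, 3]) -> 0.2
```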
\begin{table} \begin{tabular}{l|l|l|l|l} \hline \hline Model & _New Sentence_ & _New Reader_ & _New Reader/New Sentence_ & _Across-Dataset_ \\ \hline Uniform & 0.779 \(\pm\) 0.002\(\dagger\) & 0.781 \(\pm\) 0.003\(\dagger\) & 0.782 \(\pm\) 0.005\(\dagger\) & 0.802 \\ Train-label-dist & 0.672 \(\pm\) 0.003\(\dagger\) & 0.672 \(\pm\) 0.004\(\dagger\) & 0.674 \(\pm\) 0.005\(\dagger\) & 0.723 \\ E-Z Reader & 0.619 \(\pm\) 0.005\(\dagger\) & 0.620 \(\pm\) 0.006\(\dagger\) & 0.667 \\ SWIFT & 0.607 \(\pm\) 0.004\(\dagger\) & 0.608 \(\pm\) 0.006\(\dagger\) & 0.607 \(\pm\) 0.006\(\dagger\) & 0.703 \\ Eyettention & 0.580 \(\pm\) 0.002\(\dagger\) & 0.580 \(\pm\) 0.004\(\dagger\) & 0.578 \(\pm\) 0.006\(\dagger\) & 0.697 \\ **ScanDL** & **0.516 \(\pm\) 0.006** & **0.509 \(\pm\) 0.014** & **0.515 \(\pm\) 0.014** & **0.647** \\ \hline _Human_ & _0.538 \(\pm\) 0.006_ & _0.536 \(\pm\) 0.004_ & _0.538 \(\pm\) 0.006_ & 0.646 \(\pm\) 0.002 \\ \hline \hline \end{tabular} \end{table} Table 1: We report NLD \(\pm\) standard error for all settings. The dagger \(\dagger\) indicates models significantly worse than the best model. In the _New Sentence_, _New Reader_, and _New Reader/New Sentence_ settings, models are evaluated using five-fold cross-validation. In the _Across-Dataset_ setting, the model is trained on CELER L1 and tested on ZuCo NR. \begin{table} \begin{tabular}{l|l} \hline \hline Ablation case & NLD \(\downarrow\) \\ \hline **ScanDL** (original) & **0.515 \(\pm\) 0.014** \\ Cosine & **0.514 \(\pm\) 0.018** \\ Linear & 0.519 \(\pm\) 0.020 \\ W/o condition & 0.667 \(\pm\) 0.015 \\ W/o Emb\({}_{\textit{bert}}\) and Emb\({}_{\textit{pos}}\) & 0.968 \(\pm\) 0.002 \\ \hline \hline \end{tabular} \end{table} Table 2: _Ablation study_. We report NLD \(\pm\) standard error for 5-fold cross-validation in the _New Reader/New Sentence_ setting.

### Emulation of Reading Pattern Variability To assess whether ScanDL is able to emulate the different reading patterns and their variability typical for human data, we compare the true and the predicted scanpaths in the _New Reader_ setting of both datasets with respect to the mean and standard deviation of a range of commonly analyzed reading measures: regression rate, normalized fixation count, progressive and regressive saccade length, skipping rate, and first-pass count (see Table 8 in the Appendix for a detailed definition). _Results._ As depicted in Figure 5 and Table 9 in the Appendix, the scanpaths generated by ScanDL are similar in diversity to the true scanpaths for both datasets. Not only is ScanDL's mean value of each reading measure close to the true data, but, crucially, the model also reproduces the variability in the scanpaths: for instance, in both datasets, the variability is large in both the true and the predicted scanpaths for regressive saccades, and is small in both true and predicted scanpaths with regard to progressive saccade lengths and first-pass counts. We also inspect the model's ability to approximate reader-specific patterns. To this end, we average the above analyzed reading measures over all scanpaths of a reader, and compute the correlation with this reader's mean NLD. The analysis suggests that ScanDL more easily predicts patterns of readers with shorter scanpaths and a low proportion of regressions. Exact numbers are displayed in Table 10 in the Appendix. 
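The reading measures used in this comparison are defined precisely in Table 8 of the Appendix (not reproduced here); the Python sketch below therefore shows one plausible operationalization of a few of them for a scanpath given as a sequence of fixated word indices. It is an assumption-laden illustration, not the authors' exact definitions or code.

```python
import numpy as np

def reading_measures(scanpath, sentence_length):
    """A few scanpath-level reading measures for a scanpath given as a list
    of fixated word indices (0-based). These operationalizations are
    illustrative assumptions, not the paper's exact definitions."""
    fix = np.asarray(scanpath)
    sacc = np.diff(fix)                     # signed word-index jumps
    progressive = sacc[sacc > 0]
    regressive = sacc[sacc < 0]
    return {
        "regression_rate": float((sacc < 0).mean()) if sacc.size else 0.0,
        "normalized_fix_count": fix.size / sentence_length,
        "progressive_saccade_len": float(progressive.mean()) if progressive.size else 0.0,
        "regressive_saccade_len": float(np.abs(regressive).mean()) if regressive.size else 0.0,
        "skipping_rate": 1.0 - len(set(scanpath)) / sentence_length,
        "first_pass_count": int(np.argmax(sacc < 0)) + 1 if (sacc < 0).any() else fix.size,
    }

# Comparing means and standard deviations of these measures over many true
# and predicted scanpaths yields summaries of the kind shown in Figure 5.
print(reading_measures([0, 1, 2, 4, 3, 5, 6], sentence_length=7))
print(reading_measures([0, 1, 3, 4, 5, 6], sentence_length=7))
```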
### Qualitative Differences Between Models To investigate whether ScanDL and the baseline models exhibit the same qualitative pattern in their predictive performance, that is, whether they generate good (or poor) predictions on the same sentences, we compute the correlations between the mean sentence NLDs of ScanDL and each of the reference models. The correlations are all significant, ranging from 0.36 (SWIFT) to 0.41 (Eyettention), though neither very strong nor wide-spread (see Table 11 in the Appendix). For a more detailed inspection of each model's ability to predict certain reading patterns (scanpath properties), we compute the correlations between the above-introduced reading measures (see Section 6.2) of a true scanpath and the NLD of its predicted counterpart for all models, in the _New Reader/New Sentence_ setting. _Results._ All models exhibit a significant positive correlation between the NLD and both the regression rate and the normalized fixation count (see Table 12 in the Appendix): long scanpaths with many regressions and a high relative number of fixations result in predictions with a higher NLD (i.e., are more difficult). Further, models of the same type, that is the two ML-based models ScanDL and Eyettention on the one hand, and the two cognitive models E-Z reader and SWIFT on the other hand, exhibit very similar patterns. Overall, the cognitive models have much stronger correlations than the ML-based models. In particular, E-Z reader and SWIFT exhibit a large positive correlation for first-pass counts, meaning that they struggle with scanpaths with a high number of first-pass fixations on the words, while both ScanDL and Eyettention have correlations close to zero, indicating that they cope equally well for scanpaths with high or low first-pass counts. In sum, while the cognitive models appear to be overfitting to specific properties of a scanpath, ScanDL and Eyettention seem to generalize better across patterns. ### Investigation of the Decoding Progress We further examine the denoising process of ScanDL to determine at which step the Gaussian noise is shaped into the embeddings representing the word IDs of the scanpath. Figure 6 depicts three Figure 4: Comparison of posterior effect estimates for psycholinguistic phenomena on reading measures between original and predicted scanpaths. Lines represent a \(95\%\) credible interval, means are denoted by dots. Figure 5: Reading measures of true and predicted scanpaths by ScanDL of the CELER and ZuCo datasets. t-SNE plots [10] at different steps of the decoding process. Only during the last 200 denoising steps do we see an alignment between the words in the sentence and the predicted fixations on words in the scanpath, and a clear separation between these predicted fixations and the predicted PAD tokens. At the very last denoising step, all PAD tokens are mapped to a small number of spatial representations. ## 7 Discussion The experimental results show that our model establishes a new state-of-the-art in generating human-like synthetic eye movement patterns across all investigated evaluation scenarios, including within- and across-dataset settings. Indeed, the similarity of the model's synthetic scanpaths to a human scanpath even exceeds the similarity between two scanpaths generated by two different human readers on the same stimulus. 
This indicates that the model learns to mimic an average human reader, abstracting away from reader-specific idiosyncrasies.6 Interestingly, the model's performance is better in the _New Reader_ than in both the _New Sentence_ and _New Reader/New Sentence_ setting, which stands in stark contrast to previous research which identified the generalization to novel readers as the main challenge [11]. Furthermore, ScanDL's SOTA performance in the _Across-Dataset_ setting, attaining parity with the human baseline, corroborates the model's generalizability. This generalizability is further underlined by the model emulating the variability in reading patterns observed in the human data, even when being evaluated across-dataset. Footnote 6: This is further supported by the NLD not changing when computed as average NLD between a ScanDL scanpath and _all_ human scanpaths on the same sentence in the test set. The omission of the positional embedding and the pre-trained BERT embedding in the ablation study highlights their importance -- the fact that discarding them yields a worse performance than omitting the sentence condition, in which case the model still receives the positional embedding, stresses the importance of sequential information, which is lost to a transformer model if not explicitly provided. Moreover, removing the sentence-condition also emphasizes the importance of the linguistic information contained in the sentence. Overall, the ablation study emphasizes the importance of modeling scanpath prediction as a dual-nature problem. In contrast to cognitive models, ScanDL was not designed to exhibit the same psycholinguistic phenomena as human readers. However, our psycholinguistic analysis demonstrates that ScanDL nevertheless captures the key phenomena observed in human readers which psycholinguistic theories build on. Even more, in contrast to the cognitive models, it appears to overfit less to certain reading patterns. These findings not only emphasize the high quality of the generated data, but also open the possibility to use ScanDL when designing psycholinguistic experiments: the experimental stimuli can be piloted by means of simulations with ScanDL to potentially detect unexpected patterns or confounds and address them before conducting the actual experiment with human readers. Further, we can use ScanDL for human-centric NLG evaluations using synthetic scanpaths. ## 8 Conclusion We have introduced a new state-of-the-art model for scanpath generation called ScanDL. It not only resolves the two major bottlenecks for cognitively enhanced and interpretable NLP, data scarcity and unavailability at inference time, but also promises to be valuable for psycholinguistics by producing high quality human-like scanpaths. Further, we have extended the application of diffusion models to discrete sequence-to-sequence problems. Figure 6: t-SNE plots of the continuous model output \(\hat{\mathbf{z}}_{t}\) at different steps of the \(2000\)-step denoising process. Step \(1999\) refers to the last iteration (all noise removed). ### Limitations Eye movement patterns in reading exhibit a high degree of individual differences between readers (Kuperman and Van Dyke, 2011; Jager et al., 2020; Haller et al., 2022, 2023). For a generative model of scanpaths in reading, this brings about a trade-off between group-level predictions and predictions accounting for between-reader variability. 
The fact that ScanDL outperforms the human baseline in terms of NLD indicates that it learns to emulate an average reader. Whereas this might be the desired behavior for a range of use case scenarios, it also means that the model is not able to concomitantly predict the idiosyncrasies of specific readers. We plan to address this limitation in future work by adding reader-specific information to the model. Relatedly, since our model has been trained and evaluated on a natural reading task, it remains unclear as to what extent it generalizes to task-specific datasets, which arguably might provide more informative scanpaths for the corresponding NLP downstream task. As for the reader-specific extension of the model, this issue might be addressed by adding the task as an additional input condition. On a more technical note, a major limitation of the presented model is its relatively high computational complexity in terms of run time and memory at inference time (see Section A.2 in the Appendix). Moreover, the metric used for model evaluation, the Normalized Levensthein Distance, might not be the ideal metric for evaluating scanpaths. Other metrics that have been used to measure scanpath similarity -- MultiMatch (Jarodzka et al., 2010) and ScanMatch (Cristino et al., 2010) -- have been questioned in terms of their validity in a recent study (Kummerer and Bethge, 2021); both metrics have systematically scored incorrect models higher than ground-truth models. A better candidate to use in place of the Normalized Levenshtein Distance might be the similarity score introduced by von der Malsburg et al. (2015), which has been shown to be sensitive to subtle differences between scanpaths on sentences that are generally deemed simple. However, the main advantage of this metric is that it takes into account fixation durations, which ScanDL, in its current version, is unable to predict. This inability to concomitantly predict fixation durations together with the fixation positions is another shortcoming of our model. However, we aim to tackle this problem in future work. Finally, we would like to emphasize that, although our model is able to capture psycholinguistic key phenomena of human sentence processing, it is _not_ a cognitive model and hence does not claim in any way that its generative process simulates or resembles the mechanisms underlying eye movement control in humans. ### Ethics Statement Working with human data requires careful ethical consideration. The eye-tracking corpora used for training and testing follow ethical standards and have been approved by the responsible ethics committee. However, in recent years it has been shown that eye movements are a behavioral biometric characteristic that can be used for user identification, potentially violating the right of privacy (Jager et al., 2020; Lohr and Komogortsev, 2022). The presented approach of using synthetic data at deployment time considerably reduces the risk of potential privacy violation, as synthetic eye movements do not allow to draw any conclusions about the reader's identity. Moreover, the model's capability to generate high-quality human-like scanpaths reduces the need to carry out eye-tracking experiments with humans across research fields and applications beyond the use case of gaze-augmented NLP. 
Another possible advantage of our approach is that by leveraging synthetic data, we can overcome limitations associated with the availability and representativeness of real-world data, enabling the development of more equitable and unbiased models. In order to train high-performing models for downstream tasks using gaze data, a substantial amount of training data is typically required. However, the availability of such data has been limited thus far. The utilization of synthetic data offers a promising solution by enabling the training of more robust models for various downstream tasks using gaze data. Nonetheless, the adoption of this approach raises important ethical considerations, as it introduces the potential for training models that can be employed across a wide range of tasks, including those that may be exploited for nefarious purposes. Consequently, there exists a risk that our model could be utilized for tasks that are intentionally performed in bad faith. ## Acknowledgements This work was partially funded by the German Federal Ministry of Education and Research under grant 01\(|\) S20043 and the Swiss National Science Foundation under grant 212276, and is supported by COST Action MultipEYE, CA21131, supported by COST (European Cooperation in Science and Technology).
2305.11557
Multiplicities of Representations in Algebraic Families
In this short note, we consider multiplicities of representations in general algebraic families, especially the upper semi-continuity of homological multiplicities and the local constancy of Euler-Poincare numbers. This generalizes the main result of Aizenbud-Sayag for unramified twisting families.
Li Cai, Yangyu Fan
2023-05-19T09:55:58Z
http://arxiv.org/abs/2305.11557v3
# Multiplicities of representations in algebraic families ###### Abstract. In this short notes, we consider multiplicities of representations in general algebraic families, especially the upper semi-continuity of homological multiplicities and the locally constancy of Euler-Poincare numbers. This generalizes the main result of Aizenbud-Sayag for unramified twisting families. Key words and phrases:Branching laws, Homological multiplicities, Spherical varieties 1991 Mathematics Subject Classification: 22E45, 20G25 ###### Contents * 1 Introduction * 2 Homological algebras * 3 Homological multiplicities ## 1. Introduction Let \(G\) be a reductive group over a \(p\)-adic field \(F\) and \(H\subset G\) be a closed _spherical_ reductive subgroup, i.e. \(H\) admits an open orbits in the flag variety of \(G\). To simplify notations, we assume the center \(Z\subset G\) intersects \(H\) trivially in the introduction. Let \(\operatorname{Rep}(G,\mathbb{C})\) be the category of complex smooth \(G(F)\)-representations. In the _relative Langlands program_ (see [17] etc), it is central to study the _multiplicity_\(m(\sigma):=\dim\operatorname{Hom}_{H(F)}(\sigma,\mathbb{C})\) for smooth admissible \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\). As suggested in [14], to study \(m(\sigma)\), it is more convenient to consider the homological multiplicities \(m^{i}(\sigma):=\dim\operatorname{Ext}_{H(F)}^{i}(\sigma,\mathbb{C})\) and the Euler-Poincare number \(\operatorname{EP}(\sigma):=\sum_{i\geq 0}(-1)^{i}m^{i}(\sigma)\) simultaneously. Usually, the Euler Poincare number \(\operatorname{EP}(\sigma)\) is easier to control and in many interesting circumstances, one may expect to deduce results on \(m(\sigma)\) from those of \(\operatorname{EP}(\sigma)\). For example, it is conjectured in [14] (see also [19, Conjecture 6.4,6.5]) that when the pair \((G,H)\) is _strongly tempered_, i.e. the matrix coefficients of tempered \(G(F)\)-representations are absolutely integrable on \(H(F)\), then \(m(\sigma)=\operatorname{EP}(\sigma)\) for any irreducible tempered \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\). Actually the stronger result \(m^{i}(\sigma)=0\) for \(i>0\) is known for the \((\operatorname{GL}_{n+1}\times\operatorname{GL}_{n},\operatorname{GL}_{n})\) -case (see [9]), the triple product case (see [7]), the \(H(F)\) compact case (see [1, Thm 2.14] etc) and for \(\sigma\) supercuspidal (see [19, Remark 6.6]). In this paper, we shall consider variations of \(m^{i}(\sigma)\) and \(\operatorname{EP}(\sigma)\) in families. Throughout this paper, we assume the following working hypothesis: The multiplicity \(m(\sigma)\) is _finite_ for all irreducible \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\), which will imply that \(m^{i}(\sigma)\) and \(\operatorname{EP}(\sigma)\) are all well-defined and finite for arbitrary finite length \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\) (see the discussion at the beginning of Section 3). Note that this hypothesis is already known in many cases [17, Theorem 5.1.5] and conjectured to hold for all spherical pairs. To explain the favor of the main result, let us start with the _unramified twisting family_. **Unramified twisting family**.: _Let \(P\subset G\) be a parabolic subgroup with Levi factor \(M\) and take \(\sigma\in\operatorname{Rep}(M,\mathbb{C})\) of finite length. 
Attached to the data \((P,M,\sigma)\), one has the unramified twisting family \(\left\{I_{P}^{G}(\sigma\chi)\middle|\chi\in\widehat{M}\right\}\) where \(\widehat{M}\) is the complex torus parameterizing unramified characters of \(M(F)\). Then as functions on the complex torus \(\widehat{M}\),_ * \(m(I_{P}^{G}(\sigma\chi))\) _is upper semi-continuous, i.e. for each_ \(n\in\mathbb{N}\)_, the set_ \(\left\{\chi\in\widehat{M}\middle|m(I_{P}^{G}\sigma\chi)\leq n\right\}\) _is open (see_ _[_13_, Appendix D]__);_ * \(\operatorname{EP}(I_{P}^{G}(\sigma\chi))\) _is constant (see_ _[_2_, Theorem E]__)._ When attacking arithmetic problems using \(p\)-adic methods, one usually needs to consider families of automorphic representations parameterized by the eigenvarieties, which are defined over \(p\)-adic fields and contain _non-smooth_ classical points in general (see [3]). For arithmetic applications such as \(p\)-adic special value formulae on eigenvarieties (see [10] etc), one is motivated to consider the following setting (following [12, 11]): Fix a subfield \(E\subset\mathbb{C}\) and let \(R\) be a finitely generated reduced \(E\)-algebra. Let \(\pi\) be a torsion-free \(R\)-module equipped with a \(R\)-linear \(G(F)\)-action which is * _smooth_, i.e. any \(v\in\pi\) is fixed by an open compact subgroup of \(G(F)\); * _admissible_, i.e. the submodule \(\pi^{K}\subset\pi\) of \(K\)-fixed elements is finitely generated over \(R\) for any compact open subgroup \(K\subset G(F)\); * _finitely generated_, i.e. \(\pi\) is finitely generated as a \(R[G(F)]\)-module. For any point \(x\in\operatorname{Spec}\left(R\right)\), let \(k(x)\) be the residue field and denote the category of smooth \(G(F)\)-representations over \(k(x)\) by \(\operatorname{Rep}(G,k(x))\). For \(\pi|_{x}:=\pi\otimes_{R}k(x)\in\operatorname{Rep}(G,k(x))\), set \[m^{i}(\pi|_{x}):=\dim_{k(x)}\operatorname{Ext}_{H(F)}^{i}\left(\pi|_{x},k(x) \right),\quad\operatorname{EP}(\pi|_{x}):=\sum_{i\geq 0}(-1)^{i}m^{i}(\pi|_{x})\] where the \(\operatorname{Ext}\)-groups are computed in \(\operatorname{Rep}(G,k(x))\). Note that by Proposition 3.2 below, all \(m^{i}(\pi|_{x})\) and \(\operatorname{EP}(\pi|_{x})\) are well-defined and finite under our running hypothesis. By results for unramified twisting families, it seems reasonable to propose the following conjecture: **Conjecture 1.1**.: _With respect to the Zariski topology on \(\operatorname{Spec}\left(R\right)\), \(m^{i}(\pi|_{x})\) is upper semi-continuous for each \(i\in\mathbb{N}\) and \(\operatorname{EP}(\pi|_{x})\) is locally constant._ _Remark 1.2_.: The following example (of unramified twisting family) in [7] illustrates that the upper-semicontinuity is optimal to expect in general. Let \(K/F\) be a quadratic field extension and \(\theta\in\operatorname{Gal}(K/F)\) be the non-trivial element. The spherical pair \((G:=\mathbb{G}_{m}\backslash\operatorname{Res}_{K/F}\operatorname{GL}_{2},H:= \mathbb{G}_{m}\backslash\operatorname{GL}_{2})\) is not strongly tempered. Consider the \(G(F)\)-representation \(I_{P}^{G}(\chi)\) where \(P\) is the parabolic subgroup consisting of upper triangular matrices and \(\chi=(\chi_{1},\chi_{2})\) is a character of the Levi quotient \(M(F)\cong(K^{\times})^{2}/F^{\times}\). 
Then \(m^{i}(I_{P}^{G}\chi)=0\), \(i\geq 2\) and * \(m(I_{P}^{G}\chi)\leq 1\) with the equality holds iff \(\chi_{1}|_{F^{\times}}=\chi_{2}|_{F^{\times}}=1\) or \(\mu(\chi):=\chi_{1}\cdot(\chi_{2}\circ\theta)=1\); * \(m^{1}(I_{P}^{G}\chi)\leq 1\) with the equality holds iff \(\chi_{1}|_{F^{\times}}=\chi_{2}|_{F^{\times}}=1\) and \(\mu(\chi)\neq 1\); * \(\operatorname{EP}(I_{P}^{G}\chi)\leq 1\) with the equality holds iff \(\mu(\chi)=1\). In particular, consider the family \(I_{P}^{G}(\sigma\chi_{\lambda})\) where \(\sigma=(\xi,1)\) with \(\xi:\;F^{\times}\backslash K^{\times}\to\mathbb{C}^{\times}\) is a non-trivial character and \(\chi_{\lambda}=(|\cdot|^{\lambda},|\cdot|^{-\lambda})\), \(\lambda\in\mathbb{C}\). Then as functions of \(\lambda\), \(m^{0}(I_{P}^{G}(\sigma\chi_{\lambda}))\) and \(m^{1}(I_{P}^{G}(\sigma\chi_{\lambda}))\) both jump at \(\lambda=0\) while \(\operatorname{EP}(I_{P}^{G}(\sigma\chi_{\lambda}))\) is constant. The following result, which summarizes Propositions 3.5 and 3.6 below, is our main result towards Conjecture 1.1. To state the result, we need to introduce more notations. As the local analogue of classical points in eigenvarieties, we fix a Zariski dense subset \(\Sigma\subset\operatorname{Spec}\left(R\right)\) of closed points. We say _the fiber rank of \(\pi\) is locally constant on \(\Sigma\)_ if for any open compact subgroup \(K\subset G(F)\), the function \(\dim_{k(x)}\pi^{K}|_{x}\) is locally constant on \(\Sigma\). For any \(x\in\operatorname{Spec}\left(R\right)\), denote by \((\pi|_{x})^{\vee}\) the smooth dual of \(\pi|_{x}\). **Theorem 1.3**.: _Let \(\pi\) be a finitely generated torsion-free smooth admissible \(R[G(F)]\)-module. Assume moreover that_ * _the fiber rank of_ \(\pi\) _is locally constant on_ \(\Sigma\)_;_ * _there exists a finitely generated smooth admissible torsion-free_ \(R[G(F)]\)_-module_ \(\tilde{\pi}\) _such that_ \(\tilde{\pi}|_{x}\cong(\pi|_{x})^{\vee}\) _for any_ \(x\in\Sigma\)_._ _Then Conjecture 1.1 holds for \(\pi\) if \(R\) is regular or \(Z(F)\) is compact._ Before explaining the proof, we make several remarks. _Remark 1.4_.: For the unramified twisting family, * the fiber rank is locally constant by construction; * the underlying space \(\widehat{M}\) is connected and smooth (hence regular); * the family \(\tilde{\pi}\) can be taken as \(\left\{I_{P}^{G}(\sigma^{\vee}\chi^{-1})\Big{|}\chi\in\widehat{M}\right\}\). Thus Theorem 1.3 covers unramified twisting families. _Remark 1.5_.: The locally constancy of fiber rank may holds for general finitely generated torsion-free smooth admissible \(R[G(F)]\)-modules. If \(\pi|_{x}\) is absolutely irreducible for all \(x\in\Sigma\) and \(G=\operatorname{GL}_{n}\), one can deduce the localy constancy of fiber rank from the theory of co-Whittaker modules in [12]. _Remark 1.6_.: If \(\pi|_{x}\) is absolutely irreducible for all \(x\in\Sigma\) and \(G\) is classical, one can construct the \(R[G(F)]\)-module \(\tilde{\pi}\) from \(\pi\) by the MVW involution (see [15]). Now we explain our approach to Theorem 1.3. Our proof is based on the following general upper semi-continuous theorem in homological algebras (we choose the language of derived categories in the whole proof. See Section 2 for basics.) Denote by \(D(R)\) the derived category of \(R\)-modules. **Proposition 1.7** (Upper semi-continuous theorem).: _Let \(M\in D(R)\) be a perfect complex, i.e. quasi-isomorphic to a bounded above and below complex of finite projective \(R\)-modules. 
Then_ * _the function_ \(\dim_{k(x)}H^{i}(M\otimes_{R}^{L}k(x))\) _is upper semicontinuous for each_ \(i\)_,_ * _the Euler Poincare number_ \(\sum_{i}(-1)^{i}\dim_{k(x)}H^{i}(M\otimes_{R}^{L}k(x))\) _is locally constant._ Proof.: Item (i) is [18, Lemma 0BDI] and Item (ii) is [18, Lemma 0BDJ] Now let \(i_{H}^{G}E\) be the compact induction of \(E\) and \(\mathcal{H}(K,E)\) be the level-\(K\) Hecke algebra over \(E\). Then by the Frobenius reciprocity law and Bernstein's decomposition theorem (see [2, Theorem 2.5(1)] etc), for properly chosen open compact subgroup \(K\subset G(F)\) (see Proposition 3.2 below) \[m^{i}(\pi|_{x})=\dim_{k(x)}\operatorname{Ext}_{\mathcal{H}(K,E)}^{i}((i_{H}^{ G}E)^{K},\tilde{\pi}^{K}|_{x})=\dim_{k(x)}H^{i}\left(\operatorname{RHom}_{ \mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K}|_{x})\right).\] Hence Theorem 1.3 holds if * \(\operatorname{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K})\) is perfect and * there is an isomorphism \[\operatorname{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K}) \otimes_{R}^{L}k(x)\cong\operatorname{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^ {K},\tilde{\pi}^{K}|_{x}).\] For (b), under our assumption on the local constancy of fiber rank, \(\tilde{\pi}^{K}\) is projective over \(R\) (up to shrinking \(\operatorname{Spec}{(R)}\), see Lemma 2.6 below). Thus the isomorphism in (b) is equivalent to \[\operatorname{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K}) \otimes_{R}^{L}k(x)\cong\operatorname{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^ {K},\tilde{\pi}^{K}\otimes_{R}^{L}k(x)).\] Now this isomorphism holds either \(k(x)\) or \((i_{H}^{G}E)^{K}\) is perfect by standard results in homological algebra (see Lemma 2.2). Precisely, * if \(R\) is regular, then \(k(x)\) is perfect (see Lemma 2.4 below); * if \(Z(F)\) is compact, \((i_{H}^{G}E)^{K}\) is perfect by the result of Schneider-Stuhler in [16] (see Proposition 3.4 below). With \((b)\) at hand, by some general criterion of perfectness given in Lemma 2.5 below, the complex \(\operatorname{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K})\) is perfect as \(\operatorname{Ext}_{\mathcal{H}(K,E)}^{i}((i_{H}^{G}E)^{K},\tilde{\pi}^{K})\) is finitely generated over \(R\) (see Lemma 3.1) and there is a positive integer \(N\) such that for any closed point \(x\), \(m^{i}(\pi|_{x})=0\) for any \(i\geq N\) (see Proposition 3.2). _Remark 1.8_.: We briefly compare our approach with that in [2], which deals with unramfied twisting families associated with \((P,M,\sigma)\). By Frobenius reciprocity law, \[\operatorname{Ext}_{H(F)}^{i}(I_{P}^{G}(\sigma\chi),\mathbb{C})\cong \operatorname{Ext}_{M}^{i}(r_{M}^{G}(i_{H}^{G}\mathbb{C}),\sigma^{\vee}\chi^ {-1})\] where \(r_{M}^{G}\) is the normalized Jacquet module functor from \(\operatorname{Rep}(G,\mathbb{C})\) to \(\operatorname{Rep}(M,\mathbb{C})\). In _loc.cit_, the authors work over \(\widehat{M}\) and make full advantage of the theory of Bernstein center and Bernstein decomposition to show that there is a perfect complex \(\mathcal{G}(M,\sigma)\) over \(\widehat{M}\) associated to \((M,\sigma^{\vee})\) such that \[\operatorname{Ext}_{M}^{i}(r_{M}^{G}(i_{H}^{G}\mathbb{C}),\sigma^{\vee}\chi^ {-1})=H^{i}\left(\operatorname{RHom}_{\mathbb{C}[\widehat{M}]}(\mathcal{G}(M, \sigma),\delta_{\chi})\right)\] where \(\delta_{\chi}\) is the skyscraper sheaf at \(\chi^{-1}\). Then the locally constancy of Euler-Poincare numbers holds by the semicontinuity theorem for coherent sheaves over smooth varieties. 
In comparison, our approach seems more direct: * it works over \(G\) and does not depend on the special form of the family; * it requires less results from representation theory (while uses more results in the homological algebra). We conclude the introduction by the local constancy of \(m(\pi|_{x})\). **Corollary 1.9**.: _Assume the pair \((G,H)\) is strongly tempered and_ * _the fiber rank of_ \(\pi\) _is locally constant on_ \(\Sigma\) _and there exists a finitely generated smooth admissible torsion-free_ \(R[G(F)]\)_-module_ \(\tilde{\pi}\) _such that_ \(\tilde{\pi}|_{x}\cong(\pi|_{x})^{\vee}\) _for any_ \(x\in\Sigma\)_,_ * _for any_ \(x\in\Sigma\)_,_ \((\pi|_{x})\otimes_{k(x),\tau}\mathbb{C}\) _is irreducible and tempered for some field embedding_ \(\tau:k(x)\hookrightarrow\mathbb{C}\) _Then \(m(\pi|_{x})\) is locally constant on \(\Sigma\) if the conjectural identity \(m(\sigma)=\operatorname{EP}(\sigma)\) holds for all irreducible tempered \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\)._ We remark that * when \(H(F)\) is compact, the upper semi-continuity of multiplicities holds under weaker assumptions (see Proposition 3.7 below); * when \((G,H)\) is strongly tempered and _Gelfand_, i.e. \(m(\sigma)\leq 1\) for all \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\) irreducible, the local constancy of multiplicities can be deduced from the upper semi-continuity and the meromorphy property of canonical local periods considered in [6] (see also [5] for analytic families). ## 2. Homological algebras For any (unital but possibly noncommutative) ring \(A\), denote by \(\operatorname{Mod}_{A}\) (resp. \(K(A)\)) the category of left \(A\)-modules (resp. complexes of \(A\)-modules). The derived category \((D(A),q)\) consists a category \(D(A)\) with a functor \(q:K(A)\to D(A)\) such that any functor \(F:\;K(A)\to\mathcal{C}\) which sends quasi-isomorphisms to isomorphisms factors through \(q:K(A)\to D(A)\). We usually simply denote the derived category by \(D(A)\). The tensor product and \(\operatorname{Hom}\) functor on \(\operatorname{Mod}_{A}\) admit derived version on \(D(A)\) (see [18, Chapter 15] for \(A\) commutative and [20] for general \(A\)). 
In particular, we record that * for any \(A\)-algebra \(A^{\prime}\), viewed as a left \(A^{\prime}\)-module and right \(A\)-module, the tensor product functor \[A^{\prime}\otimes_{A}-:\;\operatorname{Mod}_{A}\to\operatorname{Mod}_{A^{ \prime}},\quad M\mapsto A^{\prime}\otimes_{A}M\] has the derived version \[A^{\prime}\otimes_{A}^{L}-:\;D(A)\to D(A^{\prime}).\] Note that if \(M\in D(A)\) is represented by a bounded above complex \(P^{\bullet}\in K(A)\) of projective \(A\)-modules, \(A^{\prime}\otimes_{A}^{L}M\) is represented by \(A^{\prime}\otimes_{A}P^{\bullet}\); * for any \(N\in\operatorname{Mod}_{A}\), the functor \[\operatorname{Hom}_{A}(-,-):\;\operatorname{Mod}_{A}\to\operatorname{Mod}_{ \mathbb{Z}};\quad M\mapsto\operatorname{Hom}_{A}(M,N)\] has the derived version \[\operatorname{RHom}_{A}(-,N):\;D(A)\to D(\mathbb{Z}).\] Note that if \(M\in D(A)\) is represented by a bounded above complex \(P^{\bullet}\in K(A)\) of projective \(A\)-modules, \(\operatorname{RHom}_{A}(M,N)\) is represented by the complex \(\operatorname{Hom}(P^{\bullet},N).\) Moreover, if \(N\in\operatorname{Mod}_{A}\) admits a compatible left \(R\)-module structure for some commutative ring \(R\), the functor \(\operatorname{RHom}_{A}(-,N)\) admits a natural lifting, which we denote by the same notation, \[\operatorname{RHom}_{A}(-,N):\;D(A)\to D(R).\] The following results on base change morphisms in derived category are crucial to our approach. Recall that a complex \(M\in D(A)\) is called _pseudo-coherent_ (resp. _perfect_) if it is quasi-isomorphic to a bounded above (resp. above and below) complex of finite projective \(A\)-modules. Let \(R\) be a commutative ring. **Lemma 2.1**.: _Let \(A^{\prime}\) be a flat \(A\)-algebra and take \(N\in\operatorname{Mod}_{A^{\prime}}\). Then for any pseudo-coherent \(M\in D(A)\), there is a canonical isomorphism_ \[\operatorname{RHom}_{A}(M,N)\cong\operatorname{RHom}_{A^{\prime}}(A^{\prime} \otimes_{A}^{L}M,N).\] Proof.: Assume \(M\) is represented by the bounded above complex \(P^{\bullet}\in K(A)\) of finite projective \(A\)-modules. Then \(\operatorname{RHom}_{A}(M,N)\) is represented by \(\operatorname{Hom}_{A}(P^{\bullet},N)\) and \(\operatorname{RHom}_{A^{\prime}}(A^{\prime}\otimes_{A}^{L}M,N)\) is represented by \(\operatorname{Hom}_{A^{\prime}}(A^{\prime}\otimes_{A}P^{\bullet},N)\). The desired result follows from the canonical isomorphism \[\operatorname{Hom}_{A}(P,Q)\cong\operatorname{Hom}_{A^{\prime}}(A^{\prime} \otimes_{A}P,Q)\] for any \(A\)-module \(P\) and \(A^{\prime}\)-module \(Q\). Actually, much more general result holds(see [18, Lemma 0E1W]). **Lemma 2.2**.: _Assume \(N\in\operatorname{Mod}_{A}\) admits a compatible left \(R\)-module structure. Then for any \(M\in D(A)\), \(P\in D(R)\), the canonical map_ \[\operatorname{RHom}_{A}(M,N)\otimes_{R}^{L}P\to\operatorname{RHom}_{A}(M,N \otimes_{R}^{L}P)\] _is an isomorphism if either (i) \(P\) is perfect in \(D(R)\); or (ii) \(M\) is perfect in \(D(A)\)._ Proof.: For case \((i)\), by [18, Lemma 0ATI], we are reduced to the case that \(P\) is represented by the single term complex \(R\), which is straightforward. For case \((ii)\), one can represent \(M\) by a bounded complex of finite free modules and do explicit computations at the level of complexes. Actually the results holds in much more general context (see [18, Lemma 0ATK(i)(ii)]). **Lemma 2.3**.: _Assume \(A\) is an algebra over \(R\) and let \(R^{\prime}\) be a flat commutative \(R\)-algebra. 
Then for \(N\in\operatorname{Mod}_{A}\) and \(M\in D(A)\) is pseudo-coherent, the canonical morphism_ \[\operatorname{RHom}_{A}(M,N)\otimes_{R}^{L}R^{\prime}\to\operatorname{RHom}_ {A\otimes_{R}R^{\prime}}(M\otimes_{R}^{L}R^{\prime},N\otimes_{R}^{L}R^{\prime})\] _is an isomorphism. In particular if \(A\) is Noetherian, one has natural isomorphism_ \[\operatorname{Hom}_{A}(M,N)\otimes_{R}R^{\prime}\to\operatorname{Hom}_{A \otimes_{R}R^{\prime}}(M\otimes_{R}R^{\prime},N\otimes_{R}R^{\prime})\] _for any \(M\in\operatorname{Mod}_{A}\) finitely generated._ Proof.: Take a bounded above complex \(P^{\bullet}\) of finite free \(A\)-modules representing \(M\). Then \(\operatorname{RHom}_{A}(M,N)\otimes_{R}^{L}R^{\prime}\) is represented by \(\operatorname{Hom}_{A}(P^{\bullet},N)\otimes_{R}R^{\prime}\) and \(\operatorname{RHom}_{A\otimes_{R}R^{\prime}}(M\otimes_{R}^{L}R^{\prime},N \otimes_{R}^{L}R^{\prime})\) is represented by \(\operatorname{Hom}_{A\otimes_{R}R^{\prime}}(P^{\bullet}\otimes_{R}R^{\prime},N\otimes_{R}R^{\prime})\). The desired result follows from the canonical isomorphism \[\operatorname{Hom}_{A}(P,Q)\otimes_{R}R^{\prime}\cong\operatorname{Hom}_{A \otimes_{R}R^{\prime}}(P\otimes_{R}R^{\prime},Q\otimes_{R}R^{\prime})\] for any finite free \(A\)-module \(P\) and arbitrary \(A\)-module \(Q\). Actually the results holds in much more general context (see [18, Lemma 0A6A(3)]). Now we turn to perfect complexes over commutative rings. Let \(R\) be a commutative Noetherian ring. For any \(x\in\operatorname{Spec}\left(R\right)\), let \(k(x)\) be the residue field. **Lemma 2.4**.: _Assume \(R\) is regular. Then \(M\in\operatorname{Mod}_{R}\) is perfect in \(D(R)\) iff \(M\) is finitely generated._ Proof.: See [18, Lemma 066Z]. **Lemma 2.5**.: _A complex \(M\in D(R)\) is perfect if the following conditions holds:_ 1. _the_ \(R\)_-module_ \(H^{i}(M)\) _is finitely generated for each_ \(i\in\mathbb{Z}\)_;_ 2. _there exists_ \(a<b\in\mathbb{Z}\) _such that for all closed point_ \(x\in\operatorname{Spec}\left(R\right)\)_,_ \(H^{i}(M\otimes_{R}^{L}k(x))=0\) _if_ \(i\notin[a,b]\)_._ Proof.: By [18, Lemma 068W], when \(M\) is pseudo-coherent, Item (ii) implies \(M\) is perfect. By [18, Lemma 064T] the assumption \(R\) is Noetherian implies that all \(H^{i}(M)\) is pseudo-coherent. Thus by [18, Lemma 066B], \(M\) is pseudo-coherent if \(H^{i}(M)=0\) for all \(i>b+1\). To see this, consider the exact sequence \[0\to\mathfrak{p}^{n}/\mathfrak{p}^{n+1}\to R/\mathfrak{p}^{n+1}\to R/ \mathfrak{p}^{n}\to 0\] for any maximal ideal \(\mathfrak{p}\subset R\). By induction, one finds that \(H^{i}(M\otimes_{R}^{L}R/\mathfrak{p}^{n})=0\) for all \(i>b\). By [18, Lemma 0CQE] and [18, Proposition 0922], one has the short exact sequence \[0\to R^{1}\lim H^{i-1}(M\otimes_{R}^{L}R/\mathfrak{p}^{n})\to H^{i}(R\lim M \otimes_{R}^{L}R/\mathfrak{p}^{n})\to\lim H^{i}(M\otimes_{R}^{L}R/\mathfrak{p }^{n})\to 0.\] By [18, Lemma 0A06], one has \[H^{i}(M)\otimes_{R}\hat{R}_{\mathfrak{p}}=H^{i}(R\lim_{n}M\otimes_{R}^{L}R/ \mathfrak{p}^{n}).\] Thus \(H^{i}(M)\otimes_{R}\hat{R}_{\mathfrak{p}}=0\) for all \(i>b+1\) and all maximal ideal \(\mathfrak{p}\subset R\). Consequently, \(H^{i}(M)=0\) for all \(i>b+1\) and we are done. Finally, we record the following result for a commutative Noetherian ring \(R\). **Lemma 2.6**.: _For any finitely generated \(R\)-module \(M\), the fiber rank function_ \[\beta(x):\ \operatorname{Spec}\left(R\right)\to\mathbb{N};\quad x\mapsto \dim_{k(x)}M\otimes_{R}k(x)\] _is upper-semicontinuous. 
If \(R\) is moreover reduced, then \(M\) is projective iff \(\beta\) is locally constant._ Proof.: The first part follows from Proposition 1.7. For the second part, see [18, Lemma 0FWG]. ## 3. Homological multiplicities Let \((G,H)\) be a spherical pair of reductive groups over \(p\)-adic field \(F\). Let \(I_{H}^{G}\mathbb{C}\) be the normalized induction, i.e. the space \[\{f:\ G(F)\to\mathbb{C}\ \text{smooth}\ |\ f(hg)=f(g)\ \forall\ h\in H(F),\ g\in G (F)\}\] on which \(G(F)\) acts by right translation and \(i_{H}^{G}\mathbb{C}\subset I_{H}^{G}\mathbb{C}\) be the normalized compact induction consisting of functions which are compactly supported modulo \(H(F)\). By [14, Proposition 2.5], \[m^{i}(\sigma)=\dim_{\mathbb{C}}\operatorname{Ext}_{G(F)}^{i}(\sigma,I_{H}^{G} \mathbb{C})=\dim_{\mathbb{C}}\operatorname{Ext}_{G(F)}^{i}(i_{H}^{G}\mathbb{C },\sigma^{\vee}),\quad\forall\ \sigma\in\operatorname{Rep}(G,\mathbb{C}).\] For any compact open subgroup \(K\subset G(F)\), let \(\mathcal{H}(K,\mathbb{C})\) be the Hecke algebra of \(\mathbb{C}\)-valued bi-\(K\)-invariant Schwartz functions on \(G(F)\). Then by Bernstein's decomposition theorem (see [2, Theorem 2.5(1)] etc), there exists a neighborhood basis \(\{K\}\) of \(1\in G(F)\) consisting of _splitting_ (see [1] for the notation) open compact subgroups such that \(\mathcal{H}(K,\mathbb{C})\) is Noetherian, the subcategory \(\mathcal{M}(G,K,\mathbb{C})\) of representations generated by their \(K\)-fixed vectors is a direct summand of \(\operatorname{Rep}(G,\mathbb{C})\) and the functor \(\sigma\mapsto\sigma^{K}\) induces an equivalence of categories \(\mathcal{M}(G,K,\mathbb{C})\cong\operatorname{Mod}_{\mathcal{H}(K,\mathbb{C})}\). Thus for \(\sigma^{\vee}\in M(G,K,\mathbb{C})\) with \(K\) splitting, \[\operatorname{Ext}_{\mathcal{H}(K,\mathbb{C})}^{i}((i_{H}^{G}\mathbb{C})^{K}, (\sigma^{\vee})^{K})\cong\operatorname{Ext}_{G(F)}^{i}(i_{H}^{G}\mathbb{C}, \sigma^{\vee}),\quad\forall\ i\in\mathbb{Z}.\] Under our working hypothesis \[\text{the multiplicity $m(\sigma)$ is $finite}\ \text{for all irreducible $\sigma\in\operatorname{Rep}(G,\mathbb{C})$},\] \(i_{H}^{G}\mathbb{C}\) is _locally finitely generated_, i.e. for any compact open subgroup \(K\subset G(F)\), \((i_{H}^{G}\mathbb{C})^{K}\) is finitely generated over \(\mathcal{H}(K,\mathbb{C})\), by [1, Theorem A]. Thus for \(K\) splitting, \((i_{H}^{G}\mathbb{C})^{K}\) admits a resolution by finite projective \(\mathcal{H}(K,\mathbb{C})\)-modules and consequently for \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\) such that \(\sigma^{\vee}\in M(G,K,\mathbb{C})\), \[m^{i}(\sigma)=\dim_{\mathbb{C}}\operatorname{Ext}_{\mathcal{H}(K,\mathbb{C})} ^{i}((i_{H}^{G}\mathbb{C})^{K},(\sigma^{\vee})^{K})<\infty,\quad\forall i\in \mathbb{N}.\] In particular, for any \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\) of finite length, \(m^{i}(\sigma)\) is finite for all \(i\). Finally by [14, Proposition 2.9], for any finite length \(\sigma\in\operatorname{Rep}(G,\mathbb{C})\), \(m^{i}(\sigma)=0\) for \(i>d(G)\), the split rank of \(G\). We now descend all the results above from the coefficient field \(\mathbb{C}\) to a fixed subfield \(E\subset\mathbb{C}\). Let \(i_{H}^{G}E\in\operatorname{Rep}(G,E)\) be the compact induction of the trivial \(H(F)\)-representation \(E\). For any open compact subgroup \(K\subset G(F)\), let \(\mathcal{H}(K,E)\) be the Hecke algebra of \(E\)-valued bi-\(K\)-invariant Schwartz functions on \(G(F)\). 
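Before turning to the finiteness statements below, it may help to spell out, in the simplest possible case, the semicontinuity mechanism of Proposition 1.7 on which Propositions 3.5 and 3.6 ultimately rely; the following toy computation is our own illustration and is not taken from the references. Take \(R=E[t]\) and the perfect complex \(M=[R\xrightarrow{\,t\,}R]\) placed in degrees \(0\) and \(1\). For a closed point \(x\in\operatorname{Spec}\left(R\right)\), the complex \(M\otimes_{R}^{L}k(x)\) is represented by \([k(x)\xrightarrow{\,t(x)\,}k(x)]\), so that \[\dim_{k(x)}H^{i}(M\otimes_{R}^{L}k(x))=\begin{cases}0,&t(x)\neq 0,\\ 1,&t(x)=0,\end{cases}\quad i=0,1,\qquad\sum_{i}(-1)^{i}\dim_{k(x)}H^{i}(M\otimes_{R}^{L}k(x))=0\ \text{for all }x.\] Each individual cohomology dimension jumps, and only upward, at the closed point \(t=0\), while the Euler-Poincare number is constant on \(\operatorname{Spec}\left(R\right)\), exactly as asserted in Proposition 1.7.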
**Lemma 3.1**.: _For any splitting open compact subgroup \(K\subset G(F)\), the Hecke algebra \(\mathcal{H}(K,E)\) is Noetherian and \((i_{H}^{G}E)^{K}\) is finitely generated over \(\mathcal{H}(K,E)\)._ Proof.: Note that \((i_{H}^{G}E)^{K}\otimes_{E}\mathbb{C}=(i_{H}^{G}\mathbb{C})^{K}\). Take generators \(\{y_{i}=\sum_{j}f_{i,j}\otimes a_{i,j}\}\) of \((i_{H}^{G}\mathbb{C})^{K}\) over \(\mathcal{H}(K,\mathbb{C})\) with \(f_{i,j}\in(i_{H}^{G}E)^{K}\). Then clearly \(\{f_{i,j}\}\) generate \((i_{H}^{G}E)^{K}\) over \(\mathcal{H}(K,E)\). In particular, \(i_{H}^{G}E\) is locally finitely generated. Take any ascending chain of left ideals of \(\mathcal{H}(K,E)\) \[I_{0}\subset I_{1}\subset\cdots\subset I_{n}\subset\cdots.\] Then \(I_{i}\otimes_{E}\mathbb{C}\) forms an ascending chain of left ideals of \(\mathcal{H}(K,\mathbb{C})\cong\mathcal{H}(K,E)\otimes_{E}\mathbb{C}\). Since \(\mathcal{H}(K,\mathbb{C})\) is Noetehrian, we have that for some \(n\), \[I_{n}\otimes_{E}\mathbb{C}=I_{n+1}\otimes_{E}\mathbb{C}=\cdots.\] Consequently, \(I_{n}=I_{n+1}=\cdots\) and \(\mathcal{H}(K,E)\) is Noetherian. Let \(\operatorname{Rep}(G,E)\) be category of smooth \(G(F)\)-representations over \(E\) and for any open compact subgroup \(K\subset G(F)\), let \(M(G,K,E)\subset\operatorname{Rep}(G,E)\) be the subcategory of representations generated by their \(K\)-fixed vectors. For any \(\sigma\in\operatorname{Rep}(G,E)\), set \[m^{i}(\sigma):=\dim_{E}\operatorname{Ext}_{H(F)}^{i}(\sigma,E),\ \forall\ i\in\mathbb{N},\quad \operatorname{EP}(\sigma):=\sum_{i}(-1)^{i}m^{i}(\sigma).\] **Proposition 3.2**.: _For any \(\sigma\in\operatorname{Rep}(G,E)\) such that \(\sigma^{\vee}\in M(G,K,E)\), the homological multiplicity_ \[m^{i}(\sigma)=\dim_{E}\operatorname{Ext}_{\mathcal{H}(K,E)}^{i}((i_{H}^{G}E)^{K}, (\sigma^{\vee})^{K})\quad\forall\ i\in\mathbb{N}.\] _If moreover \(\sigma\) has finite length, then \(m^{i}(\sigma)\) is finite for each \(i\geq 0\) and \(0\) for \(i>d(G)\). In particular, \(\operatorname{EP}(\sigma)\) is actually a finite sum._ Proof.: For any \(\sigma\in\operatorname{Rep}(G,E)\), set \(\sigma_{\mathbb{C}}:=\sigma\otimes_{E}\mathbb{C}\). Then for any \(\sigma\in\operatorname{Rep}(G,E)\) and \(\theta\in\operatorname{Rep}(G(F),\mathbb{C})\) \[\operatorname{Hom}_{G(F)}(\sigma,\theta)=\operatorname{Hom}_{G(F)}(\sigma_{ \mathbb{C}},\theta).\] Thus computing using any projective resolution of \(\sigma\), one finds \[\operatorname{Ext}^{i}_{G(F)}(\sigma,I^{G}_{H}E)\otimes_{E}\mathbb{C}\cong \operatorname{Ext}^{i}_{G(F)}(\sigma_{\mathbb{C}},I^{G}_{H}\mathbb{C})\quad \forall\ i\geq 0.\] By Lemma 3.1, \((i^{G}_{H}E)^{K}\in D(\mathcal{H}(K,E))\) is pseudo-coherent for \(K\) splitting. Thus by Lemma 2.3, \[\operatorname{Ext}^{i}_{\mathcal{H}(K,E)}((i^{G}_{H}E)^{K},(\sigma^{\vee})^{K })\otimes_{E}\mathbb{C}\cong\operatorname{Ext}^{i}_{\mathcal{H}(K,\mathbb{C} )}((i^{G}_{H}\mathbb{C})^{K},(\sigma^{\vee}_{\mathbb{C}})^{K})\quad\forall\ i \geq 0.\] From the corresponding results for \(\sigma_{\mathbb{C}}\), one deduce that * if \(\sigma^{\vee}\in\operatorname{Rep}(G,K,E)\), \[m^{i}(\sigma)=\dim_{E}\operatorname{Ext}^{i}_{\mathcal{H}(K,E)}((i^{G}_{H}E)^ {K},(\sigma^{\vee})^{K}),\ \forall\ i\geq 0,\] * if \(\sigma\) has finite length, \(m^{i}(\sigma)\) is finite for all \(i\geq 0\) and \(m^{i}(\sigma)=0\) if \(i>d(G)\). When \(Z(F)\) is compact, one has better understanding of \(i^{G}_{H}E\) by the following result from [16, Theorem II.3.1, Corollary II.3.2]. 
For any character \(\chi:Z(F)\to\mathbb{C}^{\times}\), let \(\operatorname{Rep}(G,\chi)\) be the category of smooth complex \(G(F)\)-representations with central character \(\chi\). **Proposition 3.3**.: _There exists a neighborhood basis \(\{U\}\) of \(1\in G(F)\) consisting of splitting open compact subgroups such that any \(\sigma\in\operatorname{Rep}(G(F),\chi)\cap\mathcal{M}(G,U,\mathbb{C})\) admits an explicit bounded above and below resolution by projective objects in \(\operatorname{Rep}(G(F),\chi)\)._ **Proposition 3.4**.: _Assume \(Z(F)\) is compact. Then for any open compact subgroup \(U\) in Proposition 3.3, \((i^{G}_{H}E)^{U}\in D(\mathcal{H}(U,E))\) is perfect._ Proof.: Since \(Z(F)\) is compact, there exist finitely many characters \(\chi:\ Z(F)\to\mathbb{C}^{\times}\) such that for any \(W\in\mathcal{M}(G,U,\mathbb{C})\), \(W=\oplus_{\chi}W_{\chi}\) with \[W_{\chi}:=\{v\in W\mid\pi(g)v=\chi(g)v\quad\forall\ g\in Z(F)\}\neq 0.\] Let \(V\subset i^{G}_{H}\mathbb{C}\) be the sub-representation generated by \((i^{G}_{H}\mathbb{C})^{U}\). Then for each \(i\in\mathbb{N}\) and \(W\in M(G,U,\mathbb{C})\), \[\operatorname{Ext}^{i}_{G}(V,W)=\oplus_{\chi}\operatorname{Ext}^{i}_{G}(V_{ \chi},W_{\chi})\] where the Ext-groups on the right hand are computed in the category \(\operatorname{Rep}(G(F),\chi)\). Thus by Proposition 3.3, there exists a positive integer \(N\) such that for any \(W\in\mathcal{M}(G,U,\mathbb{C})\), \[\operatorname{Ext}^{i}_{G}(V,W)=0,\quad\forall\ i>N.\] By [2, Theorem 2.5(1)], for any \(\mathcal{H}(U,\mathbb{C})\)-module \(M\), \[\operatorname{Ext}^{i}_{\mathcal{H}(U,\mathbb{C})}((i^{G}_{H}\mathbb{C})^{U},M )=0,\quad\forall i>N.\] Thus by Lemma 2.3, for any \(\mathcal{H}(U,E)\)-module \(M\), \[\operatorname{Ext}^{i}_{\mathcal{H}(U,E)}((i^{G}_{H}E)^{U},M)=0,\quad\forall i >N.\] Take any resolution of \((i^{G}_{H}E)^{U}\) \[\cdots\to P_{N+1}\to P_{N}\to\cdots P_{1}\to P_{0}\to 0\cdots\] by finite projective \(\mathcal{H}(U,E)\)-modules. Let \(Q=\operatorname{coker}(P_{N+2}\to P_{N+1})\). Since \[\operatorname{Ext}^{N+1}_{\mathcal{H}(U,E)}((i^{G}_{H}\mathbb{C})^{K},P_{N})=0,\] the natural morphism \[\operatorname{Hom}(P_{N},P_{N})\to\operatorname{Hom}(Q,P_{N})\] is surjective and thus \(Q\) is projective. Consequently, \((i^{G}_{H}E)^{U}\) is perfect as it is quasi-isomorphic to \[\cdots 0\to Q\to P_{N}\to\cdots\to P_{0}\to 0\cdots.\] Now let \(R\) be a finitely generated reduced \(E\)-algebra and fix a Zariski dense subset \(\Sigma\) of \(\operatorname{Spec}\,(R)\) of closed points. Let \(\pi\) be a torsion-free smooth admissible finitely generated \(R[G(F)]\)-module. We moreover assume that * the fiber rank of \(\pi\) is locally constant on \(\Sigma\), * there exists a finitely generated smooth admissible torsion-free \(R[G(F)]\)-module \(\tilde{\pi}\) such that \(\tilde{\pi}|_{x}\cong(\pi|_{x})^{\vee}\) for any \(x\in\Sigma\). **Proposition 3.5**.: _If \(R\) is regular, \(m^{i}(\pi|_{x})\) is upper semi-continuous for each \(i\in\mathbb{N}\) and \(\mathrm{EP}(\pi|_{x})\) is locally constant._ Proof.: Take a splitting compact open subgroup \(K\subset G(F)\) such that \(\tilde{\pi}^{K}\) generates \(\tilde{\pi}\). By the duality between \(\pi^{K}|_{x}\) and \(\tilde{\pi}^{K}|_{x}\) and Lemma 2.6, upon shrinking \(\mathrm{Spec}\,(R)\) to an open subset containing \(\Sigma\) we can and will assume the fiber rank of \(\tilde{\pi}^{K}\) is locally constant on \(\Sigma\) and thus the \(R\)-module \(\tilde{\pi}^{K}\) is finite projective. 
Since \(R\) is regular, \(k(x)\in D(R)\) is perfect for closed \(x\in\mathrm{Spec}\,(R)\) by Lemma 2.4. Thus by Item (i) of Lemma 2.2, one has canonical isomorphisms \[\mathrm{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K})\otimes_{R}^ {L}k(x)\cong\mathrm{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K} |_{x}).\] By Lemma 2.1 and Proposition 3.2, one has \(m^{i}(\pi|_{x})=\dim_{k(x)}\mathrm{Ext}^{i}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^ {K},\tilde{\pi}^{K}|_{x}).\) By Proposition 1.7, to finish the proof it suffices to show the complex \(\mathrm{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K}|_{x})\) is perfect in \(D(R)\). Note that the complex \(\mathrm{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K})\) is represented by \(\mathrm{Hom}_{\mathcal{H}(K,E)}(\boldsymbol{P}^{\star},\tilde{\pi}^{K})\). Here \(P^{\star}\) is any resolution \((i_{H}^{G}E)^{K}\) by finite free \(\mathcal{H}(K,E)\)-modules, whose existence is guaranteed by Lemma 3.1. Thus \(H^{i}(\mathrm{RHom}_{\mathcal{H}(K,E)}((i_{H}^{G}E)^{K},\tilde{\pi}^{K}))\) are finitely generated as \(R\)-modules for each \(i\in\mathbb{Z}\). Now the desired perfectness follows from Lemma 2.5 and Proposition 3.2. **Proposition 3.6**.: _If \(Z(F)\) is compact, \(m^{i}(\pi|_{x})\) is upper semi-continuous for each \(i\in\mathbb{N}\) and \(\mathrm{EP}(\pi|_{x})\) is locally constant._ Proof.: By Proposition 3.3 and 3.4, one can take a splitting compact open subgroup \(U\subset G(F)\) such that \(\tilde{\pi}^{U}\) generates \(\tilde{\pi}\) and \((i_{H}^{G}E)^{U}\in D(\mathcal{H}(U,E))\) is perfect. Thus the complex \(\mathrm{RHom}_{\mathcal{H}(U,E)}((i_{H}^{G}E)^{U},\tilde{\pi}^{U})\) is also perfect and by Item (ii) of Proposition 2.2, \[\mathrm{RHom}_{\mathcal{H}(U,E)}((i_{H}^{G}E)^{U},\tilde{\pi}^{U})\otimes_{R}^ {L}k(x)\cong\mathrm{RHom}_{\mathcal{H}(U,E)}((i_{H}^{G}E)^{U},\tilde{\pi}^{U} \otimes_{R}^{L}k(x)).\] The remaining part of the proof proceeds similarly as in Proposition 3.5. Finally, we remark that when \(H(F)\) is compact, the upper semi-continuity of \(m^{i}(\pi|_{x})\) holds for all torsion-free finitely generated smooth admissible \(R[G(F)]\)-modules. **Proposition 3.7**.: _Assume \(H(F)\) is compact. Then for any torsion-free finitely generated smooth admissible \(R[G(F)]\)-module \(\pi\), the function \(\mathrm{EP}(\pi|_{x})=m^{0}(\pi|_{x})\) is upper semi-continuous on \(\mathrm{Spec}\,(R)\)._ Proof.: By [1, Theorem 2.14], \(m^{i}(\pi|_{x})=0\) for each \(i\geq 1\) and \(\mathrm{EP}(\pi|_{x})=m^{0}(\pi|_{x})\) for any \(x\in\mathrm{Spec}\,(R)\). Let \(\pi_{H}\) (resp. \(\pi^{H}\)) be the \(H(F)\)-coinvariant (resp. \(H(F)\)-invariant) of \(\pi\). Since \(H(F)\) is compact, the natural map \(\pi^{H}\to\pi_{H}\) is an isomorphism and for any \(x\in\mathrm{Spec}\,(R)\), \((\pi^{H})|_{x}\cong(\pi|_{x})^{H}\cong(\pi|_{x})_{H}.\) In particular, \(m^{0}(\pi|_{x})=\dim_{k(x)}(\pi|_{x})_{H}=\dim_{k(x)}\pi^{H}|_{x}\). To finish the proof, it suffices to show \(\pi^{H}\) is coherent by Lemma 2.6. Note that [2, Theorem 2.5(1)] actually works for any algebraically closed field of characteristic zero. 
Thus by Proposition 3.2, for each generic point \(\eta\) of \(\mathrm{Spec}\,(R)\) and some splitting open subgroup \(K\) \[\dim_{k(\eta)}\pi^{H}|_{\eta}=\dim_{k(\eta)}\mathrm{Hom}_{G(F)}((i_{H}^{G}k( \eta))^{K},(\pi|_{\eta})^{\vee,K})<\infty.\] Since the diagonal map \(\pi\hookrightarrow\prod_{\eta}\pi|_{\eta}\) is an injection, there exists an open compact subgroup \(K^{\prime}\subset G(F)\) such that \(\pi^{H}\subset\pi^{K^{\prime}}\) and consequently \(\pi^{H}\) is coherent. **Acknowledgement** The debt this work owes to [14] and [2] is clear. We would like to thank Professor Ye Tian for his consistent encouragement, and Professor Haoran Wang for helpful discussions on the projective resolutions of smooth representations.
2302.05585
Searching for Unconventional Superfluid in Excitons of Monolayer Semiconductors
It is well known that two-dimensional (2D) bosons in homogeneous space cannot undergo real Bose-Einstein condensation, and the superfluid to normal phase transition is of Berezinskii-Kosterlitz-Thouless (BKT) type, associated with vortex-antivortex pair unbinding. Here we point out a 2D bosonic system whose low energy physics goes beyond the conventional paradigm of 2D {\it homogeneous} bosons, i.e., intralayer excitons in monolayer transition metal dichalcogenides. With intrinsic valley-orbit coupling and valley Zeeman energy, the exciton dispersion becomes linear at small momentum, giving rise to a series of novel features. The critical temperature of Bose-Einstein condensation of these excitons is nonzero, suggesting true long-range order in a 2D homogeneous system. The dispersion of the Goldstone mode at long wavelength has the form $\varepsilon(\boldsymbol{q})\sim\sqrt{q}$, in contrast to the conventional linear phonon spectrum. The vortex energy deviates from the usual logarithmic dependence on system size, acquiring an additional linear term. The superfluid to normal phase transition is no longer of BKT type for system sizes beyond a characteristic scale, without a discontinuous jump in superfluid density. With the recent experimental progress on exciton fluids at thermal equilibrium in monolayer semiconductors, our work points out an experimentally accessible system in which to search for unconventional 2D superfluids beyond the BKT paradigm.
Wei Chen, Chun-Jiong Huang, Qizhong Zhu
2023-02-11T03:16:12Z
http://arxiv.org/abs/2302.05585v2
# Searching for unconventional superfluid in exciton condensate of monolayer semiconductors ###### Abstract It is well known that two-dimensional (2D) bosons in homogeneous space cannot undergo real Bose-Einstein condensation, and the superfluid to normal phase transition is Berezinskii-Kosterlitz-Thouless (BKT) type, associated with vortex-antivortex pair unbinding. Here we point out a 2D bosonic system whose low energy physics goes beyond conventional paradigm of 2D _homogeneous_ bosons, i.e., intralayer excitons in monolayer transition metal dichalcogenides. With intrinsic valley-orbit coupling and external magnetic field, exciton dispersion becomes linear at small momentum, giving rise to a series of novel features. The critical temperature of Bose-Einstein condensation of these excitons is nonzero, suggesting true long-range order in 2D homogeneous system. The dispersion of Goldstone mode at long wavelength has the form \(\varepsilon(\mathbf{q})\sim\sqrt{q}\), in contrast to conventional linear phonon spectrum. The vortex energy deviates from the usual logarithmic form with respect to system size, but instead has an additional linear term. Superfluid to normal phase transition is no longer BKT type for system size beyond a characteristic scale, without discontinuous jump in superfluid density. With the recent experimental progress towards realization of exciton condensate in monolayer semiconductors, our work provides an experimentally feasible system to search for unconventional 2D superfluids beyond BKT paradigm. In two-dimensional (2D) homogeneous systems, it is well known that continuous symmetry cannot be broken spontaneously according to Mermin-Wagner theorem [1; 2], and there is no true long-range order. As a special example, Bose-Einstein condensation (BEC) critical temperature in 2D is zero. Nevertheless, superfluid is still possible at finite temperature, and the transition from superfluid to normal phase is described by the Berezinskii-Kosterlitz-Thouless (BKT) theory [3; 4], where the underlying mechanism is the vortex-antivortex pair unbinding at high temperature. This generic paradigm is successful in the description of a variety of 2D superfluids, including liquid helium films [5], superconductors [6], cold atomic gases [7; 8], exciton-polariton condensates [9], and dipolar excitons [10]. Generically, for a 2D bosonic system, the nature of low temperature phases depends crucially on the density of states. Deviation from homogeneous space or parabolic dispersion may lead to superfluids beyond the conventional paradigm. For example, when ultracold bosonic atoms are confined in a harmonic trap, the BEC critical temperature is nonzero [11]. Similar examples include bosons confined on the surface of a sphere, where a finite BEC critical temperature also exists due to finite size effect [12]. On the other hand, for homogeneous bosons with quartic dispersion realized at the transition point of spin-orbit coupled gases, BKT transition temperature vanishes and the low temperature phase is characterized by an algebraic order [13]. This is an interesting example of enhanced low energy fluctuations brought by increased density of states. The contrary case, i.e., interacting bosons with single-particle dispersion \(\epsilon(k)\sim k^{\nu}\) (\(\nu<2\)) in realistic experimental systems are still lacking. 
In a word, 2D _homogeneous_ bosons beyond the conventional paradigm are quite rare and highly interesting, and will surely enrich our understanding of such fundamental concepts as BEC and superfluidity. In recent years, there has been growing interest in the realization of exciton condensation in transition metal dichalcogenide (TMD) based 2D materials [14; 15; 16; 17; 18; 19; 20; 21; 22]. Besides the long-sought interlayer exciton condensation in heterobilayers, the possibility of intralayer exciton condensation in monolayer TMD cannot be excluded either [23], despite its relatively short lifetime. In fact, the exciton radiative lifetime can be tailored by various techniques [24; 25]. In particular, experimental progress has been achieved in the condensation of intralayer excitons in monolayer TMD [26]. All these point to exciting possibilities for realizing an intralayer exciton condensate and exploring its unusual properties. Here we point out that in monolayer TMD, intralayer excitons with linear dispersion [27; 28] provide another example beyond the conventional paradigm. The center-of-mass motion of intralayer excitons in monolayer TMD features intrinsic valley-orbit coupling [27; 28], originating from the electron-hole Coulomb exchange interaction. With an additional external magnetic field introducing a valley Zeeman energy, the lower branch of the intralayer exciton dispersion becomes linear (see Fig. 1(b)). We will reveal that this special dispersion endows intralayer excitons with a variety of novel features in low energy physics. Our main findings include the following: (i) With an external magnetic field, the BEC critical temperature of this 2D _homogeneous_ system is nonzero and exhibits a rapid increase with the field strength; (ii) The Bogoliubov excitation spectrum of this exciton condensate shows \(\varepsilon(\mathbf{q})\sim\sqrt{q}\) behaviour at long wavelength in the presence of a magnetic field, which is an unusual form of gapless Goldstone mode, in contrast to the conventional phonon excitation. (iii) The vortex energy deviates from the logarithmic form with respect to system size, resulting in a non-BKT-type phase transition for large system sizes. There exists a characteristic system size, beyond which the superfluid to normal phase transition evolves from BKT type to 3D-like, without a discontinuous jump in the superfluid density. This crossover can also be observed in a single system by tuning the strength of the external magnetic field. (iv) With the increase of temperature, the system undergoes a two-step phase transition, first from a BEC with long-range order to a superfluid with quasi-long-range order, and then to a normal phase. These novel phases can be experimentally detected by measuring the spatial and temporal coherence of the photons emitted by excitons. _Monolayer exciton dispersion_. There are two inequivalent valleys at the Brillouin zone corners of monolayer TMD, denoted as the \(\pm K\) valleys. Valley excitons in monolayer TMD behave as a pseudospin-1/2 bosonic system, whose valley pseudospin is coupled with the center-of-mass momentum of the excitons [27; 28], giving rise to the valley-orbit coupling. The effective Hamiltonian describing the center-of-mass motion of intralayer excitons with valley-orbit coupling reads \[\hat{H}_{0}=\frac{\hbar^{2}Q^{2}}{2m}+AQ+AQ\cos(2\theta_{\mathbf{Q}})\sigma_{x}+AQ\sin(2\theta_{\mathbf{Q}})\sigma_{y}+\delta\sigma_{z}. \tag{1}\]
Here \(\mathbf{Q}\) is the center-of-mass momentum of the exciton with magnitude \(Q=|\mathbf{Q}|\) and angle \(\theta_{\mathbf{Q}}\), \(m\approx 1.1m_{0}\) is the exciton effective mass, \(\sigma_{i}\) (\(i=x,y,z\)) are the Pauli matrices of the valley pseudospin, and \(A\approx 0.9\) eV\(\cdot\)Å is the valley-orbit coupling strength [28], related to the Coulomb exchange interaction. We also introduce a term \(\delta\sigma_{z}\), known as the valley Zeeman energy [29; 30], whose strength is tunable by applying an out-of-plane magnetic field. The non-interacting dispersion has two branches, given by \(\xi_{\pm}(\mathbf{Q})=\hbar^{2}Q^{2}/2m+AQ\pm\sqrt{A^{2}Q^{2}+\delta^{2}}\), as shown in Fig. 1. Figure 1: Single exciton dispersion for \(\delta=0\) (a) and \(\delta=5\) meV (b). Bogoliubov excitation spectra of an exciton condensate at zero momentum for \(\delta=0\) (c) and \(\delta=5\) meV (d). Inset of (d) shows the zoom-in plot of the dispersion \(\varepsilon(\mathbf{q})\sim\sqrt{q}\). All spectra in (a)-(d) have rotational symmetry in the 2D plane. The exciton-exciton interaction strengths chosen throughout this paper are \(c_{1}=1.0\) eV\(\cdot\)nm\({}^{2}\), \(c_{2}=0.6\) eV\(\cdot\)nm\({}^{2}\), within the same order of magnitude as the values calculated in Refs. [31; 32]. \(n_{0}=9.9*10^{5}\) cm\({}^{-2}\). For finite \(\delta\), the lower branch of the dispersion features a linear spectrum at small momentum, distinct from the usual parabolic dispersion, which brings dramatic changes to the low energy properties of this system. The range of the linear dispersion defines a characteristic momentum \(Q_{c}\), at which the parabolic and linear terms are comparable in weight, and corresponds to a characteristic system size \(L_{c}=1/Q_{c}\), both shown in Fig. 3(b). _Non-interacting BEC critical temperature_. In the absence of an external magnetic field, the lower branch of the dispersion is parabolic, and the BEC critical temperature obviously vanishes, as in conventional 2D bosonic systems. With an external magnetic field (\(\delta\neq 0\)), the lower branch of the dispersion is linear at small momentum, which modifies the low energy density of states and renders a BEC at finite temperature possible. We first neglect the exciton-exciton interaction and calculate the non-interacting condensation temperature \(T_{\rm BEC}\) through the relation \[n=\frac{1}{(2\pi)^{2}}\int\sum_{\sigma=\pm}\frac{d^{2}\mathbf{Q}}{e^{\beta[\xi_{\sigma}(\mathbf{Q})-\mu(T)]}-1}, \tag{2}\] where \(\mu\) is the exciton chemical potential and \(\beta=1/k_{B}T\). When \(T\to T_{\rm BEC}\), \(\mu\to-\delta\). The calculated relation between \(T_{\rm BEC}\) and \(\delta\) is shown in Fig. 2 for different exciton densities. Clearly, \(T_{\rm BEC}\) monotonically increases with \(\delta\). For typical exciton densities \(n=10^{10}\sim 10^{12}\) cm\({}^{-2}\) below the Mott limit [26], a moderate \(\delta\) can lead to a relatively high \(T_{\rm BEC}\) on the order of \(10\sim 100\) K. This is in stark contrast to the conventional bosonic system with parabolic dispersion, where \(T_{\rm BEC}=0\) and only a quasi-condensate exists at finite temperature. Here we have a true 2D condensate with long-range order, in homogeneous space. As will be shown below, we find that the inclusion of exciton-exciton interaction can further enhance the BEC critical temperature. _Bogoliubov excitation spectrum_. The linear dispersion of the non-interacting exciton at small momentum also implies unusual low energy excitations of the exciton condensate.
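Before moving on to the excitation spectrum, it may help to see Eq. (2) in action. The sketch below is my own illustration, not the authors' code: it uses the material parameters quoted above (\(m\approx 1.1m_{0}\), \(A\approx 0.9\) eV\(\cdot\)Å), sets \(\mu=-\delta\), and root-finds the temperature at which the integral saturates a given density; the root-finding bracket is an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Assumed illustrative parameters in meV / nm / K units, taken from the text:
HBAR2_2M = 34.6      # hbar^2/(2m) with m ~ 1.1 m_0, in meV nm^2
A = 90.0             # valley-orbit coupling ~ 0.9 eV*Angstrom, in meV nm
KB = 0.08617         # Boltzmann constant in meV / K

def xi(Q, delta, branch):
    """Single-exciton dispersion xi_+-(Q) below Eq. (1); branch = +1 or -1."""
    return HBAR2_2M * Q**2 + A * Q + branch * np.sqrt(A**2 * Q**2 + delta**2)

def total_density(T, delta):
    """Right-hand side of Eq. (2) in nm^-2, evaluated at mu -> -delta (its value at T_BEC)."""
    beta = 1.0 / (KB * T)
    def integrand(Q):
        return sum(Q / np.expm1(beta * (xi(Q, delta, s) + delta)) for s in (+1, -1))
    val, _ = quad(integrand, 1e-9, np.inf, limit=200)
    return val / (2.0 * np.pi)

def t_bec(n, delta, bracket=(1e-3, 500.0)):
    """Non-interacting T_BEC in K: solve total_density(T) = n."""
    return brentq(lambda T: total_density(T, delta) - n, *bracket)

if __name__ == "__main__":
    n = 1e-3  # 10^11 cm^-2 expressed in nm^-2
    for delta in (1.0, 5.0, 10.0):  # valley Zeeman energy in meV
        print(f"delta = {delta:4.1f} meV  ->  T_BEC ~ {t_bec(n, delta):5.1f} K")
```

For \(n=10^{11}\) cm\({}^{-2}\) and \(\delta\) of a few meV, the result should land in the \(10\sim 100\) K window quoted above.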
Assuming that excitons condense at zero momentum state, the low energy excitation can be calculated by the standard Bogoliubov theory (see, e.g., [33]). The mean-field energy functional reads \[E[\Psi_{\sigma}] =\int d^{2}\mathbf{r}\Bigg{\{}\left(\Psi_{\uparrow}^{*},\Psi_{ \downarrow}^{*}\right)\Big{[}\left(\frac{\hbar^{2}Q^{2}}{2m}+AQ\right)+\delta \sigma_{z}\] \[+AQ\cos(2\theta_{\mathbf{Q}})\sigma_{x}+AQ\sin(2\theta_{\mathbf{Q}}) \sigma_{y}\Big{]}\begin{pmatrix}\Psi_{\uparrow}\\ \Psi_{\downarrow}\end{pmatrix}\] \[+\frac{c_{1}}{2}\left(|\Psi_{\uparrow}|^{4}+|\Psi_{\downarrow}|^ {4}\right)+c_{2}|\Psi_{\uparrow}|^{2}|\Psi_{\downarrow}|^{2}\Bigg{\}}, \tag{3}\] where \(\Psi_{\sigma}\) is the condensate wave function for pseudospin \(\sigma\), corresponding to \(\pm K\) valley. At qualitative level, the exciton-exciton interaction is modelled as a contact interaction, with \(c_{1}\) and \(c_{2}\) being exciton-exciton interaction strengths between the same and different pseudospin states, respectively. Within the same framework, the condensate dynamics can be described by the spinor Gross-Pitaevskii (GP) equation, \[i\hbar\frac{\partial}{\partial t}\left(\begin{array}{c}\Psi_{\uparrow}\\ \Psi_{\downarrow}\end{array}\right)=\left(\begin{array}{cc}H_{\uparrow \uparrow}&H_{A}\\ H_{A}^{*}&H_{\downarrow\downarrow}\end{array}\right)\left(\begin{array}{c} \Psi_{\uparrow}\\ \Psi_{\downarrow}\end{array}\right), \tag{4}\] where \(H_{\uparrow\uparrow}=\hbar^{2}Q^{2}/2m+AQ+\delta+c_{1}|\Psi_{\uparrow}|^{2}+c _{2}|\Psi_{\downarrow}|^{2}\), \(H_{\downarrow\downarrow}=\hbar^{2}Q^{2}/2m+AQ-\delta+c_{1}|\Psi_{\downarrow}|^ {2}+c_{2}|\Psi_{\uparrow}|^{2}\), and \(H_{A}=AQ\exp(-2i\theta_{\mathbf{Q}})\). In the presence of magnetic field (\(\delta>0\)), minimization of the mean-field energy functional (Eq. 3) gives the wave function of an exciton condensate at ground state. The ground state is found to be \((|\Psi_{\uparrow}|^{2},|\Psi_{\downarrow}|^{2})=n_{0}(1/2-\delta/(n_{0}c_{1}- n_{0}c_{2}),1/2+\delta/(n_{0}c_{1}-n_{0}c_{2}))\) for \(0\leq\delta\leq(n_{0}c_{1}-n_{0}c_{2})/2\), while \((|\Psi_{\uparrow}|^{2},|\Psi_{\downarrow}|^{2})=n_{0}(0,1)\) for \(\delta>(n_{0}c_{1}-n_{0}c_{2})/2\), with \(n_{0}\) being the condensate density. For intralayer exciton, \(c_{1}>c_{2}\) ensures that without magnetic field, exciton densities in two valleys are equal at ground state. With the estimated values \(c_{1}\sim 1.0\) eV\(\cdot\)nm\({}^{2}\), \(c_{2}\sim 0.6\) eV\(\cdot\)nm\({}^{2}\)[31; 32], and typical exciton density \(n_{0}=1.0*10^{11}\) cm\({}^{-2}\), the critical \(\delta_{c}\equiv(n_{0}c_{1}-n_{0}c_{2})/2\sim 0.2\) meV, beyond which the excitons are completely valley polarized. In the following, we consider the case \(\delta>\delta_{c}\), which is readily accessible in experiment with moderate magnetic field and greatly simplifies the calculations. Results within the small range \(0<\delta<\delta_{c}\) are also calculated with the wave function \(n_{0}(0,1)\) as an approximation. 
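The piecewise mean-field ground state just quoted is straightforward to evaluate. The short sketch below is illustrative only (parameter values as given in the text); it returns the valley populations and the critical Zeeman energy \(\delta_{c}=(n_{0}c_{1}-n_{0}c_{2})/2\) at which the condensate becomes fully valley polarized.

```python
def ground_state_populations(n0, c1, c2, delta):
    """Mean-field condensate populations (|Psi_up|^2, |Psi_dn|^2) minimizing Eq. (3),
    following the piecewise result quoted in the text.
    Units: n0 in nm^-2, c1/c2 in meV*nm^2, delta in meV."""
    delta_c = 0.5 * n0 * (c1 - c2)      # critical valley Zeeman energy
    if delta <= delta_c:
        frac = delta / (n0 * (c1 - c2))
        pops = (n0 * (0.5 - frac), n0 * (0.5 + frac))
    else:
        pops = (0.0, n0)                # fully valley-polarized condensate
    return pops, delta_c

if __name__ == "__main__":
    n0 = 1e-3                  # 1.0*10^11 cm^-2 in nm^-2
    c1, c2 = 1000.0, 600.0     # 1.0 and 0.6 eV*nm^2 in meV*nm^2
    for delta in (0.0, 0.1, 0.2, 1.0):   # meV
        (n_up, n_dn), dc = ground_state_populations(n0, c1, c2, delta)
        print(f"delta = {delta:4.2f} meV (delta_c = {dc:.2f} meV): "
              f"n_up = {n_up:.2e}, n_dn = {n_dn:.2e} nm^-2")
```

With \(n_{0}=1.0\times 10^{11}\) cm\({}^{-2}\) this reproduces \(\delta_{c}\simeq 0.2\) meV, the value quoted above.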
By expanding the wave function around the stationary state, \(\Psi_{\sigma}^{\prime}=\Psi_{\sigma}+\delta\Psi_{\sigma}\), and assuming the form of perturbation \(\delta\Psi_{\sigma}=\Psi_{\sigma}e^{-i\mu t/\hbar}[u_{\sigma}(\mathbf{q})e^{-i\varepsilon(\mathbf{q})t}-v_{\sigma}^{*}(\mathbf{q})e^{i\varepsilon(\mathbf{q})t}]\), one arrives at the Bogoliubov equation for the low energy excitations, \[\mathcal{M}\left(\begin{array}{c}u_{\uparrow}\\ u_{\downarrow}\\ v_{\uparrow}\\ v_{\downarrow}\end{array}\right)=\varepsilon\left(\begin{array}{c}u_{\uparrow}\\ u_{\downarrow}\\ v_{\uparrow}\\ v_{\downarrow}\end{array}\right), \tag{5}\] with \[\mathcal{M}=\begin{pmatrix}H_{1}^{+}&B_{1}^{+}&0&0\\ B_{2}^{+}&H_{2}^{+}&0&-n_{0}c_{1}\\ 0&0&H_{1}^{-}&B_{1}^{-}\\ 0&n_{0}c_{1}&B_{2}^{-}&H_{2}^{-}\end{pmatrix}. \tag{6}\] Here \(H_{1}^{\pm}=\pm\left(\hbar^{2}q^{2}/2m+Aq+n_{0}c_{2}-\mu+\delta\right)\), \(H_{2}^{\pm}=\pm\left(\hbar^{2}q^{2}/2m+Aq+2n_{0}c_{1}-\mu-\delta\right)\), \(B_{1}^{\pm}=\pm A\left(q_{x}^{2}-q_{y}^{2}-2iq_{x}q_{y}\right)/q\), \(B_{2}^{\pm}=\pm A\left(q_{x}^{2}-q_{y}^{2}+2iq_{x}q_{y}\right)/q\), and \(\mu=-\delta+n_{0}c_{1}\). There are four groups of eigenvalues and only the two whose corresponding eigenvectors satisfy \(|u_{\sigma}|^{2}-|v_{\sigma}|^{2}=1\) are physical. The two branches of excitations are \[\varepsilon_{\sigma}(\mathbf{q})=\frac{1}{2}\sqrt{\mathcal{A}+\mathcal{B}\pm 2\sqrt{\mathcal{C}^{2}-\mathcal{D}}}, \tag{7}\] where the expressions of \(\mathcal{A}\), \(\mathcal{B}\), \(\mathcal{C}\) and \(\mathcal{D}\) are lengthy and listed in the Supplementary Material. As shown in Figs. 1(c) and 1(d), for \(\delta=0\), \(\varepsilon_{-}(\mathbf{q})\sim q\) at small \(q\), while \(\varepsilon_{-}(\mathbf{q})\sim\sqrt{q}\) for \(\delta>0\) (see Supplementary Material for further details), which is a new form of gapless Goldstone mode not reported before. _Interacting BEC critical temperature._ With exciton-exciton interaction, the dispersion of the low energy excitations is modified, and the resulting change of the density of states also affects the BEC critical temperature. We quantitatively calculate the interacting critical temperature using the standard Hartree-Fock-Bogoliubov-Popov theory [34; 35; 36], by self-consistently solving for the total exciton density (see Supplementary Material for further details) \[n=n_{0}+\sum_{\sigma}\int\frac{d^{2}\mathbf{q}}{(2\pi)^{2}}\left\{|v_{\sigma}(\mathbf{q})|^{2}+\frac{|u_{\sigma}(\mathbf{q})|^{2}+|v_{\sigma}(\mathbf{q})|^{2}}{e^{\varepsilon_{\sigma}(\mathbf{q})/k_{B}T}-1}\right\}, \tag{8}\] where the condensate density \(n_{0}\) also enters the excitation spectrum and the quasiparticle amplitudes \(u_{\sigma}(\mathbf{q})\) and \(v_{\sigma}(\mathbf{q})\). By calculating the dependence of \(n_{0}\) on \(T\), one can extrapolate to \(n_{0}\to 0\) and find the critical temperature \(T_{\rm BEC}\) with interaction. As shown in Fig. 2, the BEC critical temperature is enhanced by exciton-exciton interaction, which can be qualitatively understood by considering that the density of states is reduced at low energy and thereby the condensate fraction at a given temperature is increased compared with the non-interacting case. Figure 2: Change of the non-interacting (interacting) BEC critical temperature with \(\delta\), shown as dotted (solid) lines, at different exciton densities. Lines with the same color correspond to the same exciton density.
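Since the coefficients \(\mathcal{A}\)–\(\mathcal{D}\) are deferred to the Supplementary Material, a convenient alternative is to diagonalize the matrix \(\mathcal{M}\) of Eq. (6) numerically and keep the two eigenvalues whose eigenvectors carry positive Bogoliubov norm, exactly as the selection rule above prescribes. The sketch below is my own illustration (not the authors' code), using the interaction strengths quoted in the text, a typical density \(n_{0}=10^{11}\) cm\({}^{-2}\), and \(\delta>\delta_{c}\) so that the fully polarized condensate assumed in Eq. (6) applies.

```python
import numpy as np

# Assumed illustrative parameters (meV, nm units)
HBAR2_2M = 34.6           # hbar^2/(2m) in meV nm^2
A        = 90.0           # valley-orbit coupling in meV nm
C1, C2   = 1000.0, 600.0  # 1.0 and 0.6 eV*nm^2 in meV*nm^2
N0       = 1e-3           # condensate density, 10^11 cm^-2 in nm^-2
DELTA    = 5.0            # valley Zeeman energy in meV (> delta_c ~ 0.2 meV)
MU       = -DELTA + N0 * C1

def bogoliubov_matrix(qx, qy):
    """The 4x4 matrix M of Eq. (6) acting on (u_up, u_dn, v_up, v_dn)."""
    q = np.hypot(qx, qy)
    H1 = HBAR2_2M * q**2 + A * q + N0 * C2 - MU + DELTA
    H2 = HBAR2_2M * q**2 + A * q + 2 * N0 * C1 - MU - DELTA
    B1 = A * (qx**2 - qy**2 - 2j * qx * qy) / q
    B2 = A * (qx**2 - qy**2 + 2j * qx * qy) / q
    return np.array([[H1,  B1,   0.0,  0.0     ],
                     [B2,  H2,   0.0, -N0 * C1 ],
                     [0.0, 0.0, -H1,  -B1      ],
                     [0.0, N0 * C1, -B2, -H2   ]], dtype=complex)

def physical_modes(qx, qy):
    """Eigenvalues whose eigenvectors have positive Bogoliubov norm |u|^2 - |v|^2 > 0."""
    w, V = np.linalg.eig(bogoliubov_matrix(qx, qy))
    norms = (np.abs(V[0])**2 + np.abs(V[1])**2
             - np.abs(V[2])**2 - np.abs(V[3])**2)
    return np.sort(w[norms > 0].real)   # lower and upper physical branches

if __name__ == "__main__":
    for q in (1e-4, 1e-3, 1e-2):        # nm^-1
        e_minus = physical_modes(q, 0.0)[0]
        print(f"q = {q:.0e} nm^-1 : eps_- = {e_minus:.4e} meV, "
              f"eps_-/sqrt(q) = {e_minus / np.sqrt(q):.3f}")
```

The ratio \(\varepsilon_{-}(\mathbf{q})/\sqrt{q}\) tending to a constant at small \(q\) exhibits the unconventional Goldstone mode discussed above.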
_Vortex energy._ In 2D bosonic systems, besides the non-singular excitations calculated above, there are also singular topological excitations, i.e., vortices, which play a decisive role in conventional BKT theory. In a single-component condensate, the appearance of a free vortex causes an energy increase \(E_{\rm v}\sim mn_{s}/2\int d\mathbf{r}v^{2}(\mathbf{r})\simeq\pi\hbar^{2}n_{s}/m\ln(L/\zeta)\), where \(v(\mathbf{r})\) is the magnitude of velocity, \(L\) is the system size, \(\zeta=\sqrt{\hbar^{2}/2mn_{s}c_{1}}\) is the condensate healing length, and \(n_{s}\) is the superfluid density. This energy function has the same form with increase of entropy \(S\simeq 2k_{B}\ln(L/\zeta)\) associated with a free vortex, thereby leading to the free energy change \(\Delta{\cal F}=E_{\rm v}-TS\simeq(\pi\hbar^{2}n_{s}/m-2k_{B}T)\ln(L/\zeta)\), whose turning point gives \(T_{\rm SF}\). This is the case for parabolic particle dispersion, and since intralayer exciton shows different dispersion here, the vortex energy is modified, as well as the nature of superfluid phase transition. In this pseudospin-1/2 system, vortices generally have two components with respective vorticity or circulation. Here we are interested in the case of \(\delta>\delta_{\rm c}\), where the condensate at ground state is pseudospin polarized, and thus we make the approximation by considering a single-component vortex with modified dispersion, i.e., \(\xi_{-}(\mathbf{Q})=\hbar^{2}Q^{2}/2m+AQ-\sqrt{A^{2}Q^{2}+\delta^{2}}\). With similar argument, the vortex energy should be modified as \(E_{\rm v}\sim mn_{s}/2\int d\mathbf{r}\left\{v^{2}(\mathbf{r})+(mA/\hbar)v(\mathbf{r}) \right\}\sim(\pi\hbar^{2}n_{s}/m)\left[\alpha\ln(L/\zeta)+\beta(L/\zeta)\right]\), where \(\alpha\) and \(\beta\) are two constants dependent on \(A\) and \(\delta\). In particular, \(\beta\) should vanish at \(\delta=0\) and increase with \(\delta\). Similar form of vortex energy is found in superconductors or two-component BECs with Josephson coupling [37; 38; 39; 40; 41], and the additional linear term \(\beta(L/\zeta)\) implies the breakdown of BKT theory for large system size. To be on a firmer ground, we adopt the trial wave function of a vortex as in the single-component case, i.e., \(\psi(\mathbf{r})\approx\sqrt{n_{s}}re^{i\phi_{\mathbf{r}}}/\sqrt{r^{2}+2}\), where \(r\) is in unit of \(\zeta\), being a good fit to the numerical solution by Gross-Pitaevskii equation [42]. We numerically calculate the kinetic energy increase associated with the vortex within the new dispersion, through \(E_{\rm v}\simeq\int d\mathbf{r}\psi^{*}(\mathbf{r})\hat{H}_{\rm kin}\psi(\mathbf{r})\), with \(\hat{H}_{\rm kin}\) being the kinetic energy operator corresponding to dispersion \(\xi_{-}(\mathbf{Q})\). The operation is carried out in momentum space, bypassing the difficulty in expressing the valley-orbit coupling term in coordinate space (see Supplemental Material for further details). The obtained dependence of \(E_{\rm v}\) on \(L/\zeta\) can be fitted well with the relation above, which gives the specific value of \(\alpha\) and \(\beta\). Within the experimentally feasible range of \(\delta\lesssim 10\) meV, we find \(\alpha\approx 1\) and the change of \(\beta\) with \(\delta\) is shown in Fig. 3(a). The condition of equal contribution from these two terms \(\alpha\ln(L/\zeta)+\beta(L/\zeta)\) also determines a characteristic system size, in quantitative agreement with \(L_{c}\) defined above. _Superfluid to normal phase transition_. 
The linear term of the vortex energy will dominate for large system sizes, and since this term grows faster than the entropy, the proliferation of free vortices will always be suppressed even at high temperature. This means the main contribution to the depletion of the superfluid density will come from non-singular excitations, including the Bogoliubov excitation with \(\sqrt{q}\) dispersion, and vortex-antivortex bound pairs. These types of excitations will lead to a continuous decrease of the superfluid density with temperature, without a discontinuous jump in the superfluid density at the phase transition point, similar to the 3D case. So we conclude that, for system sizes beyond a characteristic scale \(L_{c}\), the superfluid phase transition is no longer of BKT type. On the other hand, for small system sizes with a large lower bound of momentum, the effect of the linear dispersion will be minor, and in this case we still expect a BKT-type finite-size crossover. Figure 3: (a) Change of coefficient \(\beta\) with \(\delta\). (b) Characteristic system size \(L_{c}\) and momentum \(Q_{c}\) as functions of \(\delta\). Figure 4: Finite temperature phase diagram of intralayer excitons with change of \(\delta\) and \(T\). Three different phases are shown in different colors. The upper solid (lower dashed) line denotes the critical temperature \(T_{\rm SF}\) (\(T_{\rm BEC}\)). Exciton density \(n=1.0*10^{11}\) cm\({}^{-2}\) is chosen here. Note also that the characteristic system size \(L_{c}\) depends on \(\delta\), and thus the two limiting cases above can also be observed in a single system by tuning \(\delta\). For small \(\delta\), \(L_{c}\) is on the order of \(\mu\)m, fully within the range of sample sizes in current experiments. We finally have enough information on the finite temperature phase diagram of this system in the \(\delta-T\) plane, as shown in Fig. 4. At \(\delta=0\), the BEC critical temperature is zero, while the BKT transition temperature, given by the well-known Nelson-Kosterlitz relation [43], is nonzero. With the increase of \(\delta\), the BEC critical temperature increases and the superfluid critical temperature should also increase, since both free vortices and nonsingular excitations are asymptotically suppressed. For large \(\delta\), where the free vortices are completely suppressed for a given system size, the superfluid density should continuously drop to zero, similar to the 3D case, where the superfluid critical temperature coincides with the BEC critical temperature. So both \(T_{\rm BEC}\) and \(T_{\rm SF}\) increase with \(\delta\), and asymptotically approach each other. One immediately realizes that the phase diagram consists of three different phases, i.e., a BEC phase with long-range order (also a superfluid), a superfluid phase with quasi-long-range order, and a trivial normal phase. In other words, at finite \(\delta\), with the increase of temperature, the system undergoes a two-step phase transition, first from a BEC to a non-BEC superfluid, and then to a normal phase. Note that the superfluid critical temperature is only qualitatively demonstrated in Fig. 4, by interpolation between two known limiting cases (\(\delta=0\) and large \(\delta\)). A quantitative treatment of the superfluid phase transition, taking full account of the screening induced by vortex-antivortex pair excitations, using methods such as Monte Carlo simulation [44; 45], will be postponed to future work. _Experimental observation_. We finally comment on the possible experimental realization of an intralayer exciton condensate in monolayer TMD.
The major obstacle is the short exciton lifetime, whose magnitude relative to the exciton thermalization time determines whether thermal equilibrium can be reached. Recently, it has been claimed that a signature of exciton condensation in monolayer MoS\({}_{2}\) has been observed in experiment [26]. Although whether the observed experimental phenomena point to the definitive occurrence of exciton BEC still needs further confirmation, it is an exciting finding and may stimulate more experimental work on this topic. Further effort can be made to enhance the exciton radiative lifetime, e.g., in combination with cavities or meta-mirrors [24; 25]. The three different phases predicted in this work can be distinguished by measuring the spatial and temporal coherence of the emitted photons, from which we can infer the decay behaviour of the exciton correlation function. Similar techniques have been used in previous experimental demonstrations of the BKT transition in exciton-polariton condensates [9] and dipolar excitons [10]. In summary, we have studied the unusual properties of the intralayer exciton condensate in monolayer TMD, possibly realized in a recent experiment [26], which is a unique bosonic system beyond the conventional paradigm. With the valley-orbit coupling and an external magnetic field, one can manipulate the dispersion of the exciton center-of-mass motion. The attainable linear dispersion in the lower branch brings a variety of exotic properties of excitons associated with BEC and superfluidity. The BEC critical temperature is nonzero, thereby realizing a true condensate with long-range order in 2D homogeneous space. The vortex energy acquires an additional linear term, giving rise to a superfluid transition different from the BKT type. There is a two-step phase transition with decreasing temperature from a normal exciton phase: at \(T_{c1}=T_{\rm SF}\) the system first enters a superfluid phase with quasi-long-range order, and then at \(T_{c2}=T_{\rm BEC}\) it enters the BEC phase with both long-range order and superfluid properties. These interesting features can possibly be verified with the recent experimental progress towards intralayer exciton condensation, by measuring the spatial and temporal coherence of the emitted photons. _Acknowledgements._ We are grateful to Yi-Cai Zhang for valuable discussions. This work is supported by the National Key Research and Development Program of China (Grant No. 2022YFA1405304), the National Natural Science Foundation of China (Grant No. 12004118), and the Guangdong Basic and Applied Basic Research Foundation (Grants No. 2020A1515110228 and No. 2021A1515010212).
2308.01960
Resonant Pseudo-Dirac Dark Matter as a Sub-GeV Thermal Target
Dark matter (DM) could be a pseudo-Dirac thermal relic with a small mass splitting that is coupled off-diagonally to a kinetically mixed dark photon. This model, particularly in the sub-GeV mass range, is a key benchmark for accelerator searches and direct detection experiments. Typically, the presence of even a tiny fraction of pseudo-Dirac DM in the excited state around the time of recombination would be excluded by DM annihilation bounds from the cosmic microwave background (CMB); thus, viable thermal histories must typically feature an exponential suppression of the excited state. We revisit assumptions about the thermal history in the resonant regime, where the dark photon mass is slightly more than twice the DM mass (to within $\sim10\%$), leading to an $s$-channel resonance in the annihilation cross section. This resonance substantially reduces the couplings required for achieving the observed relic abundance, implying that in much of the parameter space, the DM kinetically decouples from the Standard Model well before the final DM relic abundance is achieved. We find that the excited state is not thermally depopulated in this regime. In spite of this, we find that the presence of the excited state does $\textit{not}$ violate CMB bounds, even for arbitrarily small mass splittings. The present-day abundance of the excited state opens up the possibility of signatures that are usually not relevant for pseudo-Dirac DM, including indirect detection, direct detection, and self-interacting DM signatures.
Nirmalya Brahma, Saniya Heeba, Katelin Schutz
2023-08-03T18:00:02Z
http://arxiv.org/abs/2308.01960v1
# Resonant Pseudo-Dirac Dark Matter as a Sub-GeV Thermal Target ###### Abstract Dark matter (DM) could be a pseudo-Dirac thermal relic with a small mass splitting that is coupled off-diagonally to a kinetically mixed dark photon. This model, particularly in the sub-GeV mass range, is a key benchmark for accelerator searches and direct detection experiments. Typically, the presence of even a tiny fraction of pseudo-Dirac DM in the excited state around the time of recombination would be excluded by DM annihilation bounds from the cosmic microwave background (CMB); thus, viable thermal histories must typically feature an exponential suppression of the excited state. We revisit assumptions about the thermal history in the resonant regime, where the dark photon mass is slightly more than twice the DM mass (to within \(\sim 10\%\)), leading to an \(s\)-channel resonance in the annihilation cross section. This resonance substantially reduces the couplings required for achieving the observed relic abundance, implying that in much of the parameter space, the DM kinetically decouples from the Standard Model well before the final DM relic abundance is achieved. We find that the excited state is not thermally depopulated in this regime. In spite of this, we find that the presence of the excited state does _not_ violate CMB bounds, even for arbitrarily small mass splittings. The present-day abundance of the excited state opens up the possibility of signatures that are usually not relevant for pseudo-Dirac DM, including indirect detection, direct detection, and self-interacting DM signatures. ## I Introduction The origin of dark matter (DM) remains an elusive mystery. If the DM thermalizes with the Standard Model (SM) plasma in the early Universe, then thermal freeze-out provides a compelling explanation for the observed abundance of DM. In particular, thermal freeze-out is relatively insensitive to the initial conditions of the early Universe and the relevant couplings can be probed in a number of ways using direct detection, indirect detection, and collider observables [1]. If the DM is lighter than the \(\sim\)GeV scale, then the Lee-Weinberg bound [2] implies that the mediator for DM-SM interactions cannot be a SM force carrier, which opens up the possibility of a "dark sector," with auxiliary forces and matter fields beyond just DM. A simple, technically natural example of a new mediator is a dark photon that kinetically mixes with the SM photon [3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. Given the null detection of weak-scale DM thus far (see e.g. Refs. [13; 14]), lighter dark sectors are of increasing interest to the community (see e.g. Refs. [15; 16; 17; 18; 19]) and there are a range of new and proposed experimental methodologies that will be sensitive to these DM candidates [20; 21; 22; 23; 24; 25; 26]. The standard thermal freeze-out mechanism for sub-GeV DM is subject to strong bounds from cosmic microwave background (CMB) anisotropies assuming annihilation through an \(s\)-wave process to visible SM particles. Even after freeze-out, DM can still annihilate at a sub-Hubble rate and inject considerable energy into the SM plasma near the time of recombination. Energy injection in the form of visible particles would observably modify the properties of the plasma even if the DM annihilations are extremely rare, since a \(\sim\)part-per-billion fraction of the DM annihilating would be enough energy injection to ionize all the atoms in the Universe. 
Considering the effects on CMB anisotropies as measured by _Planck_, current bounds on DM annihilation rule out \(s\)-wave thermal freeze-out of DM below \(\sim 10\) GeV, with the exact value depending on the SM final state [27; 28]. A well-studied way to bring \(s\)-wave freeze-out into consistency with CMB constraints is by introducing a small mass splitting between non-degenerate DM states [29; 30; 31; 32; 33; 34; 35]. There is no symmetry that prevents Dirac fermions from splitting into two Majorana mass states, and this is easily realized in models where DM is charged under some new dark gauge symmetry at high energies which is broken at low energies [36; 37]. The dark matter multiplet, consisting of \(\chi_{1}\) and \(\chi_{2}\), acquires a mass splitting \(\delta=m_{\chi_{2}}-m_{\chi_{1}}\), which can be naturally small if the dark symmetry is approximate (for example, the small neutron-proton mass splitting is protected by an approximate isospin symmetry). In this work, we do not specify the origin of the mass splitting and treat it phenomenologically. However, we note that small mass splittings are generally easy to accommodate from a model-building perspective in situations with a small overall mass scale and small couplings [38; 39; 40; 41; 29]. In this model, the couplings of DM with the dark photon are purely off-diagonal, i.e. the only vertex with the dark photon couples \(\chi_{1}\) and \(\chi_{2}\). For this reason, annihilation rates to SM final states can be significantly reduced because the leading-order tree-level annihilation process requires a large \(\chi_{2}\) abundance, which may be thermally depleted like \(e^{-\delta/T}\) at temperatures \(T\lesssim\delta\)[42; 41]. Given the \(\sim\)eV-scale temperatures that are relevant for recombination, mass splittings with \(\delta\gtrsim 1\) eV can be compatible with CMB constraints in parts of the parameter space (with larger mass splittings being unconstrained for a wider range of couplings and DM masses). Alternatively, if the DM annihilation occurs close to a pole in the cross-section (for instance when the mediator is close to twice the DM mass), then the relevant couplings to achieve the observed relic abundance can be lowered substantially [43; 44]. If this pole is relevant for setting the DM abundance at early times but not during the recombination epoch, then the CMB bounds are relaxed because of the lower off-resonance annihilation rate to SM final states. The CMB bounds were carefully studied in Ref. [45] for the case of Dirac DM interacting with a dark photon with \(m_{A^{\prime}}\approx 2m_{\chi}\). Meanwhile, the resonant regime for inelastic pseudo-Dirac DM has been studied primarily in the context of its signature at colliders given the modification to the predicted couplings [46; 47; 48; 48]. The cosmology of resonant pseudo-Dirac DM, on the other hand, has yet to be studied in detail. In particular, the substantial effects of early kinetic decoupling of the DM have been overlooked so far. In this work, we perform a comprehensive study of the cosmology of pseudo-Dirac DM in the resonant regime. We find that even in the mildly resonant regime with \((m_{A^{\prime}}-2m_{\chi})/m_{A^{\prime}}\sim 10\%\), pseudo-Dirac DM can have arbitrarily low mass splittings without violating limits from the CMB. Moreover, we find that in most of the parameter space the excited state has a high relic fraction. 
This provides a strong contrast to most pseudo-Dirac thermal histories which feature an exponential suppression of the excited state due to thermal depletion. Accordingly, the cosmology and astrophysics of this DM candidate are quite different from the usual pseudo-Dirac parameter space, as are the direct and indirect DM detection signatures. The rest of this article is organized as follows. In Section II, we review the model and the relevant processes that affect the cosmology of this DM candidate in the early Universe. In particular, we solve the Boltzmann equations for the density and temperature evolution of the DM states \(\chi_{1}\) and \(\chi_{2}\). In Section III, we consider cosmological and astrophysical signatures including Big Bang Nucleosynthesis, the CMB, self-interacting DM (SIDM), and indirect detection. In Section IV we discuss prospects for detecting this DM candidate using terrestrial experimental methods. Discussion and concluding remarks follow in Section V ## II Early Universe Behaviour ### Pseudo-Dirac DM Parameter space We consider a light (\(m_{\chi}\lesssim 10\) GeV) pseudo-Dirac DM model with its relic abundance set by annihilation to SM final states via a dark photon mediator. We focus on this mass range primarily because Dirac DM with \(m_{\chi}\lesssim 10\) GeV is excluded by the CMB for \(s\)-wave freeze-out to visible final states [28]. In this model, the interaction terms are \[\mathcal{L}\supset\frac{\kappa}{2}F^{\prime}_{\mu\nu}F^{\mu\nu}+ig_{\chi}A^{ \prime}_{\mu}\chi_{2}\gamma^{\mu}\chi_{1}\,, \tag{1}\] where \(\chi_{2,1}\) are the excited and ground states, respectively, that couple with interaction strength \(g_{\chi}\) to a vector mediator \(A^{\prime}\) that kinetically mixes with the SM photon with mixing parameter \(\kappa\). We focus on parameter space where the mass splitting is much smaller than the DM mass, \(\delta=m_{\chi_{2}}-m_{\chi_{1}}\ll m_{\chi_{1}}\). In the following discussion, we denote the average mass of the two states as \(m_{\chi}\). We are furthermore interested in the resonant regime where \(m_{A^{\prime}}\approx m_{\chi_{2}}+m_{\chi_{1}}=2m_{\chi_{1}}+\delta=2m_{\chi}\). We parameterize the proximity to resonance with the parameter \[\epsilon_{R}=\frac{m_{A^{\prime}}^{2}-s_{0}}{s_{0}} \tag{2}\] where \(s_{0}=(m_{\chi_{1}}+m_{\chi_{2}})^{2}\). In this work, we consider \(\epsilon_{R}\in[0.001,0.1]\). The lower limit is motivated by photodisassociation bounds coming from BBN as discussed in Sec. III.1. On the other hand, as \(\epsilon_{R}>0.1\), we approach the non-resonant limit. We remain agnostic about the mechanism responsible for generating the mass splitting as well as the resonance, however the parameters we consider are self-consistent even in a minimal UV setup. For instance, we could consider a complex dark Higgs with a dark charge of 2 interacting with a Dirac fermion with a mass \(m_{D}\)[49; 36; 37]. The breaking of the dark \(U(1)\) symmetry through the vev of the dark Higgs, \(v_{D}\), then results in both the mass term for the dark photon as well as a Majorana mass term for the Dirac fermion which generates the mass splitting. If the dark Higgs is heavy with \(v_{D}\gg m_{D}\) then it will not participate in the dynamics that determine the DM thermal history. With this hierarchy in mind, having the dark photon mass near its resonant value pushes \(g_{\chi}\sim m_{D}/v_{D}\ll 1\), which we show below is consistent with setting the observed DM relic abundance in the resonant regime. 
Finally, the mass splitting is determined by the Yukawa coupling of the Dirac fermion \(y_{\chi}\), with \(\delta\sim y_{\chi}v_{D}\). Though the dark photon is the mediator of this model, it can be resonantly produced on shell via inverse decays. The dark photon can subsequently decay invisibly, \[\Gamma_{\rm DM}=\frac{g_{\chi}^{2}m_{A^{\prime}}}{12\pi}\sqrt{1-\frac{s_{0}}{ m_{A^{\prime}}^{2}}}\left(1+\frac{s_{0}}{2m_{A^{\prime}}^{2}}\right)\left(1- \frac{\delta^{2}}{m_{A^{\prime}}^{2}}\right)^{3/2}\,, \tag{3}\] or visibly to SM final states, \[\Gamma_{\rm SM}=R(m_{A^{\prime}})\Gamma_{\mu^{+}\mu^{-}}+\sum_{\ell}\Gamma_{ \ell^{+}\ell^{-}} \tag{4}\] where \(R(m_{A^{\prime}})\) is the empirically determined branching ratio of \(\sigma(e^{+}e^{-}\rightarrow{\rm hadrons})/\sigma(e^{+}e^{-}\rightarrow\mu^{+} \mu^{-})\)[50; 51] at centre-of-mass energy \(\sqrt{s}=m_{A^{\prime}}\) and \[\Gamma_{\ell^{+}\ell^{-}}=\frac{\kappa^{2}e^{2}m_{A^{\prime}}}{12\pi}\sqrt{1- \frac{4m_{\ell}^{2}}{m_{A^{\prime}}^{2}}}\left(1+\frac{2m_{\ell}}{m_{A^{ \prime}}}\right). \tag{5}\] The total on-shell decay width is \(\Gamma_{A^{\prime}}\equiv\Gamma_{\rm DM}+\Gamma_{\rm SM}\equiv\Gamma_{\rm DM} +\Gamma_{e^{+}e^{-}}/B_{e}\), where \(B_{e}\) is the branching ratio of the dark photon to electrons. ### Relic Density Assuming Thermal Equilibrium In our parameter region of interest, DM obtains its relic abundance when \(s\)-channel processes like \(\chi_{1}\chi_{2}\to\text{SM SM}\) become inactive, usually well after chemical freeze-out for \(\epsilon_{R}\ll 1\). Using the formalism developed in Ref. [47], the corresponding thermally averaged cross-section can be written as \[\langle\sigma v\rangle=\frac{2x}{K_{2}^{2}(x)}\int_{0}^{\infty}\text{d}\epsilon \,\sigma v\sqrt{\epsilon}\,(1+2\epsilon)K_{1}(2x\sqrt{1+\epsilon})\,, \tag{6}\] where \(x=m_{\chi_{1}}/T\), \(\epsilon=(s-s_{0})/s_{0}\) is a dimensionless measure of the kinetic energy and \[\sigma v=F(\epsilon)\frac{m_{A^{\prime}}\Gamma_{A^{\prime}}}{(s-m_{A^{\prime }}^{2})^{2}+m_{A^{\prime}}^{2}\Gamma_{A^{\prime}}^{2}} \tag{7}\] with, \[F(\epsilon)= \frac{4\pi\kappa^{2}\alpha\alpha_{D}}{3s_{0}m_{A^{\prime}}\Gamma _{A^{\prime}}}\frac{(3+2\epsilon)[(1+\epsilon)s_{0}+2m_{\epsilon}^{2}]}{(1+ \epsilon)(1+2\epsilon)B_{e}(\sqrt{s_{0}(1+\epsilon)})}\] \[\times[s_{0}(1+\epsilon)-4m_{e}^{2}]^{1/2}[s_{0}(1+\epsilon)- \delta^{2}]^{1/2}\,. \tag{8}\] The thermally averaged cross-section can be further reduced to semi-analytic forms in the non-relativistic (\(\epsilon\ll 1\)) and resonant (\(\epsilon\approx\epsilon_{R}\)) limits [45]. In order to calculate the _total_ DM relic density, one needs to solve the Boltzmann Equation for \(\chi_{1}\chi_{2}\to\text{SM SM}\), \[\frac{\text{d}Y_{\text{tot}}}{\text{d}x}=\frac{s}{Hx}\langle\sigma v\rangle_{ \text{eff}}\left(Y_{\text{tot}}^{2}-Y_{\text{tot,eq}}^{2}\right)\,, \tag{9}\] where \(Y_{\text{tot}}=Y_{\chi_{1}}+Y_{\chi_{2}}\) is the total comoving density for DM with \(Y_{\chi_{1},\chi_{2}}=n_{\chi_{1},\chi_{2}}/s\), \(s\) is the entropy density, and the effective thermally averaged cross-section is [43; 47] \[\langle\sigma v\rangle_{\text{eff}}=\frac{2(1+\delta/m_{\chi})^{3/2}e^{-x \delta/m_{\chi_{1}}}}{(1+(1+\delta/m_{\chi_{1}})^{3/2}e^{-x\delta/m_{\chi_{1} })^{2}}}\langle\sigma v\rangle. 
\tag{10}\] The total DM relic density is then given by \[\Omega_{\text{DM}}h^{2}=8.77\times 10^{-11}\left[\int_{x_{f}}^{\infty}\text{d}x \frac{\langle\sigma v\rangle_{\text{eff}}}{x^{2}}g_{*}^{1/2}\right]^{-1}, \tag{11}\] where \(g_{*}\) correspond to the effective relativistic degrees of freedom in the early universe. It can be seen from Eqs. (6) and (7) that the cross-section is resonantly enhanced when \(\chi_{1}\) and \(\chi_{2}\) have enough energy to produce the \(A^{\prime}\) on shell, leading to very efficient annihilation at a temperature, \(T\sim\epsilon_{R}m_{\chi_{1}}\). Due to the resonant enhancement, as \(\epsilon_{R}\to 0\) even very tiny couplings are able to efficiently deplete the DM in the early Universe to obtain the observed DM relic density. ### Early Kinetic Decoupling A crucial caveat to the relic density calculation detailed above is that it assumes that the DM and SM remain in kinetic equilibrium while annihilations are active (including after chemical freeze-out) through scattering processes, primarily off of electrons \(\chi_{1}e\leftrightarrow\chi_{2}e\). Since this is a \(t\)-channel process, it does not benefit from the same resonant enhancement as the \(s\)-channel annihilations. Therefore, when the couplings between the two sectors are small, the assumption of kinetic equilibrium may no longer hold and DM can kinetically decouple well before DM annihilation \(\chi_{1}\chi_{2}\to\text{SM}\,\text{SM}\) hits its resonance (compare e.g. the dashed and solid lines in Fig. 1). As a result, to accurately calculate the DM relic density, one needs to solve a coupled system of differential equations tracking the evolution of both the DM number density as well as the dark sector temperature, \(y(x)\equiv m_{\chi_{1}}T_{\text{DM}}s^{-2/3}\)[52; 53; 54], \[\frac{Y^{\prime}}{Y}=\frac{sY}{x\hat{H}}\left(\frac{Y_{\text{eq} }^{2}}{Y^{2}}\langle\sigma v\rangle-\langle\sigma v\rangle_{\text{neq}}\right) \tag{12}\] \[\frac{y^{\prime}}{y}=\frac{\gamma(T)}{x\hat{H}}\left(\frac{y_{ \text{eq}}}{y}-1\right)+\frac{sY}{x\hat{H}}\left(\langle\sigma v\rangle_{ \text{neq}}-\langle\sigma v\rangle_{2,\,\text{neq}}\right)\] \[+\frac{sY}{x\hat{H}}\frac{Y_{\text{eq}}^{2}}{Y^{2}}\left[\frac{y_ {\text{eq}}}{y}\langle\sigma v\rangle_{2}-\langle\sigma v\rangle\right]+ \frac{H}{x\hat{H}}\frac{\langle p^{4}/E^{3}\rangle_{\text{neq}}}{3T_{\text{DM}}} \tag{13}\] Figure 1: The SM temperature as a function of the dark photon mass at which DM-SM scattering (dashed), DM-DM scattering (dotted) and DM-DM annihilation (dot-dashed) decouple for \(\epsilon_{R}=0.001\) (light green) and \(\epsilon_{R}=0.1\) (dark green), with \(g_{\chi}=0.01\). The solid lines show when annihilation becomes resonant, with \(T\sim\epsilon_{R}m_{A^{\prime}}\). For each parameter point in this plot \(\kappa\) is chosen so as to obtain the observed DM relic abundance ignoring the effects of early kinetic decoupling. The shaded areas between dotted lines correspond to varying \(\delta\) between \(1-100\,\text{eV}\), from bottom to top. In some parts of the parameter space, resonant depletion of the DM abundance occurs much later than other decoupling processes, indicating that the early decoupling can influence the subsequent relic DM abundance. where \(\tilde{H}\) is the normalised Hubble rate as defined in Ref. 
[53] and the subscript "neq" denotes that the corresponding thermal average is over the DM phase distribution at a temperature \(T_{\rm DM}\) distinct from the SM temperature \(T\), i.e., assuming DM is not necessarily in kinetic equilibrium with the SM. Additionally, \(\langle\sigma v\rangle_{2}\) is a temperature-weighted analog of the usual thermally averaged annihilation cross-section \(\langle\sigma v\rangle\), as defined in Ref. [53], and \(\gamma(T)\) is the DM-SM momentum transfer rate which is a measure of DM-SM elastic scattering. Terms involving scattering and annihilation can both keep the DM temperature coupled to the SM. These Boltzmann equations have been extensively studied for the case of elastically decoupling relics [52; 53; 54; 55]. For inelastic DM models, the corresponding Boltzmann equations may have an additional dependence on the mass-splitting \(\delta\), which would appear in various cross sections, and there could also in principle be separate thermal evolution of the ground and excited state species. However, in our parameter region of interest, \(\delta\ll m_{\chi}\) and \(\delta\) is also much smaller than the decoupling temperatures for all relevant processes. Therefore, we explicitly find that the thermal history of this model behaves, to a very good approximation, as a strictly Dirac model during the temperatures relevant for freeze-out. Therefore, to calculate the relic pseudo-Dirac DM density, we use the publicly available Boltzmann solver DRAKE [54] modified for a resonant Dirac DM model. ### Parameter Space The couplings required to reproduce the observed DM abundance are shown in Fig. 2 for different values of \(m_{\chi}\) and \(\epsilon_{R}\). Note that \(\delta\) is generally much smaller than all the temperatures that are relevant for setting the relic abundance, and therefore it is irrelevant in determining the couplings. As expected, for a given DM mass, smaller values of \(\epsilon_{R}\) (indicated by thinner lines) result in a larger resonant enhancement in the annihilation cross-section, and corresponds to smaller couplings reproducing the relic density, thereby shifting the lines downwards and to the left. For a fixed value of \(m_{\chi}\) and \(\epsilon_{R}\), the shape of the curve in the \(g_{\chi}-\kappa\) plane can be explained by considering the thermally averaged cross-section in the limit \(\epsilon_{R}\ll 1\). In this case, as was pointed out in Ref. [45], the slower decay (\(\Gamma_{\rm SM}\) vs. \(\Gamma_{\rm DM}\)) is the bottleneck in terms of determining the final DM abundance, \[\Omega_{\rm DM}h^{2}\propto\frac{\Gamma_{A^{\prime}}}{\kappa^{2}g_{\chi}^{2}}\,. \tag{14}\] This implies that for \(g_{\chi}\ll\kappa\) (\(\kappa\ll g_{\chi}\)), the relic density becomes independent of \(\kappa\) (\(g_{\chi}\)) resulting in the asymptotic behavior seen in Fig. 2. In the limit \(\epsilon_{R}\to 0\), we find that the relic density obtained using the coupled system of Boltzmann Eqs. (12)-(13) differs from the standard Boltzmann treatment (i.e. Figure 2: The couplings that yield a DM abundance that matches the observed relic density for various DM masses and values of \(\epsilon_{R}\). Dashed lines indicate CMB annihilation constraints on those couplings, while solid lines are consistent with the CMB. In this parameter space, the smaller of the two couplings (corresponding to whichever decay channel is the bottleneck) determines the relic abundance, with the larger coupling being irrelevant. 
Small deviations from this behaviour occur at large values of \(m_{\chi}\) and \(\epsilon_{R}\) where early kinetic decoupling has a significant effect on the relic abundance. These trends contrast with the parameter space for standard thermal freeze-out where the product of couplings is the most relevant for setting the relic abundance. For comparison, we show the couplings for thermal freeze-out for \(m_{A^{\prime}}=3m_{\chi}\) (off resonance) as dot-dashed lines. assuming identical SM and DM temperatures) by at most a factor of \(\sim 2\). For \(\epsilon_{R}\sim 1\), one would naively expect the difference to be even smaller since we are not only further off-resonance but are also pushed toward larger couplings where the expectation is that kinetic equilibrium should be maintained more easily. However, we find that the difference between the two treatments can be as large as an order of magnitude in the relic density in the region \(\epsilon_{R}\sim 0.1\). This can be attributed to the deviation of the DM temperature from the SM temperature and was earlier discussed in the context of scalar singlet DM in Ref. [56]. In particular, for \(\epsilon_{R}\ll 0.1\), the deviation is very small and positive, \(T_{\rm DM}\gtrsim T\) whereas for \(\epsilon_{R}\sim 0.1\) the deviation is large and negative, \(T_{\rm DM}\ll T\). Although one can numerically estimate the sign and magnitude of this deviation by studying the interplay of the various terms on the right hand side of Eq. (13), they can also be understood qualitatively by considering the underlying DM phase space during and after chemical freeze-out. Under the assumption that DM has already kinetically decoupled, the final DM density can be assumed to be proportional to the annihilation cross-section averaged over the DM temperature, \(\langle\sigma v\rangle_{\rm neq}\) (in analogy with Eq. (11)) \[\Omega_{\rm DM}h^{2}\propto\left[\int_{x_{f}}^{\infty}{\rm d}x\frac{\langle \sigma v\rangle_{\rm neq}}{x^{2}}\right]^{-1}\,. \tag{15}\] In general, as \(\epsilon_{R}\to 0\), the DM particles need only very little momentum to hit the resonance and annihilate efficiently, implying that resonant annihilation depletes the low-momentum tail of the DM distribution and shifts the average DM momentum (and therefore the temperature) to larger values. During chemical freeze-out, since the average DM momentum is already large, only a small fraction of DM particles can annihilate resonantly. The increase in the DM temperature further decreases the available phase space for resonant annihilation and therefore decreases \(\langle\sigma v\rangle_{\rm neq}\). This results in the small dip in \(\langle\sigma v\rangle_{\rm neq}\) for \(\epsilon_{R}=0.001\) around chemical freeze-out \(x_{f}\sim 20\) seen in Fig. 3 and corresponds to reducing the efficiency of DM annihilations and increasing its abundance. After DM has chemically decoupled, its temperature now redshifts as matter and therefore DM cools much faster than the SM, increasing the relative number of DM particles with low momentum. As a result, resonant annihilation which is active at \(T_{\rm DM}\sim\epsilon_{R}m_{\chi}\) happens at slightly earlier times (since \(T_{\rm DM}\ll T\)) compared to the kinetically coupled case when they occur at \(T\sim\epsilon_{R}m_{\chi}\). Hence, resonant annihilation is more efficient in depleting the DM (due to the \(x\) dependence of Eq. (15)). 
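For orientation, the equilibrium side of this comparison, i.e., Eqs. (6)-(11), can be sketched compactly. The Python fragment below is an illustration only: it is not the DRAKE-based treatment of Eqs. (12)-(13), it neglects early kinetic decoupling, and the couplings are assumed example values rather than a fit to the observed abundance. It takes a dark photon below the muon threshold so that \(B_{e}\simeq 1\), builds the widths of Eqs. (3)-(5), thermally averages the Breit-Wigner cross section of Eqs. (7)-(8) via Eq. (6), and estimates \(\Omega h^{2}\) from Eq. (11) with \(\langle\sigma v\rangle_{\rm eff}=\langle\sigma v\rangle/2\) in the \(\delta\to 0\) limit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kve

# --- Example (assumed) parameters; natural units in GeV ---
alpha   = 1.0 / 137.036
m_e     = 0.511e-3            # electron mass
m_chi   = 0.05                # 50 MeV, so m_A' is below the muon threshold and B_e ~ 1
delta   = 0.0                 # mass splitting is negligible for freeze-out kinematics
eps_R   = 0.01                # resonance parameter of Eq. (2)
g_chi, kappa = 1e-2, 1e-5     # example couplings (to be tuned for Omega h^2 ~ 0.12)
alpha_D = g_chi**2 / (4 * np.pi)

s0   = (2 * m_chi + delta) ** 2
m_Ap = np.sqrt(s0 * (1 + eps_R))

# Dark photon widths, Eqs. (3)-(5), with decays to e+e- only (B_e = 1)
Gamma_DM = (g_chi**2 * m_Ap / (12 * np.pi) * np.sqrt(1 - s0 / m_Ap**2)
            * (1 + s0 / (2 * m_Ap**2)) * (1 - delta**2 / m_Ap**2) ** 1.5)
Gamma_ee = (kappa**2 * 4 * np.pi * alpha * m_Ap / (12 * np.pi)
            * np.sqrt(1 - 4 * m_e**2 / m_Ap**2) * (1 + 2 * m_e / m_Ap))  # as printed in Eq. (5)
Gamma_Ap = Gamma_DM + Gamma_ee

def F(eps):
    """Eq. (8) with B_e = 1."""
    s = s0 * (1 + eps)
    return (4 * np.pi * kappa**2 * alpha * alpha_D / (3 * s0 * m_Ap * Gamma_Ap)
            * (3 + 2 * eps) * (s + 2 * m_e**2) / ((1 + eps) * (1 + 2 * eps))
            * np.sqrt(s - 4 * m_e**2) * np.sqrt(s - delta**2))

def sigma_v_thermal(x):
    """Eq. (6).  The epsilon integral uses the substitution
    eps = eps_R + (m_Ap*Gamma_Ap/s0)*tan(theta), which turns the narrow Breit-Wigner
    of Eq. (7) into a smooth integrand; scaled Bessel functions keep large x stable."""
    gam = m_Ap * Gamma_Ap / s0
    def integrand(theta):
        eps = eps_R + gam * np.tan(theta)
        w = (np.sqrt(eps) * (1 + 2 * eps) * kve(1, 2 * x * np.sqrt(1 + eps))
             * np.exp(-2 * x * (np.sqrt(1 + eps) - 1)))
        return F(eps) * w / s0
    val, _ = quad(integrand, np.arctan(-eps_R / gam), 0.5 * np.pi, limit=400)
    return 2 * x * val / kve(2, x) ** 2          # in GeV^-2

if __name__ == "__main__":
    GEV2_TO_CM3S = 1.17e-17                      # 1 GeV^-2 ~ 1.17e-17 cm^3/s
    for x in (20, 50, 100, 300):
        print(f"x = {x:4d}: <sigma v> ~ {sigma_v_thermal(x) * GEV2_TO_CM3S:.2e} cm^3/s")
    # Crude relic estimate, Eq. (11), with x_f ~ 20 and g_* ~ 10.75 taken constant
    xs = np.logspace(np.log10(20.0), np.log10(3000.0), 150)
    vals = np.array([0.5 * sigma_v_thermal(x) * np.sqrt(10.75) / x**2 for x in xs])
    J = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(xs))
    print(f"Omega h^2 ~ {8.77e-11 / J:.3f} for these example couplings")
```

Adjusting \(\kappa\) or \(g_{\chi}\) until \(\Omega h^{2}\simeq 0.12\) mimics the equilibrium version of the contours in Fig. 2; the coupled Eqs. (12)-(13) are then needed for the early-kinetic-decoupling corrections discussed above.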
Since these two effects change the relic density in opposing ways, the final relic density is only slightly different from the kinetically coupled case. This is especially true given that the \(1/x^{2}\) weighting in the integral of Eq. (15) ensures that the most substantial contributions to the relic abundance come from early times before the difference between \(\langle\sigma v\rangle_{\rm eq}\) and \(\langle\sigma v\rangle_{\rm neq}\) becomes too large. For \(\epsilon_{R}\sim 0.1\), on the other hand, DM particles need larger momentum to annihilate resonantly and therefore resonant annihilations shift the average DM momentum (and temperature) to smaller values. Additionally, resonant annihilation is active exactly during chemical freeze-out, \(x_{\rm DM}\sim\epsilon_{R}^{-1}\sim x_{f}\). This implies that if DM is kinetically decoupled, the depletion in the large-momentum DM phase space effectively turns off resonant annihilations quite quickly, as shown by the dark green line in Fig. 3. Figure 3: The thermally averaged annihilation cross-section for kinetically decoupled (solid) and kinetically coupled (dashed) DM as a function of \(x\) (defined with respect to the SM temperature, \(T\)) for \(\epsilon_{R}=0.1\) (dark green) and \(\epsilon_{R}=0.001\) (light green) and \(m_{\chi}=100\,{\rm MeV}\). We fix \(g_{\chi}=0.01\) and choose \(\kappa\) such that we reproduce the observed DM abundance after solving Eqs. (12)-(13). Early kinetic decoupling can suppress or enhance resonant annihilation at a given temperature. Furthermore, the \(x^{2}\) scaling of Eq. (15) in this regime enhances the difference in the total relic abundance since \(\langle\sigma v\rangle_{\rm eq}\) and \(\langle\sigma v\rangle_{\rm neq}\) differ substantially around the time of chemical freeze-out. As a result, the relic density is significantly altered: DM is over-produced and larger couplings are required to obtain the observed DM abundance. ### Late-time Abundance of the Excited State The relative fraction of excited-state particles, \(f^{*}=n_{\chi_{2}}/n_{\chi_{1}}\), is a key quantity in determining the impact of late-time DM behaviour in cosmological environments as well as terrestrial experiments, as discussed further in Sections III and IV. Even if DM is symmetrically produced in the ground and excited states, the ground and excited states can inter-convert through processes within the dark sector or through scattering processes with the SM as long as their rates exceed the Hubble expansion rate. In particular, as long as chemical equilibrium is maintained _within_ the dark sector, the excited state number density is given by \(n_{\chi_{2}}\sim n_{\chi_{1}}e^{-\delta/T_{\rm DM}}\). Chemical equilibrium in the dark sector can be maintained through DM-SM scattering, which also maintains kinetic equilibrium with the SM, \(\chi_{1}e\leftrightarrow\chi_{2}e\), or through DM up/down-scattering, \(\chi_{1}\chi_{1}\leftrightarrow\chi_{2}\chi_{2}\).
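Because the couplings in the resonant regime are small, these interconversion processes decouple while the dark sector is still far hotter than \(\delta\), so the Boltzmann factor above is frozen in close to unity. A short numerical illustration follows; the decoupling temperatures are placeholder values spanning the range suggested by Fig. 1, not values computed here.

```python
import numpy as np

delta_eV = np.array([1.0, 10.0, 100.0])          # mass splittings considered in this work
for T_chem_eV in (1.0e6, 1.0e3, 10.0):           # example chemical-decoupling temperatures (eV)
    f_star = np.exp(-delta_eV / T_chem_eV)       # n_chi2/n_chi1 frozen in at decoupling
    print(f"T_chem = {T_chem_eV:9.0f} eV -> f* = {np.round(f_star, 4)}")
```

For MeV-scale decoupling the exponential is essentially unity for all \(\delta\) of interest, whereas a decoupling temperature at or below \(\delta\) reproduces the strong suppression familiar from the standard pseudo-Dirac history.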
The fractional abundance of a cosmologically stable excited state at late times is determined the DM temperature when it chemically decouples, \(f^{*}=n_{\chi_{2}}/n_{\chi_{1}}\approx e^{-\delta/T_{\rm chem}}\), where \(T_{\rm chem}\) is determined by whichever of the processes listed above decouples last, \(T_{\rm chem}=\min[T_{\chi e},\,T_{\chi\chi}]\), where \[\frac{n_{e}\langle\sigma v\rangle_{\chi_{2}e\to\chi_{1}\bar{e}}}{H} \Bigg{|}_{T=T_{\chi e}} \sim 1\,, \tag{16}\] \[\frac{n_{\chi_{2}}\langle\sigma v\rangle_{\chi_{2}\chi_{2}\chi_{ 2}\to\chi_{1}\chi_{1}}}{H}\Bigg{|}_{T=T_{\chi\chi}} \sim 1. \tag{17}\] Here, \(n_{e}\) is the electron number density, \(n_{\chi_{2}}=n_{\chi_{1}}e^{-\delta/T_{\rm DM}}\) is obtained by scaling back the present-day DM abundance \(n_{\chi_{1}}\sim T_{\rm eq}T^{3}/m_{\chi_{1}}\), and the relevant cross-sections are from Ref. [41]. If \(\chi_{2}\chi_{2}\leftrightarrow\chi_{1}\chi_{1}\) decouples after \(\chi_{2}e\leftrightarrow\chi_{1}e\), two temperature scales enter in Eq. (17), the SM temperature \(T\) that largely determines the Hubble rate, and the DM temperature which is a function of the temperature at which DM decouples from the SM, \(T_{\rm DM}\sim T^{2}/T_{\chi e}\). In the standard thermal history for inelastic DM, one finds that \(T_{\rm chem}\lesssim\delta\) owing to the large DM-DM and/or DM-SM couplings which ensures chemical equilibrium in the dark sector is maintained until late times. This results in a strong suppression of the excited state at late times, \(f^{*}\sim\mathcal{O}(10^{-4})\)[57; 42]. However, the small couplings present in our parameter space result in \(T_{\rm chem}\gg\delta\), and therefore a similar abundance of the ground and excited state, \(f^{*}\sim 1\) in much of the parameter space. This is represented in Fig. 4, in which we show the relative abundance of the ground and excited states as a function of the DM mass for different values of \(\epsilon_{R}\) and \(g_{\chi}\). Note that \(f^{*}\sim 1\) for \(g_{\chi}\ll 0.1\) for all DM masses of interest. For \(g_{\chi}\sim 0.1\), we find a suppression of the excited state to \(f^{*}\sim 0.01\) when \(m_{\chi}\lesssim 100\,\mathrm{MeV}\). ## III Cosmological and astrophysical constraints Due to the high late-time abundance of the excited state, this thermal history has unique signatures compared to other pseudo-Dirac DM thermal histories. In this Section, we determine the qualitatively new behavior in astrophysical systems caused by the presence of the excited state and estimate the resulting constraints on the model as inferred from existing measurements. ### Bbn Sub-GeV DM may significantly impact the abundance of light elements in the Universe produced during BBN. If DM has a thermal abundance and is relativistic at a SM temperature of a few MeV, it contributes to the number of relativistic degrees of freedom in the early universe modifying \(N_{\rm eff}\) and changing the abundances of light elements like helium. Measurements of the helium fraction can thus be used to place a lower bound on the mass of thermal DM, \(m_{\chi}\gtrsim 10\,\mathrm{MeV}\)[58; 59]. In this work, we conservatively consider DM above this scale to ensure that the parameter space is not ruled out. However, we note that this constraint could be slightly weaker in parts of our parameter space due to the early kinetic decoupling of DM from the SM. 
In particular, if the DM decouples early enough, then some of the SM degrees of freedom in the bath at that time, given by \(g_{\star,0}\) in total, heat the SM bath at later times and raise the SM temperature relative to \(T_{\rm DM}\) by a factor of a few in order to conserve entropy, \(T_{\rm SM}/T_{\rm DM}\sim(g_{\star,0}/g_{\star,\rm BBN})^{1/3}\). This means that the DM contribution to the energy density, and hence \(N_{\rm eff}\), could be diluted by a factor of \((g_{\star,0}/g_{\star,\rm BBN})^{4/3}\). Furthermore, once the DM becomes non-relativistic at a temperature \(T_{\rm DM}\sim m_{\chi}\), the DM temperature will drop even further relative to the SM temperature, \(T_{\rm DM}\sim T_{\rm SM}^{2}(g_{\star,0}/g_{\star,\rm BBN})^{-2/3}/m_{\chi}\) for \(T_{\rm SM}<m_{\chi}\). Therefore, by the time of BBN, MeV-scale DM may have had its energy density diluted and may be non-relativistic, thus not contributing substantially to \(N_{\rm eff}\). In concert, these effects could increase the range of allowed masses for this model; we leave a more detailed exploration to future work. In addition to the modification of \(N_{\rm eff}\), DM annihilating into SM states at a temperature of a few keV can inject energy into the SM plasma causing the photo-disassociation of light nuclei like deuterium. The corresponding bound on the annihilation cross-section not only depends on the DM mass and the relevant final states but also depends on the temperature of the kinetic decoupling, \(T_{\rm kd}\), in the case when the thermally averaged cross-section has a temperature dependence [58]. In our model, the cross-sections show this dependence around the keV-scale temperatures relevant for photo-disassociation for \(\epsilon_{R}\ll 1\). Accurately evaluating this bound for such small \(\epsilon_{R}\)'s is therefore non-trivial, and we leave a detailed study for future work. For the purposes of this work, we note that for a velocity-independent annihilation cross-section, the bound from photo-disassociation corresponds to \(\sigma v\lesssim\text{few}\times 10^{-25}\text{cm}^{3}/\text{s}\), and the constraint for a velocity-dependent cross-section gets weaker with increasing \(T_{\rm kd}\) [58]. For resonant annihilations, even the small couplings considered in this work can result in significantly larger cross sections at the late times relevant to BBN. Since the resonant annihilation cross section peaks near \(T\sim\epsilon_{R}m_{\chi}\), in this work we consider \(\epsilon_{R}\geq 0.001\), which, along with the conservative lower bound on \(m_{\chi}\) discussed above, ensures that the DM annihilation cross section is always below the upper bound during temperatures relevant for photo-disassociation. Figure 4: The relative number density of the excited and ground states, \(f^{*}=n_{\chi_{2}}/n_{\chi_{1}}\), as a function of the DM mass. The different lines correspond to \(g_{\chi}=0.1\) (solid) and \(g_{\chi}=0.01\) (dotted). The two colours correspond to two different values of \(\epsilon_{R}\) and the shaded areas correspond to varying \(\delta\) between \(1-100\) eV from top to bottom. Due to the early decoupling of the processes that would deplete the excited state, the excited state remains abundant at late times in most of the parameter space, in contrast to most other pseudo-Dirac DM thermal histories. ### Cosmic Microwave Background DM annihilation into SM final states during recombination can inject energy into the SM plasma.
This energy injection can alter the ionization history and affect CMB anisotropies due to the scattering of CMB photons on additional free electrons that would have otherwise recombined into neutral atoms in the standard cosmology. This injection is usually described in terms of an effective parameter [28], \[p_{\rm ann}=2f^{*}f_{\rm eff}\frac{\langle\sigma v\rangle_{\rm CMB }}{m_{\chi}}<3.2\times 10^{-28}\text{cm}^{3}\text{s}^{-1}\text{GeV}^{-1}\,, \tag{18}\] where \(f^{*}\) is the fraction of DM in the excited state as described in the previous Section, \(f_{\rm eff}\) is the efficiency fraction of injected energy that gets deposited in the plasma (where we adopt the values from Ref. [45] using the spectra from Refs. [60, 61] and which depend on the DM mass and SM final state), and \(\langle\sigma v\rangle_{\rm CMB}\) is the total annihilation cross section into SM states at recombination. The bound on \(p_{\rm ann}\) includes annihilation channels into all visible SM final states (i.e. excluding neutrino final states). In our case, the final states are dominantly electrons (and muons for DM masses above the muon mass), and we include the relevant branching fractions as appropriate when computing the total annihilation cross section. In the case of multiple possible final states, we weight the cross sections by the relevant deposited energy efficiency fraction, \(f_{\rm eff}\). Since the DM particles are highly non-relativistic during and after recombination, to obtain \(\langle\sigma v\rangle_{\rm CMB}\) we evaluate Eq. (6) in the limit \(\epsilon\to 0\), \[\langle\sigma v\rangle_{\rm CMB}=\frac{\alpha g_{\chi}^{2}\kappa^ {2}}{s_{0}^{2}}\frac{(s_{0}+2m_{e}^{2})\sqrt{(s_{0}-4m_{e}^{2})\left(s_{0}- \delta^{2}\right)}}{B_{e}\left(\sqrt{s_{0}}\right)((1+\epsilon_{R})\Gamma_{A^ {\prime}}^{2}+s_{0}\epsilon_{R}^{2})}. \tag{19}\] To compute the CMB limit on our parameter space, we require that the total \(f_{\rm eff}\)-weighted cross section in Eq. (19) not exceed the one implied by the bound on \(p_{\rm ann}\). The excluded parameters are depicted in Fig. 2 as dotted lines which, despite being excluded by the CMB, would yield the observed amount of DM, as described in the previous Section. We find that - in contrast to the case of non-resonant pseudo-Dirac DM - very small mass splittings and large excited state fractions remain unconstrained by the CMB, particularly in the part of the plane corresponding to small couplings. Additionally, for the sub-keV values of \(\delta\) that we consider here, the bound from the CMB is independent of \(\delta\) because we do not get a substantial enough suppression in the excited state abundance as a result of the early kinetic decoupling (i.e. \(f^{*}\sim 1\) near the boundary of the CMB exclusion). ### Self-interacting DM Models of inelastic DM can also have unique SIDM behaviour in DM halos, affecting density profiles and sub-halo mass functions in a way that is distinct from purely elastic SIDM [62, 63]. In this model, elastic scattering between the ground and excited states can occur at tree level, which is especially relevant in the thermal histories we consider where it is possible to have a high abundance of the excited state at late times. As we are in the Born regime, \(\alpha_{\chi}m_{\chi}/m_{A^{\prime}}\approx\alpha_{\chi}/2\ll 1\), we can use results previously derived in the literature for the relevant SIDM cross sections. The \(s\)-channel resonance is not typically relevant in astrophysical environments (in contrast to e.g. Ref. 
[64]), since the lowest value of \(\epsilon_{R}\) we consider is \(\sim 10^{-3}\) due to the strong BBN constraints described in Sec. III.1, which corresponds to a minimum resonant velocity of \(\sim 10^{4}\) km/s, in contrast to the \(\sim 10^{2}\) km/s velocities that are typical in galaxies like the Milky Way. For tree-level elastic scattering between the ground and excited states, we employ the Born cross section of Ref. [65], assuming that the \(t\)-channel dominates. We find that the elastic scattering cross section is generally much less than the \(\sigma/m_{\chi}\sim 1\) cm\({}^{2}\)/g characteristic of SIDM constraints from merging galaxy clusters [66]. The exception to this lies at the edge of perturbativity \(g_{\chi}\sim 1\) for the lightest DM masses we consider, \(m_{\chi}\sim 10\) MeV. However, this part of the parameter space is excluded by the CMB, as evident from Fig. 2. Even if the CMB constraint could be circumvented, in this part of the parameter space the abundance of excited state particles (which would be required for tree-level elastic scattering) is generally more suppressed as shown in Fig. 4, which would also weaken the bounds from merging clusters. Elastic scattering between particles in the same mass state (i.e. two ground state particles) only occurs at the 1-loop level in this model; we confirm that the cross sections for these processes fall well below the \(\sim 1\) cm\({}^{2}\)/g level (due to the \(\alpha_{\chi}^{4}\) scaling) using the expressions in the Supplemental Materials of Ref. [57]. We therefore conclude that merging cluster constraints on elastic SIDM do not have any impact in the viable parameter space for this model. Upscattering from the ground state to the excited state is kinematically forbidden below the velocity threshold of \(v\sim\sqrt{\delta/m_{\chi}}\), which can take on a wide range of values in the parameter space we consider, some of which are relevant to astrophysical systems. Above this velocity threshold, the cross section for upscattering in the Born regime saturates to the value of the elastic cross section between the ground and excited states [67]; while this cross section is generally below \(\sim 1\) cm\({}^{2}\)/g in our parameter space, the endothermic nature of the scattering can lead to substantial qualitative differences in the DM distribution compared to the elastic scattering case [63], and therefore strong conclusions cannot be drawn about observational prospects without dedicated simulation work. Similarly, downscattering from the excited state to the ground state takes on the same value as upscattering once the velocities are above threshold; below threshold, there is an enhancement to the downscattering cross section, rendering it potentially quite large at low velocities (with \(\sigma/m_{\chi}\gg 1\) cm\({}^{2}\)/g). The effects of inelastic scattering has only begun to be explored in simulation, with no direct analog of this situation having been analyzed. In this thermal history, up to 50% of the DM begins in the excited state similar to Ref. [62], which showed that even a few percent of the DM downscattering can have a significant effect on the structure of DM halos. On the other hand, the velocity thresholds in our parameter space can be easily accessible in astrophysical systems, potentially leading to some upscattering, which has a highly nontrivial interplay with the effects of downscattering as studied in Ref. [63]. 
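As a rough numerical orientation for these thresholds, the following sketch evaluates \(v\sim\sqrt{\delta/m_{\chi}}\) for a few illustrative combinations of \(\delta\) and \(m_{\chi}\) drawn from the ranges discussed above; the specific values are for illustration only and are not a grid used in our analysis.

```python
import numpy as np

C_KM_S = 2.998e5  # speed of light in km/s

def upscattering_threshold_kms(delta_eV, m_chi_MeV):
    """Approximate ground-to-excited upscattering threshold, v ~ sqrt(delta/m_chi), in km/s."""
    delta_MeV = delta_eV * 1.0e-6
    return np.sqrt(delta_MeV / m_chi_MeV) * C_KM_S

for delta_eV in (1.0, 10.0, 100.0):          # sub-keV mass splittings considered in the text
    for m_chi_MeV in (10.0, 100.0, 1000.0):  # sub-GeV DM masses
        v_th = upscattering_threshold_kms(delta_eV, m_chi_MeV)
        print(f"delta = {delta_eV:5.0f} eV, m_chi = {m_chi_MeV:6.0f} MeV -> v_th ~ {v_th:7.1f} km/s")
```

The resulting thresholds range from \(\mathcal{O}(10)\) to \(\mathcal{O}(10^{3})\) km/s, i.e. from well below typical galactic velocities up to cluster-scale velocities, which is why both the sub-threshold (downscattering-enhanced) and above-threshold (saturated) regimes can be realized in astrophysical systems.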
Clearly, further exploration of the parameter space in simulation will be fruitful for connecting late-universe halo observables to DM parameters motivated by self-consistent DM thermal histories.

### Indirect Detection

DM annihilation in astrophysical environments produces cosmic rays and high-energy photons, which can be employed to search indirectly for DM. A null detection of annihilation byproducts can thus be used to constrain the annihilation cross section, \(\langle\sigma v\rangle\). The strength of the expected signal additionally depends on the integrated DM density (the \(J\) factor). As discussed in the previous Subsection, SIDM effects, particularly those from inelastic interactions, may alter the expected density profiles of DM halos. As this has yet to be quantified via simulation, we assume in this discussion that resonant inelastic DM has the same \(J\) factor as what is typically considered in the literature. Additionally, as shown in Sec. III, the CMB constraints on DM annihilation push us toward \(g_{\chi}\lesssim 0.1\), which corresponds to the ground and excited states being equally populated, or \(f^{*}\sim 1\), for the \(\delta\) ranges of interest (see also Fig. 4). Therefore, as opposed to standard thermal histories of inelastic DM, we do not necessarily have a late-time suppression of annihilation from the thermal depletion of the excited state. Since the DM velocity in present-day halos is too small (by about two orders of magnitude) for the dark photon mediator to be produced on shell, the DM annihilation rate at late times can be calculated in the heavy mediator (non-resonant) limit. In Fig. 5, we show the constraints on the _total_ present-day DM annihilation cross-section to SM states, \(\langle\sigma v\rangle_{0}\), from measurements of the gamma-ray flux from the galactic center using Fermi, INTEGRAL, COMPTEL, and EGRET [61]. We also show constraints from Voyager [68] and XMM-Newton [69], which bound the DM annihilation cross-section to electrons and therefore have been cut off at the muon threshold, \(m_{\mu}\sim 100\,\mathrm{MeV}\), beyond which annihilation to other final states becomes relevant. For reference, we show the resonant pseudo-Dirac DM parameter region allowed by the CMB in grey. Even though current constraints do not yet probe the target parameter space that is allowed by the CMB, future telescopes such as GECCO [70; 71], MAST [72], GRAMS [73; 74] and AMEGO [75; 76] will be able to explore the parameter space [91].

Figure 5: Bounds on the total DM annihilation cross-section as a function of DM mass from various \(\gamma\)-ray and X-ray experiments [61; 68; 69] (solid lines) as well as projections [70; 71; 72; 73; 74; 75; 76] (dashed lines). The resonant iDM parameter space allowed by the CMB is shown in grey.

## IV Terrestrial Searches

### Accelerator searches

Accelerator experiments provide a complementary probe to search for DM and any associated mediators. Dark photons with masses in the MeV-GeV range can be produced at collider and beam dump experiments, resulting in either missing transverse energy or displaced vertex signatures. The production cross-section for the dark photons depends on their coupling to the SM, \(\kappa\), whereas the decay signature depends on their lifetime, \(\tau_{A^{\prime}}=1/\Gamma_{A^{\prime}}\), which is in general a function of \(\kappa\), \(g_{\chi}\), \(\epsilon_{R}\) and \(\delta\).
Since \(\delta\ll m_{A^{\prime}}\), \(m_{\chi}\), one can assume to a very good approximation that \(\Gamma_{A^{\prime}}\) is independent of \(\delta\) (i.e. we can take \(\delta\to 0\)). In this limit, the bounds presented in Ref. [45] which were obtained using a modified version of Darkcast[90] can be directly applied to our parameter space. In particular, as was pointed out in Ref. [45], for a given \(m_{A^{\prime}}\), the constraints on \(\kappa\) depend only on a combination of \(g_{\chi}\) and \(\epsilon_{R}\) through the dark photon's reduced invisible decay width, \[\gamma_{\rm inv}\equiv\frac{\Gamma_{A^{\prime}}}{m_{A^{\prime}}}= \frac{g_{\chi}^{2}}{12}\left(1-\frac{1}{1+\epsilon_{R}}\right)^{1/2}\left(1+ \frac{1}{2(1+\epsilon_{R})}\right)\,. \tag{20}\] In Fig. 6, we display the bounds on dark photons in the \(\kappa-m_{A^{\prime}}\) plane for two fixed values of the dark photon's reduced invisible width, \(\gamma_{\rm inv}=10^{-5}\) (left) and \(\gamma_{\rm inv}=10^{-13}\) (right). The shaded regions correspond to different dark photon production channels. For \(\gamma_{\rm inv}=10^{-5}\), the invisible decay width is larger than the visible one for \(\kappa\lesssim 0.1\) and therefore the beam dump experiments that search for \(A^{\prime}\) decays to leptons lose sensitivity. The solid black lines in both panels correspond to the \(\kappa\) values that reproduce the observed DM abundance following the production mechanism outlined in Sec. II, for \(\epsilon_{R}\in[0.001,\,0.1]\). The dashed parts of the black lines correspond to the points excluded by the CMB. We note that for \(\Gamma_{\rm DM}>\Gamma_{\rm SM}\), (or equivalently \(g_{\chi}\gg\kappa\)), the CMB constrains the part of the parameter space which cannot be probed by accelerator experiments making the accelerator and cosmological bounds highly complementary. Additionally, we find that for a given \(\gamma_{\rm inv}\) and \(\epsilon_{R}\), or equivalently, a fixed \(g_{\chi}\), there exists a maximum \(m_{A^{\prime}}\) beyond which DM is always over-produced. From Eq. (14), we see that the DM relic density is proportional to \(m_{A^{\prime}}/\min(g_{\chi}^{2},\kappa^{2})\) meaning that once the couplings are fixed, a larger \(m_{A^{\prime}}\) results in a larger DM abundance. This causes the relic lines to be vertical in the right panel of Fig. 6 (in the left panel, the maximum \(m_{A^{\prime}}\) lies outside the plotted region). Finally, we note that for a given \(\gamma_{\rm inv}\), \(\kappa\)-values bounded by the two curves corresponding to \(\epsilon_{R}=0.001\) and \(\epsilon_{R}=0.1\) also reproduce the observed DM abundance for different values of \(\epsilon_{R}\) resulting in a much broader thermal target. For reference, we show the usual thermal target assuming non-resonant, \(s\)-wave thermal freeze-out (dotted black line) in the left panel of Fig. 6. Future accelerator experiments are poised to more fully explore the parameter space of this model, as shown in Fig. 7 Figure 6: Some of the strongest constraints in the \(\kappa-m_{A^{\prime}}\) plane from accelerator experiments, including BaBar [77; 78; 79], NA64 [80; 81], LHCb [82; 83], CMS [84], NuCal [85; 86], E141 [87], NA48 [88], and E137 [89], as computed with DarkCast[90]. The different colours correspond to different dark photon production channels. We show two representative cases where the dark photon decays primarily invisibly (left) and visibly (right). 
The solid black lines represent the target for \(\epsilon_{R}=0.001\) and \(\epsilon_{R}=0.1\), with the dashed portions corresponding to exclusions from the CMB. Also shown for comparison is the thermal prediction assuming thermal freeze-out with \(m_{A^{\prime}}=3m_{\chi}\) (dotted line) [48]. ### Direct Detection Most thermal histories for pseudo-Dirac DM result in a relic excited state fraction suppressed by several orders of magnitude such that most of the DM in the halo of the Milky Way (MW) is in the ground state. In this case, because of the off-diagonal coupling, the only tree-level scattering process on a SM target would be upscattering to the excited state. In contrast, for the thermal history considered in this work, around half of the DM is in the excited state at late times for most parts of the parameter space. In particular, the largest values of \(g_{\chi}\) that suppress the abundance of \(\chi_{2}\) in this thermal history are already excluded by the CMB (see Figs. 2 and 4). Therefore, the primary signature of pseudo-Dirac DM in direct detection experiments is downscattering of the excited state \(\chi_{2}e\to\chi_{1}e\), which deposits an energy \(\sim\delta\). The deposited energy can ionize electrons in the target which can be detected either directly through charge-coupled devices [102], or by detecting secondary scintillation photons using photomultiplier tubes [103]. The absence of a kinematic barrier for downscattering implies an enhancement in the event rate for sub-GeV DM. In the following discussion, we consider DM-electron scattering in semiconductor and Xenon targets, and we place constraints on the fiducial DM-electron scattering cross-section [20], \[\overline{\sigma}_{e}=\frac{4\mu_{\chi,e}^{2}\alpha\kappa^{2}g_{\chi}^{2}}{(m _{A^{\prime}}^{2}+\alpha^{2}m_{e}^{2})^{2}}\,, \tag{21}\] where \(\mu_{\chi,e}\) is the reduced mass of the DM-electron system. The recoil energy of the electron is \(E_{Re}=\Delta E_{e}-\Delta E_{B}\) where \(\Delta E_{B}\) is the electron binding energy and the energy deposited by sub-GeV DM downscattering is \[\Delta E_{e}=\mathbf{q}\cdot\mathbf{v}-\frac{\mathbf{q}^{2}}{2m_{\chi}}+\delta \tag{22}\] for momentum transfer \(\mathbf{q}\) and relative velocity \(\mathbf{v}\). For downscattering, the minimum velocity required to transfer a momentum with magnitude \(q\) and an energy \(\Delta E_{e}\) to the electron is therefore given by \[v_{\text{min}}\left(q,\Delta E_{e}\right)\equiv\left|\frac{\Delta E_{e}- \delta}{q}+\frac{q}{2m_{\chi}}\right|, \tag{23}\] which corresponds to the differential event rate for atomic targets [20; 104], \[\frac{\mathrm{d}R}{\mathrm{d}\Delta E_{e}} =\frac{\overline{\sigma}_{e}}{8\mu_{\chi e}^{2}}\sum_{n,l}( \Delta E_{e}-E_{nl})^{-1}\frac{\rho_{\chi_{2}}}{m_{\chi_{2}}} \tag{24}\] \[\times\int q\mathrm{d}q|f_{nl\to\Delta E_{e}-E_{nl}}(q)|^{2} \eta(v_{\text{min}}(q,\Delta E_{e}))\,,\] where \(\Delta E_{n,l}\) is the binding energy of the electron in the \(nl\) shell, \(\rho_{\chi_{2}}=f^{*}\rho_{\text{DM}}\approx 0.5\times 0.4\) GeV cm\({}^{-3}\) is the density of the excited state [51], and \(f_{nl\to\Delta E_{e}-E_{nl}}(q)\) is the electron ionization form factor which we evaluated numerically following the prescription of Ref. [105] using DarkART[106]. The function \(\eta(v_{\text{min}})\) in Eq. (24) can be related to the DM-velocity distribution \(f_{\chi}(v)\) by \[\eta(v_{min})\equiv\int\frac{d^{3}v}{v}f_{\chi}(v)\Theta\left(v-v_{min}\right). 
\tag{25}\] In this work, we assume the Standard Halo Model for \(f_{\chi}(v)\) [107].

Figure 7: Same as Fig. 6 except with some of the strongest projections for future experiments including Belle II [92], FASER [93], HPS [94], LDMX [95], LHCb [96; 97], SeaQuest [98], SHiP [99; 100], and Yemilab [101].

The magnitude of the momentum transferred to the electron for a given DM mass and energy deposition \(\Delta E_{e}\) is bounded by \[q_{\rm min} =\text{sign}(\Delta E_{e}-\delta)m_{\chi}v_{\rm max}\left(1-\sqrt{1-\frac{\Delta E_{e}-\delta}{\frac{1}{2}m_{\chi}v_{\rm max}^{2}}}\right)\] \[q_{\rm max} =m_{\chi}v_{\rm max}\left(1+\sqrt{1-\frac{\Delta E_{e}-\delta}{\frac{1}{2}m_{\chi}v_{\rm max}^{2}}}\right) \tag{26}\] where \(v_{\rm max}\) is the largest DM velocity relative to the detector frame as determined by the escape velocity of the halo at the Earth's position and the velocity of Earth in the Galactic rest frame [107]. A similar expression to Eq. (24) can be obtained for semiconductor targets, which depends instead on the crystal form factor as described in Ref. [21]. The relevant energy threshold for such a target is the band gap between the valence and conduction bands. Note that in Eq. (24), we have set the DM form factor to unity, \(F_{\rm DM}=1\), which corresponds to the heavy mediator limit, \(m_{A^{\prime}}\gg\alpha m_{e}\), as determined by the typical Fermi momentum (for semiconductor targets) or inverse Bohr radius (for atomic targets). In contrast to the case of elastic scattering, the minimum momentum transfer, \(q_{\rm min}\), for a given energy transfer \(\Delta E_{e}\) can be zero if the mass splitting is above the energy threshold for ionization. This results in much larger event rates since the inelastic scattering kinematics have substantial overlap with peaks in the electron ionisation form factor [108]. A similar effect also occurs for semiconductor targets where the peaks in the crystal form factor become more kinematically accessible for \(\delta>0\) [21]. Inelastic scattering also results in a characteristic spectrum of events peaked around \(\Delta E_{e}\sim\delta\). As a result, bounds on \(\overline{\sigma}_{e}\) derived under the assumption of elastic scattering cannot be directly applied to the parameter space of this model. In order to re-cast the bounds, we use the prescription outlined in Ref. [109] for calculating event rates in the XENON10 [109, 110], XENON1T [103] and SENSEI [111] experiments. For XENON10 and XENON1T, we calculate the electron ionization form factors using DarkART [106]. The crystal form factors for SENSEI are obtained from QEDark [21]. We use the publicly available data for the three experiments [109, 110, 111] to derive 90% confidence level exclusions on \(\overline{\sigma}_{e}\), shown in Fig. 8. For XENON10 and XENON1T, we show the exclusion for \(\delta=10\,\text{eV}\) and \(\delta=100\,\text{eV}\), while for SENSEI, we show the exclusion for \(\delta=10\,\text{eV}\). We verify that our analysis reproduces the elastic scattering (\(\delta=0\)) bounds from these experiments, shown as shaded regions in Fig. 8. We note that these bounds are conservative since we do not model any backgrounds or place any cuts on the observed events. In other words, we treat any event as a potential DM signal, resulting in the weakest possible limits on the cross section. As shown in Fig. 8, the allowed parameter space of this model (represented by the grey band) is an attractive target for upcoming experiments probing light DM.
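To make the downscattering kinematics concrete, the short sketch below evaluates \(v_{\rm min}\) from Eq. (23) and the allowed momentum-transfer range from Eq. (26) for one purely illustrative choice of \(m_{\chi}\), \(\delta\) and \(\Delta E_{e}\); the halo velocity parameters are nominal Standard-Halo-Model numbers and are assumptions of the sketch rather than the exact inputs of our limit-setting analysis.

```python
import numpy as np

# Illustrative (assumed) inputs -- not the values used in the actual recast.
m_chi = 100e6    # DM mass in eV (100 MeV)
delta = 10.0     # mass splitting in eV
dE    = 15.0     # energy deposited on the electron in eV
v_max = (544.0 + 232.0) / 2.998e5   # nominal escape + Earth velocity, in units of c

def v_min(q, dE, delta, m_chi):
    """Minimum DM speed (in units of c) to deposit energy dE at momentum transfer q, Eq. (23)."""
    return abs((dE - delta) / q + q / (2.0 * m_chi))

# Kinematically allowed momentum-transfer range for this energy deposition, Eq. (26).
disc  = np.sqrt(1.0 - (dE - delta) / (0.5 * m_chi * v_max**2))
q_min = np.sign(dE - delta) * m_chi * v_max * (1.0 - disc)
q_max = m_chi * v_max * (1.0 + disc)

print(f"q_min ~ {q_min:.0f} eV, q_max ~ {q_max/1e3:.0f} keV")
for q in np.geomspace(max(abs(q_min), 1.0), q_max, 5):
    print(f"q = {q:10.1f} eV -> v_min = {v_min(q, dE, delta, m_chi):.2e} c")
```

For \(\Delta E_{e}\) close to \(\delta\), \(q_{\rm min}\) tends to zero, so the kinematics overlap strongly with the peaks of the ionization form factor; this is the origin of the enhanced inelastic event rate discussed above, and it underlies the recast exclusions shown in Fig. 8.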
In particular, we plot the sensitivity curves for Oscura [112] for \(\delta=0\,\text{eV}\) (light purple) and \(\delta=10\,\text{eV}\) (purple) assuming an exposure of 30 kg-year. Furthermore, for the \(\mathcal{O}\)(eV) values of \(\delta\) we consider here, up-scattering inside the Earth may result in an enhanced density of the excited state at the detector, which would also result in stronger bounds and forecasts than the ones presented here [108]. Finally, even stronger bounds and forecasts may be obtained by considering electron ionization caused by DM-nucleon scattering through the Migdal effect [113, 114, 115]. These bounds were evaluated for \(\delta\gtrsim\mathcal{O}(\text{keV})\) in Ref. [116]. We leave an analysis of these bounds for sub-keV values of \(\delta\) for future work.

Figure 8: Constraints on the DM-electron scattering cross-section, \(\sigma_{e}\), as a function of the DM mass from XENON10 (blue), XENON1T (green), and SENSEI (orange) for different values of \(\delta\), with \(f^{*}=1\). The shaded regions correspond to exclusions on elastic scattering, i.e., \(\delta=0\). The dashed purple lines are sensitivity curves for Oscura assuming \(\delta=0\) (light) and \(\delta=10\,\text{eV}\) (dark), respectively. In grey, we show the resonant iDM parameter space allowed by the CMB.

## V Discussion

Pseudo-Dirac DM is a minimal modification of standard vector portal DM that can result in qualitatively new cosmological, astrophysical, and experimental phenomenology. In this work, we examine the parameter space of this well-studied model in the regime of small (sub-keV) mass splittings and in the presence of resonant annihilations, \(\chi_{2}\chi_{1}\to A^{\prime}\to\text{SM}\,\text{SM}\), where \(m_{A^{\prime}}\approx 2m_{\chi}\). The resonantly enhanced annihilations imply that tiny couplings are able to reproduce the observed DM relic density. Additionally, because of these small couplings, DM can kinetically decouple from the SM before its final relic abundance is reached. Therefore, in order to accurately predict the relic density, one must solve a coupled system of Boltzmann equations for the densities and temperatures of the relevant species. We used the numerical Boltzmann solver DRAKE to properly account for this effect and found that the predicted DM abundance can have corrections as large as an order of magnitude, depending on the underlying parameters of the theory. The early kinetic decoupling ensures that the excited state is not thermally depleted. Despite the presence of the excited state, this model is consistent with strong bounds coming from BBN and the CMB owing to the strong velocity suppression in the annihilation cross-section at sub-keV temperatures (when the dark photon can no longer be produced on-shell). As a result, as shown in Fig. 2, most of the parameter space of this model is unconstrained, in contrast to sub-GeV Dirac DM that freezes out through an \(s\)-wave process. The presence of the long-lived excited state can have unique astrophysical signatures that are usually not relevant for pseudo-Dirac DM. For instance, tree-level elastic scattering could cause SIDM behavior, with the caveat that cross sections exceeding \(\sigma/m_{\chi}\sim 1\) cm\({}^{2}\)/g either have couplings that are excluded by the CMB or lie in a region of parameter space with \(f^{*}\sim 0.01\). More notably, exothermic downscattering can be relevant, especially in low-velocity environments where there is an enhancement to the cross section.
The extent to which exothermic scattering matters _in situ_ is difficult to quantify without further simulation of inelastic SIDM halos, however previous work has found that small mass splittings with \(\delta/m_{\chi}\sim 10^{-6}\) can have a dramatic impact on the properties of a DM halo and its subhalos [62; 63]. The relic excited state can also result in signals in indirect detection experiments. The ground and excited state present in the MW can annihilate into various SM states and therefore gamma-ray and X-ray telescopes can be used to look for signatures of this model. Since DM annihilation at late times mimics the off-resonance Dirac case, we use previous analyses of gamma-ray and \(X\)-ray data [68; 69; 61] to place bounds on the total DM annihilation cross-section. We find that despite the small couplings, the resonant inelastic parameter space is an attractive target for the next generation of telescopes such as GECCO, MAST, GRAMS and AMEGO (see Fig. 5). The excited state can downscatter in a range of direct detection experiments. Because of the absence of kinematic barrier for this process, the event rate is significantly enhanced compared to elastic scattering and also compared to pseudo-Dirac DM thermal histories with an exponentially suppressed abundance of the excited state. Using state-of-the-art numerical codes DarkART and QEDark to obtain the electron ionisation and crystal form factors respectively, we calculate the event rates for inelastic DM-electron scattering at Xenon- and Silicon-based experiments. We use the analysis procedure described in Ref. [109] to place bounds on the DM-electron scattering cross-section for different values of \(\delta\). We find that future semiconductor-based experiments such as Osura will begin to probe the resonant inelastic DM parameter space as shown in Fig. 8. Simultaneously, accelerator searches for dark photons can also explore relevant parameter space for this model. We use the publicly available code Darkcast, to depict bounds on the kinetic mixing, \(\kappa\) for fixed values of \(g_{\chi}\). The presence of a resonance implies a broadening of the thermal target as shown in Fig. 6. We find that accelerator bounds are complementary to those set by the CMB, as the accelerator experiments probe the parts of the parameter space that are harder to constrain using early-universe probes. Future experiments will constrain large parts of the parameter space of this model, as shown in Fig. 7. In summary, sub-GeV resonant pseudo-Dirac DM is an attractive thermal target for a variety of terrestrial DM experiments and astrophysical searches. The complete exploration of the phenomenology of this model leaves a lot of promising directions for future work. In particular, a more accurate treatment of the photo-disassociation bounds coming from BBN may further constrain this parameter space. Additionally, the early kinetic decoupling present in this model may loosen the \(m_{\chi}\geq 10\,\mathrm{MeV}\) lower bound on the mass of thermal DM coming from \(N_{\mathrm{eff}}\), despite the thermal equilibrium between the dark and visible sectors at early times. Furthermore, it will be necessary to perform additional cosmological simulations in order to understand the effect of up- and downscattering on halo properties which have immediate consequences for direct and indirect detection experiments. 
Finally, a more rigorous analysis of the direct detection bounds needs to be undertaken, including (1) the upscattering of the ground state as it passes through the Earth before downscattering in the detector and (2) scattering on electrons through the Migdal effect. Such an analysis may result in even stronger bounds and forecasts on resonant pseudo-Dirac DM than the ones presented here. ## Acknowledgements It is a pleasure to thank Daniel Baxter, Asher Berlin, Elias Bernreuther, Timon Emken, Felix Kahlhoefer, Tongyan Lin, and Tien-Tien Yu for useful conversations and correspondence pertaining to this work. We especially thank Neal Weiner for useful comments on the manuscript. The research of NB was undertaken thanks in part to funding from the Canada First Research Excellence Fund through the Arthur B. McDonald Canadian Astroparticle Physics Research Institute. SH was supported in part by a Trottier Space Institute Fellowship. NB, SH, and KS acknowledge support from a Natural Sciences and Engineering Research Council of Canada Subatomic Physics Discovery Grant and from the Canada Research Chairs program.
2305.01744
A Novel Mechanism for the Formation of Dislocation Cell Patterns in BCC Metal
In this study, we present the first simulation results of the formation of dislocation cell wall microstructures in tantalum subjected to shock loading. Dislocation patterns and cell wall formation are important to understanding the mechanical properties of the materials in which they spontaneously arise, and yet the processing and self-assembly mechanisms leading to their formation are poorly understood. By employing transmission electron microscopy and discrete dislocation dynamics, we propose a new mechanism involving coplanar dislocations and pseudo-dipole mixed dislocation arrays that is essential to the pattern formation process. Our large-scale 3D DDD simulations demonstrate the self-organization of dislocation networks into cell walls in deformed BCC metal (tantalum) persisting at the strain 20%. The simulation analysis captures several crucial aspects of how the dislocation cell pattern affects metal plasticity, as observed in experiments. Although experimental evidence is inconclusive regarding whether cell wall formation takes place at the shock front, after the shock, during release, or when the sample has had enough time to relax post-recovery, our simulations indicate cell wall formation occurs after the shock and before release. The extended Taylor hardening composite model effectively considers the non-uniform dislocation density when cell walls form and accurately describes the corresponding flow stress.
Jaehyun Cho, Luke L. Hsiung, Robert E. Rudd, Sylvie Aubry
2023-05-02T19:25:44Z
http://arxiv.org/abs/2305.01744v1
# A Novel Mechanism for the Formation of Dislocation Cell Patterns in BCC Metal

###### Abstract

In this study, we present the first simulation results of the formation of dislocation cell wall microstructures in tantalum subjected to shock loading. Dislocation patterns and cell wall formation are important to understanding the mechanical properties of the materials in which they spontaneously arise, and yet the processing and self-assembly mechanisms leading to their formation are poorly understood. By employing transmission electron microscopy and discrete dislocation dynamics, we propose a new mechanism involving coplanar dislocations and pseudo-dipole mixed dislocation arrays that is essential to the pattern formation process. Our large-scale 3D DDD simulations demonstrate the self-organization of dislocation networks into cell walls in deformed BCC metal (tantalum) persisting at strain \(\varepsilon=20\%\). The simulation analysis captures several crucial aspects of how the dislocation cell pattern affects metal plasticity, as observed in experiments. Although experimental evidence is inconclusive regarding whether cell wall formation takes place at the shock front, after the shock, during release, or when the sample has had enough time to relax post-recovery, our simulations indicate cell wall formation occurs after the shock and before release. The extended Taylor hardening composite model effectively considers the non-uniform dislocation density when cell walls form and accurately describes the corresponding flow stress.

keywords: High-strain-rate deformation, Dislocation cellular structures, Discrete dislocation dynamics, Transmission electron microscopy, Composite Taylor hardening

## 1 Introduction

Advanced metallic materials with superior mechanical properties enable the ever greater structural performance needed across the spectrum of modern technologies. Development of a material with increased fracture toughness provides reliability and design flexibility of devices in many fields such as transportation, advanced manufacturing, and national defense. Mechanical properties of metallic materials are strongly influenced by their crystallographic defect patterns called microstructure. Among various microstructure types, formation and evolution of dislocation structures have a pronounced impact on mechanical properties of ductile metals such as flow stress. A classical model describing the relationship between the flow stress and dislocation structures is the Taylor hardening equation [1]: \[\tau=\alpha\mu b\sqrt{\rho} \tag{1}\] with \(\tau\) the shear strength, \(\alpha\) a dimensionless material constant, \(\mu\) the shear modulus, \(b\) the magnitude of the Burgers vector, and \(\rho\) the average dislocation density in the metal. The basis for the power-law relationship in the Taylor hardening law is the principle that the flow stress is set by the spacing between obstacles to dislocation flow, which scales like \(1/\sqrt{\rho}\) assuming the dislocation density is uniform. However, the simple power-law dependence on \(\rho\) may not be enough to describe how the strength of a metal depends on its dislocation-based microstructure. Many experiments [2; 3; 4] have endeavored to refine or redefine Taylor's law. One of the main drawbacks of Taylor's law is its loose dependence, solely contained in the parameter \(\alpha\) and indirectly included in the dislocation density, on dislocation arrangements.
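As a quick numerical illustration of Equation (1), the sketch below evaluates the Taylor shear strength over a range of dislocation densities; the tantalum-like values of \(\mu\), \(b\) and \(\alpha\) are nominal inputs chosen for illustration and are not fitted quantities from this work.

```python
import numpy as np

# Nominal, roughly tantalum-like inputs (assumed for illustration only).
alpha = 1.3        # dimensionless Taylor constant (the value used for the fits later in this work)
mu    = 69.0e9     # shear modulus in Pa
b     = 2.86e-10   # Burgers vector magnitude in m

def taylor_shear_strength(rho):
    """Taylor hardening law, Eq. (1): tau = alpha * mu * b * sqrt(rho)."""
    return alpha * mu * b * np.sqrt(rho)

for rho in (1e12, 1e13, 1e14):   # average dislocation densities in 1/m^2
    print(f"rho = {rho:.0e} /m^2 -> tau = {taylor_shear_strength(rho)/1e6:6.1f} MPa")
```

The point of the exercise is simply that, in Eq. (1), a tenfold increase in the average density raises the predicted strength only by a factor of \(\sqrt{10}\approx 3.2\), independently of how the dislocations are arranged, which is exactly the arrangement-insensitivity noted above.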
Dislocation cell structures are observed in plastic deformation experiments with wide ranges of materials and loading conditions [5; 6; 7; 8; 9; 10; 11]. Cell patterns are characterized by three-dimensional regions with dense dislocation entanglements (cell walls) along with less dense internal regions (cell interiors). The presence of walls of tangled dislocations reduces the total elastic energy of the dislocation network by screening long-range elastic fields from dislocations [2; 12; 13; 14]. Important aspects of cell structure development in BCC metals found in experiments can be briefly summarized as follows: (1) dislocation cell formation is frequently observed with [001] crystal orientations [15; 16], (2) dislocation cell wall structures are typically observed in stage III and IV of work hardening [17; 18], (3) cell walls are composed of geometrically necessary dislocations (GNDs) which increases an average density of dislocations but does not alter a level of flow stress of the system [7; 8; 9; 10], and (4) continuous development of dislocation cell structures eventually transforms into sub-grain structures bounded by sharp dislocation cell walls during dynamic recovery [19; 20; 21; 22]. Many analytical models have been proposed to explain formation process of dislocation cell structures [23; 24; 25; 26; 27]. These models concur that uniform dislocation distributions become unstable for sufficiently high dislocation density and form modulated dislocation structures to reduce the elastic energy of the dislocations. By introducing point-like dislocations with simplified slip systems and dynamics in 2D, cell pattern formation processes were investigated [28; 29; 30]. However, the 2D approach is limited in its ability to describe 3D dislocation motion and interactions in complex network structures. Using 3D dislocation simulations, heterogeneous dislocation patterns in FCC metals in early-stage deformation (few percent of strain) have been studied [28; 29; 31], and it was found that short-range interaction, e.g., cross-slip, plays a key role. Although the cross-slip mechanism may be responsible for the early stage of dislocation patterning, it is questionable that dislocation core processes are still important for the formation of cell microstructures at larger strain (stage III and/or IV of work hardening) reported in experiments. In this paper, we explain how dislocation cell walls are formed as shown by Transmission Electron Microscopy (TEM) using discrete dislocation dynamics (DDD) simulation. First, we review TEM analysis of dislocation cellular structures observed in deformed tantalum, and then elucidate a new mechanism of cell wall formation in BCC metals using DDD. The new mechanism is demonstrated in two steps. The first crucial step is to obtain an initial dislocation network that is fully relaxed using the theory of dislocations in an infinite medium. The second step consists of loading the dislocation network consisting of a fully relaxed configuration based on what we learn from step one but now using periodic boundary conditions (PBCs). This procedure leads naturally to cell wall formation. Lastly, we validate our new mechanism by comparing it to the features of cell walls exhibited in experiments. 
## High-explosive-shock-recovery experiment and TEM examination of dislocation structures Ingot metallurgy (IM) tantalum samples (commercially pure tantalum) in the form of plate stock produced using a standard electron-beam melting process were obtained from Cabot Corporation, Boyertown, PA. Details of high-explosive-driven (HE-driven) hubcap shock-recovery experiments employed for the current investigation can be found in [32]. Briefly, one single explosively driven shock-recovery experiment was conducted by detonating explosive on hubcap alloy plates (3mm thick), which were shocked into polyurethane foams immersed in a water tank. The shock experiments were carried out under a peak pressure of \(\sim\)30GPa, as simulated using a CALE continuum hydrodynamic code [33]. As shown in Figure 1, a dislocation cell wall structure is observed in a grain oriented with the loading axis parallel to the [001] orientation with a mean dislocation density estimated to be around \(\bar{\rho}\sim 10^{14}/\text{m}^{2}\). The cell-wall structure tends to align parallel to the projected \(\langle 101\rangle\) directions (shown as red dotted line) that have an angle of 35.16\({}^{o}\) with respect to the adjacent \(\langle 111\rangle\) Burgers vectors. ## 2 Pseudo-dipole dislocation cell structures A careful dislocation analysis of Figure 1 done along the [001] view direction, \(\langle 101\rangle\) direction reveals that each pair of Burgers vectors belonging to the \(\{111\}\) family shares the same plane. More precisely, \(\mathbf{b}_{1}=\pm[111]\) and \(\mathbf{b}_{2}=\pm[\bar{1}1\bar{1}]\) share the slip plane \(\mathbf{n}_{1}=(\bar{1}01)\) plane, and \(\mathbf{b}_{3}=\pm[\bar{1}\bar{1}1]\) and \(\mathbf{b}_{4}=\pm[1\bar{1}\bar{1}]\) lie together on the same \(\mathbf{n}_{2}=(101)\) slip plane [34]. We define this pair of Burgers vectors, sharing one slip plane as _coplanar slip systems_. A novel mechanism stemming from the coupling reactions of dislocations lying on coplanar slip systems was found to explain the formation of low-energy-type dislocation substructure as shown in Figure 1, as described in an unpublished report [34]. The mechanism involves the following hypotheses: (I) In the early deformation stage, the dislocation network is mainly composed of screw dislocations. (II) Each pair of screw dislocations lying on coplanar slip systems - \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\) / \(\mathbf{b}_{3}\) and \(\mathbf{b}_{4}\) - alters their line directions to be parallel to the [101] direction on their slip plane - \(\mathbf{n}_{1}\) / \(\mathbf{n}_{2}\), respectively - through elastic interactions, forming mixed dislocation pairs. Figure 2 (a) shows an example of the coupling reaction between two screw dislocations on coplanar slip systems (dashed lines) resulting in their re-alignment from screw to mixed dislocations. More precisely, one dislocation of the pair turns counterclockwise and the other turns clockwise, both by an angle of 35.16\({}^{\circ}\). Once aligned, each pair includes two different Burgers vectors that attract each other forming a pair. The formed configuration is referred as a _pseudo-dipole_. Our definition of a pseudo-dipole is different than a dipole defined as a pair of aligned dislocations with equal and opposite signs of Burgers vectors. (III). Pseudo-dipole arrays of dislocations, composed of many pseudo-dipoles, form a relaxed microstructure - resulting in stress-screening dislocation arrays. 
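The geometry underlying hypotheses (I)-(III) can be checked directly. The short sketch below verifies that \(\mathbf{b}_{1}=[111]\) and \(\mathbf{b}_{2}=[\bar{1}1\bar{1}]\) both lie in the \((\bar{1}01)\) plane together with the common \([101]\) line direction, and computes the rotation angle between each Burgers vector and that line direction; it is a standalone geometric check, not part of the DDD simulations.

```python
import numpy as np

def acute_angle_deg(u, v):
    """Acute angle between two (undirected) lattice directions, in degrees."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

b1, b2 = np.array([1, 1, 1]), np.array([-1, 1, -1])   # coplanar <111> Burgers vectors
n1     = np.array([-1, 0, 1])                          # shared (-1 0 1) slip-plane normal
line   = np.array([1, 0, 1])                           # common <101> line direction

# Coplanarity: all three directions are orthogonal to the plane normal.
print("b1.n1 =", int(np.dot(b1, n1)), " b2.n1 =", int(np.dot(b2, n1)), " line.n1 =", int(np.dot(line, n1)))

# Rotation of each screw dislocation needed to align with the common <101> direction (~35 deg).
print(f"angle(b1, [101]) = {acute_angle_deg(b1, line):.2f} deg")
print(f"angle(b2, [101]) = {acute_angle_deg(b2, line):.2f} deg")
```

Both Burgers vectors thus share the same \(\{101\}\)-type plane and sit at the same \(\approx 35^{\circ}\) angle to the \([101]\) line direction, so the mixed pseudo-dipole alignment described in hypothesis (II) is geometrically available to both members of the coplanar pair.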
Figure 2 (b) shows an example of the relaxed pseudo-dipole array structure composed of \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\). The pseudo-dipole array is a low energy dislocation structure different from the Taylor lattice [35], which is built from pairs of aligned dipoles sharing a single Burgers vector. (IV) Under external loading, these pseudo-dipoles travel and form locked configurations - stable cell walls - determined by the long-range stress fields and short-range dislocation core interactions, e.g., junction formation. The hypotheses describing the newly introduced mechanism are investigated and confirmed in the next two sections using DDD.

## 3 Formation of mixed dislocation pseudo-dipoles

The formation of pseudo-dipole arrays as described in the previous section is investigated in this section using the DDD computer code ParaDiS [36]. To allow for a relaxation of dislocation networks composed of an initial array of screw dislocations followed by rotations on coplanar slip systems without changing their lengths, a cylindrical-shaped simulation domain is used. Figure 3 (a) and (c) show the set-up of two simulations comprising a cylindrical system with radius 2,000\(\mathbf{b}\) and height 30,000\(\mathbf{b}\). In the first system (a), 400 screw dislocations are positioned in the domain with gaps as follows: first, 200 screw dislocations with Burgers vectors (\(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\)) are randomly populated in -15,000\(|\mathbf{b}|\leq z<-7,500|\mathbf{b}|\), where \(z=[001]\) is the longitudinal coordinate. The remaining 200 screw dislocations with the other slip systems (\(\mathbf{b}_{3}\) and \(\mathbf{b}_{4}\)) are arbitrarily inserted in the range \(0\leq z<7,500\mathbf{b}\). The constructed sets of co-planar slip systems are separated by a distance of 7,500\(\mathbf{b}\), which is sufficiently large for the dislocations in each coplanar slip system to closely interact with each other at the beginning of the simulation without interference from dislocations on non-coplanar slip systems. In the second system (c), 400 screw dislocations are randomly inserted over the whole cylinder domain. In the following, we refer to these two systems (a) and (c) as the "local-random system" and the "full-random system", respectively. The local- and full-random systems are relaxed, assuming they are composed of infinite dislocations located in an infinite medium and under zero external loading, to find their low energy dislocation structures. In order to allow dislocations to move freely in the simulation domains, dislocations are modeled as semi-infinite dislocations: radial surfaces of the cylinders are treated as infinite boundaries, and the vertical direction (\(z\)) is handled using PBC. The contributions of quantities such as force, stress and velocity field, calculated in dislocation dynamics, are modified by implicitly extending dislocations to infinity using virtual straight segments [37]. With this mixed boundary condition, each dislocation can be expected to travel as an infinite dislocation across the \(x\)-\(y\) plane and periodically through the \(z\) end caps. Figure 3 (b) and (d) show snapshots of the relaxed configurations for the two systems in top view along the [001] axis at a series of times. The blue and red lines represent glissile and junction dislocations, respectively. Both systems exhibit similar relaxation progress: the initial screw dislocations rotate toward their mixed character and naturally align along the \(\langle 101\rangle\) direction, forming pseudo-dipole arrays.
The rotated dislocations entangle and form compact structures of width \(\sim 600\)**b** (\(\sim\)180 nm) in the \(y\) direction. During relaxation, the total density of dislocations decays by about 50% to release the elastic energy stored in the dislocation network and saturates at a constant dislocation density level. The local-random system undergoes a faster relaxation toward equilibrium compared to the full-random system. This rapid relaxation to a low energy state is due to the rotation of the screws to mixed character occurring only between the sets of screw dislocations lying in coplanar slip systems. More precisely, in the local-random system, each of the two groups of screw dislocations, (**b\({}_{1}\)** and **b\({}_{2}\)**) and (**b\({}_{3}\)** and **b\({}_{4}\)**), located at the top and bottom part of the cylinder, aligns and forms a compact structure. In the case of the full-random system, initial screw dislocations rotate at a lower rate due to the adjacent interactions with non-coplanar slip systems, but eventually this system also ends up relaxing to a similar microstructure as the local-random system. These simulations prove that, independently of the initial configuration, the relaxation of screw dislocations lying in coplanar slip systems gives rise to bundles of dislocations that form an inhomogeneous dislocation microstructure, confirming hypotheses I, II, and III described in Section 2.

### Formation of dislocation cell walls during plastic deformation of Ta under shock loading

In Section 3, we validated our hypothesis that an initial screw-dislocation-dominated network relaxes to form pseudo-dipole arrays, where dislocations rearrange along the \(\langle 101\rangle\) direction and group during relaxation as suggested by experimental data [19; 34]. Now, we investigate whether an initial \(\langle 101\rangle\) mixed dislocation array can yield a dislocation cell structure during loading as observed in the TEM for tantalum shown in Figure 1. A simulation box of dimension 300nm\(\times\)300nm\(\times\)300nm (300nm\(\approx\)1000**b**) containing an initial array of mixed dislocations with arbitrary Burgers vectors aligned in the \(\langle 101\rangle\) direction, or pseudo-dipole array, is randomly populated up to an initial dislocation density of 0.75\(\times\)10\({}^{14}\)/m\({}^{2}\). In the following, we refer to this simulation set-up as the "pseudo-dipole array". A loading stress is applied to the system at a strain rate of \(\dot{\varepsilon}=10^{5}/\mathrm{s}\) along the [001] axis. A counterpart simulation, which we will refer to as the "screw array", is initiated with a screw dislocation array randomly populated at a density similar to that of the initial pseudo-dipole array. In contrast to the relaxation simulations executed in the previous section, the strain-rate controlled simulations are carried out using PBCs in all directions. Figure 4 shows snapshots of the two DDD simulations viewed along [001] at strains \(\varepsilon\)=0%, 3.5%, 10%, and 20%, respectively. Figure 4 (a) shows the simulation results starting from the screw array and (b) starting from the pseudo-dipole array. Dislocation lines are colored by the Burgers vector types: blue and red represent glissile and junction Burgers vectors, respectively. For a better visual understanding of the dislocation microstructures, the periodic unit domain is replicated once in each of the \(x\) and \(y\) directions.
The two simulations exhibit different dislocation structures after a few percent strain is reached. As the strain increases in each simulation, the screw array evolves to a network with a homogeneous distribution of dislocations 1 composed of glissile and junction dislocations at 87% and 13% percentages, respectively. However, the pseudo dipole array exhibits a microstructure transitioning from a homogeneous dislocation distributions observed between 0% and 3.5% strain to cell walls at strain levels between 10% and 20%. More precisely, at 10% of strain, the cell walls are loosely equiaxed in \(x\) and \(y\) axes loosely forming two-dimensional cellular structures. As strain continues further, such two-dimensional cell structures evolve and change their shapes and positions. As the strain approaches to 20%, the cell walls become one-dimensional structures distinctly extended in \(y\) axis and non-evolving. In the cell interiors, dislocation networks are composed of \(\sim 13\%\) junctions as the simulation of the screw array, while the cell walls contain \(\sim 27\%\) junction dislocations. Such a high density of junctions in the clusters helps to keep the quasi-static cell wall structures without changes of shapes and positions. This observation confirms hypotheses IV described in Section 2. In the supplemental material, we discuss in more detail about dislocation structures of the simulations. In the following discussion, we term these two systems where the cell walls are formed and the homogeneous distribution of dislocations is observed as "cell-wall system" and "no-cell system", respectively. Footnote 1: The \(\sim\)10% strain is \(\sim\)10%, which is the \(\sim\)10% strain in the supplemental material. Figure 5 (a) shows the stress-strain response of the two systems. Both systems saturate at about the same flow stress level of 410 MPa. Note that the cell-wall system's stress saturates at \(\sim\)5% strain, at which point the cell patterning has begun but well-defined cell walls form later at \(\sim\)10% strain (cf. Figure 4 (b)). Evolution of the corresponding dislocation densities is shown in Figure 5 (b). The cell-wall system saturates at \(\sim\)1.5 times higher dislocation density (0.7\(\times\)10\({}^{14}\)/m\({}^{2}\)) than the no-cell system (0.47\(\times\)10\({}^{14}\)/m\({}^{2}\)). This observation reveals that the presence of dislocation cell structure increases dislocation density levels without increasing the corresponding flow stresses. To understand the density increase in the presence of the cell walls, we analyze the obtained microstructures by distinguishing between regions with high and low local density using the kernel density estimation (KDE) method. We will call high- and low-density regions as "knotted" and "unknotted" in the following. Figure 5 (c) shows the time evolution of the dislocation velocities averaged over different sets of dislocations: all of them (solid lines), knotted (dotted lines), and unknotted (dashed lines) for each of the two systems. 
We find that the cell-wall system exhibits overall slower dislocations than the no-cell system, by \(\sim\)67%, indicating that the cell walls reduce the average dislocation velocities by impeding dislocation motion; the differences in average velocity and dislocation density yield the same constant strain rate according to Orowan's equation, where the strain rate is given by \[\dot{\varepsilon}=\frac{1}{M}b\overline{v}\rho \tag{2}\] where \(M\) is the orientation factor, \(b\) the magnitude of the Burgers vector, \(\rho\) the dislocation density, and \(\overline{v}\) the average dislocation velocity. Moreover, dislocations in knotted regions of the cell-wall system (red dotted line) move much more slowly than the other kinds of dislocations, meaning that the obtained cell wall structures are more stable than any microstructures where cell walls are not present. Interestingly, as the cell walls contain more and more dislocations starting at \(\sim 12.5\%\) strain, the velocity of dislocations in the unknotted regions (red dashed line) increases. Figure 5 (d) shows the probability distribution of dislocation velocities for the two systems at 20% strain. These results confirm that dislocations located in the cell walls of the cell-wall system move more slowly than dislocations in the no-cell system, and some do not move at all. Figure 5 (e) and (f) show pair correlation analyses of the dislocation networks of the two systems at 10% and 20% strain, respectively. The correlation analysis shows a signal starting at around 500**b**, with a characteristic distance of 300**b**, indicating the presence of the wall structure and how it becomes more pronounced after 10% of strain. According to Mughrabi [7], the \(\alpha\)-factor of the Taylor hardening equation (Equation 1) decreases when dislocation cells are present during steady-state deformation, which is also observed in our simulations: the \(\alpha\)-factor is reduced by 8.5% when cells are present compared to when they are not. Since cell walls store additional dislocation density that has little effect on the stress response of the material compared to dislocations homogeneously distributed over the domain [38, 39], the Taylor hardening model needs to be refined to account for a different strengthening effect at a given dislocation density. Mughrabi [3; 7; 14] observed dislocation cell formations under cyclic loading, and characterized the hardening effect using a composite model [14, 40]: \[\sigma = f_{\mbox{wall}}\sigma_{\mbox{wall}}\;+\;f_{\mbox{interior}}\sigma_{\mbox{interior}} \tag{3}\] \[= \alpha\mu\textbf{b}\left(f_{\mbox{wall}}\sqrt{\rho_{\mbox{wall}}}\;+\;f_{\mbox{interior}}\sqrt{\rho_{\mbox{interior}}}\right)\] where \(f_{\mbox{wall}}\) (\(\rho_{\mbox{wall}}\)) and \(f_{\mbox{interior}}\) (\(\rho_{\mbox{interior}}\)) are the volume fractions (dislocation densities) of the high and low density regions, respectively. Similarly to the previous analysis, the two high- and low-density regions can be defined as knotted and unknotted. Evolution of the knotted and unknotted populations of dislocations is shown in Figure 6 (a) and (b) as characterized by density and volume fractions, respectively.2 We find that the presence of a cell wall leads to a system in which the \(\sim\)20% of the volume with knotted dislocations contains almost the same amount of dislocations as the remaining \(\sim\)80% of the volume with unknotted dislocations. Without the presence of the cell wall (blue curves), dislocations are quasi-uniformly distributed over the domain, and the system consists mostly of unknotted dislocations.
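As a minimal illustration of the difference between Equation (1) and the composite model of Equation (3), the sketch below compares the two predictions for a partitioned microstructure; the wall/interior volume fractions and densities are illustrative stand-ins in the spirit of the knotted/unknotted partition, not the exact values extracted from our simulations.

```python
import numpy as np

# Nominal material inputs (assumed for illustration): Taylor constant, shear modulus (Pa), |b| (m).
alpha, mu, b = 1.3, 69.0e9, 2.86e-10

def taylor(rho_avg):
    """Plain Taylor law, Eq. (1), evaluated on the average density."""
    return alpha * mu * b * np.sqrt(rho_avg)

def composite(f_wall, rho_wall, rho_interior):
    """Composite model, Eq. (3): volume-fraction-weighted Taylor contributions."""
    return alpha * mu * b * (f_wall * np.sqrt(rho_wall) + (1.0 - f_wall) * np.sqrt(rho_interior))

# Illustrative partition: ~20% of the volume (walls) holds ~half of the dislocation content.
rho_avg  = 0.7e14                        # average dislocation density, 1/m^2
f_wall   = 0.2
rho_wall = 0.5 * rho_avg / f_wall        # half of the line length concentrated in the walls
rho_int  = 0.5 * rho_avg / (1.0 - f_wall)

print(f"plain Taylor   : {taylor(rho_avg)/1e6:6.1f} MPa")
print(f"composite model: {composite(f_wall, rho_wall, rho_int)/1e6:6.1f} MPa")
```

For the same average density, the composite estimate comes out a few percent below the plain Taylor value, reflecting the observation above that dislocations stored in the walls contribute comparatively little extra strength.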
Footnote 2: The threshold we employ to separate the high-density and low-density regions is \(1.3\times 10^{-9}/m^{2}\). In Figure 6 (c), simulation results of the cell-wall system and no-cell system are shown in blue and red areas, respectively, along with fits to the results using the Taylor model (Equation 1) and the composite model (Equation 3) as shown by dashed and solid lines, respectively, in the corresponding color. The constant \(\alpha\) is taken to be 1.3 for all predictions. We find the original Taylor model fits the quasi-uniformly distributed data correctly, but fails to describe the hardening when heterogeneous dislocation clusters are present (red dashed line). The composite model of flow stress is in good agreement with both systems and successfully predicts strain hardening effects. ## Discussion In summary, we verified the key mechanism leading to dislocation cell formation, and successfully proved its validity by showing spontaneous formation of cell walls using the DDD approach and analysis. The mechanism is based on the reaction between co-planar dislocations and results in alignment and clustering of \(\langle 101\rangle\) mixed dislocation arrays: pseudo-dipole dislocation arrays. When the domain is seeded with \(\langle 101\rangle\) mixed dislocations and is subjected to constant strain-rate loading, dislocation cell patterns spontaneously form and become more pronounced as plastic strain increases to \(\sim 20\%\) (stage III/IV of work hardening). The presence of cell walls lead to an additional accumulation of dislocations while maintaining the same level of saturated flow stress as in a quasi-uniform dislocation distribution. We found the composite model is in good agreement with the flow stress for systems both with and without cell walls. Several models in the literature [3; 27; 41] predicting dislocation cell patterns have focused on elastic interactions between dipoles of straight dislocations including Burgers vectors of equal and opposite signs. In this study, we found a new assembled configuration of low energy dislocation structures composed of dislocations with Burgers vectors that are not strictly opposite, but have a significant anti-parallel component: pseudo-dipole mixed dislocations, which turned out to be the key ingredient of dislocation cell structures. Determining the timing and process of cell wall formation in shock experiments is a challenging task. It is unclear whether the cell walls develop during the initial shock rise at extremely high strain rates (\(\sim 10^{7}\)/s), during the post-shock plastic flow, or after the release. Our simulations indicate that cell walls are only visible at intermediate strain rates ranging from \(10^{4}\)/s to \(10^{5}\)/s, similar to the post-shock stage where pressure fields persist during the tens of microseconds pulse duration in explosively driven flyer-plate experiments [42]. We did not observe any cellular dislocation structures forming at high strain rates between \(10^{6}\)-\(10^{7}\)/s, which is comparable to the 30 GPa range of the initial shock rise. Additionally, our simulation indicates that initial dislocations should relax in an energetically favorable pseudo-dipole configuration before reaching the post-shock stage. Although we did not examine the shock release in our DDD simulation, we anticipate that the resulting cell walls are stable enough to survive the shock release and be observed under ambient conditions, as demonstrated in TEM observations (Figure 1). 
The objective of this paper is to highlight the novel process of cell formation using dislocation dynamics simulations. There are several directions in which this research can be extended. First, larger simulations with a greater number of initial pseudo-dipoles may be explored to model more intricate cellular configurations. Second, material parameters such as the Poisson ratio and shear modulus, which depend on pressure and temperature, can be refined [43; 44] to simulate high-explosive-shock-recovery experiments more accurately and quantitatively. ## 4 Acknowledgements This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory (LLNL) under Contract DE-AC52-07NA27344 and was supported by the LLNL Laboratory-Directed Research and Development Program under Project No. 20-LW-027. Release number: LLNL-JRNL-829087.
2303.14979
Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval
Recent multilingual pre-trained models have shown better performance in various multilingual tasks. However, these models perform poorly on multilingual retrieval tasks due to lacking multilingual training data. In this paper, we propose to mine and generate self-supervised training data based on a large-scale unlabeled corpus. We carefully design a mining method which combines the sparse and dense models to mine the relevance of unlabeled queries and passages. And we introduce a query generator to generate more queries in target languages for unlabeled passages. Through extensive experiments on Mr. TYDI dataset and an industrial dataset from a commercial search engine, we demonstrate that our method performs better than baselines based on various pre-trained multilingual models. Our method even achieves on-par performance with the supervised method on the latter dataset.
Houxing Ren, Linjun Shou, Jian Pei, Ning Wu, Ming Gong, Daxin Jiang
2023-03-27T08:22:21Z
http://arxiv.org/abs/2303.14979v1
# Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval ###### Abstract Recent multilingual pre-trained models have shown better performance in various multilingual tasks. However, these models perform poorly on multilingual retrieval tasks due to lacking multilingual training data. In this paper, we propose to mine and generate self-supervised training data based on a large-scale unlabeled corpus. We carefully design a mining method which combines the sparse and dense models to mine the relevance of unlabeled queries and passages. And we introduce a query generator to generate more queries in target languages for unlabeled passages. Through extensive experiments on Mr. TYDI dataset and an industrial dataset from a commercial search engine, we demonstrate that our method performs better than baselines based on various pre-trained multilingual models. Our method even achieves on-par performance with the supervised method on the latter dataset. ## 1 Introduction Information Retrieval (IR) aims to retrieve relevant passages for a given query, which plays a critical role in many industry scenarios such as Open-Domain Question Answering (QA) Lee et al. (2019) and Web Search Nguyen et al. (2016). Traditionally, bag-of-words (BOW) retrieval systems such as TF-IDF and BM25 Robertson and Zaragoza (2009) were widely used, which mainly depend on keyword matching between queries and passages. With the development of large-scale pre-trained language models (PLMs) Vaswani et al. (2017); Devlin et al. (2019) such as BERT, dense retrieval methods Lee et al. (2019); Karpukhin et al. (2020) show quite effective performance. These methods usually employ a dual-encoder architecture to encode both queries and passages into dense embeddings and then perform approximate nearest neighbor searching Johnson et al. (2021). Recently, some works found that dense retrievers perform poorly in zero-shot multilingual settings Zhang et al. (2021) due to the distributional shift. To boost the performance of dense retrievers, some previous methods for cross-domain retrieval can be directly adopted for unsupervised multilingual dense retrieval. There are two important kinds: 1) generating training data in target languages. For example, Kulshreshtha _et al._ applied self-training to generate labeled data and further proposed back-training Kulshreshtha et al. (2021) to obtain more high-quality data. QGen Ma et al. (2021) proposed to use a query generator to generate in-domain queries. 2) leveraging sparse retrievers, which are more effective in the unsupervised setting, to enhance dense retrievers. For example, SPAR Chen et al. (2021) proposed to distill knowledge from BM25 into the dense model and LaPraDoR Xu et al. (2022) proposed to enhance the dense model by multiplying the similarity with the BM25 score. However, there are three major problems when directly adopting these methods to multilingual dense retrieval. First, zero-shot multilingual query generators suffer from grammatical adjustment and accidental translation problems Xue et al. (2021). As a result, zero-shot query generators provide only little help in bridging the gap among different languages. Second, hybrid dense and sparse models such as LaPraDoR and SPAR incur high latency in the inference stage1. Finally, dense retrieval differs from other tasks in that it needs not only positive query-passage pairs but also negative query-passage pairs Xiong et al. (2021). 
However, previous methods such as back-training focus on positive pairs and simply take the top passages of BM25 as negative passages. Footnote 1: The latency of a dense retriever on GPU is 32ms and the latency of BM25 on CPU is 36ms Gao et al. (2021). Although training data in target languages is very expensive, unlabeled queries and passages can be easily obtained from search engines such as _Google_ and _Bing_. In this paper, we propose a novel method that augments data in target languages by combining sparse and dense models, namely LeSTM, which stands for Lexicon-enhanced Self-supervised Training for Multilingual dense retrieval. First, as we mentioned above, sparse retrievers mainly depend on keyword matching between queries and passages and dense retrievers mainly depend on the language modeling ability of pre-trained models, which indicates the sparse and dense models perform retrieval in different aspects Chen et al. (2021). In addition, the sparse-dense hybrid retriever is significantly better than both sparse and dense models Zhang et al. (2021); Ma et al. (2021). Both observations demonstrate that sparse and dense models capture different characteristics and are complementary. Therefore, we craft a lexicon-enhanced retrieval module to mine positive and negative passages for each unlabeled query in target languages, which leverages the retrieval results of both sparse and dense models. We treat passages that both sparse and dense models regard as relevant as positive passages, and passages that one model regards as relevant but the other regards as irrelevant as negative passages. Furthermore, we employ a query generator to generate queries for passages in target languages due to the limited number of unlabeled queries. Query generation methods have been shown to significantly improve the performance of retrieval models in the monolingual setting Kulshreshtha et al. (2021); Ma et al. (2021). Considering the grammatical adjustment and accidental translation problems, we first use the mined positive query-passage pairs to train a query generator. Then, we use the trained model to generate more queries in target languages. Considering that there may exist more relevant passages for the generated queries, we use both sparse and dense retrievers to filter the generated samples. Finally, using only unlabeled data from target languages, LeSTM iteratively mines query-passage pairs with the lexicon-enhanced retriever and generator, trains a new, better retriever and query generator using these mined pairs, mines again for better query-passage pairs, and repeats. In summary, our contributions are as follows. * To the best of our knowledge, our approach is the first attempt to combine sparse and dense retrievers to mine high-quality positive and negative query-passage pairs for the multilingual dense retriever. * We propose to use a query generator to expand the unlabeled queries in target languages and an iterative training paradigm is introduced to further enhance the dense retriever and generator. * Extensive experiments on two datasets show the effectiveness of our proposed approach. In particular, experiments demonstrate that our method is model-agnostic and is effective with various pre-trained language models. ## 2 Related Work **Information Retrieval.** Information retrieval aims to search for relevant passages from a large corpus for a given query. Traditionally, researchers use bag-of-words (BOW) based methods such as TF-IDF and BM25 Robertson and Zaragoza (2009). 
These methods use a sparse vector to represent the text, so we call them sparse retrievers. Recently, some studies use neural networks to improve the sparse retriever. For example, DocTTTTTQuery Nogueira et al. (2019) proposes to expand the document to narrow the vocabulary gap and DeepCT Dai and Callan (2020) generates a weight for each term to emphasize the important terms. In contrast to sparse retrievers, dense retrievers usually encode both queries and passages into dense vectors whose dimensionality is much lower than that of sparse vectors. There are two kinds of dense retrieval methods: 1) pre-training with unlabeled data and 2) fine-tuning with labeled data. For pre-training, ORQA Lee et al. (2019) proposes the Inverse Cloze Task (ICT), which aims to predict the context of a given sentence, and REALM Guu et al. (2020) proposes to predict the masked text based on an end-to-end retriever-reader model. Furthermore, SEED Lu et al. (2021), Condenser Gao and Callan (2021), and coCondenser Gao and Callan (2021) propose pre-training tasks to encode more information into the dense vectors. For fine-tuning, one major line of work studies how to incorporate hard negative samples during training, including static sparse hard negative samples Karpukhin et al. (2020); Luan et al. (2021) and dynamic dense hard negative samples Xiong et al. (2021); Zhan et al. (2021). Another is jointly training the retriever with a cross-attention encoder, including an extractive reader Yang and Seo (2020), a generative reader Izacard and Grave (2021), and a cross-encoder re-ranker Qu et al. (2021); Ren et al. (2021); Zhang et al. (2021). In addition, some works trade time for performance by using multiple vectors to represent the passage Khattab and Zaharia (2020); Humeau et al. (2020); Tang et al. (2021); Zhang et al. (2022). **Cross-lingual (domain) Retrieval.** These tasks aim to investigate retrieval capabilities under cross-lingual Zhang et al. (2021); Asai et al. (2021) or cross-domain Thakur et al. (2021) settings. The methods for these tasks can be divided into two main categories: model transfer methods and data transfer methods. The model transfer methods for the cross-domain setting focus on pre-training sentence representations. For example, GTR Ni et al. (2021) and CPT Neelakantan et al. (2022) propose that scaling up the model size can significantly improve the performance of dense models. Contriever Izacard et al. (2021) and LaPraDoR Xu et al. (2022) propose to use contrastive learning to learn sentence-aware representations. For the cross-lingual setting, they focus on learning multilingual representations by pre-training Conneau and Lample (2019); Chi et al. (2021); Feng et al. (2022), such as mBERT Devlin et al. (2019) and XLMR Conneau et al. (2020). The data transfer methods mainly focus on obtaining noisy training data in the target domain or target languages. For example, Back-training Kulshreshtha et al. (2021) and QGen Ma et al. (2021) propose to use a query generator to generate in-domain queries. CORA Asai et al. (2021) leverages a generator to help mine retrieval training data and DR.DECR Li et al. (2021) mines large amounts of parallel data to perform cross-lingual distillation. ## 3 Preliminaries In this section, we give a brief review of dense retrieval and then present how to apply models to multilingual dense retrieval. **Overview.** Given a query \(q\) and a corpus \(C\), the retrieval task aims to find the relevant passages for the query from a large corpus. 
Usually, a dense retrieval model employs two dense encoders (_e.g.,_ BERT) \(E_{Q}(\cdot)\) and \(E_{P}(\cdot)\). They encode queries and passages into dense embeddings, respectively. Then, the model uses a similarity function, often dot-product, to perform retrieval: \[f(q,p)=E_{Q}(q)\cdot E_{P}(p), \tag{1}\] where \(f\) denotes the similarity function, \(q\) and \(p\) denote the query and the passage, respectively. During the inference stage, we apply the passage encoder \(E_{P}(\cdot)\) to all the passages and index them using FAISS Johnson et al. (2021) which is an extremely efficient, open-source library for similarity search. Then given a query \(q\), we derive its embedding by \(\mathbf{v}_{q}=E_{Q}(q)\) and retrieve the top \(k\) passages with embeddings closest to \(\mathbf{v}_{q}\). **Training.** The training of a retriever is essentially metric learning. The goal is to narrow the distance between the query and the relevant passages (_a.k.a.,_ positive passages) and widen the distance between the query and the irrelevant passages (_a.k.a.,_ negative passages). Let \(\{q_{i},p_{i}^{+},p_{i,0}^{-},p_{i,1}^{-},\dots,p_{i,n}^{-}\}\) be the \(i\)-th training sample. It consists of one query, one positive passage, and \(n\) negative passages. Then we can employ a contrastive loss function, called InfoNCE van den Oord et al. (2018), to optimize the model: \[\mathcal{L}=-\log\frac{e^{f(q_{i},p_{i}^{+})}}{e^{f(q_{i},p_{i}^{+})}+\sum_{j= 0}^{n}e^{f(q_{i},p_{i,j}^{-})}}. \tag{2}\] In practice, we cannot use all passages in the corpus \(C\) as negative passages due to resource limitations. Therefore, a common practice is sampling a subset from the corpus \(C\) as negative samples, and many studies focus on which distribution the negative passages should be sampled from Xiong et al. (2021); Qu et al. (2021). **Multilingual Setting.** This setting aims to transfer knowledge from the source language to the target languages. In this setting, only labeled data from the source language is available. The trained model is then directly evaluated on the target languages. Note that this setting is different from _cross-lingual retrieval_, where queries and passages are in different languages. In this setting, the queries and the passages are in the same language and only training data from the source language (_e.g.,_ English) is available. ## 4 Methodology In this section, we present the proposed LeSTM. The overview is presented in Figure 1. We first present the augmentation method which combines sparse and dense retrievers. Then, we present how to use the mined data to train the query generator, generate new data, filter the generated samples, and fine-tune the dense retriever. Finally, we summarize the full training process. ### Lexicon-enhanced Retrieval Module In target languages, we do not have labeled training data, but we have unlabeled queries and passages. To effectively utilize the unlabeled queries and passages, we design a mining method shown in Algorithm 1. ``` Input: One query \(q\) and candidate passages \(P\). Output: Positive passages and negative passages. 
1. Set \(L\) and \(S\);
2. \(\mathbb{L}_{s}\), \(\mathbb{S}_{s}\) \(\leftarrow\) top-\(L\) and top-\(S\) passages of the sparse retriever;
3. \(\mathbb{L}_{d}\), \(\mathbb{S}_{d}\) \(\leftarrow\) top-\(L\) and top-\(S\) passages of the dense retriever;
4. \(P^{+}\leftarrow\emptyset\); \(P^{-}\leftarrow\emptyset\);
5. for \(p\in P\) do
6.   if \(p\in\mathbb{S}_{s}\) and \(p\in\mathbb{S}_{d}\) then \(P^{+}\leftarrow P^{+}\cup\{p\}\);
7.   if \(p\in\mathbb{S}_{s}\) and \(p\notin\mathbb{L}_{d}\) then \(P^{-}\leftarrow P^{-}\cup\{p\}\);
8.   if \(p\notin\mathbb{L}_{s}\) and \(p\in\mathbb{S}_{d}\) then \(P^{-}\leftarrow P^{-}\cup\{p\}\);
9. end for
10. Return \(P^{+},P^{-}\).
``` **Algorithm 1**Lexicon-enhanced Retrieval This augmentation is based on the intuition that the sparse retriever and the dense retriever solve different problems and are complementary. Specifically, the sparse retriever depends on word matching, so it is more effective than the dense retriever for words that do not appear in the training set. In contrast, the dense retriever relies on neural networks, so it is more effective than the sparse retriever for synonyms and sentence-level semantics. As a result, for a passage, if both of them regard it as relevant to the query, we regard the passage as a positive passage. If one of them regards it as relevant but the other regards it as irrelevant, we conclude that the passage does not meet all conditions (_i.e.,_ keyword match and semantic match), and the passage is a hard case. Although we cannot judge whether it is a negative passage, we think its relevance is weaker than that of a positive passage. As a result, we want the score of the positive passage to be higher than that of the hard case, so we treat the hard case as a hard negative passage. In practice, because the score distributions of sparse retrievers and dense retrievers are different, we use the ranking position to measure the relevance between passages and queries. Then, we present our method as follows: 1. We introduce two parameters to define relevant and irrelevant passages: \(S\) and \(L\), _i.e.,_ for a query, the retriever retrieves passages and ranks them with scores; if the ranking position of a passage is less than \(S\), we regard the passage as relevant to the query, and if the ranking position is greater than \(L\), we regard it as irrelevant to the query. 2. We retrieve the top-\(L\) and top-\(S\) passages with both the sparse and the dense retriever. We define the top-\(L\) passage set as \(\mathbb{L}\) and the top-\(S\) passage set as \(\mathbb{S}\), and use subscripts \(s\) and \(d\) to denote the sparse and dense retrievers, respectively. 3. We traverse all passages in the corpus. For each passage, if it exists in both \(\mathbb{S}_{s}\) and \(\mathbb{S}_{d}\), we add it to the positive passage set; if it exists in one \(\mathbb{S}\) but does not exist in the other \(\mathbb{L}\) (_i.e.,_ it exists in \(\mathbb{S}_{s}\) but not in \(\mathbb{L}_{d}\), or exists in \(\mathbb{S}_{d}\) but not in \(\mathbb{L}_{s}\)), we add it to the negative passage set. 4. For each mined sample, we add random negative passages like DPR Karpukhin et al. (2020): 1) random passages from the corpus; 2) positive passages of other queries (_a.k.a.,_ in-batch negatives). We use our mined negative passage set to replace the "top passages returned by BM25" used in DPR. To sum up, our mined training data includes a query, mined positive and negative passages, and random negative passages. Figure 1: Overview of the training process. 
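A minimal Python sketch of the mining step in Algorithm 1 is given below. The `sparse_rank` and `dense_rank` mappings (passage id to 1-based ranking position) are assumed to come from the respective retrievers; the default thresholds `S=2` and `L=20` are illustrative values, not the paper's reported settings.

```python
def mine_pairs(candidates, sparse_rank, dense_rank, S=2, L=20):
    """Mine positive and hard-negative passages for one query (Algorithm 1).

    candidates:  iterable of candidate passage ids P
    sparse_rank: dict passage id -> rank under the sparse retriever (1-based)
    dense_rank:  dict passage id -> rank under the dense retriever (1-based)
    S, L:        rank thresholds defining "relevant" (top-S) and "irrelevant" (outside top-L)
    """
    positives, negatives = [], []
    for p in candidates:
        rs = sparse_rank.get(p, float("inf"))
        rd = dense_rank.get(p, float("inf"))
        in_Ss, in_Sd = rs <= S, rd <= S      # membership in the top-S sets
        in_Ls, in_Ld = rs <= L, rd <= L      # membership in the top-L sets
        if in_Ss and in_Sd:                  # both retrievers agree: positive
            positives.append(p)
        elif (in_Ss and not in_Ld) or (in_Sd and not in_Ls):
            negatives.append(p)              # disagreement: hard negative
    return positives, negatives
```

Random corpus passages and in-batch negatives would then be added to each mined sample, as described in step 4 above.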
### Query Generation Module Due to the limited number of unlabeled queries, we leverage a query generator to generate more queries for unlabeled passages in target languages. Note that the generated queries are in the same language as the corresponding passages. Specifically, for a trained generator, we randomly select some passages and leverage the fine-tuned query generator to generate queries for these passages. To tackle the noisy label problem introduced by the generator, we use both sparse and dense retrievers to filter the generated pairs. We retrieve the top-1 passage for each generated query with both sparse and dense retrievers and only accept pairs for which the top-ranked passage of both the sparse and the dense retriever is the corresponding passage. Finally, for each filtered sample, we select negative passages like DPR Karpukhin et al. (2020): 1) random passages from the corpus; 2) top passages returned by sparse and dense retrievers (the passages returned by the dense retriever are more effective Xiong et al. (2021); Qu et al. (2021)); 3) positive passages of other queries. To sum up, our generated training data includes one positive passage, a generated query, random negative passages, and top passages returned by retrievers as hard negative passages. ### Model Training Previously, we introduced the lexicon-enhanced retrieval module and the query generation module. In this part, we present the full training process. As shown in Algorithm 2, first, we train the warm-up dense retriever and query generator with data in the source language. We note that the input to the generator is the positive passage, and the label is the query. Secondly, we generate training data in target languages with the proposed two modules. Finally, we fine-tune the retriever and the generator with the generated data. Based on these steps, we can iterate the generation and training procedures to improve performance. Note that due to the grammatical adjustment and accidental translation problems in the zero-shot multilingual generator, we only use the lexicon-enhanced retrieval module to generate data in the first iteration. Considering that the query generator is more sensitive to the quality of data, we set \(S=1\) when generating data for the query generator. ## 5 Experiments In this section, we conduct experiments to demonstrate the effectiveness of the proposed method. ### Experimental Setup #### 5.1.1 Dataset **Mr. TYDI.** The Mr. TYDI dataset Zhang et al. (2021) is constructed from the TYDI Clark et al. (2020) dataset and can be viewed as the "open-retrieval" condition of the TYDI dataset. It is a multilingual dataset for monolingual retrieval in 11 languages. The detailed statistics of the Mr. TYDI dataset are presented in Appendix A. **DeepQA.** A Q&A dataset from a commercial Q&A system, with 18,000 labeled cases in three languages: English (En), German (De), and French (Fr). Each case consists of two parts, _i.e.,_ query and passage. The detailed statistics of the DeepQA dataset are presented in Appendix A. #### 5.1.2 Evaluation Metrics. Following Mr. TYDI, we use MRR@100 and Recall@100 as evaluation metrics, where MRR denotes the mean reciprocal rank across queries and Recall@k denotes the proportion of queries for which the top-k retrieved passages contain a positive. For DeepQA, due to the smaller size of the corpus (only 1,220,030 passages; for comparison, the Mr. TYDI corpus has 58,043,326 passages, about 48 times that of DeepQA), we use MRR@10 and Recall@10 as metrics. 
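For reference, a small sketch of the two metrics as defined above (the reciprocal rank of the first relevant passage, and whether the top-k results contain any positive), averaged over queries; the input structures are assumptions for illustration:

```python
def mrr_at_k(ranked_lists, positives, k=100):
    """Mean reciprocal rank. ranked_lists[q] is the ranked list of passage ids
    for query q; positives[q] is the set of relevant passage ids for q."""
    total = 0.0
    for q, ranking in ranked_lists.items():
        for rank, pid in enumerate(ranking[:k], start=1):
            if pid in positives[q]:
                total += 1.0 / rank
                break                      # only the first relevant hit counts
    return total / len(ranked_lists)

def recall_at_k(ranked_lists, positives, k=100):
    """Proportion of queries whose top-k results contain at least one positive."""
    hits = sum(any(pid in positives[q] for pid in ranking[:k])
               for q, ranking in ranked_lists.items())
    return hits / len(ranked_lists)
```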
#### 5.1.3 Implementation Details. For the warm-up training stage, although Mr. TYDI proposed to use NQ (Kwiatkowski et al., 2019) as English training data, we follow Xinyu _et al._ (Zhang et al., 2022) to use MS-MARCO as English training data. Xinyu _et al._ find that MS-MARCO is better than NQ for zero-shot multilingual retrieval. We further conduct experiments on NQ in Appendix D.3. For the iterative training stage, both the retriever and the generator are scheduled to train with 500 mini-batches in each iteration. The document index is refreshed after each iteration of training. The hyper-parameters are shown in Appendix B. All the experiments run on 8 NVIDIA Tesla A100 GPUs. The implementation code is based on HuggingFace Transformers (Wolf et al., 2020). For sentence embedding, we use the corresponding hidden state of the _[CLS] token_ for mBERT (Devlin et al., 2019) and the average hidden states of all tokens for XLM-R (Conneau et al., 2020). For the generator, we leverage mBART (Liu et al., 2020) as the pre-trained model. ### Results #### 5.2.1 Baselines As we investigate retrieval in the multilingual setting, in this paper, the main baseline methods include BM25 and multilingual DPR with mBERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020) as the multilingual pre-trained model. Furthermore, we compare our method with two state-of-the-art domain adaptation methods: self-training (Yarowsky, 1995) and back-training (Kulshreshtha et al., 2021). Following Back-training, we train the models for 3 iterations with 5 epochs per iteration. Then we present the results with the best MRR@100. In addition, we present the supervised performance as an upper-bound reference. When constructing the supervised training data, we follow DPR (Karpukhin et al., 2020) to select three kinds of negative passages. #### 5.2.2 Mr. TYDI Table 1 shows the results on Mr. TYDI. The first group contains the sparse retrievers, _i.e.,_ BM25 (Robertson and Zaragoza, 2009) and tuned BM25. For each pre-trained model, the first block contains the multilingual pre-trained models that are fine-tuned only on MS-MARCO data. The second block contains the models that are fine-tuned on MS-MARCO data together with a data augmentation method. We conduct paired t-tests (Hsu and Lachenbruch, 2014) between our method and the other data augmentation methods (self-training and back-training). The final block contains the models fine-tuned on the Mr. TYDI dataset. Due to the limited space, we only present the average performance among all languages in Table 1 and present results for each language in Appendix E. Based on the results, we have the following findings. Firstly, comparing the performance of domain adaptation methods (the second block for each pre-trained model) with zero-shot performance, we find that all domain adaptation methods are effective. Secondly, comparing the three domain adaptation methods, we find that our method is better than the other methods. Finally, comparing our method with the supervised dense retriever, we find that the performance of our method is close to the supervised performance on Recall@100, but is still clearly worse than the supervised performance on MRR@100. 
This indicates that the augmented data are noisy; for example, the mined passages are relevant to the queries but are not necessarily the best passages, and there may be more relevant passages for the queries. So, the augmented data are more helpful to Recall@100 but less helpful to MRR@100. \begin{table} \begin{tabular}{c|c|c c} \hline \hline \multicolumn{2}{c|}{Methods} & MRR@100 & Recall@100 \\ \hline Sparse & BM25 & 32.1 & 73.2 \\ Method & (tuned) & 33.3 & 75.8 \\ \hline \multirow{5}{*}{mBERT} & Zero-Shot & 34.4 & 73.4 \\ & (reimpl) & 36.5 & 73.3 \\ \cline{1-1} \cline{2-4} & Self-Training & 37.2 & 78.5 \\ \cline{1-1} & Back-Training & 41.1 & 82.0 \\ \cline{1-1} & LeSTM & **49.0\({}^{\dagger}\)\({}^{\dagger}\)** & **83.6\({}^{\dagger}\)\({}^{\dagger}\)** \\ \cline{1-1} \cline{2-4} & Supervised & 54.6 & 87.0 \\ \hline \multirow{5}{*}{XLM-R} & Zero-Shot & 30.4 & 74.3 \\ \cline{1-1} & Self-Training & 35.0 & 78.6 \\ \cline{1-1} & Back-Training & 29.6 & 77.5 \\ \cline{1-1} & LeSTM & **47.2\({}^{\dagger}\)\({}^{\dagger}\)** & **82.7\({}^{\dagger}\)\({}^{\dagger}\)** \\ \cline{1-1} \cline{2-4} & Supervised & 54.5 & 87.2 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on Mr. TYDI test set. The best results except supervised training are in bold. We copy the results of BM25, tuned BM25, and zero-shot mBERT from (Zhang et al., 2022) and re-implement the zero-shot mBERT. \(*\) denotes that our method significantly outperforms self-training at the level of 0.01. \(\dagger\) denotes that our method significantly outperforms back-training at the level of 0.01. #### 5.2.3 DeepQA Due to the limited space, we only conduct experiments on DeepQA based on mBERT. Table 2 presents the performance of all methods. As we can see, our method achieves the best performance among all the compared methods. It indicates that our method is effective for unsupervised multilingual dense retrieval. In addition, we find that the unsupervised methods (_i.e.,_ self-training and back-training) perform better than the supervised training on MRR@10 but worse on Recall@10. A possible reason is that the size of DeepQA is small and there is a large gap between the distributions of the training data and test data. To demonstrate that, we evaluate the performance of the zero-shot dense retriever on both training data and test data. As shown in Table 3, there is a large gap between the MRR@10 on the training set and the test set of DeepQA. That indicates that the gap between the training set and the test set is large. The dense model trained on the training set may seriously suffer from the overfitting problem. These results also indicate that our method is even more effective than supervised training when the training data in target languages is limited. ### Ablation Study In our method, we have incorporated two data augmentation modules, namely lexicon-enhanced retrieval and query generation. Here, we would like to check how each module contributes to the final performance. We conduct the ablation experiments on the Mr. TYDI data. We prepare four variants of our method that try all combinations: * w/o LR denotes that the retriever is not fine-tuned with data from the lexicon-enhanced retrieval module, but the generator is still fine-tuned with data from the lexicon-enhanced retrieval module. * w/o LR\({}_{+}\) denotes that neither the retriever nor the generator is fine-tuned with data from the lexicon-enhanced retrieval module. * w/o QG denotes that the retriever is not fine-tuned with data from the query generation module. 
* w/o ALL denotes removing both modules, _i.e.,_ zero-shot multilingual retrieval. Table 4 presents all comparison results of the four variants. Due to the limited space, we present results for each language in Appendix E. As we can see, the performance rank can be given as follows: w/o ALL < w/o QG < LeSTM. These results indicate that both augmentation modules are essential for improving performance. We can also find that the lexicon-enhanced retrieval module is more effective than the query generation module, because w/o LR < w/o QG. In addition, we find that w/o LR > w/o LR\({}_{+}\), which indicates that zero-shot multilingual query generation suffers from many problems; this further demonstrates the effectiveness of the lexicon-enhanced retrieval module. \begin{table} \begin{tabular}{l|c c} \hline \hline Methods & MRR@100 & Recall@100 \\ \hline LeSTM & **49.0** & **83.6** \\ \hline w/o LR & 46.9 & 81.6 \\ w/o LR\({}_{+}\) & 37.9 & 77.9 \\ w/o QG & 48.1 & 83.2 \\ w/o ALL & 36.5 & 73.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation results based on mBERT. “LR” denotes the lexicon-enhanced retrieval module. “QG” denotes the query generation module. \begin{table} \begin{tabular}{c|c c c|c} \hline \hline Methods & En & De & Fr & Avg \\ \hline BM25 & 22.5 & 31.4 & 40.1 & 31.3 \\ \hline Zero-Shot & 24.0 & 29.4 & 37.7 & 30.3 \\ \hline Self-Training & 25.3 & 31.4 & 42.3 & 33.0 \\ Back-Training & 25.8 & 32.0 & 42.0 & 33.3 \\ LeSTM & **27.2\({}^{\dagger}\)** & **34.6\({}^{\dagger\dagger}\)** & **43.0\({}^{\dagger\dagger}\)** & **35.0\({}^{\ast\dagger}\)** \\ \hline Supervised & 23.0 & 33.9 & 39.7 & 32.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Results on DeepQA test set. The best results except supervised training are in bold. \(\ast\) denotes that our method significantly outperforms self-training at the level of 0.01. \(\dagger\) denotes that our method significantly outperforms back-training at the level of 0.01. \begin{table} \begin{tabular}{l|c c|c} \hline \hline & MRR@10 & Recall@10 \\ \hline Training set & 35.1 & 48.5 \\ Test set & 30.3 & 48.4 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of zero-shot dense retriever on DeepQA training set and test set. ### Method Analysis #### 5.4.1 Effect of Lexicon-enhanced Retrieval In our lexicon-enhanced retrieval module, we combine the results of the sparse and dense retrievers to mine new training data. To show the effectiveness of our mining method, we construct five variants (for conciseness, we use mBERT w/o QG + iterative refinement as the base model): * Sparse \(+\) Dense combines results of sparse and dense retrievers by adding their scores. * Sparse \(\times\) Dense combines results of sparse and dense retrievers by multiplying their scores. * Double Dense Retrievers mines positive and negative passages with results from two dense retrievers which are trained on different data (MS-MARCO and NQ). * w/o Hard Negatives (HN) fine-tunes the model with mined positive passages and only in-batch negative passages. * w/ Sparse Hard Negatives (HN) fine-tunes the model with mined positive passages, in-batch negative passages, and top passages returned by the sparse retriever as negative passages. Table 5 presents all comparison results of the five variants. Based on the results, we have the following findings. Firstly, our mining method is more effective than directly hybridizing the results of the sparse and dense models. 
It demonstrates that our method can effectively leverage the knowledge of both sparse and dense retrievers. Secondly, mining data with sparse and dense retrievers is more effective than with two dense retrievers. This demonstrates that sparse and dense retrievers capture different characteristics of retrieval. Finally, mined negatives are more effective than sparse negatives. This demonstrates that negatives are important in dense retrieval tasks and that our method can provide more effective negatives. #### 5.4.2 Effect of Parameters In our method, we introduce two parameters in the lexicon-enhanced retrieval module to define relevant and irrelevant passages: \(S\) and \(L\). The number of generated queries also influences the final performance. Here, we tune \(S\) and \(L\) based on mBERT w/o QG. We vary \(S\) in the set \(\{1,2,3,4,5\}\), and for conciseness we set \(L=S\times 10\). In addition, we tune the number of generated queries based on mBERT. We vary the number of generated queries per language in the set \(\{1000,2000,3000,4000,5000\}\). Figure 2(a) presents the effect of the parameter \(S\). We can observe that \(S=1\) leads to poor performance on both MRR@100 and Recall@100. This is because a small \(S\) mines few samples, which leads to overfitting. When we set \(S>2\), it leads to better Recall@100 but poorer MRR@100. A possible reason is that a large \(S\) leads to more noisy samples. As we mentioned above, noisy samples are helpful to Recall@100 but harmful to MRR@100. \begin{table} \begin{tabular}{l|c c} \hline \hline Methods & MRR@100 & Recall@100 \\ \hline BM25 & 32.1 & 73.2 \\ Zero-shot mBERT & 36.5 & 73.3 \\ \hline LeSTM w/o QG & **47.5** & **82.5** \\ \hline Sparse \(+\) Dense & 46.6 & 81.1 \\ Sparse \(\times\) Dense & 40.8 & 80.4 \\ Double Dense Retrievers & 44.5 & 81.9 \\ \hline w/o HN & 42.5 & 80.4 \\ w/ Sparse HN & 43.3 & 79.2 \\ \hline \hline \end{tabular} \end{table} Table 5: Effect of lexicon-enhanced retrieval module. Figure 2: Parameter sensitivity. Figure 2(b) presents the effect of the number of generated queries. As we can see, a larger number of generated queries improves MRR@100 but does not improve Recall@100. A possible reason is that the generated queries mainly focus on a few kinds (_e.g.,_ when or where something happened). They are helpful to MRR@100 for these kinds of queries but less helpful to both Recall@100 and MRR@100 for other kinds of queries. ## 6 Conclusion In this paper, we propose a novel augmentation method that combines sparse and dense retrievers for multilingual retrieval. We first designed a passage mining method based on the results of both sparse and dense retrievers. After that, we utilized the mined data to train a query generation model and generate more training data. Extensive experimental results show that the proposed method outperforms the baselines, and can significantly improve the state-of-the-art performance. Currently, we directly utilize a large number of unlabeled queries in target languages. As future work, we will investigate how to augment training data without any unlabeled queries in target languages. ## 7 Limitations The limitations are summarized as follows. * The method needs unlabeled queries. For extremely rare languages, there are no unlabeled queries available from search engines, and we cannot perform our passage mining method in this case. 
Although our query generation module can alleviate this problem, the zero-shot query generator suffers from grammatical adjustment and accidental translation problems and can only provide limited help. * The method performs inconsistently on the two metrics (MRR@100 and Recall@100). Due to the quality of augmented data, we need to set some threshold to filter the augmented data, where different parameters lead to optimal performance on different metrics. * The sparse retriever is fixed during training. The fixed sparse retriever leads to the rapid convergence of the dense retriever. We believe that if both sparse and dense retrievers can be improved in the iterative process, the dense retriever may achieve better performance. ## Acknowledgments Jian Pei's research is supported in part by the NSERC Discovery Grant program. All opinions, findings, conclusions and recommendations in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.
2305.10891
Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data
Creating synthetic voices with found data is challenging, as real-world recordings often contain various types of audio degradation. One way to address this problem is to pre-enhance the speech with an enhancement model and then use the enhanced data for text-to-speech (TTS) model training. This paper investigates the use of conditional diffusion models for generalized speech enhancement, which aims at addressing multiple types of audio degradation simultaneously. The enhancement is performed on the log Mel-spectrogram domain to align with the TTS training objective. Text information is introduced as an additional condition to improve the model robustness. Experiments on real-world recordings demonstrate that the synthetic voice built on data enhanced by the proposed model produces higher-quality synthetic speech, compared to those trained on data enhanced by strong baselines. Code and pre-trained parameters of the proposed enhancement model are available at \url{https://github.com/dmse4tts/DMSE4TTS}
Yusheng Tian, Wei Liu, Tan Lee
2023-05-18T11:41:26Z
http://arxiv.org/abs/2305.10891v3
# Diffusion-Based Mel-Spectrogram Enhancement for Personalized Speech Synthesis with Found Data ###### Abstract Creating synthetic voices with found data is challenging, as real-world recordings often contain various types of audio degradation. One way to address this problem is to pre-enhance the speech with an enhancement model and then use the enhanced data for text-to-speech (TTS) model training. This paper investigates the use of conditional diffusion models for generalized speech enhancement, which aims at addressing multiple types of audio degradation simultaneously. The enhancement is performed on the log Mel-spectrogram domain to align with the TTS training objective. Text information is introduced as an additional condition to improve the model robustness. Experiments on real-world recordings demonstrate that the synthetic voice built on data enhanced by the proposed model produces higher-quality synthetic speech, compared to those trained on data enhanced by strong baselines. Code and pre-trained parameters of the proposed enhancement model are available at [https://github.com/dmse4fts/DMSE4TTS](https://github.com/dmse4fts/DMSE4TTS) Yusheng Tian, Wei Liu, Tan Lee Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR {ystian0617, louislau_1129}@link.cuhk.edu.hk, [email protected] **Index Terms:** personalized speech synthesis, found data, generalized speech enhancement, text-informed speech enhancement, conditional diffusion models ## 1 Introduction Text-to-speech (TTS) models are typically trained using carefully recorded databases. Collecting such recordings is costly and sometimes impractical. In real-world applications, sometimes only low-quality recordings from the target speaker(s) are available. This has motivated the study of speech synthesis with readily available data, i.e., speech data that are not purposely recorded for the development of TTS systems [1, 2, 3, 4, 5, 6, 7, 8]. Developing TTS systems in this context is challenging, given the highly varying degradation of audio quality affecting the found data. Synthetic voices built directly with low-quality recordings would inevitably produce distorted speech. Previous studies have attempted to tackle this problem within the TTS framework, by augmenting the acoustic model with an additional noise embedding [7, 8]. This enabled TTS model training with noisy speech, but did not consider the impact of other forms of audio degradation. Background noise is not the only disturbance present in real-world recordings. With this in mind, it might be more practical to pre-enhance found data with a separate module as in [1, 2, 3, 4], especially given the recent progress in generalized speech enhancement [9, 10, 11, 12, 13], which aims at addressing multiple types of audio degradation simultaneously. In the present study, we propose to apply enhancement directly to log Mel-spectrograms to align with the TTS training objective, and use conditional diffusion models for generalized speech enhancement. The choice of a diffusion model is motivated by the work of Palette [14], which used a single generalist diffusion model to deal with a range of image-to-image translation tasks. Mel-spectrograms are time-frequency representations that can be treated as images. Thus we expect that the diffusion model would be effective in Mel-spectrogram enhancement. 
To improve the model robustness against unseen forms of audio degradation, text content of speech is used as an additional condition, which is typically available in the context of TTS training. We applied the proposed enhancement model to a real case of speech generation: developing a personalized synthetic voice for a male Cantonese speaker with 37-minute found recordings. This gentleman lost the ability to speak after receiving laryngectomy a few years ago. The recordings he provided, though containing multiple types of degradation, are the only available and precious record of his voice. TTS models are trained with speech enhanced by different systems. Subjective evaluation by human listeners show that the synthetic voice built on speech enhanced by the proposed model is rated higher for both cleanliness and overall impression, compared to those trained on data enhanced by strong speech enhancement baselines. ## 2 Related Work ### Speech synthesis with found data A number of previous studies have approached the problem of speech synthesis with found data. [5, 6] designed algorithms to automatically select clean recordings from crowd-sourced data. Others consider the situation when there are simply no high-quality samples in the found data. [7, 8] augmented the TTS model with an additional noise embedding, which permits the separation of environmental noise from clean speech during TTS training. Another line of research [1, 2, 3, 4] approached the problem in two steps: pre-enhance the low-quality speech audio and then use the enhanced data for TTS model training. ### Generalized speech enhancement Several prior works [9, 10, 11, 12, 13] have approached the problem of generalized speech enhancement, i.e., addressing multiple types of audio degradation simultaneously. Their success relies on well designed simulation of audio degradation, and advanced neural network architectures. Most of these models operate on the waveform or magnitude spectrogram. The enhanced speech might be suboptimal for training TTS acoustic models, which are typically designed to predict compact acoustic representations such as Mel-spectrograms. ### Diffusion-based speech enhancement The use of diffusion models for speech enhancement has been investigated in [13, 15, 16, 17]. Most of the models are derived from diffusion-based neural vocoder [18], by replacing the clean mel-spectrogram input with a degraded one. This design may not have taken the full advantage of diffusion models for the task of speech enhancement, as the enhancement work is undertaken largely by the conditioning network. Our work is most similar to that of [19], in which diffusion models are used for speech enhancement in the complex short-time Fourier transform (STFT) domain. However, their model targets only at denoising, and is not tailored for the development of TTS systems. ## 3 Diffusion-based Mel-spectrogram Enhancement Suppose we are given a large collection of Mel-spectrogram pairs, denoted as \(\mathcal{D}=\{\mathbf{x}^{(i)},\mathbf{y}^{(i)}\}_{i=1}^{N}\), where \(\mathbf{x}^{(i)}\) represents the Mel-spectrogram of a high-quality speech sample, and \(\mathbf{y}^{(i)}\) represents the Mel-spectrogram of a respective degraded sample. \(\mathbf{y}^{(i)}\) can be created by applying artificial audio degradation to \(\mathbf{x}^{(i)}\). We are interested in learning the conditional distribution \(P(\mathbf{x}|\mathbf{y})\) on \(\mathcal{D}\). 
If \(\mathcal{D}\) is constructed to be representative, Mel-spectrogram enhancement can be achieved by sampling from the learned conditional distribution. Diffusion models are adopted here to learn a parametric approximation of \(P(\mathbf{x}|\mathbf{y})\). ### Conditional diffusion process We consider the Variance Preserving (VP) diffusion model [20, 21]. Suppose \(\mathbf{x}_{0}\sim P(\mathbf{x}|\mathbf{y})\) is one enhanced realization for the degraded input \(\mathbf{y}\). VP diffusion defines the forward process as \[\mathrm{d}\mathbf{x}_{t}=-\frac{1}{2}\beta_{t}\mathbf{x}_{t}\mathrm{d}t+\sqrt{\beta_{ t}}\mathrm{d}\mathbf{w}_{t}\;, \tag{1}\] where \(t\sim\mathcal{U}(0,1)\), \(\beta_{t}=\beta_{0}+\beta_{1}t\) is a predefined linear noise scheduler, and \(\mathbf{w}_{t}\) is a standard Brownian motion. One important result derived from (1) is the conditional distribution of \(\mathbf{x}_{t}\) given \(\mathbf{x}_{0}\): \[P(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{\rho}(\mathbf{x}_{0},t),\sigma_{t}^{2} \mathbf{I})\;, \tag{2}\] where \(\mathbf{\rho}(\mathbf{x}_{0},t)=e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}\mathrm{d}s}\mathbf{x }_{0}\), and \(\sigma_{t}^{2}=1-e^{-\int_{0}^{t}\beta_{s}\mathrm{d}s}\). The nice property given by (2) suggests that if \(\mathbf{x}_{0}\) is known, we can sample \(\mathbf{x}_{t}\) using the reparameterization trick: \[\mathbf{x}_{t}=\mathbf{\rho}(\mathbf{x}_{0},t)+\sigma_{t}\mathbf{\epsilon}_{t},\mathbf{\epsilon} _{t}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\;. \tag{3}\] Furthermore, as \(t\to 1\), with an appropriate noise scheduler \(\beta_{t}\) we have \(\mathbf{\rho}(\mathbf{x}_{0},t)\to\mathbf{0}\) and \(\sigma_{t}\to 1\), meaning that the forward process gradually transforms the data distribution from \(P(\mathbf{x}|\mathbf{y})\) into a standard Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\). Diffusion models generate samples by reversing the above forward process, starting from Gaussian noise: \[\mathrm{d}\mathbf{x}_{t}=-\frac{1}{2}\beta_{t}\left[\mathbf{x}_{t}+\nabla_{\mathbf{x}_{t} }\log P(\mathbf{x}_{t}|\mathbf{y})\right]\mathrm{d}t\;. \tag{4}\] Note that the reverse process is conditioned on \(\mathbf{y}\) to enable conditional generation. The core part of a diffusion model is to train a neural network \(S_{\mathbf{\theta}}\) to estimate the value of \(\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t}|\mathbf{y})\) (a.k.a. the score). Once the score is known for all time steps, we can draw samples from \(P(\mathbf{x}|\mathbf{y})\) by simulating the reverse process from \(t=1\) to \(0\), typically with an ODE solver [22]. ### Robust text condition As mentioned earlier, the Mel-spectrogram enhancement model relies on a synthetic dataset of paired samples to learn the conditional distribution \(P(\mathbf{x}|\mathbf{y})\). Therefore, a domain gap between training samples and real-world degraded recordings is inevitable. Consequently, the trained model may overfit to in-domain data and fail to generalize well to unseen audio degradation. In order to improve the model robustness, we introduce text content of speech samples as an additional condition. Text transcription is usually available in the context of TTS development, and has been shown to improve the robustness of SE models [23, 24]. Inspired by GradTTS [25], a diffusion-based TTS model, we use an average Mel-spectrogram \(\mathbf{\mu}\) to represent text. \(\mathbf{\mu}\) is of the same shape as \(\mathbf{y}\) and is obtained in three steps. 
First, the text transcription for each training sample is converted to a time-aligned phone sequence by forced alignment. Second, a phoneme-to-Mel-spectrum dictionary is created on training data by averaging speech frames that correspond to the same phoneme. Then, given any time-aligned phone sequence, the respective average Mel-spectrogram \(\mathbf{\mu}\) is obtained by looking up the dictionary. When text is provided, the reverse process is rewritten as: \[\mathrm{d}\mathbf{x}_{t}=-\frac{1}{2}\beta_{t}\left[\mathbf{x}_{t}+\nabla\log P(\mathbf{x} _{t}|\mathbf{y},\mathbf{\mu})\right]\mathrm{d}t\;. \tag{5}\] ### Training and inference Following [20], we make the score estimator \(S_{\mathbf{\theta}}\) aware of the time step and train it with a weighted \(L2\) loss: \[\mathcal{L}(\mathbf{\theta})=\mathbb{E}_{t}\sigma_{t}^{2}\mathbb{E}_{(\mathbf{x}_{0}, \mathbf{y})}\mathbb{E}_{\mathbf{\epsilon}_{t}}\left\|S_{\mathbf{\theta}}(\mathbf{x}_{t},t,\mathbf{ y},\mathbf{\mu})+\sigma_{t}^{-1}\mathbf{\epsilon}_{t}\right\|_{2}^{2}\;, \tag{6}\] where we have made use of the following results: \[P(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{y},\mathbf{\mu})=P(\mathbf{x}_{t}|\mathbf{x}_{0}) =\mathcal{N}(\mathbf{\rho}(\mathbf{x}_{0},t),\sigma_{t}^{2}\mathbf{I})\;, \tag{7}\] \[\nabla_{\mathbf{x}_{t}}\log P(\mathbf{x}_{t}|\mathbf{x}_{0},\mathbf{y},\mathbf{\mu}) =-\sigma_{t}^{-1}\mathbf{\epsilon}_{t}\;. \tag{8}\] Once the score estimator is trained, we can use the predicted score to generate samples by running equation (5) backward in time from \(t=1\) to \(0\). The training and inference procedures of the proposed Mel-spectrogram enhancement model are illustrated in Figure 1. ## 4 Experimental Setup ### Baselines for comparison The proposed enhancement model is named DMSEtext, abbreviating **D**iffusion-based **M**el-**S**pectrogram **E**nhancement with **t**ext conditioning. The model without text conditioning is named DMSEbase. We compare them with two baselines: Demucs [26] and VoiceFixer [12]. Demucs is a denoising model. We use it as a baseline to investigate the efficacy of a single-task denoising model in the intended application. VoiceFixer is a regression model trained to address a range of audio degradation, including additive noise, reverberation, clipping and low-bandwidth. The motivation of choosing VoiceFixer as a baseline is to examine whether diffusion-based speech enhancement has an advantage over a regression-based approach. Figure 1: The training (solid lines) and inference (dashed lines) procedure of the proposed enhancement model. ### Network architecture The GradTTS model architecture [25] is adopted for the diffusion-based Mel-spectrogram enhancement model. It is augmented to be conditioned on both text (\(\mathbf{\mu}\)) and audio (\(\mathbf{y}\)). Conditions are provided to the score estimator by concatenation in the channel dimension. We scale the model depth to 5 layers, with output channel dimensions of 32, 64, 128, 256, and 256, respectively. The TTS model is similar to DurIAN [27], except that style control and the postnet are not included. We use the HiFiGAN [28] neural vocoder to convert log Mel-spectrograms into waveforms. The number of Mel bands is increased from 80 to 128. We empirically found that this is beneficial for modelling voices of low vocal range, as is the case for our target speaker. ### Data The found data from our target speaker contain 513 utterances of spontaneous yet very fluent speech, giving a total duration of approximately 37 minutes, all manually transcribed. 
Speech was recorded in six different sessions with varied room acoustics and sampled at 22.05 kHz. Audio degradation types found in the recordings include background noise, room reverberation, band limiting and magnitude clipping. The speech is in Cantonese. This speech corpus is used for performance evaluation of speech enhancement models. The enhanced speech is for training the TTS model. We use CUSENT [29], a multi-speaker Cantonese speech database, and artificially created degraded audio to train DMSEbase and DMSEtext. CUSENT contains about 20 hours of read speech from 80 speakers, sampled at 16kHz. It was designed for automatic speech recognition (ASR) and the recordings contain low-level noise, making it less than ideal for training speech enhancement models. We therefore run Demucs on CUSENT to obtain high-quality clean speech. The resulting dataset, denoted DenoiseCUSENT, represents the clean reference for speech enhancement training. Data from speakers cn01m and cn12f are held out for validation. To synthesize distorted speech, we consider the four most frequent types of audio degradation present in found data: noise, reverberation, band limiting and magnitude clipping. We use the DNS noise dataset [30] and the RIRs [31] dataset to simulate background noise and reverberation respectively, adopting their default train/test split. Band limiting and magnitude clipping are simulated with the Scipy signal processing toolkit1. The four types of degradation are applied following a specific order: reverberation, noise, clipping and band limiting. Parameters such as SNR or frequency are randomly set to be within a reasonable range. Details of simulated degradation are released in the project repository. Footnote 1: [https://docs.scipy.org/doc/scipy/reference/signal.html](https://docs.scipy.org/doc/scipy/reference/signal.html) For fair comparison with Demucs and VoiceFixer, we also train a diffusion-based enhancement model on the high-quality English dataset VCTK [32], to rule out the impact of language mismatch. Data from speakers p232 and p257 are held out from training, as they appear in a standard denoising test set. Text conditioning is not included here as the language of the found data is Cantonese. This model is referred to as DMSEbase(VCTK). Throughout the experiments Mel-spectrograms are computed on audio signals resampled to 22.05 kHz with a window length of 1024 and a hop size of 256. The number of mel filter-banks is set to 128 as mentioned earlier. ### Implementation details DMSEbase and DMSEtext are trained on DenoiseCUSENT for 900 epochs, while DMSEbase(VCTK) is trained on VCTK for 750 epochs, all at a batch size of 32. The number of reverse steps for all three models is 25 with the DPMsolver [22]. Mel-spectrograms are mean-normalized and scaled to be within the range of \([-1.0,1.0]\). We empirically found that this normalization helps improve the convergence speed of reverse diffusion. The TTS model is pre-trained on CUSENT for 600 epochs with a batch size of 32, then fine-tuned on the target speaker's data (original or enhanced) for 6000 steps with a batch size of 16. The neural vocoder is fine-tuned from a pre-trained HiFiGAN2 on the VCTK and CUSENT dataset for 220,000 steps with a batch size of 32. We insert a pre-processing block to map the 128-dim log Mel-spectrogram input to 80-dim, to take advantage of the pre-trained HiFi-GAN parameters. 
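As a reference for the feature extraction settings stated above (22.05 kHz audio, window length 1024, hop size 256, 128 mel bands), a minimal librosa-based sketch is shown below. The mel frequency range, log floor, and normalization are assumptions for illustration, since they are not specified above.

```python
import numpy as np
import librosa

def log_mel_spectrogram(path):
    """Compute a log Mel-spectrogram with the window/hop/band settings used in this work.
    fmin/fmax, the log floor, and normalization are illustrative choices."""
    y, sr = librosa.load(path, sr=22050)          # resample to 22.05 kHz
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr,
        n_fft=1024, win_length=1024, hop_length=256,
        n_mels=128,
    )
    log_mel = np.log(np.clip(mel, 1e-5, None))    # floor to avoid log(0)
    return log_mel                                 # shape: (128, num_frames)
```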
Footnote 2: [https://github.com/jik876/hifi-gan](https://github.com/jik876/hifi-gan) We use the Adam optimizer [33] in all experiments with the default value \([0.9,0.999]\) for betas. The learning rate is fixed at 1e-4, 1e-3 and 1e-6 for training the DMSEbase/base(VCTK)/text models, the TTS model, and the HiFi-GAN vocoder, respectively. Time-aligned phone sequences for training the TTS model and DMSEtext are obtained using the Montreal Forced Aligner [34]. DMSEtext also needs time-aligned phone sequences at test time. For the found data we apply denoising with Demucs first before running forced alignment, in order to improve the alignment accuracy. We should emphasize that the denoised found data are used only during forced alignment. All enhancement models take the original found data as input for evaluation. We additionally train a CTC-based ASR model for objective evaluation of speech enhancement, following the recipe from SpeechBrain3. The model is trained on KingASR086, a commercial Cantonese speech recognition corpus purchased from SpeechOcean4, which contains 80 hours of reasonably clean read speech from 136 speakers, sampled at 44.1 kHz. No data augmentation is applied. Footnote 3: [https://github.com/speechbrain/speechbrain/tree/develop/recipes/TIMIT/ASR/CTC](https://github.com/speechbrain/speechbrain/tree/develop/recipes/TIMIT/ASR/CTC) Footnote 4: [https://en.speechocean.com/datacenter/recognition.html](https://en.speechocean.com/datacenter/recognition.html) ## 5 Results ### Enhancing found speech data The 513 utterances in the original found data, denoted as Recordings, are enhanced by the three diffusion-based models as well as the two baselines. Objective evaluation was carried out with the ASR model. Results are summarized in Table 1. It is noted that VoiceFixer and DMSEbase/base(VCTK) tend to introduce pronunciation distortions (reflected in the number of substitutions) and erase speech segments (reflected in the number of deletions) compared with Demucs. On the other hand, text conditioning helps DMSEtext to preserve content information and at the same time improve the audio quality over the original recordings. Subjective evaluation was conducted through a web-based listening test. We selected the same 87 sentences with lengths between 4.0 and 5.0 seconds from the enhanced data produced by each system, as well as the original recordings. This gives six stimuli for each sentence. The test is evaluated in Mean Opinion Score (MOS) format. Listeners were asked to rate the cleanliness (no noise or reverberation) as well as the overall impression of each stimulus on a scale from 1.0 to 5.0. A clean audio sample from DenoiseCUSENT and a synthetic distorted sample were provided, serving as the high and low anchors. We evenly split the 87 sentences into three groups and recruited 18 listeners per group. Each listener was presented with 29 sentences, and each sentence was produced by one of the six systems, randomly selected. All listeners are native Cantonese speakers. The score distributions depicted in Figure 2 indicate that VoiceFixer and DMSEbase/base(VCTK) generate speech that is clean but somewhat distorted, while Demucs gives the opposite pattern. On the other hand, DMSEtext received the highest score for both cleanliness and overall impression, which conforms with the objective evaluation results. Notably, all diffusion-based models outperformed the two baselines in terms of cleanliness. 
We speculate this to be related to the generative modeling approach, where the generation process is influenced by a clean speech prior. By comparing DMSEbase against DMSEbase(VCTK), we can conclude that there is no significant gain from using a Cantonese dataset, except for the availability of text information. Figure 3 gives a specific example of the Mel-spectrogram comparison between different systems. It can be seen that audio enhanced by Demucs preserves more speech content, but also more disturbances, than audio enhanced by the other systems.

### Speech synthesis with enhanced data

The purpose of developing the TTS model is to assist the target speaker in communicating with others. To ensure that the model can produce high-quality speech that meets the speaker's personal needs, we asked him to provide text content that he would like to, or would most likely, say in his daily life for TTS model evaluation. A total of 30 sentences were randomly selected from a script written by the target speaker, and synthesized with TTS models trained on speech enhanced by the different systems. We recruited 24 native Cantonese speakers to participate in the listening test. The test format is similar to that described in the previous section. Each listener heard all 30 sentences once, and each sentence generated by one of the six TTS models was evaluated by exactly four listeners. The MOS results in Table 2 show an opposite trend to that observed in Figure 2, with the synthetic voice built on speech enhanced by Demucs receiving much lower scores for overall impression compared with the other systems. Two factors might explain the difference. First, the remaining disturbances in speech enhanced by Demucs were turned into unpleasant averaged artifacts after TTS training. Second, the text encoder in the TTS system is shared across speakers, which might have mitigated the negative impact of content distortion for VoiceFixer and DMSEbase/base(VCTK).

\begin{table} \begin{tabular}{l|c|c} \hline \hline Source & PER\% & Details (N / I, D, S) \\ \hline Recordings & 22.7 & 18928 / 282, 495, 3525 \\ \hline Demucs & **20.3** & 18928 / 321, 300, 3224 \\ VoiceFixer & 29.7 & 18928 / 352, 921, 4341 \\ \hline DMSEbase & 24.7 & 18928 / 317, 702, 3652 \\ DMSEbase(VCTK) & 24.3 & 18928 / 334, 593, 3664 \\ DMSEtext & **17.6** & 18928 / 232, 258, 2836 \\ \hline \hline \end{tabular} \end{table} Table 1: Phone Error Rate (PER%) results on found speech recordings enhanced by different systems, obtained with a clean-speech-trained ASR model. N, I, D, S stand for the number of phones in the reference text, and the numbers of insertions, deletions, and substitutions in the decoded results, respectively.

\begin{table} \begin{tabular}{l|c|c} \hline \hline Source & Cleanliness & Overall impression \\ \hline Recordings & 2.74 \(\pm\) 0.08 & 3.12 \(\pm\) 0.06 \\ \hline Demucs & 3.22 \(\pm\) 0.08 & 3.38 \(\pm\) 0.05 \\ VoiceFixer & 3.97 \(\pm\) 0.09 & 3.95 \(\pm\) 0.09 \\ \hline DMSEbase & 4.02 \(\pm\) 0.09 & 3.98 \(\pm\) 0.08 \\ DMSEbase(VCTK) & 4.05 \(\pm\) 0.09 & 4.02 \(\pm\) 0.09 \\ DMSEtext & **4.32** \(\pm\) 0.08 & **4.17** \(\pm\) 0.06 \\ \hline \hline \end{tabular} \end{table} Table 2: MOS results for cleanliness and overall impression of synthetic voices built on data enhanced by different systems.

Figure 2: Score distribution of speech enhanced by different systems. Mean and 95% confidence interval are reported at the bottom.

Figure 3: Comparing the log Mel-spectrograms of speech audio generated by different speech enhancement models.
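For reference, the PER values in Table 1 follow directly from the listed counts via the standard definition PER = (I + D + S) / N, which can be checked quickly:

```python
# Recomputing the PER column of Table 1 from the reported counts:
# PER = (insertions + deletions + substitutions) / number of reference phones.
def per(n_ref, ins, dele, sub):
    return 100.0 * (ins + dele + sub) / n_ref

print(round(per(18928, 282, 495, 3525), 1))  # Recordings -> 22.7
print(round(per(18928, 321, 300, 3224), 1))  # Demucs     -> 20.3
print(round(per(18928, 232, 258, 2836), 1))  # DMSEtext   -> 17.6
```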
Nevertheless, the synthetic voice built on speech enhanced by DMSEtext received the highest score for both cleanliness and overall impression, which is expected given that content information was well preserved during the enhancement process thanks to the additional text condition. Readers are encouraged to visit [https://dmse4tts.github.io/](https://dmse4tts.github.io/) to listen to audio samples.

## 6 Conclusion

We introduced a diffusion-based Mel-spectrogram enhancement model, which is intended for pre-enhancing found data for TTS model development. It is trained to tackle multiple types of audio degradation simultaneously, and is conditioned on text transcriptions to improve its robustness. We empirically showed that a generalist enhancement model is preferable to a single-task denoising model for pre-enhancing real-world recordings for TTS model development. Moreover, the additional text condition helps to preserve the speech content during the enhancement process. Subjective evaluations by human listeners demonstrate that the resulting synthetic voice produces higher-quality speech than voices trained on data enhanced by the other generalist speech enhancement baselines.

## 7 Acknowledgements

We thank all the participants in the listening test for their contributions. Our deepest gratitude goes to the gentleman who generously agreed to reveal his synthetic voice on the demo page.
2306.03403
SGAT4PASS: Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation
As an important and challenging problem in computer vision, PAnoramic Semantic Segmentation (PASS) gives complete scene perception based on an ultra-wide angle of view. Usually, prevalent PASS methods with 2D panoramic image input focus on solving image distortions but lack consideration of the 3D properties of original $360^{\circ}$ data. Therefore, their performance will drop a lot when inputting panoramic images with the 3D disturbance. To be more robust to 3D disturbance, we propose our Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation (SGAT4PASS), considering 3D spherical geometry knowledge. Specifically, a spherical geometry-aware framework is proposed for PASS. It includes three modules, i.e., spherical geometry-aware image projection, spherical deformable patch embedding, and a panorama-aware loss, which takes input images with 3D disturbance into account, adds a spherical geometry-aware constraint on the existing deformable patch embedding, and indicates the pixel density of original $360^{\circ}$ data, respectively. Experimental results on Stanford2D3D Panoramic datasets show that SGAT4PASS significantly improves performance and robustness, with approximately a 2% increase in mIoU, and when small 3D disturbances occur in the data, the stability of our performance is improved by an order of magnitude. Our code and supplementary material are available at https://github.com/TencentARC/SGAT4PASS.
Xuewei Li, Tao Wu, Zhongang Qi, Gaoang Wang, Ying Shan, Xi Li
2023-06-06T04:49:51Z
http://arxiv.org/abs/2306.03403v2
# SGAT4PASS: Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation ###### Abstract As an important and challenging problem in computer vision, PAnoramic Semantic Segmentation (PASS) gives complete scene perception based on an ultra-wide angle of view. Usually, prevalent PASS methods with 2D panoramic image input focus on solving image distortions but lack consideration of the 3D properties of original \(360^{\circ}\) data. Therefore, their performance will drop a lot when inputting panoramic images with the 3D disturbance. To be more robust to 3D disturbance, we propose our Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation (SGAT4PASS), considering 3D spherical geometry knowledge. Specifically, a spherical geometry-aware framework is proposed for PASS. It includes three modules, i.e., spherical geometry-aware image projection, spherical deformable patch embedding, and a panorama-aware loss, which takes input images with 3D disturbance into account, adds a spherical geometry-aware constraint on the existing deformable patch embedding, and indicates the pixel density of original \(360^{\circ}\) data, respectively. Experimental results on Stanford2D3D panoramic datasets show that SGAT4PASS significantly improves performance and robustness, with approximately a 2% increase in mIoU, and when small 3D disturbances occur in the data, the stability of our performance is improved by an order of magnitude. Our code and supplementary material are available at [https://github.com/TencentARC/SGAT4PASS](https://github.com/TencentARC/SGAT4PASS). ## 1 Introduction There has been a growing trend of practical applications based on \(360^{\circ}\) cameras in recent years, including holistic sensing in autonomous vehicles [11, 12, 13, 14], immersive viewing in augmented reality and virtual reality devices [16, 15, 16, 17], etc. Panoramic images with an ultra-wide angle of view deliver complete scene perception in many real-world scenarios, thus drawing increasing attention in the research community in computer vision. Panoramic semantic segmentation (PASS) is essential for omnidirectional scene understanding, as it gives pixel-wise analysis for panoramic images and offers a dense prediction technical route acquiring \(360^{\circ}\) perception of surrounding scenes [16]. Most existing PASS approaches use equirectangular projection (ERP) [16, 16] to convert original \(360^{\circ}\) data to 2D panoramic images. However, Figure 1: The results with 3D disturbance input. (a) is the original image, and (b) / (c) is the images rotated \(5^{\circ}\) in pitch / roll axis. Our baseline is Trans4PASS*. Compared with the minor change in images, the huge variance / performance change in SGA validation is shown in (d) / (e) and (f). “Mean” and “Variance” are defined in detail in Section 4.1. these methods often suffer from two main problems: large image distortions and lack of Spherical Geometry-Aware (SGA) robustness that resists 3D disturbance. These problems lead neural networks to only learn suboptimal solutions for panoramic segmentation [22, 23]. Although some recent works [24, 25] take serious distortions into account in their models and become the current state-of-the-art (SOTA), they still do not pay enough attention to the SGA properties of the original \(360^{\circ}\) data, resulting in performance degradation even with small projection disturbance. 
As shown in Figure 1(b) and Figure 1(c), applying a \(5^{\circ}\) rotation on the pitch or roll axis of the original \(360^{\circ}\) data introduces only minor changes in the 2D panoramic images. However, as shown in Figure 1(e), Figure 1(f), and Figure 1(d), the performance of Trans4PASS+ [24] (the blue lines) drops a lot (about 4%), and the variance increases by almost two orders of magnitude, because the axis rotations lead to different spherical geometry relations between pixels in the projected panoramic images, to which the existing methods fail to adapt. Besides disturbance, the ERP also introduces boundaries to panoramic images that the original \(360^{\circ}\) data do not have. Some adjacent pixels are disconnected and some objects are separated, which is a severe issue, especially for semantic segmentation. Furthermore, there also exists a difference in pixel sampling density between the original \(360^{\circ}\) data and its corresponding projection image, e.g., pixels are oversampled in the antarctic and arctic areas of 2D panoramic images. All these issues make panoramic semantic segmentation a challenging task, and the above characteristics should be well studied to design a robust model that adapts to disturbance, disconnection, uneven density, and other SGA properties.

To improve robustness and take SGA properties into account, we propose a novel model, i.e., the Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation (SGAT4PASS), equipped with the SGA framework and SGA validation. The proposed SGA framework includes SGA image projection in the training process, Spherical Deformable Patch Embedding (SDPE), and a panorama-aware loss. SGA image projection provides images with 3D disturbance to improve the 3D robustness of the model. SDPE improves the patch embedding and makes it consider not only image distortions, via the deformable operation, but also spherical geometry, via SGA intra- and inter-offset constraints. The panorama-aware loss deals with the difference in pixel density between the original \(360^{\circ}\) data and its corresponding 2D panoramic images. Moreover, we propose a new validation method, i.e., SGA validation, to evaluate the 3D robustness of various models comprehensively; it considers different 3D disturbances of the input images and measures the average performance and the variance for comparison. Extensive experimental results on the popular Stanford2D3D panoramic datasets [1] demonstrate that our proposed approach achieves about 2% and 6% improvements on traditional metrics and SGA metrics, respectively. The contributions of this paper are summarized as follows:

* We propose SGAT4PASS, a robust model for the PASS task, which utilizes SGA image projection to deal with the 3D disturbance issue caused by ERP.
* We introduce SDPE to combine spherical geometry with the deformable operation to better deal with panoramic image distortion, and we also propose a panorama-aware loss to ease the oversampling problem.
* We evaluate SGAT4PASS on the popular benchmark and perform extensive experiments with both traditional metrics and the proposed SGA metrics, which demonstrate the effectiveness of each part of the framework.

## 2 Related Work

The two most related fields are panoramic semantic segmentation and dynamic and deformable vision transformers.

### Panoramic Semantic Segmentation

Semantic segmentation of panoramic images has many applications in real-world scenarios, such as autonomous driving [20], panoramic-lens safety and monitoring applications [13], etc.
With the development of deep learning, many neural networks have been developed for panoramic semantic segmentation. Deng et al. [6] first proposed a semantic segmentation framework for wide-angle (fish-eye) images and transformed an existing pinhole urban scene segmentation dataset into synthetic datasets. Yang et al. [21] designed a semantic segmentation framework for panoramic images using a panoramic annular camera with an entire Field of View (FoV) for panoramic surrounding perception based on a single camera. Furthermore, Yang et al. [22] proposed DS-PASS to improve it with a more efficient segmentation model with attention connections. PASS solutions can be divided into two main fields: distortion-aware strategies and 2D-geometry-aware ones.

For distortion-aware strategies, Tateno et al. [23] proposed using specially designed distortion-aware convolutions in a fixed manner to address image distortions. Furthermore, ACDNet [24] combined convolution kernels with different dilation rates adaptively and used fusion-driven methods to take advantage of several projections. Jiang et al. [25] designed a spherical convolution operation. Lee et al. [11] used spherical polyhedrons to represent panoramic views to minimize the difference in spatial resolution on the surface of the sphere and proposed new convolution and grouping methods for the spherical polyhedron representation. Hu et al. [19] proposed a distortion-aware convolutional module based on the imaging principle to address the distortion of panoramic images. Zhang et al. [24][25] designed Trans4PASS and Trans4PASS+, which perceive spherical distortion and better solve the distortion problem of spherical images through their Deformable Patch Embedding (DPE) and Deformable Multi-Layer Perceptron (DMLP) modules. Trans4PASS+ is the current SOTA panoramic semantic segmentation model and is our baseline.

For 2D-geometry-aware strategies, horizontal features are mainly used, based on the inherent properties of ERP. Sun et al. [26] proposed HoHoNet and Pintore et al. [12] proposed SliceNet, which use the extracted feature maps in a 1D horizontal representation. For our SGAT4PASS, which builds on the distortion-aware SOTA model Trans4PASS+, we add SGA information from the original \(360^{\circ}\) data instead of a 2D geometry prior on panoramic images, to improve not only its performance but also its robustness when facing 3D disturbance.

### Dynamic and Deformable Vision Transformers

Regarding the field of vision transformers, some works have developed architectures with dynamic properties. Chen et al. [3] and Xia et al. [11] used deformable designs in later stages of the encoder. Yue et al. [13] used a progressive sampling strategy to locate discriminative regions. Deformable DETR [15] used deformable attention to deal with feature maps. Some other works adaptively optimize the number of informative tokens to improve efficiency [14][12][13][14]. Zhang et al. [15] designed their Trans4PASS and Trans4PASS+ based on DPE and Deformable Multi-Layer Perceptron (DMLP) modules, and we use Trans4PASS+ as our baseline.

## 3 Method

We present the Spherical Geometry-Aware Transformer for PAnoramic Semantic Segmentation (SGAT4PASS) in this section. First, we introduce the background of panoramic semantic segmentation in Section 3.1. Second, we describe our main idea of applying different SGA properties to panoramic semantic segmentation in Section 3.2.
To improve the 3D robustness of SGAT4PASS, we propose SGA Image Projection, Spherical Deformable Patch Embedding (SDPE), and a panorama-aware loss. Specifically, SGA Image Projection adds rotated samples during training; SDPE adds SGA constraints on the deformable patch embedding; and the panorama-aware loss incorporates the pixel density of the sphere into the training process.

### Background

We first describe a general formulation of PASS and then introduce the spherical geometry property that we mainly focus on. Panoramic images are based on original \(360^{\circ}\) data formulated in the spherical coordinate system (based on longitude and latitude). To convert them to a rectangular image in a Cartesian coordinate system, ERP is a widely used projection in this field: \(x=(\theta-\theta_{0})\cos\phi_{1},\ y=(\phi-\phi_{1})\), where \(\theta_{0}=0\) is the central latitude and \(\phi_{1}=0\) is the central longitude. The ERP-processed rectangular images are used as the input samples in datasets and fed to the neural network, and the rectangular semantic segmentation results are obtained to compare with the ground truth and calculate the metrics. Although traditional methods can treat PASS as a conventional 2D semantic segmentation task and deal with panoramic images easily, the spherical geometry property is partly ignored.

### Spherical Geometry-Aware (SGA) Framework

We propose the SGA framework for PASS with SGA image projection, SDPE, and a panorama-aware loss. To deal with the inevitable 3D disturbance during the acquisition of the input image, our SGA image projection aims to encode the spherical geometry of the original \(360^{\circ}\) data by generating input images with different rotations. We design SDPE to model spatial dependencies on a sphere, making patch embedding consider both spherical geometry and image distortions. Furthermore, a panorama-aware loss is proposed to model the pixel density of a sphere, making the loss weight distribution more similar to the original \(360^{\circ}\) data. With these three modules, the spherical geometry is well employed in the PASS task.

Figure 2: Overall review of SGAT4PASS. We borrow the network from Trans4PASS+, and add three main modules: Spherical geometry-aware (SGA) image projection, SDPE, and panorama-aware loss. (Lower left) SGA image projection rotates the input panoramic images to mimic 3D disturbance. (Lower middle) SDPE adds several SGA constraints on deformable patch embedding and lets it consider both image distortions and spherical geometry. (Lower right) Panorama-aware loss (PA loss) takes into account the pixel density of a sphere.

### Spherical Geometry-Aware (SGA) Image Projection

The original \(360^{\circ}\) data follow a spherical distribution and are _spherically symmetric_. After rotating by any angle along the yaw / pitch / roll axis, the transformed data are still equivalent to the original data. Traditional strategies assume that the images are taken with the yaw / pitch / roll angles equal to zero degrees, which is too ideal in real-world scenarios and ignores camera disturbance and random noise. When the rotation angle is disturbed, traditional strategies usually suffer a large degradation on the PASS task. SGA image projection fuses this property between the inevitable equirectangular projection and regular image augmentation to make models robust to 3D disturbance.
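As an illustration of the rotate-and-reproject idea, which is formalized in the following paragraphs as \(O_{3D}=T(R\cdot T^{-1}(I))\), a minimal NumPy/SciPy sketch with nearest-neighbour resampling could look as follows. The Euler-angle convention, the image layout, and the function name are assumptions made for this sketch, not the released implementation.

```python
# Minimal sketch of re-projecting an equirectangular image after a 3D rotation,
# using nearest-neighbour lookup; angle conventions are illustrative assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotate_erp(img, yaw_deg, pitch_deg, roll_deg):
    h, w = img.shape[:2]
    # spherical coordinates of every output pixel (equirectangular layout)
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi           # [-pi, pi)
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi           # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)                      # T^{-1}: unit vectors
    # backward warping: find where each output direction came from (apply R^{-1})
    rot = R.from_euler("zyx", [yaw_deg, pitch_deg, roll_deg], degrees=True)
    src = rot.inv().apply(dirs.reshape(-1, 3)).reshape(h, w, 3)
    src_lon = np.arctan2(src[..., 1], src[..., 0])
    src_lat = np.arcsin(np.clip(src[..., 2], -1.0, 1.0))
    # T: spherical coordinates back to pixel indices, nearest neighbour
    col = np.mod(np.floor((src_lon + np.pi) / (2 * np.pi) * w), w).astype(int)
    row = np.clip(np.floor((np.pi / 2 - src_lat) / np.pi * h), 0, h - 1).astype(int)
    return img[row, col]
```

Applying the same index mapping to the label map keeps image and ground truth aligned, consistent with the nearest-pixel choice described below.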
We use \(T\) to represent the forward process of the ERP transformation, which converts spherical coordinates to plane coordinates, and \(T^{-1}\) to represent the inverse process of ERP, which transforms the plane back onto the sphere. Given an ERP-processed input panoramic image, we first transform the image \(I\), originally in plane coordinates, to spherical coordinates through the inverse ERP process. After that, we use a rotation matrix in three-dimensional (3D) space to perform a 3D rotation in the spherical coordinate system. For a general rotation in 3D space, the angles of yaw, pitch, and roll are \(\alpha_{\rm use}\), \(\beta_{\rm use}\), and \(\gamma_{\rm use}\), respectively, and the corresponding rotation matrix is \(R(\alpha_{\rm use},\beta_{\rm use},\gamma_{\rm use})\). We multiply \(R\) with the data in the spherical coordinate system to obtain the rotated data in the spherical coordinate system. Finally, we use the forward ERP process to convert the rotated spherical-coordinate image into a panoramic image, thus obtaining a rotated version of the actual network input. The point in the input image that corresponds to a pixel of the rotated image may not have integer coordinates, and we select the nearest pixel as its correspondence so that the same transformation is also applicable to the ground truth. Based on these operations, we build our SGA image projection, \(O_{3D}(I,\alpha_{\rm use},\beta_{\rm use},\gamma_{\rm use})=T(R(\alpha_{\rm use},\beta_{\rm use},\gamma_{\rm use})\cdot T^{-1}(I))\). (See Section C "Details for SGA Image Projection" in the supplementary material for details.) At the beginning of the training process, we set the maximum rotation angles of the yaw / pitch / roll axis to \((\alpha_{\rm train},\beta_{\rm train},\gamma_{\rm train})\).

### SDPE: Spherical Deformable Patch Embedding

We first introduce DPE, and then fuse spherical geometry into DPE via SGA constraints to obtain SDPE. Faced with image distortions in panoramic images, DPE, which considers different distortions in different regions of an input panoramic image, is a popular solution [22][22]. In detail, given a 2D input panoramic image, the standard patch embedding splits it into \(H\times W\) flattened patches, and the resolution of each patch is \((s,s)\). A learnable projection layer transforms each patch into an out-dimensional embedding. For each patch, the offsets \(\Delta^{DPE}_{(i,j)}\) of the \(i^{th}\) row, \(j^{th}\) column pixel are defined as: \[\Delta^{DPE}_{(i,j)}=\begin{bmatrix}\min(\max(-k_{D}\cdot H,\,g(f)_{(i,j)}),\,k_{D}\cdot H)\\ \min(\max(-k_{D}\cdot W,\,g(f)_{(i,j)}),\,k_{D}\cdot W)\end{bmatrix}, \tag{1}\] where \(g(\cdot)\) is the offset prediction function. The hyperparameter \(k_{D}\) puts an upper bound on the learnable offsets \(\Delta^{DPE}_{(i,j)}\). For implementation, the deformable convolution operation [2] is popularly employed to realize DPE. When fusing spherical geometry into DPE, human photographic and ERP priors are taken into consideration: the plane formed by the pitch and roll axes is always parallel to the ground plane and the projection cylinder is perpendicular to the ground plane. As a result, we add SGA constraints mainly on the yaw axis. In detail, we give intra-offset and inter-offset constraints on \(\Delta^{DPE}_{(i,j)}\). For convenience, we use \(\Delta^{m,n}_{(i,j)}\) to represent the \(i^{th}\) row, \(j^{th}\) column pixel of the learnable offset for the \(m^{th}\) row, \(n^{th}\) column patch.
Intra-offset constraint. Based on the phenomenon that the original \(360^{\circ}\) data are symmetric about any longitude and the projection cylinder in ERP is symmetric about any line perpendicular to the base of the cylinder, the offset of any pixel in the 2D input panoramic image \(I\) should be symmetric about its perpendicular. To be generic to the learnable offsets \(\Delta^{m,n}_{(i,j)}\) dealing with the image distortions, we use a constraint \(\mathcal{L}_{intra}\): \[\mathcal{L}_{intra}=\sum_{m,n}\sum_{i,j}L^{intra}_{2}(\Delta^{m,n}_{(i,j)},\Delta^{m,n}_{S(i,j)}), \tag{2}\] where \(\Delta^{m,n}_{S(i,j)}\) is the single-patch offset that is formed symmetrically along the yaw axis with \(\Delta^{m,n}_{(i,j)}\) as the template, and \(L^{intra}_{2}(\cdot,\cdot)\) represents the element-wise L2 loss.

\begin{table} \begin{tabular}{l c c} \hline \hline Method & Avg mIoU & F1 mIoU \\ \hline SidConv [21] & - & 32.6 \\ CubeDA [21] & - & 33.8 \\ DistConv [21] & - & 34.6 \\ SWSCNN [22] & 43.4 & - \\ Tangent (ResNet-101) [23] & 45.6 & - \\ FFeDSNet [2] & - & 46.1 \\ PanoFormer [2] & 48.9 & - \\ HoHoNet (ResNet-101) [22] & 52.0 & 53.9 \\ Trans4PASS (Small) [22] & 52.1 & 53.3 \\ CBFC [22] & 52.2 & - \\ Trans4PASS+ (Small) [22] & 53.7 & 53.6 \\ **Ours(Small)** & **56.3** & **56.4** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with the SOTA methods on Stanford2D3D Panoramic datasets. We follow recent works and report both the performance on official fold 1 and the average performance over all three official folds. “Avg mIoU” / “F1 mIoU” denotes the average mIoU over the three official folds / the mIoU on official fold 1, respectively. A considerable improvement is gained.

Inter-offset constraint. Based on the phenomenon that the projection cylinder in ERP can be slit and unrolled from any line perpendicular to the base of the cylinder, the offsets of pixels in the 2D input panoramic image \(I\) that correspond to the same latitude of the original \(360^{\circ}\) data should be similar. To be generic to the learnable \(\Delta^{DPE}_{(i,j)}\) dealing with the image distortions, we use a constraint, \(\mathcal{L}_{inter}\), to model this property. For a certain pixel, we use the average offset along the whole horizontal line as its constraint:
For an ERP-processed panoramic image, the number of pixels in each horizontal line is the same, but the corresponding resolution density on the original sphere of each horizontal line is very different. For this reason, we design a loss to reweight the loss proportion of different horizontal lines depending on its height. For a pixel \((m,n)|m\in[1,H_{I}],n\in[1,W_{I}]\) (\(W_{I}\) and \(H_{I}\) are the width and height of the input image), we give a weight \(w_{\mathrm{pan}}^{(m,n)}\) when calculating its per pixel loss: \[w_{\mathrm{pan}}^{(m,n)}=\cos(\frac{|2m-H_{I}|}{H_{I}}\cdot\frac{\pi}{2}). \tag{4}\] We use \(W_{\mathrm{pan}}\) to represent the set that includes all \(w_{\mathrm{pan}}^{(m,n)}\). When faced with a panoramic semantic segmentation problem, we first estimate the usage scenario to determine \(\beta\) and \(\gamma\) used in SGA image projection when \(\alpha\) is often set as \(360^{\circ}\) in common condition. We set our total loss as: \[\mathcal{L}_{\mathrm{all}}=(1+\lambda_{w}\cdot W_{\mathrm{pan}})\odot\mathcal{ L}_{SEG}+\lambda_{s}\cdot\mathcal{L}_{SDPE}, \tag{5}\] where \(\mathcal{L}_{SEG}\) is the common per pixel loss for semantic segmentation, \(\odot\) is the element-wise matrix multiplication, \(\lambda_{w}\) and \(\lambda_{s}\) are hyperparameters. ## 4 Experiments In this section, we evaluate our SGAT4PASS against the popular benchmark, Stanford2D3D, for both traditional metrics and our SGA validation. ### Datasets and Protocols We validate SGAT4PASS on Stanford2D3D Panoramic datasets [1]. It has 1,413 panoramas, and \begin{table} \begin{tabular}{c c c c} \hline \hline SGAIP & SDPE & PA & mIoU & Pixel accuracy \\ \hline \hline & & & 53.617 & 81.483 \\ ✓ & & & 54.637 & 82.303 \\ & ✓ & & 54.554 & 81.508 \\ & & ✓ & 54.833 & 81.733 \\ ✓ & ✓ & ✓ & 56.374 & 83.135 \\ \hline \hline \end{tabular} \end{table} Table 4: Effect of each SGAT4PASS module. We validate them on Stanford2D3D Panoramic datasets official fold 1 with traditional metrics. “SGAIP” / “SDPE” / “PA” means our SGA image projection / spherical deformable patch embedding / panorama-aware loss. Using anyone, an average improvement of 1.058% mIoU / 0.365% pixel accuracy is gained when using three gains 2.757% / 1.652%. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{(\(\beta\),\(\gamma\),\(\alpha\)) (\({}^{\circ}\))} & BL mIoU / PAcc & \multirow{2}{*}{(\(\beta\),\(\gamma\),\(\alpha\)) (\({}^{\circ}\))} & BL mIoU / PAcc & \multirow{2}{*}{(\(\beta\),\(\gamma\),\(\alpha\)) (\({}^{\circ}\))} & BL mIoU / PAcc & \multirow{2}{*}{(\(\beta\),\(\gamma\),\(\alpha\)) (\({}^{\circ}\))} & BL mIoU / PAcc \\ \cline{2-3} \cline{5-7} & Our mIoU / PAcc & & Our mIoU / PAcc & & Our mIoU / PAcc & & Our mIoU / PAcc \\ \hline \multirow{2}{*}{(0,0,0)} & 53.617 / 81.483 & \multirow{2}{*}{(0,5,0)} & 49.292 / 78.346 & \multirow{2}{*}{(5,0,0)} & 49.468 / 78.500 & \multirow{2}{*}{(5,5,0)} & 47.234 / 77.129 \\ \cline{2-3} \cline{5-7} & 56.374 / 83.135 & & 56.073 / 82.892 & & 56.074 / 82.905 & & 55.7584 / 82.794 \\ \hline \multirow{2}{*}{(0,90,90)} & 53.918 / 81.590 & \multirow{2}{*}{(0,5,90)} & 49.861 / 78.656 & \multirow{2}{*}{(5,0,90)} & 49.400 / 78.373 & \multirow{2}{*}{(5,5,90)} & 47.589 / 77.361 \\ \cline{2-3} \cline{5-7} & 56.441 / 83.130 & & 55.954 / 82.847 & & 56.128 / 82.895 & & 55.636 / 82.657 \\ \hline \multirow{2}{*}{(0,0,180)} & 53.587 / 81.476 & \multirow{2}{*}{(0,5,180)} & 49.344 / 78.532 & \multirow{2}{*}{(5,0,180)} & 49.536 / 78.585 & \multirow{2}{*}{(5,5,180)} & 47.458 / 77.307 \\ \cline{2-3} \cline{5-7} & 56.246 / 83.054 & & 55.951 / 82.906 & & 55.714 / 82.796 & & 55.501 / 82.750 \\ \hline \multirow{2}{*}{(0,0,270)} & 53.669 / 81.459 & \multirow{2}{*}{(0,5,270)} & 49.462 / 78.445 & \multirow{2}{*}{(5,0,270)} & 49.363 / 78.485 & \multirow{2}{*}{(5,5,270)} & 47.726 / 77.451 \\ \cline{2-3} \cline{5-7} & 56.223 / 83.051 & & 55.924 / 82.779 & & 55.983 / 82.904 & & 55.732 / 82.701 \\ \hline \hline \end{tabular} \end{table} Table 2: Detail performance comparison with Tran4PASS+ on Stanford2D3D Panoramic datasets official fold 1 with SGA metrics. All 18 situations are shown, and the analysis is in table 3. “BL” means the baseline, i.e., Tran4PASS+. “PAcc” means the pixel accuracy metric. \begin{table} \begin{tabular}{c c c c c} \hline \hline Statistics & \multicolumn{2}{c}{Baseline} & \multicolumn{2}{c}{Ours} \\ & mIoU & PAcc & mIoU & PAcc \\ \hline Mean & 50.033 & 78.949 & 55.984 (+5.951) & 82.887 (+3.938) \\ Variance & 5.147 & 2.413 & 0.066 (-5.081) & 0.020 (-2.393) \\ Range & 6.684 & 4.461 & 0.940 (-5.744) & 0.478 (-3.983) \\ \hline \hline \end{tabular} \end{table} Table 3: Overall performance comparison with Tran4PASS+ on Stanford2D3D Panoramic datasets in table 2 setting. “PAcc” means the pixel accuracy metric. SGAT4PASS earns considerable mean performance and significant robustness improvement. 13 semantic classes are labeled, and has 3 official folds, fold 1 / 2 / 3. We follow the report style of previous work [14]. Our experiments are conducted with a server with four A100 GPUs. We use Trans4PASS+ [14] as our baseline and set an initial learning rate of 8e-5, which is scheduled by the poly strategy with 0.9 power over 150 epochs. The optimizer is AdamW [13] with epsilon 1e-8, weight decay 1e-4, and batch size is 4 on each GPU. Other settings and hyperparameters are set the same as Trans4PASS+ [14]. 
For each input panoramic image \(I\) in an iteration, there is a 50% chance of using it directly and the other 50% chance of using it after SGA image projection, \(O_{3D}(I,\alpha_{\text{use}},\beta_{\text{use}},\gamma_{\text{use}})\), where \(\alpha_{\text{use}}\) / \(\beta_{\text{use}}\) / \(\gamma_{\text{use}}\) uniformly sampled from 0 to \(\alpha_{\text{train}}\) / \(\beta_{\text{train}}\) / \(\gamma_{\text{train}}\). We set \((\beta_{\text{train}},\gamma_{\text{train}},\alpha_{\text{train}})=(10^{\circ},10^{\circ},360^{\circ})\). \(\lambda_{w}\) and \(\lambda_{s}\) are set as 0.3 and 0.3, respectively. **Spherical Geometry-Aware (SGA) Validation.** Most PASS datasets use a unified ERP way to process original \(360^{\circ}\) data, PASS models have the potential to overfit the ERP way, cannot handle 3D disturbance well and have little 3D robustness. To validate the robustness of the PASS models, we propose a novel SGA validation. \(n_{\alpha}\), \(n_{\beta}\), and \(n_{\gamma}\) are the number of different angles for the yaw / pitch / roll axis, respectively, and \(n_{\alpha}\cdot n_{\beta}\cdot n_{\gamma}\) different-angle panoramic images for a certain original \(360^{\circ}\) data is earned. Panoramic semantic segmentation models are validated in all \(n_{\alpha}\cdot n_{\beta}\cdot n_{\gamma}\) settings, and their statistics are reported as SGA metrics. In our SGA validation, "Mean" means the average of all \(n_{\alpha}\cdot n_{\beta}\cdot n_{\gamma}\) traditional results (e.g., mIoU, per pixel accuracy, etc.). "Variance" means the variance of all \(n_{\alpha}\cdot n_{\beta}\cdot n_{\gamma}\) results. "Range" means the gap between the maximum and minimum results of all \(n_{\alpha}\cdot n_{\beta}\cdot n_{\gamma}\) results. Compared to traditional validation, SGA validation avoids models gain performance by fitting the ERP way of datasets and reflects objective 3D robustness. In detail, we assume that the 3D rotation disturbance is at most \(5^{\circ}\) / \(5^{\circ}\) / \(360^{\circ}\) of pitch (\(\beta\)) / roll (\(\gamma\)) / yaw (\(\alpha\)) angle. We set \(n_{\alpha}=4\) (\(0^{\circ},90^{\circ},180^{\circ},270^{\circ}\)), \(n_{\beta}=2\) (\(0^{\circ},5^{\circ}\)), and \(n_{\gamma}=2\) (\(0^{\circ},5^{\circ}\)). We use the mean of them as the final performance and observe the performance difference among them to indicate the 3D robustness of models. ### Performance Comparison In this part, we first compare several recent SOTA methods with traditional metrics, and then compare the latest SOTA Trans4PASS+ in detail with SGA metrics. **Traditional Metrics.** Comparison results on Stanford2D3D Panoramic datasets with SOTA methods in traditional metrics are shown in Table 1. Following recent work, we report the performance of both official fold 1 and the average performance of all three official folds. From the results, SGAT4PASS outperforms current SOTA models by 2.8% / 1.6% mIoU, respectively, which means that our SGAT4PASS has a considerable performance margin compared to current models with traditional metrics. **SGA Metrics.** Comparison results on Stanford2D3D Panoramic datasets with our SGA validation metrics are shown in Table 3, and Table 2 is the detailed performance of each situation. For mean mIoU / pixel accuracy, an improvement of nearly 6% / 4% is achieved, respectively. Furthermore, our variance is about \(\frac{1}{100}\) and our fluctuation range is about \(\frac{1}{10}\). These results show that our SGAT4PASS have much better robustness than Trans4PASS+. 
Figure 3: Visualization comparison of SGAT4PASS and Trans4PASS+. The rotation of the pitch / roll / yaw axis is \(5^{\circ}\) / \(5^{\circ}\) / \(180^{\circ}\). SGAT4PASS gains the better results of semantic class “door” and “sofa” (highlighted by red dotted line boxes). \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Statistics} & \multicolumn{2}{c}{Baseline} & \multicolumn{2}{c}{Ours} \\ & mIoU & Pixel accuracy & mIoU & Pixel accuracy \\ \hline \((\beta,\gamma,\alpha)=(1^{\circ},1^{\circ},360^{\circ})\) & & & \\ Mean & 53.473 & 81.251 & 56.212 (+2.739) & 83.021 (+1.770) \\ Variance & 0.056 & 0.029 & 0.011 (-0.045) & 0.003 (-0.026) \\ Range & 0.856 & 0.591 & 0.394 (-0.462) & 0.192 (-0.399) \\ \hline \((\beta,\gamma,\alpha)=(0^{\circ},0^{\circ},360^{\circ})\) & & & \\ Mean & 53.698 & 81.502 & 56.321 (+2.623) & 83.093 (+1.591) \\ Variance & 0.017 & 0.003 & 0.008 (-0.009) & 0.002 (-0.001) \\ Range & 0.331 & 0.131 & 0.218 (-0.113) & 0.084 (-0.047) \\ \hline \hline \end{tabular} \end{table} Table 5: Overall performance comparison on Stanford2D3D Panoramic datasets in different SGA metrics in two more favorable settings for Tran4PASS+. SGAT4PASS also earns considerable mean performance and significant robustness improvement. ### Ablation Study Effect of Three Modules in Training Process.The effectiveness of SGA image projection, SDPE, and panoramaware loss are studied on Stanford2D3D Panoramic datasets official fold 1 with traditional metrics as shown in Table 4. (a) SGA image projection: Using it alone improves the baseline mIoU / per pixel accuracy by 1.020% / 0.820%. (b) SDPE: Using SDPE alone outperforms the baseline by 0.937% and 0.025% in mIoU and per pixel accuracy. (b) Panorama-aware loss: Using it alone improves the baseline by 1.216% and 0.250% in mIoU and per pixel accuracy. Effect of SGA Validation.We demonstrate the effect of SGA validation, which means a stronger generalizability to resist 3D rotational perturbation. We carried out experiments with two smaller disturbance settings on the pitch and roll axes (\((\beta,\gamma,\alpha)=(1^{\circ},1^{\circ},360^{\circ})\) / \((0^{\circ},0^{\circ},360^{\circ})\)), which are more favorable settings for Trans4PASS+ [15], because it is designed for the standard panoramic view image (\((\beta_{\rm use},\gamma_{\rm use},\alpha_{\rm use})=(0^{\circ},0^{\circ},0^{ \circ})\)). The overall statistical results are shown in Table 5. For the \((\beta,\gamma,\alpha)=(1^{\circ},1^{\circ},360^{\circ})\) setting, an improvement of approximately 2.7 % / 1.7 % is obtained for the mean mIoU / pixel accuracy. Our variance is approximately \(\frac{1}{5}\) / \(\frac{1}{10}\) and our fluctuation range is approximately \(\frac{1}{2}\) / \(\frac{1}{3}\) in mIoU / pixel precision. In \((\beta,\gamma,\alpha)=(0^{\circ},0^{\circ},360^{\circ})\) setting, mean mIoU / pixel accuracy gains approximately 2.6% / 1.6% improvement, variances / fluctuation is approximately \(\frac{1}{2}\) / \(\frac{2}{3}\) for SGAT4PASS. SGAT4PASS has better robustness even with little 3D perturbations. The detailed performance of these two settings and the performance of several random rotation settings are shown in Section A "Detailed Performance of SGA Validation" in the supplementary material. ### Discussion and Visualizations Performance of All Semantic Classes and Visualizations.We show the detailed performance of all 13 semantic classes on the Stanford2D3D Panoramic datasets with both traditional and SGA metrics in Table 6, respectively. 
We focus mainly on the classes with significant performance gaps and mark the gap larger than 5% / 10% as red numbers / bold red numbers, respectively. There is no semantic class for which the baseline is significantly better. From the results, we can learn that the "sofa" and "door" classes improve more. An image with "door" and "sofa" is visualized in Figure 3. Rotation of the pitch / roll / yaw axis is \(5^{\circ}\) / \(5^{\circ}\) / \(180^{\circ}\). The baseline prediction gap between the original and rotated input is large, which means less robustness. It predicts the door near the right boundary in Figure (c)c overall right, but it is totally wrong with rotation in Figure (g)g when SGAT4PASS predicts both correct. The baseline predictions for the sofa change a lot with rotation when SGAT4PASS is stable. More visualizations are shown in Section B "More Visualizations" in the supplementary material. Different Hyper-Parameters.\(\lambda_{w}\) and \(\lambda_{s}\) are hyperparameters in our SGAT4PASS. \(\lambda_{s}\) / \(\lambda_{w}\) determines the proportion of our constraint of spherical geometry in SDPE / panoramaware loss. We apply them on the baseline, respectively. Traditional mIoU results are shown in Figure (a)a and Figure (b)b, and we choose 0.3 / 0.3 as the final \(\lambda_{w}\) / \(\lambda_{s}\). ## 5 Conclusion We have studied an underexplored but important field in panoramic semantic segmentation, i.e., the robustness of dealing with 3D disturbance panoramic input images. We have shown that using our SGA framework is key to improving the semantic segmentation quality of 3D disturbance inputs. It applies spherical geometry prior to panoramic semantic segmentation and gains considerable improvement. In detail, the SGA framework includes SGA image projection, SDPE, and panorama-aware loss. We also validated the effectiveness of our SGAT4PASS on popular datasets with the traditional metrics and the proposed SGA metrics, and studied its properties both empirically and theoretically. \begin{table} \begin{tabular}{c|c|c|c c c c c c c c c c c} \hline \hline Network & Test Method & mIoU & beam & board & bookcase & ceiling & chair & clutter & column & door & floor & sofa & table & wall & window \\ \hline Trans4Pass+ & \multirow{2}{*}{Traditional} & 53.62 & 0.39 & 74.4 & 65.32 & 84.21 & 62.86 & 36.44 & 15.96 & 32.79 & 93.09 & 44.10 & 63.67 & 75.02 & 46.90 \\ Ours & & 56.37 & 0.73 & 74.05 & 65.91 & 84.20 & 64.53 & 41.24 & 19.62 & **52.67** & 93.08 & **56.92** & 58.86 & 76.43 & 44.62 \\ \hline Trans4Pass+ & \multirow{2}{*}{SGA} & 50.03 & 0.26 & 73.78 & 62.21 & 83.82 & 61.87 & 32.11 & 10.93 & 20.26 & 92.96 & 38.33 & 61.78 & 74.35 & 37.73 \\ Ours & & 55.98 & 0.78 & 73.94 & 65.56 & 84.08 & 64.39 & 40.96 & 18.31 & **51.64** & 92.98 & **56.53** & 58.14 & 76.06 & 44.42 \\ \hline \hline \end{tabular} \end{table} Table 6: Per-class mIoU results on Stanford2D3D Panoramic datasets according to the fold 1 data setting with traditional mIoU and per-pixel accuracy metrics. No mark for the results that the gap between Trans4Pass+ and Ours less than 5% (performance at the same level). Our results will be red when Ours outperforms more than 5%. If Ours outperforms more than 10%, our results will be **bold and red**. There is no semantic class that Trans4Pass+ outperforms Ours 5% or more. Figure 4: Influence of \(\lambda_{s}\) and \(\lambda_{w}\) in SGAT4PASS. The results are carried out on Stanford2D3D Panoramic datasets official fold 1. 
## Acknowledgements This work is supported in part by National Key Research and Development Program of China under Grant 2020AAA0107400, National Natural Science Foundation of China under Grant U20A20222, National Science Foundation for Distinguished Young Scholars under Grant 62225605, Research Fund of ARC Lab, Tencent PCG, The Ng Teng Fong Charitable Foundation in the form of ZJU-SUTD IDEA Grant, 188170-11102 as well as CCF-Zhipu AI Large Model Fund (CCF-Zhipu202302). ## Contribution statement Xuewei Li and Tao Wu contributed equally to this work.
2308.15915
The double-well Bose Hubbard model with nearest-neighbor and cavity-mediated long-range interactions
We consider a one-dimensional Bose-Hubbard model (BHM) with on-site double-well potentials and study the effect of nearest-neighbor repulsion and cavity-mediated long-range interactions by calculating the ground-state phase diagrams with quantum Monte-Carlo simulations. We show that when the intra-well repulsion is as strong as the on-site repulsion a dimerized Mott insulator phase appears at the tip of the dimerized Density Wave phase for a density of one particle per double well. Furthermore, we find a dimerized Haldane insulator phase in the double-well BHM with nearest-neighbor interaction, which is identical to a dimerized BHM with repulsive interactions up to the third neighbor.
Johannes Sicks, Heiko Rieger
2023-08-30T09:42:32Z
http://arxiv.org/abs/2308.15915v2
The double-well Bose Hubbard model with nearest-neighbor and cavity-mediated long-range interactions ###### Abstract We consider a one-dimensional Bose-Hubbard model (BHM) with on-site double-well potentials and study the effect of nearest-neighbor repulsion and cavity-mediated long-range interactions by calculating the ground-state phase diagrams with quantum Monte-Carlo simulations. We show that when the intra-well repulsion is as strong as the on-site repulsion a dimerized Mott insulator phase appears at the tip of the dimerized Density Wave phase for a density of one particle per double well. Furthermore, we find a dimerized Haldane insulator phase in the double-well BHM with nearest-neighbor interaction, which is identical to a dimerized BHM with repulsive interactions up to the third neighbor. ## I Introduction Since its introduction, the Bose-Hubbard-Model (BHM) [1] has been a focus of research. In the simplest form, where tunneling between neighboring sites and a repulsive on-site interaction exist, the ground state phase diagram is characterized by two phases. For commensurate fillings and large on-site repulsions compared to the bosonic tunneling, a Mott insulator (MI) phase appears, while for incommensurate fillings or weak on-site repulsion, the Superfluid (SF) phase exists. Jaksch _et al._ showed in [2], that the dynamics of ultracold bosons, which are contained by an optical lattice, realize a BHM. This was experimentally shown by Greiner _et al._[3] and led consequentially to a broad study of experiments of ultracold bosons in optical lattices [4; 5; 6; 7; 8]. The universality class of the SF-MI phase transition in the BHM is generally of mean-field type, except for the multicritical point, where particle-hole symmetry holds [1]. At this point, the universality class changes to the type of the (\(d\)+1) dimensional XY model [9; 10; 11], where the two dimensional XY model [12; 13] has a topological Kosterlitz-Thouless phase transition [14; 15]. Furthermore, when the occupation per site is restricted to 0, 1 and 2 bosons per site, the one-dimensional BHM can be described by a quantum spin 1 chain, which features a gapped Haldane insulator phase, characterized by a nonlocal order parameter [16; 17; 18; 19; 20]. The quantum critical phenomena of the BHM were studied extensively with quantum Monte-Carlo (QMC) methods, like the path-integral [21; 22], world-line [23; 24] and worm-algorithm QMC [25; 26], of which we use the latter in this work. Also approximate methods were used, like the mean-field theory [1; 27] and density matrix renormalization group method [28]. There are many different modifications and extensions to the BHM, originating from the addition of interactions or changes to the confining potentials. Possible interactions are nearest-neighbor interaction [28; 29; 30; 31; 32; 33; 34; 35] (also referred to as extended BHM), next-nearest-neighbor interaction [16; 36] and hopping [37], cavity-mediated long-range interaction [38; 39; 40; 4] and combinations of nearest-neighbor and long-range interaction [41; 42]. Changes to the confining potentials include for instance disordered potentials [43; 44] and double-well potentials [45; 46; 47; 48; 49; 50; 51]. Superimposing two optical lattices with different wavelengths form a so-called superlattice, like the double-well lattice in which each site consists of a double-well potential [45; 52]. 
With the help of double-well lattices, quantum information processes can be studied [53; 54], as it allows for example to manipulate atoms individually [55] or study the many-body dynamics and entanglement of a double-well chain [48]. Furthermore, the hard-core bosonic double-well BHM is the bosonic counterpart of the Su-Schrieffer-Heeger model [56] for free fermions, which possesses a nontrivial topological insulator phase. For the hard-core double-well BHM, this topological phase was shown as well [51; 57] and the ground state properties were studied recently [58]. In this paper, we study the ground state phase diagram of the one-dimensional extended double-well BHM with cavity-mediated long-range interaction. In this model, each lattice site consists of one double-well potential, which are aligned in a chain. We consider nearest-neighbor interaction between the sites, therefore bosons in both wells of the double-well on one site feel the interactions between all wells of the neighboring double-well sites. Also, the intra-well repulsion between the two wells of each double-well is taken into account. Its parameter space includes a one-dimensional extended single-well BHM with long-range interaction, where additional Density Wave (DW), Supersolid (SS) and Haldane insulator (HI) phases appear [42] and a one-dimensional dimerized BHM, where intra-well repulsion and hopping strength alternate between every other site [58; 59; 60; 61; 62] and bond-ordered phases appear [63]. We are interested in the similarities and differences in the ground state behavior between the double-well BHM and the single-well and dimerized BHM. While the effects of cavity-mediated long-range interaction on the single-well BHM were studied [4; 38], its effects on a double-well lattice chain remain unknown. Also the interaction between neighboring double-wells - which corresponds to an interaction range up to the third neighbor in the dimerized chain - has not been considered yet. The paper is organized as follows: In Section II, the Hamiltonian of the one-dimensional extended double-well BHM with cavity-mediated long-range interaction is defined and the order parameters are introduced. Then, the analytically solvable ground states without hopping terms are discussed. Section III contains the QMC worm-algorithm results for the ground states of the standard and extended double-well BHM. First, we examine the standard double-well BHM, before giving results for the extended double-well BHM and double-well BHM with cavity-mediated long-range interaction. The conclusion is given in Section IV. ## II Model ### Hamiltonian of the Double-Well BHM We state the one-dimensional extended double-well Bose Hubbard Model (BHM) Hamiltonian with cavity-mediated long-range interaction. In Fig. 1 we depict all interactions of the Hamiltonian except for the cavity-mediated long-range interaction. Each site position contains one double-well and is labeled by the index \(i\in{1,\ldots,L}\), while the left and right well of the respective double-well is given by \(\sigma\in\{\mathcal{L},\mathcal{R}\}\). Here, \(L\) is the length of the chain. We use periodic boundary conditions, thus \(L+1,\sigma\equiv 1,\sigma\). 
The Hamilton operator for the one-dimensional extended double-well BHM with cavity-mediated long-range interaction reads \[\hat{\mathcal{H}}=\hat{\mathcal{H}}_{t}+\hat{\mathcal{H}}_{U}+\hat{\mathcal{ H}}_{U_{\mathcal{LR}}}+\hat{\mathcal{H}}_{V}+\hat{\mathcal{H}}_{\mu}+\hat{ \mathcal{H}}_{U_{d}}, \tag{1}\] where the particular terms have the following form. \(\hat{\mathcal{H}}_{t}\) depicts the hopping terms \[\hat{\mathcal{H}}_{t}=-t_{1}\sum_{i}\left(\hat{b}^{\dagger}_{i,\mathcal{L}} \hat{b}_{i,\mathcal{R}}+h.c.\right)-t_{2}\sum_{i}\left(\hat{b}^{\dagger}_{i, \mathcal{R}}\hat{b}_{i+1,\mathcal{L}}+h.c.\right), \tag{2}\] where \(t_{1}\) is the intra-well hopping parameter between left and right wells of a single double-well on each site and \(t_{2}\) the inter-well hopping parameter between adjacent left and right wells of double-wells next to each other. \[\hat{\mathcal{H}}_{U}=\frac{U}{2}\sum_{\begin{subarray}{c}i,\\ \sigma=\mathcal{L},\mathcal{R}\end{subarray}}\hat{n}_{i,\sigma}(\hat{n}_{i, \sigma}-1) \tag{3}\] is the on-site interaction on each site, separated between the two wells and \[\hat{\mathcal{H}}_{U_{\mathcal{LR}}}=U_{\mathcal{LR}}\sum_{i}\hat{n}_{i_{ \mathcal{L}}}\hat{n}_{i,\mathcal{R}} \tag{4}\] defines the intra-well repulsion between bosons located in different wells on the same site. The repulsive interaction between neighboring sites is given by \[\hat{\mathcal{H}}_{V}=V\sum_{\begin{subarray}{c}i\\ \sigma,\sigma^{\prime}=\mathcal{L},\mathcal{R}\end{subarray}}\hat{n}_{i, \sigma}\hat{n}_{i+1,\sigma^{\prime}}\equiv V\sum_{i}\hat{n}_{i}\hat{n}_{i+1}. \tag{5}\] Here we assume, that the spatial distance between neighboring sites is much larger than the distance between left and right well of the double-well on one site and therefore that \(V\) is independent of the well index \(\sigma\). To abbreviate the notation, we define \(\hat{n}_{i}=\hat{n}_{i,\mathcal{L}}+\hat{n}_{i,\mathcal{R}}\) and can omit most of the \(\sigma\)-indices in the Hamiltonians. The chemical potential term is \[\hat{\mathcal{H}}_{\mu}=-\mu\sum_{\begin{subarray}{c}i\\ \sigma=\mathcal{L},\mathcal{R}\end{subarray}}\hat{n}_{i,\sigma}\equiv-\mu\sum _{i}\hat{n}_{i} \tag{6}\] with the total boson number operator \(\hat{N}=\sum_{i}\hat{n}_{i}\). The last Hamiltonian \[\hat{\mathcal{H}}_{U_{d}}= -\frac{U_{d}}{L}\left(\sum_{\begin{subarray}{c}i\\ \sigma=\mathcal{L},\mathcal{R}\end{subarray}}\hat{n}_{i,\sigma}-\sum_{ \begin{subarray}{c}i\\ \sigma=\mathcal{L},\mathcal{R}\end{subarray}}\hat{n}_{i,\sigma}\right)^{2}\] \[= -\frac{U_{d}}{L}\left(\sum_{i\text{ even}}\hat{n}_{i}-\sum_{i \text{ odd}}\hat{n}_{i}\right)^{2} \tag{7}\] Figure 1: Sketch of two sites of the double-well Bose-Hubbard model according to the interactions in Hamiltonian (1) without the \(\hat{\mathcal{H}}_{U_{d}}\) term. Each site consists of a double-well with a left (\(\mathcal{L}\)) and a right (\(\mathcal{R}\)) well. In each well, bosons experience an on-site repulsion \(U\), when two or more bosons are present. Intra-well tunneling \(t_{1}\) is possible between left and right well of the same site and inter-well tunneling \(t_{2}\) between left and right well of adjacent sites. Intra-well repulsion \(U_{\mathcal{LR}}\) is present between left and right well of the same site and the nearest-neighbor interaction \(V\) interacts between all wells of adjacent sites. represents the cavity-mediated long-range interaction between even and odd chain sites. 
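Since all terms of Hamiltonian (1) except the hopping term \(\hat{\mathcal{H}}_{t}\) are diagonal in the occupation-number basis, their contribution to the energy of a given occupation configuration can be written down directly; this is the quantity underlying the zero-hopping analysis later in this section. The following sketch is only an illustration, and the parameter values in the example call are arbitrary.

```python
import numpy as np

def diagonal_energy(n, U, U_LR, V, mu, U_d):
    """Diagonal energy of Hamiltonian (1), i.e. all terms except the hopping H_t.

    n: integer array of shape (L, 2); n[i, 0] / n[i, 1] is the occupation of the
    left / right well of double-well site i. Periodic boundary conditions.
    """
    n = np.asarray(n, dtype=float)
    L = n.shape[0]
    n_site = n.sum(axis=1)                               # n_i = n_{i,L} + n_{i,R}
    e_onsite = 0.5 * U * np.sum(n * (n - 1.0))           # Eq. (3)
    e_intra  = U_LR * np.sum(n[:, 0] * n[:, 1])          # Eq. (4)
    e_nn     = V * np.sum(n_site * np.roll(n_site, -1))  # Eq. (5), periodic chain
    e_mu     = -mu * np.sum(n_site)                      # Eq. (6)
    imbalance = n_site[0::2].sum() - n_site[1::2].sum()  # even minus odd sites
    e_cavity = -U_d / L * imbalance ** 2                 # Eq. (7)
    return e_onsite + e_intra + e_nn + e_mu + e_cavity

# Example: two bosons on every other double well versus one boson per double well,
# on a small chain of L = 4 double-well sites (arbitrary illustrative parameters).
imbalanced = [[1, 1], [0, 0], [1, 1], [0, 0]]
uniform    = [[1, 0], [0, 1], [1, 0], [0, 1]]
print(diagonal_energy(imbalanced, U=1.0, U_LR=0.5, V=0.2, mu=0.4, U_d=0.3))
print(diagonal_energy(uniform,    U=1.0, U_LR=0.5, V=0.2, mu=0.4, U_d=0.3))
```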
Hamiltonian (1) is identical to a dimerized chain with \[\hat{\mathcal{H}}_{\text{\tiny{dim}}}=-t\sum_{j}\left(1+(-1)^{j+1}\delta\right)\left(\hat{b}_{j}^{\dagger}\hat{b}_{j+1}+h.c.\right), \tag{8}\] where \(t=(t_{1}+t_{2})/2\) is the mean hopping strength and \(\delta=(t_{1}-t_{2})/(t_{1}+t_{2})\) the bond dimerization. Likewise, the intra-well repulsion can be understood as a dimerized nearest-neighbor interaction \[\hat{\mathcal{H}}_{U_{\mathcal{LR}},\text{ dim}}=-\frac{U_{\mathcal{LR}}}{2}\sum_{j}\left(1+(-1)^{j+1}\right)\hat{n}_{j}\hat{n}_{j+1}. \tag{9}\] The index \(j\) is hereby the combination of the \(i,\sigma\) notation into one index, where \(j\equiv i,\mathcal{L}\) and \(j+1\equiv i,\mathcal{R}\). Therefore, the chain length is doubled. Fig. 2 represents Hamiltonian (1) in the form of a dimerized-chain BHM, where the depicted interactions correspond to Fig. 1. The cavity-mediated long-range interaction is not shown. Each double-well hereby corresponds to a pair of sites in the dimerized chain, highlighted by the dotted circle around the pair. Therefore, even and odd sites of the double-well BHM are even and odd _pairs of sites_ of the dimerized chain, which is important to note for the cavity-mediated long-range interaction and for the notation of the phases used in this work. The intra-well repulsion acts as a dimerized interaction itself and can, for instance, be regarded as an inter-chain nearest-neighbor interaction of a two-leg ladder model [64]; however, the alignment of the sites differs between this model and the double-well BH chain used in this work.

### Simulation Method and Order Parameters

We use the exact quantum Monte-Carlo (QMC) worm-algorithm [25; 26] to obtain the phase diagrams. This method operates in the grand-canonical ensemble; thus, the boson number is not fixed. We consider chain lengths up to \(L=64\), where each site consists of one double-well potential. We elaborate on the QMC-WA further in Appendix A. From the QMC-WA simulations we obtain the boson density \[\rho=\frac{1}{L}\sum_{i}\langle\hat{n}_{i}\rangle \tag{10}\] and the superfluid density \[\rho_{s}=\frac{\langle W^{2}\rangle L}{2t_{2}\beta}, \tag{11}\] with \(W\) being the winding number, which is defined as the difference between the number of boson lines crossing the periodic boundary in one direction versus the other direction. Furthermore, with the density-density correlation \(D(r)=\frac{1}{L}\sum_{i}\langle\hat{n}_{i}\hat{n}_{i+r}\rangle\), the structure factor is defined as \[S(k)=\frac{1}{L}\sum_{r}e^{ikr}D(r). \tag{12}\] With these order parameters we are able to distinguish between the Mott insulator (MI), Superfluid (SF), Density Wave (DW) and Supersolid (SS) phases. As shown for the single-well BHM with nearest-neighbor interactions, the so-called Haldane insulator (HI) phase can emerge at the tip of the DW lobes [16; 20], originating from the spin-1 antiferromagnetic Heisenberg chain [17; 18]. To determine the HI we must introduce two non-local observables, the string and parity operators, \[\mathcal{O}_{s}(|i-j|)=\left\langle\delta\hat{n}_{i}\exp\left\{i\pi\sum_{k=i}^{j}\delta\hat{n}_{k}\right\}\delta\hat{n}_{j}\right\rangle, \tag{13}\] \[\mathcal{O}_{p}(|i-j|)=\left\langle\exp\left\{i\pi\sum_{k=i}^{j}\delta\hat{n}_{k}\right\}\right\rangle, \tag{14}\] Figure 2: Sketch of the dimerized chain described by Eq. (1). The \(\hat{\mathcal{H}}_{U_{d}}\) term is not depicted. The dotted circles around two sites are a guide to the eye and correspond to the double-wells of Fig.
1. The indices \(i,\mathcal{L}\)\((i,\mathcal{R})\) match \(j\)\((j+1)\) and result in a dimerized chain length twice as long as the double-well chain. where \(\delta\hat{n}_{i}=\hat{n}_{i}-\rho\) is the difference between particle number and density. Due to periodic boundary conditions, both observables are evaluated for \(|i-j|=L/2\). The upper part in Tab. 1 shows the phases, which can be identified in the ground state phase diagram with the help of the above mentioned order parameters. As notation, we use for Mott insulator phases MI(X), where X is the number of bosons in each double-well. For the Density Wave phases we use DW(X,Y), with X being the boson number on even sites and Y the boson number on odd sites (see Fig. 3 (a)). To further differentiate the behavior of the double-well dynamics, we introduce the intra-well fluctuation parameter \[\zeta\propto\langle\hat{b}_{i,\mathcal{L}}\hat{b}_{i,\mathcal{R}}^{\dagger}+h. c.\rangle \tag{15}\] as an indicator for the bosonic movement inside a double-well between the left and right well. It is linked to the kinetic energy operator for dimerized models [59]. Furthermore we define the well occupation difference \[\Delta=\frac{1}{L}\sum_{i}\langle|\hat{n}_{i,\mathcal{L}}-\hat{n}_{i, \mathcal{R}}|\rangle. \tag{16}\] When \(\Delta=0\) the boson distribution inside a double-well is symmetric, meaning that as many particle are present in the left well, as in the right well for every site. If \(\Delta>0\) the symmetry is (partially) broken, as it happens when intra-well fluctuations become stronger. When \(\zeta=0\) all fluctuations inside the double-wells vanish. For \(\zeta>0\) and \(\rho_{s}=0\) the movement inside the double-wells can be compared to the dimerized BHM, where the finite bond dimerization leads to dimerized Mott insulator (D-MI), dimerized Density Wave (D-DW) and dimerized Haldane insulator (D-HI) phases [59]. The bottom part of Tab. 1 shows the dimerized phases, which are characterized by \(\zeta\) and \(\Delta\). In the following, we express all parameters in units of the on-site repulsion by the abbreviated form \(\tilde{\mu}=\mu/U\) and analogous for all other parameters. ### Ground States without Hopping Terms The ground state phase diagram of Hamiltonian (1) can be calculated analytically when the hopping terms are neglected, because the number operator is diagonal in the Fock basis. In analogy to the ground state phase diagrams of the single-well BHM and dimerized BHM we expect Mott insulator (MI) phases and Density Wave (DW) phases to appear. We do not differentiate between various boson configurations inside each double-well. This is determined by the ratio of on-site repulsion and intra-well repulsion \(U_{\mathcal{LR}}/U=\tilde{U}_{\mathcal{LR}}\). If \(\tilde{U}_{\mathcal{LR}}<1\), the symmetric filling of left and right wells on each site is most favorable and in case of odd boson numbers, the last particle can be located in either well, resulting in two equal likely configurations for each double-well site. In the case \(\tilde{U}_{\mathcal{LR}}=1\), on-site repulsion and intra-well repulsion are equally strong. Thus, the energy gain by increasing the boson number in one well by one and the energy gain by increasing the boson number in the neighboring well on one site are the same. As a result, all boson distributions inside the double-well share the same energy and the degeneracy can be determined by combinatorics. 
To distribute X bosons in two wells, there are \(\binom{X+2-1}{X}=X+1\) possible arrangements per site. Fig. 3 shows the ground state phase diagram of the double-well BHM with cavity-mediated long-range interaction \(U_{d}\) and nearest-neighbor interaction \(V\) for \(\tilde{U}_{\mathcal{LR}}=0\) and \(\tilde{U}_{\mathcal{LR}}=1\). Generally, nearest-neighbor and cavity-mediated long-range interaction share a lot of commonalities in their ground state behavior, like on a mean-field level, where they show identical phase diagrams [38]. For \(\tilde{U}_{\mathcal{LR}}=0\), the occupation of left and right well on each site is symmetric, thus only even particle numbers per site occur. One difference is the behavior, when the interaction becomes sufficient large compared to the on-site repulsion, as for the cavity-mediated long-range interaction there is no global energy minimum and thus no ground state anymore. One can see the reason for this in the energy per site, which the system is gaining via the nearest-neighbor interaction and the long-range interaction \[\epsilon_{V} =4VXY,\] \[\epsilon_{U_{d}} =-\frac{U_{d}}{4}\left(X^{2}+Y^{2}-2XY\right), \tag{17}\] where \(X\) and \(Y\) give the particle number on even and odd sites. Apparently, the nearest-neighbor interaction increases the energy, when neighboring sites are occupied. This includes the MI phases, while DW phases remain unaffected. This reverses for the long-range \begin{table} \begin{tabular}{l|c c c c c c} & \(\rho_{s}\) & \(S(\pi)\) & \(\mathcal{O}_{s}\left(\frac{\mathcal{L}}{2}\right)\) & \(\mathcal{O}_{p}\left(\frac{L}{2}\right)\) & \(\zeta\) & \(\Delta\) \\ \hline SF & \(\neq 0\) & \(0\) & \(0\) & \(0\) & \(\neq 0\) & \(\neq 0\) \\ SS & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) \\ DW(\(X_{o}\),0) & \(0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(0\) & \(1/2\) \\ DW(\(X_{e}\),0) & \(0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(0\) & \(0\) \\ MI(\(X_{o}\)) & \(0\) & \(0\) & \(0\) & \(\neq 0\) & \(0\) & \(1\) \\ MI(\(X_{e}\)) & \(0\) & \(0\) & \(0\) & \(\neq 0\) & \(0\) & \(0\) \\ HI(1) & \(0\) & \(0\) & \(\neq 0\) & \(0\) & \(0\) & \(\neq 0\) \\ \hline D-DW(\(X_{o}\),0) & \(0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) \\ D-DW(\(X_{e}\),0) & \(0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) \\ D-MI(\(X_{o}\)) & \(0\) & \(0\) & \(0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) \\ D-MI(\(X_{e}\)) & \(0\) & \(0\) & \(0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) \\ D-HI(1) & \(0\) & \(0\) & \(\neq 0\) & \(0\) & \(\neq 0\) & \(\neq 0\) \\ \end{tabular} \end{table} Table 1: Order parameters for the phases studied in this paper. We differentiate between (dimerized) MI and DW phases with even (e) or odd (o) particle numbers per double-well. interaction, where the energy is lowered, when a misbalance of particle occupation between even and odd sites is present, as for the DW phases, while the MI phases are unaffected from the long-range interaction. Hence, when the energy lowering from the long-range interaction is more than the energy gain from the on-site repulsion, the global energy function becomes a concave function and no energy minimum exists anymore. We can also explain why the transition points for \(\tilde{U}_{\mathcal{LR}}=0\) are halved compared to \(\tilde{U}_{\mathcal{LR}}=1\), as seen in Fig. 3 (b). 
For \(\tilde{U}_{\mathcal{LR}}=0\), the Hamiltonian (1) without hopping terms scales by the factor two in its on-site repulsion per site, while the nearest-neighbor and cavity-mediated long-range interaction both scale by the factor four per site. On the other hand for the \(\tilde{U}_{\mathcal{LR}}=1\) case, we first rewrite the on-site repulsion term Eq. (3) as \[\hat{\mathcal{H}}_{U}=\frac{U}{2}\sum_{i}\hat{n}_{i}(\hat{n}_{i}-1)-U\sum_{i} \hat{n}_{i,\mathcal{L}}\hat{n}_{i,\mathcal{R}} \tag{18}\] and see that the intra-well repulsion Eq. (4) and last term of Eq. (18) cancel each other out. What remains is the Hamiltonian of a single-well BHM, where on-site repulsion, nearest-neighbor and cavity-mediated long-range interaction scale equally. ## III Results With the QMC-WA, we study the double-well BHM with inter- and intra-well hopping terms to analyze the ground state phase diagrams for various parameter settings. First, we discuss the ground states of the standard double-well BHM (without cavity-mediated long-range and nearest-neighbor interaction). We are interested if the dimerization of hopping terms results in dimerized MI and DW phases with non-integer densities, as expected from the dimerized chain [57; 51]. Next, the influence of the nearest-neighbor and long-range interaction will be studied. Of special interest for us is hereby the \(\rho=1\) lobe. In the one-dimensional single-well BHM, a HI phase occurs at the tip of this lobe, when a nearest-neighbor interaction is present, while a MI phase appears for an included cavity-mediated long-range interaction [42]. ### Standard Double-Well BHM The standard double-well BHM without intra-site repulsion \(\tilde{U}_{\mathcal{LR}}=0\) can be interpreted as a single-well dimerized BHM with double chain length (c.f. Fig. 2). Then, the hopping terms correspond to the dimerized BHM via \(t_{1}=t+\delta\) and \(t_{2}=t-\delta\) with \(t\) being the mean hopping strength and \(\delta\) the dimerization factor. Therefore, the phase diagram of this parameter setting is expected to be identical to the phase diagram of the dimerized single-well BHM, where dimerized Mott insu Figure 3: (a) Depiction of the MI(2) and DW(2,0) phases. In the former, two bosons are located on every site, while in the latter, only every other site is occupied by two bosons. (b) Ground state sketch of Hamiltonian (1) without hopping terms (\(t_{1}=t_{2}=0\)) for \(\tilde{V}>0,\ \tilde{U}_{d}=0\) (left) and \(\tilde{V}=0,\ \tilde{U}_{d}>0\) (right), in the two cases \(\tilde{U}_{\mathcal{LR}}=0\) and \(\tilde{U}_{\mathcal{LR}}=1\). \(X\) is an integer number, while \(X_{e}\) represents only even integer numbers. When \(\tilde{V}=0,\ \tilde{U}_{d}=0\), DW phases vanish and only MI phases persist. lator (D-MI) phases emerge, which are a combination of MI and bond-ordered phases [51; 57; 59]. The latter appear due to the broken translational symmetry of the dimerized model and is characterized by the alternating strengths of the bond kinetic energy [63]. The phase diagram is depicted in Fig. 4 (a) and shows the expected behavior. For \(\tilde{t}_{2}>0\) the MI phases are replaced by D-MI phases, where bosons move between left and right well of the sites. The D-MI(2) phase is hereby identical to the D-MI(1) phase of a single-well dimerized BHM. We calculate the order parameters under the constant density line \(\tilde{\mu}=-2.6\cdot\tilde{t}_{2}+0.605\) to analyze the phase transition at the tip of the lobe and show the results in Fig. 5. 
For \(\rho_{s}=0\), the D-MI(2) phase is present, while the transition to the SF phase takes place at around \(\tilde{t}_{2}\approx 0.22\), where the superfluid density becomes non-zero and independent of the system size. We compare the phase transition point of our grand-canonical method to the canonical density matrix renormalization group technique used in [59], where the transition point is located around \(\tilde{t}_{2}\approx 0.23\) and can confirm that our results match with the single-well dimerized BHM. This includes, that for any dimerization, \(\delta\neq 0\), the MI phases are replaced by D-MI phases. Later, this is also shown to be true for the DW and HI phases, which will become D-DW and D-HI phases, respectively. It is because of the higher energy of the intra-well hopping compared to the inter-well hopping. Fluctuations inside the double-wells are more favorable than between neighboring sites which leads to a different kinetic energy contribution between the bond of left and right well of one double-well and the bond between wells of neighboring sites. The existence of a D-MI(1) phase in the standard double-well BHM with \(\tilde{U}_{\mathcal{LR}}=0\) is in accordance with the single-well BHM, where no insulating phase exists, when the chain is not dimerized [28], but a MI-bond-order phase with \(\rho=1/2\) emerges, when the chain is dimerized [51; 57]. In Fig. 6, we investigate this phase in more detail. Beginning from the vacuum state, when the chemical potential is increased, bosons start to occupy the empty chain and can move inside the system by intra-well hopping (between left and right well of each double-well on each site) or inter-well hopping (between left and right wells of double-wells of neighboring sites). When approaching \(\rho=1\), nearly all double-wells are occupied by one boson and the well occupation difference \(\Delta\) maximizes, as seen at \(\tilde{\mu}_{a}\) in Fig. 6 (a). As a result, Figure 5: Intra-well fluctuation \(\zeta\), structure factor \(S(\pi)\), intra-well occupation difference \(\Delta\) and superfluid density \(\rho_{s}\) order parameters along the \(\tilde{\mu}=-2.6\cdot\tilde{t}_{2}+0.605\) line with constant particle density \(\rho=2\) for different chain lengths. While \(\rho_{s}=0\) the D-MI(2) is present. The tip of the lobe can be approximated by the position, where the superfluid densities of the different chain lengths overlap at \(\tilde{t}_{2}\approx 0.22\). Figure 4: Phase diagram of the standard double-well BHM with \(\tilde{V}=\tilde{U}_{d}=0\) and \(\tilde{t}_{1}=2\cdot\tilde{t}_{2}\) for (a) \(\tilde{U}_{\mathcal{LR}}=0\) and (b) \(\tilde{U}_{\mathcal{LR}}=1\). For \(\tilde{t}_{2}=0\), only MI phases are present in both cases, as discussed in Section II.3. (a) When \(\tilde{t}_{2}>0\), SF phases emerge between Vacuum and D-MI phases, where D-MI phases replace the MI phases. A small D-MI(1) emerges between Vacuum and D-MI(2) phase. The tip of the D-MI(2) lobe is estimated at around \(\tilde{t}_{2}\approx 0.22\) with help of the analysis of the order parameters along the constant particle density line \(\tilde{\mu}=-2.6\cdot\tilde{t}_{2}+0.605\). (b) When \(\tilde{t}_{2}>0\), the D-MI phases persist for lower values of \(\tilde{\mu}\) and stronger hopping strengths until transition into SF phases, compared to the \(\tilde{U}_{\mathcal{LR}}=0\) case. This indicates a stabilizing effect of the intra-well hopping for all D-MI(X) phases. 
when a boson hops to a double-well of a neighboring site, where another boson is already localized, it hinders the intra-well movement of this boson and cannot hop inside the new double-well itself. This is energetically unfavorable, so the inter-well hopping is suppressed. Only when the chemical potential becomes stronger, this effect will be overcome and the double-well chain populates further with bosons. In the inset of Fig. 6 (a) one can see the chain length dependency of the superfluid order parameter, underlining the inter-well hopping decrease around \(\rho=1\). Increasing the ratio \(\tilde{t}_{1}/\tilde{t}_{2}\) enhances the aforementioned effect even further and the D-MI(1) phase becomes bigger, as depicted in Fig. 6 (b). When \(\tilde{t}_{1}/\tilde{t}_{2}<1\), there is a plateau as well, although the intra-well hopping is weaker than the inter-well hopping. This is of no surprise, as a negative bond dimerization \(\delta\) only changes the alternating order of the dimerized chain Eq. (8). The shift of the density with respect to \(\tilde{\mu}\) is explainable due to the change of the mean hopping strength. For \(\tilde{U}_{\mathcal{LR}}=1\), the energy of a boson being in the same well as another one, is equivalent to a boson being located in the neighboring well on the same site. Hence, the movement of a boson inside a double-well is solely dependent on the intra-well hopping parameter \(\tilde{t}_{1}\) and not the arrangement of bosons inside the double-well. In Fig. 4 (b) the phase diagram for the double-well BHM with \(\tilde{U}_{\mathcal{LR}}=1\) and \(\tilde{t}_{1}=2\tilde{t}_{2}\) is shown. The resemblance to the single-well BHM [28] is evident, as each double-well in the double-well BHM, for intra-well repulsion strength \(\tilde{U}_{\mathcal{LR}}=1\), behaves in most ways similar to a single-well. The important exception is the intra-well hopping \(\tilde{t}_{1}\), which introduces more energy due to the movement of bosons inside the double-well and thus leads to a shift of the D-MI phases in the phase diagram to lower \(\tilde{\mu}\) values and higher hopping values, compared to the single-well BHM phase diagram, where \(\tilde{t}=\frac{3}{2}\tilde{t}_{2}\). ### Double-well BHM with Nearest-Neighbor and Long-Range Interactions We study the nearest-neighbor and cavity-mediated long-range interaction for different parameter settings and show the phase diagrams in Fig. 7. We have chosen the nearest-neighbor and long-range interaction strengths to be in the regime, where for \(\tilde{t}_{2}=0\) only DW phases are present (see Fig. 3 (b)). We compare our results with the standard double-well BHM and the (dimerized) single-well BHM. For the nearest-neighbor interaction, at the tip of the DW(2,0) lobe a HI phase was found in the undimerized case [29] and a D-HI in the dimerized BHM [61]. For the single-well BHM with cavity-mediated long-range interaction a MI phase is located at the tip of the DW(2,0) lobe [42]. Regarding all phase diagrams presented in Fig. 7, when the hopping is greater than zero, MI, DW and HI phases will become dimerized phases, meaning that bosons are localized in the double-well on one site and fluctuate between left and right well of this double-well. For \(\tilde{U}_{\mathcal{LR}}=0\), D-DW(\(X_{o}\),0) phases emerge in between integer density phases for nearest-neighbor Fig. 7 (a) and long-range Fig. 7 (c) interactions. The reason for this is analogue to the D-MI(1) phase in the standard double-well BHM in Fig. 4 (a). 
Additionally, the D-DW phases are carried out for bigger hopping values. When \(\tilde{V}>0\), the SF and SS phases are shifted to higher energy values, Figure 6: (a) Density \(\rho\), superfluid density \(\rho_{s}\), intra-well fluctuation \(\zeta\), structure factor \(S(\pi)\) and intra-well occupation difference \(\Delta\) order parameters for \(\tilde{t}_{2}=0.125\). The position \(\tilde{\mu}_{a}\approx-0.165\) is a guide to the eye where \(\Delta\) peaks. While \(\rho_{s}\) goes to zero, \(\zeta\) remains monotonically increasing. In the inset the behavior of the superfluid density for increasing chain lengths is depicted, where a dip for \(\rho=1\) is visible and indicates the presence of a D-MI(1) phase. (b) The particle density \(\rho\) over the chemical potential \(\tilde{\mu}\) for different ratios of \(\tilde{t}_{1}/\tilde{t}_{2}\). A plateau is formed for all dimerized hopping strengths and becomes broader the more the hopping strengths differ in size. while the D-DW phases are unaffected by the nearest-neighbor term and thus persist longer as in the standard double-well BHM. For the long-range interaction, when \(\tilde{U}_{d}>0\), the argumentation reverses. The energy of the D-DW phases are lowered, while the SF and SS phases are unaffected. Hence, not only do the D-DW phases persist for bigger hopping terms, but also for smaller values of \(\tilde{\mu}\). For \(\tilde{U}_{\mathcal{CR}}=1\), the argumentation, that D-DW phases are present at higher hopping strengths and shift to lower \(\tilde{\mu}\) values for the long-range interaction compared to the \(\tilde{U}_{\mathcal{CR}}=0\) case, remains the same for the phase diagrams with nearest-neighbor Fig. 7 (b) and long-range Fig. 7 (d) interaction. Yet in contrast to the \(\tilde{U}_{\mathcal{CR}}=0\) case, the D-DW(\(X_{o}\),0) phases become broader when they approach \(\tilde{t}_{2}=0\). Figure 7: Phase diagrams of the double-well BHM for (a) \(\{\tilde{U}_{\mathcal{CR}}=0,\tilde{V}=0.4,\tilde{U}_{d}=0\}\), (b) \(\{1,0.75,0\}\), (c) \(\{0,0,0.3\}\), (d) \(\{1,0,0.6\}\) and \(\tilde{t}_{1}=2\cdot\tilde{t}_{2}\). For \(\tilde{t}_{2}=0\), only DW phases are present. Phases with non-integer densities are colored orange to better distinguish between the phases and the tips of the D-DW(2,0) lobes, where \(\rho=1\), are enhanced in an inset for all phase diagrams. A detailed behavior of the order parameters at the tip for constant densities is carried out along the red lines in Fig. 8. For the extended double-well BHM Fig. 7 (a) and (b), at the tip of the D-DW(2,0) phase, a D-HI(1) phase can be identified, while for the long-range interaction Fig. 7 (c) and (d) only at the tip of the D-DW(2,0) lobe for the \(\tilde{U}_{\mathcal{LR}}=1,\ \tilde{U}_{d}=0.6\) diagram, a D-MI(1) phase was extrapolated. The order parameters along the constant density lines for the D-DW(2,0) phase of the phase diagrams in Fig. 7 are given in Fig. 8. The labels (a)-(d) correspond in both figures. On the left hand side of each picture is the D-DW(2,0) phase, where only the superfluid density \(\rho_{s}\) is zero, while all other order parameters obtain non-zero values. For the extended double-well BHM in Fig. 8 (a) and (b) the D-DW(2,0) phase transits into a D-HI(1) phase, where structure factor \(S(\pi)\) and parity order parameter \(\mathcal{O}_{p}\) drop to zero, while the string order parameter \(\mathcal{O}_{s}\) remains non-zero. The superfluid density becomes non-zero, but is size-dependent and becomes zero in the limit \(L\to\infty\). 
Furthermore, in Fig. 8 (b) the transition to the SF can also be seen, where \(\rho_{s}\) is size-independent and non-zero, while \(\mathcal{O}_{s}\) approaches zero. Comparing these results for the D-HI phase with the results for the single-well extended BHM [29; 42] shows, that the topological HI phase persists in the double-well BHM in a dimerized way, where the intra-well hopping does not break the long-range order of the HI phase. Since the nearest-neighbor interaction of our double-well BHM is identical to the dimerized chain where the nearest-neighbor interaction acts on neighboring pairs of sites, we showed that the D-HI phase persists also for the dimerized chain with interactions up to the third neighboring site. This is an expansion of the results of Sugimoto _et al._[61], where a D-HI phase was found in the dimerized chain with nearest-neighbor interaction. For the cavity-mediated long-range interaction with \(\tilde{U}_{\mathcal{LR}}=0,\ \tilde{U}_{d}=0.3\), shown in Fig. 8 (c), the D-DW(2,0) directly transits into the SS phase, where \(\rho_{s}\) and \(S(\pi)\) are non-zero and \(\mathcal{O}_{s}\) and \(\mathcal{O}_{p}\) zero. To understand why there exists no D-MI(1) phase at the tip of the D-DW(2,0) lobe, contrary to the single-well BHM, where a MI phase is present at the tip of the DW(2,0) lobe, we reiterate why the MI(1) phase exists in the single-well BHM in the first place. The MI(1) phase appears at the tip of the DW(2,0) lobe in the single-well BHM because the inter-site particle fluctuation per site scales in the MI phase by the power of \(U_{d}/L\), as particles are evenly distributed between even and odd sites. On the other hand, inter-site particle fluctuation per site for the DW(X,0) phases scale by the factor \(X^{2}U_{d}/4\), making it independent of system size. This argumentation holds true also for the double-well BHM, but is expanded with the effect of the intra-well hopping, where the bosons in the D-DW(2,0) phase are able to fluctuate inside every second double-well. This overcomes the effects of the cavity-mediated long-range interaction on the inter-well fluctuations for the D-DW(2,0) phase and results in the D-DW(2,0) to be present until the hopping strengths are strong enough that a direct transition into the SS phase occurs. For the \(\tilde{U}_{\mathcal{LR}}=1,\ \tilde{U}_{d}=0.6\) case, shown in Fig. 8 (d), a transition to the D-MI(1) phase can be seen, where \(\mathcal{O}_{p}\) persists, while \(\rho_{s}\) goes to zero for larger system sizes. Due to the strong variance of the starting point, where the superfluid density becomes non-zero according to system size, an extrapolation was carried out to determine the position of the transition to the D-MI(1) phase. The occurrence of a D-MI(1) at the tip of the D-DW(2,0) lobe in the double-well BHM with long-range interaction \(\tilde{U}_{d}=0.6\) matches with the finding of a MI(1) phase at the tip of the DW(2,0) lobe in the single-well BHM with cavity-mediated long-range interaction [42]. This is reasonable, as we have showed in Section II.3 for the double-well BHM without hopping, that the double-well BHM with \(\tilde{U}_{\mathcal{LR}}=1\) is identical to a single-well BHM. Our results confirm, that a dimerization of the hopping keeps the structure of a MI(1) phase at the tip of the DW(2,0), but dimerizes both phases to D-MI(1) and D-DW(2,0) respectively. 
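The finite-size criterion used repeatedly above -- a superfluid density that is non-zero at finite \(L\) but extrapolates to zero for \(L\to\infty\) signals an insulating phase -- can be illustrated with a simple fit in \(1/L\). The \(\rho_{s}(L)\) values below are invented for demonstration, and a linear form in \(1/L\) is only one simple choice of extrapolation; in practice the QMC estimates at fixed \(\tilde{t}_{2}\) and \(\tilde{\mu}\), together with their statistical errors, would be used.

```python
# Illustrative finite-size extrapolation of the superfluid density:
# fit rho_s(L) against 1/L and read off the L -> infinity intercept.
# The rho_s values below are invented for demonstration only.
import numpy as np

Ls    = np.array([16, 24, 32, 48, 64])
rho_s = np.array([0.080, 0.055, 0.042, 0.028, 0.021])   # hypothetical QMC estimates

slope, intercept = np.polyfit(1.0 / Ls, rho_s, deg=1)   # linear fit in 1/L
print(f"extrapolated rho_s(L -> inf) = {intercept:.4f}")
# An intercept compatible with zero (within its uncertainty) classifies the
# point as insulating (D-MI, D-DW, D-HI); a finite intercept as SF or SS.
```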
## IV Conclusion The one-dimensional double-well BHM with nearest-neighbor and cavity-mediated long-range interaction includes a variety of well-established models, like the single-well BHM and the dimerized BHM. When dimerization is present, dimerized Mott insulator (D-MI), dimerized Density Wave (D-DW) and dimerized Haldane insulator (D-HI) phases exist, characterized by a combination of a bond-ordered phase [63] and a MI, DW and HI phase, respectively. It turns out that dimerized hopping stabilizes the D-DW phases with non-integer boson densities, in agreement with results from dimerized chains [57; 51]. When the intra-well repulsion is as strong as the on-site repulsion per well, each double-well can be treated as a single-well. Hence in this case, the ground state phase diagram is identical to the single-well BHM [28], with the exception, that the intra-well hopping leads to dimerized phases and a shift of the D-MI phases to lower chemical potentials and higher hopping values. Interactions between particles in neighboring double-well potentials imply interactions up to the third neighbor in the corresponding dimerized BHM. For those a dimerized Haldane insulator, D-HI, phase at the tip of the D-DW lobe with particle density one exists, which was previously reported for a dimerized chain with solely nearest-neighbor interaction [61]. For non-integer densities D-DW phases exist, as was recently shown for the dimerized BH chain with nearest-neighbor interactions [58]. In the presence of cavity-mediated long-range interactions a D-MI phase appears at the tip of the D-DW lobe with density one when the intra-well repulsion is as strong as the on-site repulsion, which is in agreement with results for the single-well chain BHM [42]. This D-MI phase at the tip of the D-DW phase disappears when the the intra-well repulsion vanishes. This is due to the intra-well hopping, which distinguishes the D-DW phase in the double-well BHM and the DW phase of the single-well BHM. Moreover, D-DW phases exist for non-integer densities in the double-well BHM with cavity-mediated long-range interaction, which is reminiscent of the double-well BHM with nearest-neighbor interaction and underpins the equivalence of nearest-neighbor and cavity-mediated long-range interaction on a mean-field level [38]. ## Appendix A Quantum Monte-Carlo worm-algorithm In this appendix we discuss the quantum Monte Carlo worm-algorithm we used to obtain the phase diagram of Hamiltonian (1) in more detail. We split the Hamiltonian in an on-diagonal part \(\hat{\mathcal{H}}_{\text{on}}\) and off-diagonal part \(\hat{\mathcal{H}}_{\text{off}}\) with regard to the Fock basis representation of the 1D chain \(\ket{\mathbf{n}_{i}}=\ket{n_{1}\dots n_{L}}_{i}\). Hence, \(\hat{\mathcal{H}}_{\text{on}}\ket{\mathbf{n}_{i}}\) gives the on-diagonal energy value \(\epsilon_{i}\). 
With the inverse temperature \(\beta\) and the Dyson series, we can write the partition function as \[\mathcal{Z}(\mathcal{C})= \sum_{m=0}^{\infty}\sum_{\mathbf{n}_{1}\dots\mathbf{n}_{m}}e^{- \beta e_{1}}\int_{0}^{\beta}d\tau_{m}\cdots\int_{0}^{\tau_{2}}d\tau_{1}\] \[\times\left(e^{\tau_{m}\epsilon_{1}}\hat{\mathcal{H}}_{\text{off} }^{\mathbf{n}_{1}\mathbf{n}_{m}}e^{-\tau_{m}\epsilon_{m}}\right)\cdots\left(e^ {\tau_{1}\epsilon_{2}}\hat{\mathcal{H}}_{\text{off}}^{\mathbf{n}_{2}\mathbf{n} _{1}}e^{-\tau_{1}\epsilon_{1}}\right), \tag{10}\] with \(\hat{\mathcal{H}}_{\text{off}}^{\mathbf{n}_{i}\mathbf{n}_{j}}=\langle\mathbf{ n}_{i}|\hat{\mathcal{H}}_{\text{off}}|\mathbf{n}_{j}\rangle\). The partition function is the sum over all possible configurations, where \(m\) denotes the number of vertices in the system, as it counts the number of \(\hat{\mathcal{H}}_{\text{off}}\) terms. The imaginary time dimension ranges from \([0,\beta]\) and is segmented by the vertices in different lengths \(\tau_{m}\). Additionally, the Fock states at the beginning and the end must be the same \(|\mathbf{n}_{0}\rangle=|\mathbf{n}_{m}\rangle\). Now, the worm is inserted by including a \(\hat{b}_{i}^{\dagger\ \mathbf{n}_{i}\mathbf{n}_{e}}(\tau_{c})\ \hat{b}_{i}^{ \mathbf{n}_{i}\mathbf{n}_{m}}(\tau_{a})\) or \(\hat{b}_{i}^{\mathbf{n}_{i}^{\mathbf{n}_{i}}\mathbf{n}_{a}}(\tau_{a})\ \hat{b}_{i}^{ \dagger\ \mathbf{n}_{i}^{\mathbf{n}_{i}}\mathbf{n}_{e}}(\tau_{c})\) pair at an arbitrary site \(i\) and imaginary time position \(\tau_{c}=\tau_{a}\). In the first case, between the operators, a boson is removed, while in the second case a boson is created. Obviously, the worm is only inserted, if a boson can be removed, or the maximal amount of bosons on one site is not surpassed, when there is such a limit defined. So, the extended configuration for the partition function kernel reads \[\mathcal{C}^{\text{ext}} =\left(e^{\tau_{m}\epsilon_{1}}\hat{\mathcal{H}}_{\text{off}}^{ \mathbf{n}_{1}\mathbf{n}_{m}}e^{-\tau_{m}\epsilon_{m}}\right)\cdots\left(e^{ \tau_{c}\epsilon_{c}^{\prime}}\hat{b}_{i}^{\dagger\ \mathbf{n}_{i}^{\mathbf{n}_{e}} \mathbf{n}_{c}}e^{-\tau_{c}\epsilon_{c}}\right)\cdots\] \[\times\left(e^{\tau_{a}\epsilon_{c}^{\prime}}\hat{b}_{i}^{ \mathbf{n}_{i}^{\mathbf{n}_{a}}\mathbf{n}_{e}}e^{-\tau_{a}\epsilon_{a}}\right) \cdots\left(e^{\tau_{1}\epsilon_{2}}\hat{\mathcal{H}}_{\text{off}}^{\mathbf{n} _{2}\mathbf{n}_{1}}e^{-\tau_{1}\epsilon_{1}}\right) \tag{11}\] and vice versa for the other pairing. W.l.o.g. we define the creator operator as head and the annihilator operator as tail. They can move through the configuration space by advancing forwards and backwards in imaginary time, or by hopping accordingly to the off-diagonal term \(\hat{\mathcal{H}}_{\text{off}}\). When they come across an already existing vertex, where they cannot pass through (because the commutator does not vanish), one of three scenarios happens. Either the vertex gets deleted, the vertex is relinked to another site according to \(\hat{\mathcal{H}}_{\text{off}}\), or nothing happens and the worm end moves in the other direction. The worm movement ends when head and tail collide. Fig. 9 sketches the 1D quantum chain, expanded with the imaginary time to a classical 2D model and the insertion of a worm. With the QMC-WA it is possible to obtain grand-canonical and canonical observables. During worm movements the boson number is variable, allowing directly to obtain the Green's function for example. 
When the worm has been removed, i.e. once head and tail have annihilated each other, the particle number is constant and canonical observables can be calculated. The usual approach is via importance sampling \[\langle\mathcal{O}\rangle=\frac{1}{\mathcal{Z}}\sum_{\mathcal{C}}\mathcal{O}( \mathcal{C})\mathcal{Z}(\mathcal{C}). \tag{12}\]
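As an illustration of the measurement step, the sketch below evaluates the diagonal observables of Table 1 for a single occupation snapshot, assuming (as one possible convention) that the operators in Eqs. (12)-(14) and (16) act on the total double-well occupations \(\hat{n}_{i}\); the snapshot itself is invented. In an actual simulation these quantities are averaged over many configurations and imaginary-time slices, and the off-diagonal quantities \(\zeta\) and \(\rho_{s}\) require kinetic and winding-number estimators that cannot be read off a single diagonal snapshot.

```python
# Sketch: diagonal observables from one occupation snapshot n[i, sigma]
# (sigma = 0: left well, 1: right well).  The snapshot is invented; real
# estimates average such measurements over the QMC Markov chain.
import numpy as np

n_snap = np.array([[2, 0], [0, 0], [1, 1], [0, 0]])   # hypothetical L = 4 snapshot
L = len(n_snap)

n_tot = n_snap.sum(axis=1)                 # total occupation per double-well site
rho   = n_tot.mean()                       # density, Eq. (10)
Delta = np.abs(n_snap[:, 0] - n_snap[:, 1]).mean()    # occupation difference, Eq. (16)

# structure factor S(pi), Eq. (12), via the density-density correlation D(r)
D = np.array([np.mean(n_tot * np.roll(n_tot, -r)) for r in range(L)])
S_pi = np.real(np.sum(np.exp(1j * np.pi * np.arange(L)) * D)) / L

# string and parity correlators at distance L/2, Eqs. (13)-(14)
dn = n_tot - rho
r = L // 2
O_s = np.mean([dn[i] * np.cos(np.pi * dn[i:i + r + 1].sum()) * dn[i + r]
               for i in range(L - r)])
O_p = np.mean([np.cos(np.pi * dn[i:i + r + 1].sum()) for i in range(L - r)])

print(rho, Delta, S_pi, O_s, O_p)
```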
2307.14256
Lorentz invariance violation from GRB221009A
The Large High Altitude Air Shower Observatory (LHAASO) reported observation of photons with energies above 10 TeV from gamma-ray burst GRB 221009A. A suggestion was proposed that this result may contradict our knowledge of special relativity (SR) and the standard model (SM), according to which photons of about 10 TeV from such a distant object should be severely suppressed because of the absorption by extragalactic background light. As a result, a number of mechanisms have been proposed to solve this potential puzzle, including Lorentz invariance violation (LIV). In this work, we perform a detailed numerical calculation and show the feasibility to constrain LIV of photons from the LHAASO observation of GRB 221009A quantitatively.
Hao Li, Bo-Qiang Ma
2023-07-26T15:39:24Z
http://arxiv.org/abs/2307.14256v3
# Lorentz invariance violation from GRB221009A ###### Abstract The Large High Altitude Air Shower Observatory (LHAASO) reported observation of photons with energies above 10 TeV from gamma ray burst GRB221009A. A suggestion was proposed that this result may contradict with our knowledge of special relativity (SR) and the standard model (SM), according to which photons of about 10 TeV from such a distant object should be severely suppressed because of the absorption by extragalactic background light. As a result, a number of mechanisms have been proposed to solve this potential puzzle, including Lorentz invariance violation (LIV). In this work, we perform a detailed numerical calculation and show the feasibility to constrain LIV of photons from the LHAASO observation of GRB221009A quantitatively. keywords: Lorentz invariance violation, threshold anomaly, gamma-ray burst, extragalactic background light On October 9, 2022, an extremely bright (long) gamma-ray burst (GRB), dubbed GRB221009A was observed by several observatories, including the Fermi Gamma-ray Space Telescope [1; 2; 3; 4] and the Large High Altitude Air Shower Observatory (LHAASO) [5; 6]. As a long burst, GRB221009A is rather close to the Earth, and locates at \(z=0.1505\) in redshift [7; 8]. What makes this GRB deserve more attention is the observation of photons with energies up to about 18 TeV by LHAASO [5]. In the standard understanding, the universe is not always transparent to photons due to the background light absorption of high energy photons propagating in the space, and therefore it is natural to asky the question: why such high energy photons from GRB221009A is permissible [9; 10]. Besides, another question follows that if within the standard framework of physics we cannot find an answer to this question, how to understand this phenomenon, and whether it is necessary to invoke new mechanisms (see, e.g., discussions and references in Ref. [11]). During the observation of LHAASO, the two different detectors, the Water Cherenkov Detector Array (WCDA) and the Kilometer Square Array (KM2A) both recorded GRB221009A while at very different energy bands [12; 13]. WCDA reported more than 64000 photons with energies between 0.2 TeV and 7 TeV [6], while the preliminary result from KM2A shows that there might be more than 5000 photons from 0.5 TeV to around 18 TeV [5]. For convenience we may call the WCDA result as \(\mathbf{R_{WCDA}}\) and the KM2A report as \(\mathbf{R_{KM2A}}\). Recently a series of work has been devoted to analyzing possibility of excesses in the observation of GRB221009A mainly based on \(\mathbf{R_{KM2A}}\). It seems that considering the absorption of extragalactic background light (EBL), we could not observe about 18 TeV photons from GRB221009A with LHAASO [9; 10]. Consequently it is suggested that there exists necessity for mechanisms beyond the special relativity (SR) and the stan dard model (SM) to explain the observed results. Amongst the many attempts to understand this possible excess, we mainly focus on Lorentz invariance violation (LIV) induced threshold anomalies [9; 10] in this work below. For our analysis, the relevant process in which LIV induced threshold anomalies take effect is the pair-production process \(\gamma\gamma_{b}\to e^{-}e^{+}\) with the background photons \(\gamma_{b}\) coming from EBL. 
In the standard understanding of this process, the absorption can be calculated quantitatively by the optical depth [14]: \[\tau_{\gamma\gamma}(E,z) =\int_{0}^{z}\mathrm{d}z^{\prime}\frac{\partial l}{\partial z^{ \prime}}(z^{\prime})\int_{0}^{\infty}\mathrm{d}\varepsilon\frac{\partial n}{ \partial\varepsilon}(\varepsilon,z^{\prime})\] \[\times\int_{-1}^{+1}\mathrm{d}\mu\frac{1-\mu}{2}\sigma[\beta(E,z ^{\prime},\varepsilon,\mu)], \tag{1}\] where \(E\) is the energy of a GRB photon and \(z\) is the redshift of the GRB, \(\partial n/\partial\varepsilon(\varepsilon,z^{\prime})\) is the number density of EBL photons of energy \(\varepsilon\) at redshift \(z^{\prime}\), and \[\frac{\partial l}{\partial z^{\prime}}=\frac{1}{H_{0}}\frac{1}{1+z^{\prime}} \frac{1}{\sqrt{\Omega_{\Lambda}+\Omega_{M}{(1+z^{\prime})}^{3}}} \tag{2}\] is a factor from the Fermi-Robertson-Walker cosmology with \(H_{0}=70\)\(\mathrm{kms^{-1}Mpc^{-1}}\), \(\Omega_{\Lambda}=0.7\) and \(\Omega_{M}=0.3\). In Eq. (1) the pair-production process cross-section is [15; 16]: \[\sigma(\beta) =\frac{3\sigma_{T}}{16}(1-\beta^{2})\] \[\times\left[2\beta(\beta^{2}-2)+(3-\beta^{4})\ln\frac{1+\beta}{1 -\beta}\right], \tag{3}\] where \(\sigma_{T}\) is the Thomson cross-section and \[\beta(E,z,\varepsilon,\mu)=\sqrt{1-\frac{2m_{e}^{2}}{E\varepsilon}}\frac{1}{1 -\mu}{\left(\frac{1}{1+z}\right)^{2}}. \tag{4}\] It is noteworthy that Eq. (4) should be real so that the threshold information in SR is already encoded and hence the integration in Eq. (1) is understood to be performed satisfying the condition \(\beta\in\mathbb{R}\), which indicates the ordinary threshold condition: \[E_{thr}=\frac{2m_{e}^{2}}{\varepsilon}\frac{1}{1-\cos\theta}\left(\frac{1}{1+ z}\right){}^{2}. \tag{5}\] However, in certain models, LIV could cause threshold anomalies in various reactions, including the pair-production process [17]. The threshold anomaly in the pair-production process is suggested to modify the threshold condition in Eq. (5), and it is possible that the \(\varepsilon\)-space integrated in Eq. (1) gets shrunk, resulting in a smaller optical depth \(\tau_{\gamma\gamma}^{\mathrm{LIV}}\). By considering the intrinsic flux \(F_{\mathrm{int}}\), the observed flux \(F_{\mathrm{obs}}\) and the observed flux with LIV \(F_{\mathrm{obs}}^{\mathrm{LIV}}\), which are related by \[F_{\mathrm{obs}}^{\mathrm{LIV}} :=F_{\mathrm{int}}\times e^{-\tau_{\gamma\gamma}^{\mathrm{LIV}}},\] \[F_{\mathrm{obs}} :=F_{\mathrm{int}}\times e^{-\tau_{\gamma\gamma}}, \tag{6}\] from which we may have \[F_{\mathrm{obs}}^{\mathrm{LIV}}>F_{\mathrm{obs}}, \tag{7}\] As a result, with a proper LIV parameter \(\xi\) we might be able to interpret \(\mathbf{R_{KM2A}}\). Here the parameter \(\xi\) is defined to be the deviation from the photon dispersion relation: \[\omega(k)^{2}\approx k^{2}-\xi k^{3}+\cdots \tag{8}\] with the photon energy \(\omega\) not too large compared to \(E_{\mathrm{Planck}}\). For later convenience, we also define \(\xi^{-1}\equiv E_{\mathrm{LIV}}\). Now we concentrate on GRB221009A, especially on the results \(\mathbf{R_{WCDA}}\) and \(\mathbf{R_{KM2A}}\) from LHAASO. First we consider what we can learn from \(\mathbf{R_{WCDA}}\). For convenience we assume that there are exactly \(N_{\mathrm{WCDA}}=64000\) photons in total in this data set. 
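Before turning to the spectral analysis, it may help to see how Eq. (1) can be evaluated numerically. The sketch below integrates the optical depth on a simple grid; the EBL density is a placeholder for the model of Ref. [18] (available, e.g., through the ebltable package mentioned in the data-availability statement), the redshift conventions follow one standard choice rather than the exact notation of Eq. (4), and the LIV term is implemented as one common realization of the threshold anomaly, in which the dispersion of Eq. (8) shifts the invariant mass of the photon pair by \(-\xi E^{3}\).

```python
# Numerical sketch of the optical depth of Eq. (1) with an optional
# LIV-modified pair-production threshold.  The EBL density is a pure
# placeholder, the redshift conventions are one standard choice, and the
# form of the LIV term is an assumption made only for illustration.
import numpy as np

M_E = 0.511e-6                     # electron mass [TeV]
SIGMA_T = 6.6524e-29               # Thomson cross-section [m^2]
H0 = 70.0e3 / 3.086e22             # 70 km/s/Mpc in 1/s
C = 2.998e8                        # speed of light [m/s]
OMEGA_L, OMEGA_M = 0.7, 0.3

def dl_dz(z):
    """Proper path length per unit redshift [m], cf. Eq. (2)."""
    return C / (H0 * (1 + z) * np.sqrt(OMEGA_L + OMEGA_M * (1 + z) ** 3))

def sigma_pp(beta):
    """Pair-production cross-section [m^2], Eq. (3)."""
    return (3 * SIGMA_T / 16) * (1 - beta ** 2) * (
        2 * beta * (beta ** 2 - 2) + (3 - beta ** 4) * np.log((1 + beta) / (1 - beta)))

def ebl_density(eps, z):
    """Placeholder dn/d(eps) [1/m^3/TeV]; replace with the model of Ref. [18]."""
    return 0.05 * eps ** -2.5 * (1 + z) ** 3

def tau(E, z, xi=0.0, n_z=30, n_mu=30, n_eps=120):
    z_grid = np.linspace(1e-4, z, n_z)
    mu_grid = np.linspace(-1.0, 1.0 - 1e-6, n_mu)
    dz, dmu = z_grid[1] - z_grid[0], mu_grid[1] - mu_grid[0]
    total = 0.0
    for zp in z_grid:
        E_loc = E * (1 + zp)                 # photon energy blue-shifted to zp
        for mu in mu_grid:
            # threshold from s = 2 E eps (1 - mu) - xi E^3 >= 4 m_e^2
            eps_min = (4 * M_E ** 2 + xi * E_loc ** 3) / (2 * E_loc * (1 - mu))
            eps = np.geomspace(eps_min * (1 + 1e-9), eps_min * 1e5, n_eps)
            s = 2 * E_loc * eps * (1 - mu) - xi * E_loc ** 3
            beta = np.sqrt(np.clip(1 - 4 * M_E ** 2 / s, 0.0, 1 - 1e-12))
            f = ebl_density(eps, zp) * 0.5 * (1 - mu) * sigma_pp(beta)
            total += np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(eps)) * dmu * dl_dz(zp) * dz
    return total

E_PLANCK = 1.22e16                 # Planck energy [TeV]
for xi in (0.0, 1.0 / (0.1 * E_PLANCK)):      # no LIV vs. E_LIV = 0.1 E_Planck
    print("xi =", xi, " tau(18 TeV, z = 0.1505) =", tau(18.0, 0.1505, xi=xi))
```

The resulting \(\tau_{\gamma\gamma}\) or \(\tau_{\gamma\gamma}^{\rm LIV}\) is what enters the attenuation factors of Eq. (6) and, below, the normalization and critical-energy conditions of Eqs. (10) and (11) for the assumed spectrum of the \(N_{\rm WCDA}\) photons.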
The distribution of these photons is assumed to follow a simple power-law form: \[\frac{\mathrm{d}N}{\mathrm{d}E}=A_{\alpha}\times E^{-\alpha}, \tag{9}\] where the index \(\alpha\) is chosen to be \(2.41^{+0.14}_{-0.13}\)1 and \(A_{\alpha}\) is left to be normalized by solving Footnote 1: The values are taken from Ref. [6] and its supplementary material, where time-dependent values \(\alpha(t)\) are given and here we use the time-averaged values. \[N_{\mathrm{WCDA}}=\int_{200~{}\mathrm{GeV}}^{7~{}\mathrm{TeV}}\mathrm{d}E\, \frac{\mathrm{d}N}{\mathrm{d}E}\times e^{-\tau}, \tag{10}\] in which \(\tau\) could be either \(\tau_{\gamma\gamma}\) or \(\tau_{\gamma\gamma}^{\rm LIV}\). Once the normalization factor \(A_{\alpha}\) is determined for each case, we may extend the spectrum (9) to a higher energy \(E_{\rm max}\) and thus we can compare the theoretical calculation with \({\bf R_{KM2A}}\). The results of this comparison then would provide us with more clue of whether there is any unexpected excess in the observation of GRB221009A. In this work, we choose \(E_{\rm max}=20\) TeV and to calculate the optical depth, we adopt the model of Ref. [18]. Let us define a critical energy \(E_{c}\), which satisfies \[\int_{E_{c}}^{E_{\rm max}}{\rm d}E\,\frac{{\rm d}N}{{\rm d}E}\times e^{-\tau}=1, \tag{11}\] which means that there should be at least one event within \(E_{c}\) to \(E_{\rm max}\). By solving \(E_{c}\) we could obtain the information about at most above which energy can we observe one photon. If \(E_{c}\gtrapprox 18\) TeV, then observing photons around 18 TeV is quite natural. However when \(E_{c}\ll 18\) TeV, it is unlikely that we can do the same thing conclusively. Replacing \(\tau_{\gamma\gamma}\) with \(\tau_{\gamma\gamma}^{\rm LIV}\), we can perform the similar analyses with the existence of LIV. The results then could indicate whether we need to invoke new mechanisms, and if so, whether LIV is a good candidate. We choose the LIV scales to be \(E_{\rm LIV}=0.03\times E_{\rm Planck}\)2, \(E_{\rm LIV}=0.1\times E_{\rm Planck}\), \(E_{\rm LIV}=E_{\rm Planck}\), \(E_{\rm LIV}=10\times E_{\rm Planck}\), and \(E_{\rm LIV}=+\infty\) which can also be understood as the non-LIV case. In the case of \(\alpha=2.41\), we list the critical energies and the total photon numbers for the extended spectra in Tab. 1, and meanwhile we depict the results in Fig. 1a, where the spectra are drawn and the positions of \(E_{c}\) are shown by vertical lines schematically. From Tab. 1, we learn that the photon numbers only change mildly, since the suppression caused by EBL attenuation and the shape of the spectrum turns stronger at high energies, and as a result most photons still '_accumulate_' at low energies. Then we concentrate on the positions of \(E_{c}\) for different LIV scales. Obviously from the last three rows of Tab. 1 we may conclude that it is quite unlikely that LHAASO could report photons around 18 TeV from GRB221009A, since there is only one photon above \(\sim 8\) TeV and because of the aforementioned '_accumulation_' it is not consistent with \({\bf R_{KM2A}}\). Indeed if we replace with \(10^{-6}\) the one in Eq. (11) and set \(E_{\rm LIV}=+\infty\), we are able to obtain the corresponding \(E_{c}\) in the second row of Tab. 4. It provides a more strict limit that there is no more than \(10^{-6}\) photon above 16.54 TeV, making it rather unusual to observe photons about 18 TeV. However from the first two rows of Tab. 
1 we find that a LIV scale \(E_{\rm LIV}\lesssim 0.1\times E_{\rm Planck}\) is enough for understanding \({\bf R_{KM2A}}\), that is, there are photons observable around 18 TeV. Footnote 2: This is a phenomenological suggestion of the lower bound of \(E_{\rm LIV}\), see, e.g., Ref. [19] and references therein. To minimize the effects caused by choosing different spectrum indices, we also perform the same analyses for \(\alpha=2.28\) and \(\alpha=2.55\), for which the results are depicted in Tab. 2 and Tab. 3 respectively. As we in Fig. 1b we depict the effects of changing the index schematically by the filled regions. Meanwhile, in Tab. 4 the \(E_{c}\) for finding \(10^{-6}\) photon are also listed. Needless to say, although with a larger index \(\alpha=2.55\) it would be more unlikely to observe photons around 18 TeV, LIV induced threshold anomalies still provide an explanation for \({\bf R_{KM2A}}\) so long as \(E_{\rm LIV}\lesssim 0.1\times E_{\rm Planck}\). However even a smaller index \(\alpha=2.28\) could not provide a reasonable interpretation to \({\bf R_{KM2A}}\). As we can see from Tab. 2 and Fig. 1b, without LIV we might have only one photon above 8.46 TeV, while above 16.76 TeV only \(10^{-6}\) photon is observable. In contrast, with LIV, or more precisely, with \(E_{\rm LIV}\lesssim 0.1\times E_{\rm Planck}\), \({\bf R_{KM2A}}\) from LHAASO can be understood naturally. \begin{table} \begin{tabular}{c c c} \hline \hline \(E_{\rm LIV}\) (\(E_{\rm Planck}\)) & \(E_{c}\) (TeV) & Photon number \\ \hline 0.03 & 19.69 & 64060 \\ 0.1 & 19.01 & 64023 \\ 1 & 8.82 & 64003 \\ 10 & 8.05 & 64003 \\ \(\infty\) & 7.97 & 64002 \\ \hline \hline \end{tabular} \end{table} Table 1: \(E_{c}\) and expected photon numbers with \(\alpha=2.41\). In summary, we perform analyses of the observation results of GRB221009A by LHAASO [5; 6]. We utilize the detailed result, which exhibit no excess and no contradiction with the standard physics, from WCDA of LHAASO [6], to construct models and extend these models to the preliminary results from KM2A of LHAASO [5], which is likely to contradict special relativity. After comparing the theoretical predictions with the observation, we find that around 18 TeV photons from GRB221009A by LHAASO are hardly understood from standard physics, and novel mechanisms are needed to provide reasonable explanations. Furthermore we explore the possibility of using Lorentz invariance violation to understand this observation. We find that a LIV scale \(E_{\rm{LIV}}\lesssim 0.1\times E_{\rm{Planck}}\) is enough to render about 18 TeV photons from GRB221009A detectable on the Earth. As a result, 10 TeV scale photons from GRB221009A indicate the feasibility to constrain LIV of photons from LHAASO observation. Of course, besides LIV, there are also other approaches available, such as introducing axion-like particles [11; 20; 21; 22; 23; 24]. Therefore we expect that more detailed analyses on the results of KM2A observation from LHAASO could provide more information. \begin{table} \begin{tabular}{c c} \hline \hline \(\alpha\) & \(E_{c}\) (TeV) \\ \hline 2.28 & 16.76 \\ 2.41 & 16.54 \\ 2.55 & 16.31 \\ \hline \hline \end{tabular} \end{table} Table 4: \(E_{c}\) for observing \(10^{-6}\) photon without LIV for different indices. 
\begin{table} \begin{tabular}{c c c} \hline \hline \(E_{\rm{LIV}}\) (\(E_{\rm{Planck}}\)) & \(E_{c}\) (TeV) & Photon number \\ \hline 0.03 & 19.45 & 64036 \\ 0.1 & 18.43 & 64014 \\ 1 & 8.09 & 64002 \\ 10 & 7.53 & 64002 \\ \(\infty\) & 7.47 & 64002 \\ \hline \hline \end{tabular} \end{table} Table 3: \(E_{c}\) and expected photon numbers with \(\alpha=2.55\). Figure 1: The expected spectra with different LIV scales: (a) with \(\alpha=2.41\) and the vertical lines representing the positions of \(E_{c}\); (b) the same as in (a) with the filled regions representing the uncertainties. \begin{table} \begin{tabular}{c c c} \hline \hline \(E_{\rm{LIV}}\) (\(E_{\rm{Planck}}\)) & \(E_{c}\) (TeV) & Photon number \\ \hline 0.03 & 19.81 & 64098 \\ 0.1 & 19.48 & 64038 \\ 1 & 9.61 & 64005 \\ 10 & 8.55 & 64004 \\ \(\infty\) & 8.46 & 64004 \\ \hline \hline \end{tabular} \end{table} Table 2: \(E_{c}\) and expected photon numbers with \(\alpha=2.28\). ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Data availability The data used in this work could easily be reproduced by **ebltable**[25] with the model reported in Ref. [18] being adopted. ## Acknowledgements This work is supported by National Natural Science Foundation of China (Grant No. 12075003).
2303.09096
Supersingular Isogeny Graphs from Algebraic Modular Curves
We describe and compare algorithms for computing supersingular isogeny graphs. Along the way, we obtain a formula for the trace of the adjacency matrix of a general supersingular isogeny graph, and we prove a conjecture recently posed by Nakaya.
Nadir Hajouji
2023-03-16T05:57:24Z
http://arxiv.org/abs/2303.09096v3
# Supersingular isogeny graphs from algebraic modular curves ###### Abstract. We describe algorithms for computing supersingular \(\ell\)-isogeny graphs using algebraic models of modular curves. The algorithms we describe were used to compute all supersingular isogeny graphs for all pairs \((p,\ell)\) with \(p\leq 2048\) and \(\ell\leq 13\). In addition, we use our results to prove a conjecture posed by Nakaya. ## 1. Introduction Given a pair of primes \(p,\ell\), with \(p\neq\ell\), we would like to compute the following: 1. The set of supersingular \(j\)-invariants. 2. The supersingular \(\ell\)-isogeny graphs. There are already algorithms for computing these graphs, see e.g. [4], [9], [15]. These algorithms usually involve computations in the field of definition of the \(\ell\)-torsion points of the supersingular curves. We will describe simpler algorithms using the modular curves \(X_{0}(\ell)\) and \(X_{0}(\ell)^{+}\). In addition, we will use our results about supersingular isogeny graphs to explain and generalize some of the observations in [12]. Our paper is structured as follows: * In Section 2, we prove that every isogeny between supersingular curves is defined over \(\mathbb{F}_{p^{2}}\). This result is needed to ensure we can compute isogeny graphs by checking only the \(\mathbb{F}_{p^{2}}\) of a suitable modular curve. * In Section 3, we obtain models for the modular curves \(X_{0}(\ell)\) and \(X_{0}(\ell)^{+}\) for \(\ell\in\{3,5,7,11,13\}\). * In Section 4, we describe algorithms for computing supersingular isogeny graphs. * In Section 5, we use the tools we built for computing supersingular isogeny graphs to prove new theorems. * We obtain a formula for the trace of the adjacency matrix of the \(\ell\)-isogeny graph in 5.1. * In 5.2, we explain how to translate questions asked in [12] to questions about isogeny graphs. We use this perspective to prove and extend Conjecture 2 in that paper. ## 2. Supersingular Elliptic Curves ### Quadratic Twists Let \(\mathbb{F}_{q}\) be a field of characteristic not equal to \(2\), let \(d\in\mathbb{F}_{q}^{\times}\) be a nonsquare and let \(E/\mathbb{F}_{q}\) be an elliptic curve given by a Weierstrass equation: \[E:\quad y^{2}=x^{3}+ax^{2}+bx+c\] We define the quadratic twist of \(E/\mathbb{F}_{q}\) to be the elliptic curve: \[E^{-1}:\quad dy^{2}=x^{3}+ax^{2}+bx+c\] There is an obvious injection: \[E^{-1}(\mathbb{F}_{q})\to E(\mathbb{F}_{q^{2}})\qquad(x,y)\mapsto(x,\sqrt{d}y)\] We identify \(E^{-1}(\mathbb{F}_{q})\) with its image in \(E(\mathbb{F}_{q^{2}})\). Note that \(E^{-1}(\mathbb{F}_{q})\) coincides with the kernel of the trace map \(E(\mathbb{F}_{q^{2}})\to E(\mathbb{F}_{q})\). In particular, this means: \[E^{-1}(\mathbb{F}_{q})\cap E(\mathbb{F}_{q})=\ker(Tr|_{E(\mathbb{F}_{q})})=E[ 2](\mathbb{F}_{q})\] Let \(E/\mathbb{F}_{q}\) be an elliptic curve and \(\phi_{n}:E\to E\) the \(p^{n}\)-power Frobenius. We write \(Tr(E,\phi_{n})\) to denote the trace of \(\phi_{n}\) on \(E\). If \(E\) is clear from context, we write \(Tr(\phi_{n})\). **Lemma 2.1**.: _Let \(E/\mathbb{F}_{q}\) be an elliptic curve, let \(E^{-1}/\mathbb{F}_{q}\) be the quadratic twist and let \(\phi:E\to E\) be the \(q\)-power Frobenius. Then:_ \[Tr(\phi,E^{-1})=-Tr(\phi,E)\] Proof.: We will show that \(E(\mathbb{F}_{q})\cup E^{-1}(\mathbb{F}_{q})\) coincides with the preimage of \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\subset\mathbb{P}^{1}_{\mathbb{F}_{q^{2}}}\) under the double cover \(E(\mathbb{F}_{q^{2}})\to\mathbb{P}^{1}_{\mathbb{F}_{q^{2}}}\). 
This will allow us to compute \(\#E(\mathbb{F}_{q})+\#E^{-1}(\mathbb{F}_{q})\) by determining the number of points in each fiber over each point in \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\). We will see that \(\#E(\mathbb{F}_{q})+\#E^{-1}(\mathbb{F}_{q})=2q+2\), which allows us to deduce the lemma from the Weil conjectures. Let \(x_{0}\in\mathbb{P}^{1}_{\mathbb{F}_{q}}\). Then exactly one of the following is true: 1. \(x_{0}^{3}+ax_{0}^{2}+bx_{0}+c=0\). There are at most \(3\) points \(x_{0}\) for which this is true, and whenever this happens, it means we have \((x_{0},0)\in E(\mathbb{F}_{q})\cap E^{-1}(\mathbb{F}_{q})\). 2. \(x_{0}^{3}+ax_{0}^{2}+bx_{0}+c=y_{0}^{2}\) for some \(y_{0}\mathbb{F}_{q}^{\times}\): in this case, we have \((x_{0},y_{0}),(x_{0},-y_{0})\in E(\mathbb{F}_{q})\). 3. \(x_{0}^{3}+ax_{0}^{2}+bx_{0}+c=dy_{0}^{2}\) for some \(y_{0}\mathbb{F}_{q}^{\times}\): in this case, we have \((x_{0},y_{0}),(x_{0},-y_{0})\in E^{-1}(\mathbb{F}_{q})\). Thus, every point \([x_{0}:1]\in\mathbb{P}^{1}_{\mathbb{F}_{q}}\) contributes \(2\) to the sum \(\#E(\mathbb{F}_{q})+\#E^{-1}(\mathbb{F}_{q})\). Together with the identity, they give us a total of: \[\#E(\mathbb{F}_{q})+\#E^{-1}(\mathbb{F}_{q})=2\cdot\#\mathbb{P}^{1}(\mathbb{F }_{q})=2(q+1)\] which proves the lemma. ### Isogenies **Theorem 2.2**.: _Let \(E/\mathbb{F}_{p^{2}}\) be a supersingular curve and \(\psi:E\to E^{\prime}\) an isogeny of degree prime to \(p\). Then \(\psi\) is defined over \(\mathbb{F}_{p^{2}}\)._ Proof.: First, we prove the theorem for supersingular curves \(E/\mathbb{F}_{p}\). Let \(\phi:E\to E\) be the \(p\)-power Frobenius map. Since \(E\) is supersingular, the characteristic polynomial of \(\phi\) is: \[\chi_{\phi}(t)=t^{2}+p\] By the Cayley-Hamilton theorem, \(\phi^{2}\) acts on \(E\) as multiplication-by-\([-p]\). Now, let \(\psi:E\to E^{\prime}\) be an isogeny. We can factor \(\psi\) as \(\psi=\psi_{0}\circ[m]\), where \(\psi_{0}:E\to E^{\prime}\) is an isogeny with cyclic kernel, and \([m]:E\to E\) is the multiplication map. The latter is defined over \(\mathbb{Z}\), so we just have to prove that \(\psi_{0}\) is defined over \(\mathbb{F}_{p^{2}}\). Let \(\langle P_{0}\rangle\cong\mathbb{Z}/d\mathbb{Z}\) be the kernel of \(\psi_{0}\). Then: \[\phi^{2}\cdot\langle P_{0}\rangle=\left\langle\phi^{2}\cdot P_{0}\right\rangle= \langle[-p]\cdot P_{0}\rangle=\langle P_{0}\rangle\] Thus, \(\langle P_{0}\rangle\) is defined over \(\mathbb{F}_{p^{2}}\), so \(\psi_{0}\) is defined over \(\mathbb{F}_{p^{2}}\) as claimed. This completes the proof of the theorem for supersingular curves \(E/\mathbb{F}_{p}\). Next, let \(E/\mathbb{F}_{p^{2}}\) be a supersingular curve with \(j(E)\not\in\mathbb{F}_{p}\), and let \(\phi_{2}:E\to E\) be the \(p^{2}\)-power Frobenius. Note that \(Tr(\phi_{2},E)\in\{-2p,2p\}\) for all supersingular curves with \(j(E)\not\in\mathbb{F}_{p}\).1 Footnote 1: In fact, this is true for all except at most \(4\) supersingular curves. If \(p\equiv 3\pmod{4}\), there are two models for the curve with \(j=1728\) that are defined over \(\mathbb{F}_{p^{2}}\) with \(Tr(\phi_{2})=\in\{\pm p\}\), and if \(p\equiv 2\pmod{3}\), there are two models for the curve with \(j=0\) with \(Tr(\phi_{2})=0\). See [17],[13],[4] for additional details. Let \(E_{0}/\mathbb{F}_{p^{2}}\) be any supersingular curve satisfying: * \(j(E_{0})\in\mathbb{F}_{p}\). * \(Tr(\phi_{2},E_{0})=Tr(\phi_{2},E)\). Note that such a curve necessarily exists. 
First, we can always find a supersingular curve \(E/\mathbb{F}_{p}\) - this follows from Prop 14.18 in [6], or Theorem 2 in [12]. The trace of Frobenius of the base change \(E_{0}/\mathbb{F}_{p^{2}}\) is \(2p\), so the trace of Frobenius on the quadratic twist of \(E_{0}/\mathbb{F}_{p^{2}}\) must be \(-2p\) by Lemma 2.1. Thus, we take \(E_{0}\) to be the base change of \(E_{0}/\mathbb{F}_{p}\) when \(Tr(E,\phi_{2})=-2p\) and we take \(E_{0}\) to be the quadratic twist of the base change of \(E_{0}/\mathbb{F}_{p}\) otherwise.2 Footnote 2: Note that order matters - the base change of the quadratic twist of \(E_{0}\) is not isomorphic to the quadratic twist of the base change of \(E_{0}\); we have to quadratically twist using a nonsquare \(d\in\mathbb{F}_{p^{2}}\) to obtain the model with trace \(2p\). Fix an isogeny \(\psi_{0}:E_{0}\to E\). Let \(\psi:E\to E^{\prime}\) be any isogeny. We know the theorem holds for \(E_{0}\), so the isogenies \(\psi_{0}:E_{0}\to E\) and \(\psi_{1}=\psi\circ\psi_{0}:E_{0}\to E^{\prime}\) must both be defined over \(\mathbb{F}_{p^{2}}\). Thus \(\psi\) must be defined over \(\mathbb{F}_{p^{2}}\). \(\square\) ## 3. Algebraic Models of Modular curves In this section, we derive formulas that are needed in Section 4. For each odd prime \(\ell\leq 13\), we will obtain descriptions of \(X_{0}(\ell)\), as well as formulae for the \(j\)-maps \(j_{\ell}:X_{0}(\ell)\to X(1)\) and the Fricke involutions \(F_{\ell}:X_{0}(\ell)\to X_{0}(\ell)\). For \(\ell>3\), our starting point will be the "_universal elliptic curve with a point \(P\) of order \(\geq 4\)_".3 Explicitly, this is the family of elliptic curves over \(\mathbb{A}^{2}=\operatorname{Spec}\ \mathbb{Z}[u,v]\) given by the equation: Footnote 3: See, e.g., Ex. 8.13 in [14]. \[\mathcal{E}_{u,v}:\quad y^{2}+(1-u)xy-vy=x^{3}-vx^{2}\] Note that \((0,0)\in\mathcal{E}_{u,v}(\mathbb{Z}[u,v])\) is a point of infinite order. If \(E/K\) is any elliptic curve, and \(P\in E(K)\) is a point of order \(\geq 4\), there is a unique point \((u_{0},v_{0})\in\mathbb{A}^{2}_{K}\) such that \(E\) is isomorphic to the fiber over \((u_{0},v_{0})\) in \(\mathcal{E}_{u,v}\times\operatorname{Spec}\ K\), and in fact there is an isomorphism from \(E\) to the fiber that takes \(P\) to the point \((0,0)\) on the fiber. We will describe the modular curves \(X_{1}(\ell)\) by describing maps \(i_{\ell}:X_{1}(\ell)\rightarrow\mathbb{A}^{2}\) from smooth curves with the property that \(\mathcal{E}\times_{\mathbb{A}^{2}}X_{1}(\ell)\to X_{1}(\ell)\) is the universal curve with a point of order \(\ell\). Next, define: \[\alpha_{\mathcal{E}}:\mathbb{A}^{2}\to\mathbb{A}^{2}\qquad\alpha_{\mathcal{E}}(u, v)=\left(-\frac{u^{2}v-u^{2}+3uv-2v^{2}}{u^{4}},-\frac{v\left(u^{2}+u-v\right)^{3}}{u^{8}}\right)\] If \((u_{0},v_{0})\in\mathbb{A}^{2}\) represents a pair \((E,P_{0})\), then \(\alpha_{\mathcal{E}}(u_{0},v_{0})\) represents \((E,2P_{0})\). In particular, \(j\varepsilon\circ\alpha_{\mathcal{E}}=j\varepsilon\). We will restrict \(\alpha|_{\mathcal{E}}\) to the modular curves \(X_{1}(\ell)\) to obtain automorphisms \(\alpha_{\ell}:X_{1}(\ell)\to X_{1}(\ell)\) of order \(\frac{\ell-1}{2}\). We obtain \(X_{0}(\ell)\) as a quotient \(X_{1}(\ell)\to X_{1}(\ell)/\left\langle\alpha_{\ell}\right\rangle\), and we obtain the maps \(j_{\ell}:X_{0}(\ell)\to X(1)\) by finding functions on \(X_{0}(\ell)\) that pull back to \(j_{\mathcal{E}}\circ i_{\ell}\) on \(X_{1}(\ell)\). 
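The defining property of \(\alpha_{\mathcal{E}}\) -- it sends a pair \((E,P)\) to \((E,2P)\) and therefore leaves the \(j\)-invariant unchanged -- is easy to sanity-check numerically. The sketch below evaluates \(j\) of the fiber of \(\mathcal{E}_{u,v}\) and of its image under \(\alpha_{\mathcal{E}}\) at random points modulo a large prime, using the standard \(c_{4}\), \(\Delta\) formulas for a long Weierstrass equation; the choice of prime and the random sampling are just for illustration.

```python
# Randomized check (mod a prime) that j is invariant under alpha_E,
# i.e. that alpha_E(u, v) represents the same elliptic curve.
import random

P = 10**9 + 7                      # an arbitrary prime used only for the check

def j_invariant(a1, a2, a3, a4, a6, p=P):
    """j of y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6 over F_p."""
    b2 = (a1 * a1 + 4 * a2) % p
    b4 = (2 * a4 + a1 * a3) % p
    b6 = (a3 * a3 + 4 * a6) % p
    b8 = (a1 * a1 * a6 + 4 * a2 * a6 - a1 * a3 * a4 + a2 * a3 * a3 - a4 * a4) % p
    c4 = (b2 * b2 - 24 * b4) % p
    disc = (-b2 * b2 * b8 - 8 * b4**3 - 27 * b6 * b6 + 9 * b2 * b4 * b6) % p
    return c4**3 * pow(disc, -1, p) % p          # assumes disc != 0 mod p

def j_of_point(u, v, p=P):
    """j of the fibre of the universal curve E_{u,v} over (u, v)."""
    return j_invariant(1 - u, -v, -v, 0, 0, p)

def alpha(u, v, p=P):
    """The formula for alpha_E given in the text, reduced mod p."""
    u4_inv = pow(u, -4, p)
    u8_inv = pow(u, -8, p)
    new_u = -(u * u * v - u * u + 3 * u * v - 2 * v * v) * u4_inv % p
    new_v = -(v * pow(u * u + u - v, 3, p)) * u8_inv % p
    return new_u, new_v

for _ in range(5):
    u, v = random.randrange(2, P), random.randrange(2, P)
    assert j_of_point(u, v) == j_of_point(*alpha(u, v))
print("j is invariant under alpha_E at all sampled points")
```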
Obtaining a formula for the Fricke involution \(F_{\ell}:X_{0}(\ell)\to X_{0}(\ell)\) is straightforward: we simply look for involutions that permute the two cusps of \(X_{0}(\ell)\). This determines \(F_{\ell}\) up to a scalar multiple. We can determine \(F_{\ell}\) completely by using Velu's formula4 to determine the correct value of \(j_{\ell}\circ F_{\ell}(t_{0})\) at some point \(t_{0}\) for which \(j_{\ell}\circ F_{\ell}(t_{0})\neq 0\). Footnote 4: On the universal curve over \(X_{1}(\ell)\). In addition, we will obtain a description of the double cover \(X_{0}(\ell)\to X_{0}(\ell)/\left\langle F_{\ell}\right\rangle=:X_{0}(\ell)^{+}\) in terms of the _Atkin modular polynomial_: \[R_{\ell}(x,y)=x^{2}-a_{\ell}(y)x+b_{\ell}(y)\] We will describe the quotient map \(X_{0}(\ell)\to X_{0}(\ell)^{+}\) explicitly, and give formulas for the polynomials \(a_{\ell},b_{\ell}\). Note that analogs of these polynomials \(R_{\ell}(x,y)\) can be obtained whenever \(X_{0}(\ell)^{+}\) has genus \(0\). #### 3.0.1. Remark The set of primes \(\ell\) for which \(X_{0}(\ell)^{+}\) has genus \(0\) famously coincides with the set of primes that divide the size of the Monster group. Furthermore, these are precisely the primes for which every supersingular curve has a model defined over the ground field \(\mathbb{F}_{p}\). We will refer to them as _supersingular primes_. Now, the curve defined by \(R_{\ell}(x,y)\) will have singularities at _some_ of the points \((x_{0},y_{0})\) where \(a_{\ell}(y_{0})^{2}-4b_{\ell}(y_{0})=0\). Note that \(x_{0}\) is determined by \(y_{0}\) in this case, since the quadratic has a repeated root. These points represent elliptic curves with an endomorphism of degree \(\ell\). We have to be careful dealing with these points, as _some_ of them represent multiple edges on the isogeny graph. To avoid having to determine the correct multiplicity for each edge, we will simply compute these multiplicities in characteristic \(0\), and write down a polynomial \(\Delta_{\ell}(j)\) whose roots are \(j\)-invariants of elliptic curves with an endomorphism of degree \(\ell\), and where the multiplicity of each root indicates the number of isogenies of degree \(\ell\) for that \(j\)-invariant. The factors of these \(\Delta_{\ell}\) will be Hilbert class polynomials; we give explicit formulas for the Hilbert class polynomials that appear as factors in the appendix. ### \(\ell=3\) For \(\ell=3\), we can obtain the formulas we need without using \(\mathcal{E}\). Let \(E\) be an elliptic curve with a point \(P\) of order \(3\), and assume \(E\) is given by an equation: \[y^{2}=x^{3}+b_{2}x^{2}+b_{4}x+b_{6}\] with \(x(P)=0\). The condition \([3]P=0\) is equivalent to \(x([2]P)=x(P)\), so we must have \(x([2]P)=0\). Now, a computation shows that \(x([2]P)=\frac{b_{4}^{2}-4b_{2}b_{6}}{4b_{6}}\) so the condition \([3]P=0\) implies \(b_{4}^{2}-4b_{2}b_{6}=0\), which in turn implies that the quadratic: \[b_{2}x^{2}+b_{4}x+b_{6}\] has a repeated root. This means we can find \(c_{0},c_{1}\) such that: \[y^{2}=x^{3}+c_{0}(x+c_{1})^{2}\] Furthermore, replacing \(c_{0},c_{1}\) by \(tc_{0},tc_{1}\) for any nonzero \(t\) does not change the isomorphism type of the pair \((E,P_{0})\), so we can use \(\mathbb{P}^{1}=\left\{[c_{0}:c_{1}]\right\}\) as a model for \(X_{0}(3)\).
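
The duplication computation above is easy to check by machine. The following short SymPy sketch is ours, not code from the paper: it rederives \(x([2]P)\) for \(y^{2}=x^{3}+b_{2}x^{2}+b_{4}x+b_{6}\) with \(x(P)=0\) from the tangent-line construction.

```
import sympy as sp

b2, b4, b6 = sp.symbols('b2 b4 b6')
y0 = sp.sqrt(b6)                    # P = (0, y0) lies on the curve, so y0^2 = b6

# tangent slope at P: implicit differentiation gives 2*y*y' = 3*x^2 + 2*b2*x + b4
lam = b4 / (2*y0)
# chord-tangent rule for a curve with no xy or y term: x([2]P) = lam^2 - b2 - 2*x(P)
x2P = lam**2 - b2

claimed = (b4**2 - 4*b2*b6) / (4*b6)
print(sp.simplify(x2P - claimed))   # expect 0
```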
The map \(j_{3}\) is: \[j_{3}([c_{0}:c_{1}])=\frac{256c_{0}^{3}\left(c_{0}^{3}-6c_{1}\right)^{3}}{c_{1 }^{3}\left(4c_{0}^{3}-27c_{1}\right)}\] In these coordinates, the Fricke involution is given by5: Footnote 5: See [7] for more on 3-isogenies in these coordinates. \[[c_{0}:c_{1}]\mapsto[-27c_{0}:4c_{0}+27c_{1}]\] To match the form of the equations we will obtain for \(\ell>3\), we use the change of variable to move the cusps6 to \(0\), \(\infty\). We can ignore the point at infinity and work with an affine coordinate \(t\). In these coordinates: Footnote 6: In other words, the points on \(X_{0}(\ell)\) that represent singular curves. \[j_{3}(t)=-\frac{(t-27)(t-3)^{3}}{t}\] (J3) and the Fricke involution is: \[F_{3}(t)=\frac{729}{t}\] (F3) The coefficients of the Atkin modular polynomial \(R_{3}(x,y)\) are easy to obtain in these coordinates: \[a_{3}(y) =y\left(y^{2}-126y+2944\right)\] (RA3) \[b_{3}(y) =y(y+192)^{3}\] (RB3) The discriminant of \(R_{3}(x,y)\) is: \[a_{3}(y)^{2}-4b_{3}(y)=y(y-108)(y-64)^{2}(y-8)^{2}\] We compute the \(j\)-invariants represented by \(y=0,8,64,108\) and record the multiplicity of those \(j\)-invariants in the following polynomial: \[\Delta_{3}(j)=j(j-54000)(j-8000)^{2}(j+32768)^{2}\] (D3.1) Using the notation explained in the appendix, we can rewrite the previous equation as: \[\Delta_{3}(j)=H_{-3}(j)H_{-3\cdot 4}(j)H_{-2}(j)^{2}H_{-11}(j)^{2}\] (D3.2) ### \(\ell=5\) The curve \(X_{1}(5)\) is abstractly isomorphic to \(\mathbb{P}^{1}\), and the rational map \(i_{5}:X_{1}(5)\rightarrow\mathbb{A}^{2}\) is essentially the diagonal embedding: if we fix an affine coordinate \(t=\frac{t_{0}}{t_{1}}\) on \(\mathbb{P}^{1}\), then the map \(i_{5}:X_{1}(5)\rightarrow\mathbb{A}^{2}\) is given by \(i_{5}(t)=(t,t)\). The universal elliptic curve with a point of over \(5\) is therefore given by: \[y^{2}+(1-t)xy-ty=x^{3}-tx^{2}\] (E5.1) We compute \(j_{5,1}:=j_{\mathcal{E}}\circ i_{5}\): \[j_{5,1}(t)=\frac{\left(t^{4}-12t^{3}+14t^{2}+12t+1\right)^{3}}{t^{5}\left(t^{2 }-11t-1\right)}\] (J5.1) Now, we can apply Velu's formula ([16]) to the entire family of elliptic curves over \(X_{1}(5)\) to obtain a new family over \(X_{1}(5)\) given by a Weierstrass equation: \[y^{2}+a_{1}^{\prime}xy+a_{3}^{\prime}y=x^{3}+a_{2}^{\prime}x^{2}+a_{4}^{\prime }x+a_{6}^{\prime}\] where: \[a_{1}^{\prime} =1-t\quad(=a_{1})\] \[a_{2}^{\prime} =-t\quad(=a_{2})\] \[a_{3}^{\prime} =-t\quad(=a_{3})\] \[a_{4}^{\prime} =-5t\left(t^{2}+2t-1\right)\] \[a_{6}^{\prime} =-t\left(t^{4}+10t^{3}-5t^{2}+15t-1\right)\] The \(j\)-invariant of this new family is: \[j_{5,1}^{\prime}(t)=\frac{\left(t^{4}+228t^{3}+494t^{2}-228t+1\right)^{3}}{t \left(t^{2}-11t-1\right)^{5}}\] (J5.1') We will use this when computing a formula for the Fricke involution. #### 3.2.1. \(X_{1}(5)\to X_{0}(5)\) To obtain the map \(X_{1}(5)\to X_{0}(5)\), we compute the quotient of \(X_{1}(5)\) by the automorphism \(\alpha_{\mathcal{E}}|_{X_{1}(5)}\). Observe that: \[\alpha_{\mathcal{E}}\circ i_{5}(t)=\alpha_{\mathcal{E}}(t,t)=\left(\frac{-1}{ t},\frac{-1}{t}\right)\] Thus, \(\alpha_{\mathcal{E}}|_{X_{1}(5)}=\alpha_{5}(t)\), where: \[\alpha_{5}(t)=\frac{-1}{t}\] (A5.1) The function \(t+\frac{-1}{t}\) is clearly invariant under \(\alpha_{5}\), so we use it to construct a quotient map \(X_{1}(5)\to X_{0}(5)\): \[q_{5}(t)=\frac{t^{2}-11t-1}{t}\] (Q5) Note that the quotient map has been constructed to ensure the \(I_{5}\) cusp gets sent to the point at infinity, and the \(I_{1}\) cusp gets sent to the point \(0\) on \(X_{0}(5)\). 
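
Both invariance claims behind this construction can be confirmed mechanically. The snippet below is our own SymPy check: since \(\alpha_{5}\) replaces the marked point \(P\) by \(2P\) without changing the curve, both \(j_{5,1}\) and the quotient map \(q_{5}\) should be unchanged when \(t\) is replaced by \(\alpha_{5}(t)=-1/t\).

```
import sympy as sp

t = sp.symbols('t')
j51 = (t**4 - 12*t**3 + 14*t**2 + 12*t + 1)**3 / (t**5 * (t**2 - 11*t - 1))
q5  = (t**2 - 11*t - 1) / t
a5  = -1/t                                   # alpha_5

print(sp.simplify(j51.subs(t, a5) - j51))    # expect 0
print(sp.simplify(q5.subs(t, a5) - q5))      # expect 0
```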
To obtain the map \(j_{5}:X_{0}(5)\to X(1)\), we do the following: * The poles of \(j_{5}\) are at \(0,\infty\) by construction: one can check that the map \(q_{5}\) takes the cusps of \(X_{1}(5)\) to \(0,\infty\) on \(X_{0}(5)\). * We compute the zeros of \(j_{5,1}\), and use the map \(q_{5}\) to determine what the zeros of \(j_{5}\) should be. * Once we know the zeros and poles of \(j_{5}\), the function is determined up to a scalar multiple. We can ensure we have the correct scalar by evaluating \(j_{5,1}\) at a random point which is not a zero or pole. The end result is: \[j_{5}(t)=\frac{\left(t^{2}+10t+5\right)^{3}}{t}\] (J5) The Fricke involution is: \[F_{5}(t)=\frac{125}{t}\] (F5) One can check that \(j_{5,1}(t)=j_{5}\circ q_{5}(t)\). Furthermore, note that \(j_{5,1}^{\prime}(t)=j_{5,0}\circ q_{5}(\frac{125}{t})\), so we no longer need to use Velu's formula. #### 3.2.2. \(X_{0}(5)\to X_{0}(5)^{+}\) It is easy to find a quotient map \(X_{0}(5)\to X_{0}(5)/\left\langle F_{5}\right\rangle\) - we will use \(q_{5}^{+}\): \[q_{5}^{+}:X_{0}(\ell)\to X_{0}(\ell)^{+}\qquad q_{5}^{+}(t)=t+22+\frac{125}{t}\] (Q5+) Note that the constant term 22 is chosen so that the polynomials we obtain using \(q_{5}^{+}\) match those in [12]. We obtain the coefficients of \(a_{5},b_{5}\) by matching coefficients with the system: \[a_{5}(q_{5}^{+}(t))=j_{5}(t)+(j_{5}\circ F_{5})(t)\] \[b_{5}(q_{5}^{+}(t))=j_{5}(t)\cdot(j_{5}\circ F_{5})(t)\] The polynomials are given by: \[a_{5}(y) =y^{5}-80y^{4}+1890y^{3}-12600y^{2}+7776y+3456\] (RA5) \[b_{5}(y) =\left(y^{2}+216y+144\right)^{3}\] (RB5) Finally, we compute the polynomial \(\Delta_{5}\): \[\Delta_{5}(j) =(j-287496)^{2}(j-1728)^{2}(j+32768)^{2}(j+884736)^{2}\left(j^{2}- 1264000j-681472000\right)\] (D5) \[=H_{-1}(j)^{2}H_{-4}(j)^{2}H_{-11}(j)^{2}H_{-19}(j)^{2}H_{-5}(j)\] ### \(\ell=7\) The map \(i_{7}:X_{1}(7)\rightarrow\mathbb{A}^{2}\) is given by: \[i_{7}(t)=(t^{2}-t,t^{3}-t^{2})\] (I7) The universal elliptic curve with a point of order 7 is therefore given by the following equation: \[y^{2}+(1+t-t^{2})xy+t^{2}(1-t)y=x^{3}+t^{2}(1-t)x^{2}\] (E7.1) We compute \(j_{7,1}=j_{\mathcal{E}}\circ i_{7}\): \[j_{7,1}(t)=\frac{\left(t^{2}-t+1\right)^{3}\left(t^{6}-11t^{5}+30t^{4}-15t^{3}- 10t^{2}+5t+1\right)^{3}}{t^{7}(t-1)^{7}\left(t^{3}-8t^{2}+5t+1\right)}\] (J7.1.1) We can compute the formula for \(j^{\prime}_{7,1}(t)\) by applying Velu's formula to the universal elliptic curve, and computing the \(j\)-invariant of the result: \[j^{\prime}_{7,1}(t)=\frac{\left(t^{2}-t+1\right)^{3}\left(t^{6}+229t^{5}+270t^{4 }-1695t^{3}+1430t^{2}-235t+1\right)^{3}}{t(t-1)\left(t^{3}-8t^{2}+5t+1\right)^ {7}}\] (J7.1.2) #### 3.3.1. \(X_{1}(7)\to X_{0}(7)\) To obtain a model of \(X_{0}(7)\), we take the quotient of \(X_{1}(7)\) by the automorphism: \[\alpha_{7}(t)=\frac{1}{1-t}\] (A7.1) Finding a function invariant under this action is straightforward (e.g. we can take the sum of the orbit). We construct our quotient map \(X_{1}(7)\to X_{0}(7)\) so that the \(I_{7}\) singularity is at \(\infty\) and the \(I_{1}\) singularity is at \(0\): \[q_{7}(t)=\frac{t^{3}-8t^{2}+5t+1}{(1-t)t}\] (Q7) The map \(j_{7}:X_{0}(7)\to X(1)\) is: \[j_{7}(t)=-\frac{\left(t^{2}-13t+49\right)\left(t^{2}-5t+1\right)^{3}}{t}\] (J7) The Fricke involution swaps the \(I_{1},I_{7}\) cusps of \(X_{0}(7)\), so it has the form \(t\mapsto\frac{a}{t}\) for some \(a\). 
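
Before turning to the quotient for \(\ell=7\), we note that the \(\ell=5\) identities asserted above - \(j_{5,1}=j_{5}\circ q_{5}\) and the defining system for \(a_{5},b_{5}\) - can be verified with a few lines of SymPy. This is our own sketch, not code from the paper; if the displayed polynomials are transcribed correctly, every printed difference should simplify to \(0\).

```
import sympy as sp

t, y = sp.symbols('t y')

j5  = lambda s: (s**2 + 10*s + 5)**3 / s          # (J5)
F5  = lambda s: 125 / s                           # (F5)
q5  = (t**2 - 11*t - 1) / t                       # (Q5)
q5p = t + 22 + 125/t                              # (Q5+)

# j_{5,1} = j_5 o q_5
j51 = (t**4 - 12*t**3 + 14*t**2 + 12*t + 1)**3 / (t**5 * (t**2 - 11*t - 1))
print(sp.simplify(j5(q5) - j51))                  # expect 0

# a_5, b_5 against the system a_5(q5+) = j_5 + j_5 o F_5 and b_5(q5+) = j_5 * (j_5 o F_5)
a5 = y**5 - 80*y**4 + 1890*y**3 - 12600*y**2 + 7776*y + 3456   # (RA5)
b5 = (y**2 + 216*y + 144)**3                                   # (RB5)
print(sp.simplify(a5.subs(y, q5p) - (j5(t) + j5(F5(t)))))      # expect 0
print(sp.simplify(b5.subs(y, q5p) - j5(t)*j5(F5(t))))          # expect 0
```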
We choose \(a\) so that \(j_{7}\circ F_{7}\circ q_{7}=j^{\prime}_{7,1}\), where \(j^{\prime}_{7,1}\) is the \(j\)-map J7.1.2 obtained using Velu's formula: \[F_{7}(t)=\frac{49}{t}\] (F7) #### 3.3.2. \(X_{0}(7)\to X_{0}(7)^{+}\) The computations are similar to the \(\ell=5\) case. The quotient map \(X_{0}(7)\to X_{0}(7)^{+}\) is: \[q_{7}^{+}(t)=-t+13-\frac{49}{t}\] (Q7+) The polynomials \(a_{7},b_{7}\) are: \[a_{7}(y) =y^{5}-80y^{4}+1890y^{3}-12600y^{2}+7776y+3456\] (RA7) \[b_{7}(y) =\left(y^{2}+216y+144\right)^{3}\] (RB7) and the polynomial \(\Delta_{7}\) is: \[\Delta_{7}(j)=\left(H_{-3}(j)\cdot H_{-12}(j)\cdot H_{-27}(j)\cdot H_{-6}(j)\cdot H_{-7}(j)\cdot H_{-19}(j)\right)^{2}\] (D7) ### \(\ell=11\) This is the first modular curve of positive genus, so we actually have to specify an equation. We obtained a nice model of \(X_{1}(11)\), and a map \(X_{1}(11)\to\mathbb{A}^{2}\), using methods described in [1]. The model for \(X_{1}(11)\) is: \[w^{2}-w=z^{3}-z^{2}\] (X11.1) The map \(i_{11}:X_{1}(11)\to\mathbb{A}^{2}\) is given by: \[(z,w)\mapsto\left(\frac{(w-1)(w+z-1)}{z},\frac{w(w-1)(w+z-1)}{z}\right)\] (I11) The formula \(j_{11,1}=j_{\mathcal{E}}\circ i_{11}\) is too complicated to write out, but we will only need to compute it for a few values to obtain \(j_{11}\). #### 3.4.1. Quotient The equation for \(X_{1}(11)\) coincides with the equation of the fiber over \(t=1\) on \(X_{1}(5)\) in E5.1, which means that the point \((0,0)\in X_{1}(11)\) has order \(5\). Furthermore, the automorphism \(\alpha:X_{1}(11)\to X_{1}(11)\) given by \(\alpha(P)=(0,0)+P\) generates \(Aut(X_{1}(11)/X_{0}(11))\), so the quotient map \(X_{1}(11)\to X_{0}(11)\) is an isogeny of degree \(5\), and can be computed using Velu's formula [16]. The equation for \(X_{0}(11)\) we obtain is: \[y^{2}-y=x^{3}-x^{2}-10x-20\] (X11.0) The isogeny \(X_{1}(11)\to X_{0}(11)\) is: \[q_{11}(z,w)=\] \[\left(\frac{z^{5}-2z^{4}+3z^{3}-2z+1}{(z-1)^{2}z^{2}},\frac{w\left(z^{6}-3z^{5}+z^{4}-3z^{3}+6z^{2}-6z+2\right)+(z^{4}+z^{3}-3z^{2}+3z-1)}{(z-1)^{3}z^{3}}\right)\] (Q11) We obtain \(j_{11}\) by computing the zeros and poles of \(j_{\mathcal{E}}\circ i_{11}\), and using the map \(q_{11}\) to determine the zeros and poles of \(j_{11}\). This determines \(j_{11}\) up to a scalar multiple. We then compute \(j_{11,1}\) at a point that is not a zero or pole to determine the correct scalar. The curve \(X_{1}(11)\) has \(5\) cusps of width \(11\) over the \(5\)-torsion subgroup, and \(5\) cusps of width \(1\) elsewhere. The cusps of width \(11\) get mapped to the identity on \(X_{0}(11)\), and the cusps of width \(1\) get mapped to the point \((16,61)\in X_{0}(11)\). Thus, the \(j\)-map should have a pole of order \(1\) at \((16,61)\) and a pole of order \(11\) at \(0\). The zeros can be obtained by finding the points on \(X_{1}(11)\) with \(j\)-invariant \(0\), and computing their image under the isogeny. The following function has the correct zeros and poles: \[j_{11}(x,y)=\frac{\left(x^{4}+12x^{3}-8x^{2}y-114x^{2}+40xy+140x-32y+217\right)^{3}}{\left(-5x+y+19\right)\left(4x^{2}-xy-29x+4y+51\right)^{2}}\] (J11) Furthermore, \(j_{11}\circ q_{11}=j_{\mathcal{E}}\circ i_{11}\) at points which are not zeros or poles. Obtaining the Fricke involution is much easier. There is precisely one involution on \(X_{0}(11)\) that swaps \((16,61)\) and \(0\): the map \(P\mapsto(16,61)-P\). Thus, we deduce that the Fricke involution on \(X_{0}(11)\) is: \[F_{11}(P)=(16,61)-P\] (F11) where the difference is computed using the group law on \(X_{0}(11)\).
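
The two arithmetic facts used in this subsection - that \((0,0)\) has order \(5\) on \(w^{2}-w=z^{3}-z^{2}\) and that \((16,61)\) is a \(5\)-torsion point on \(y^{2}-y=x^{3}-x^{2}-10x-20\) - are easy to confirm with a few lines of Python. The sketch below is our own; it implements the chord-and-tangent law for curves of the shape \(y^{2}-y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}\) over \(\mathbb{Q}\).

```
from fractions import Fraction as Fr

def add(P, Q, a2, a4):
    # group law on  y^2 - y = x^3 + a2*x^2 + a4*x + a6  (so a1 = 0, a3 = -1);
    # None stands for the point at infinity, and -(x, y) = (x, 1 - y)
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and y1 + y2 == 1:
        return None                                  # Q = -P
    if P == Q:
        lam = (3*x1*x1 + 2*a2*x1 + a4) / (2*y1 - 1)  # tangent slope
    else:
        lam = (y2 - y1) / (x2 - x1)                  # chord slope
    nu = y1 - lam*x1
    x3 = lam*lam - a2 - x1 - x2
    y3 = -lam*x3 - nu + 1
    return (x3, y3)

def order(P, a2, a4, bound=20):
    Q, n = P, 1
    while Q is not None and n <= bound:
        Q, n = add(Q, P, a2, a4), n + 1
    return n if Q is None else None

print(order((Fr(0), Fr(0)), a2=-1, a4=0))      # X_1(11): expect 5
print(order((Fr(16), Fr(61)), a2=-1, a4=-10))  # X_0(11): expect 5
```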
#### 3.4.2. \(X_{0}(11)\to X_{0}(11)^{+}\) To compute the quotient of \(X_{0}(11)\) by the Fricke involution, it is helpful to first do a change of variable7 on \(X_{0}(11)\) so that the Fricke involution coincides with the negation map. Let \(x_{2}(P)=x(P+[3]P_{1})\) and \(y_{2}(P)=y(P+[3]P_{1})\). In terms of \(x_{2},y_{2}\), the Fricke involution is \((x_{2},y_{2})\mapsto(x_{2},1-y_{2})\), and the map \(j_{11,0}\) is given by: Footnote 7: We are simply going to translate every point on \(X_{0}(11)\) by a multiple of \(P_{1}\). \[j_{11}(x_{2},y_{2})\frac{(6x_{2}-y_{2}-35)^{3}(6x_{2}+y_{2}-36)\left(32x_{2}^{2 }+48x_{2}+120y_{2}-429\right)^{3}}{(5x_{2}-y_{2}-19)^{6}(5x_{2}+y_{2}-20)^{2}}\] (J13 Now, the sum and product of \(j_{11}(x_{2},y_{2})\) and \(j_{11}(x_{2},1-y_{2})\) do not depend on \(y_{2}\), so we can write them as rational functions of \(x_{2}\). In fact, they turn out to be polynomials in \(\frac{5x_{2}-1}{x_{2}+2}\). Thus, we define: \[q_{11}^{+}(P)=\frac{5x_{2}(P)-1}{x_{2}(P)+2}\] (Q11+) We have: \[a_{11}(y)=y^{11}-44y^{10} +693y^{9}-4334y^{8}+4400y^{7}+42658y^{6}\cdots\] \[-44968y^{5}-178376y^{4}-58432y^{3}+86240y^{2}+67200y+16000\] (RA.11) and: \[b_{11}(y)=\left(y^{4}+232y^{3}+1176y^{2}+1120y+400\right)^{3}\] (B11) Finally, we compute \(\Delta_{11}(j)\): \[\Delta_{11}(j)=\left(H_{-2}(j)\cdot H_{-7}(j)\cdot H_{-28}(j)\cdot H_{-10}(j) \cdot H_{-11}(j)\cdot H_{-44}(j)\cdot H_{-19}(j)\cdot H_{-35}(j)\cdot H_{-43} (j)\right)^{2}\] (D11) Note that the formulae for \(R_{11}(x,y)\) and \(\Delta_{11}(j)\) also appear in [3], although they are obtained using different means. ### \(\ell=13\) The curve \(X_{1}(13)\) can be described by the following model [8]: \[w^{2}+wz^{3}+wz^{2}+w-z^{2}-z=0\] (X13) To compute \(j\)-invariants on this curve, we use the map \(i_{13}:X_{1}(13)\to\mathbb{A}^{2}\): \[i_{13}(z,w)=\left(\frac{wz(wz-w-1)}{w+1},\frac{wz(1-wz)(wz-w-1)}{w+1}\right)\] (I13) We define \(j_{13,1}:=j_{\mathcal{E}}\circ i_{13}\). #### 3.5.1. Quotient Map Because \(X_{1}(13)\) has genus \(>1\), it only has finitely many automorphisms. In fact, the full automorphism group of \(X_{1}(13)\) is isomorphic to the dihedral group with \(12\) elements - see [8]. This is great news: * The cyclic group of order \(6\) must correspond to \(Aut(X_{1}(13)/X_{0}(13))\). * Let \(\sigma\) be any involution of \(X_{1}(13)\) not contained in the cyclic subgroup of order \(6\) Then \(\sigma\) must be a lift of the Fricke involution. Explicit formulas for the generators of the automorphism group are computed in [8]. We only need the automorphism of order \(6\) to compute \(X_{0}(13)\): \[\alpha_{13}(z,w)=\left(\frac{-1}{1+z},\frac{w-z}{z+z^{2}-w}\right)\] (A13.1) Note that \(\alpha_{13}^{3}\) coincides with the hyperelliptic involution on \(X_{1}(13)\). Thus, the quotient map \(X_{1}(13)\to X_{1}(13)/\left\langle\alpha_{13}^{3}\right\rangle\cong\mathbb{P}^ {1}\) is simply given by \(([z_{0}:z_{1}],w)\mapsto[z_{0}:z_{1}]\). The action of \(\alpha_{13}\) on this partial quotient is \([z_{0}:z_{1}]\mapsto[-z_{0}:z_{0}+z_{1}]\), or \(z\mapsto\frac{-1}{1+z}\) in terms of the affine coordinate \(z=\frac{z_{0}}{z_{1}}\). Thus, \(X_{0}(13)\cong\mathbb{P}^{1}/\left\langle z\mapsto\frac{-1}{1+z}\right\rangle\). 
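
A quick symbolic check of the last claim (our own SymPy snippet, not from the paper): the Mobius map \(z\mapsto-1/(1+z)\) induced by \(\alpha_{13}\) on the \(z\)-line has order \(3\), and summing a function over its orbit produces an invariant; the map \(q_{13}\) defined in the next display differs from this orbit sum only by an additive constant.

```
import sympy as sp

z = sp.symbols('z')
a = lambda w: -1/(1 + w)                 # the action induced by alpha_13 on z

print(sp.simplify(a(a(a(z))) - z))       # order 3: expect 0

s = sp.cancel(z + a(z) + a(a(z)))        # orbit sum, an invariant rational function
print(s)
print(sp.simplify(s.subs(z, a(z)) - s))  # invariance: expect 0

q13 = (z**3 + 4*z**2 + z - 1) / (z*(z + 1))
print(sp.cancel(q13 - s))                # a constant, so q13 is invariant as well
```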
We can define: \[q_{13}:X_{1}(13)\to X_{0}(13)\qquad q_{13}(z,w)=\frac{z^{3}+4z^{2}+z-1}{z(z+1)}\] (Q13) \[j_{13}(t)=-\frac{\left(t^{2}-5t+13\right)\left(t^{4}-7t^{3}+20t^{2}-19t+1\right)^{ 3}}{t}\] (J13) Finally, the Fricke involution on this model of \(X_{0}(13)\) is given by: \[F_{13}(t)=\frac{13}{t}\] (F13) #### 3.5.2. \(X_{0}(13)\to X_{0}(13)^{+}\) The quotient map \(X_{0}(13)\to X_{0}(13)^{+}\) is: \[q_{13}^{+}(t)=-t+5-\frac{13}{t}\] (Q13+) For this choice of \(q_{13}^{+}\), the polynomials \(a_{13},b_{13}\) are: \[a_{13}(y)=y\left(y^{4}-13y^{3}+13y^{2}+91y+40\right)\\ \cdot\left(y^{8}-26y^{7}+195y^{6}-156y^{5}-2120y^{4}+624y^{3}+50 70y^{2}+1664y-896\right)\] (RA13) and: \[b_{13}(y)=y^{2}\left(y^{4}+234y^{3}+1417y^{2}+1872y+832\right)^{3}\] (RB13) Finally, we compute \(\Delta_{13}\): \[\Delta_{13}(j)=\\ H_{13}(j)\cdot\left(H_{-1}(j)\cdot H_{-4}(j)\cdot H_{-9}(j)\cdot H _{-3}(j)\cdot H_{-12}(j)\cdot H_{-27}(j)\cdot H_{-48}(j)H_{-43}(j)H_{-51}(j) \right)^{2}\] (D13) ## 4. Algorithms In this section, we describe explicit algorithms for computing the supersingular \(\ell\)-isogeny graph for primes \(\ell\leq 13\). ### Graph Method We give an outline of the algorithm described in [4],[9] for computing supersingular isogeny graphs. We will mainly use this algorithm when \(\ell=2\) to obtain the set of supersingular \(j\)-invariants. To get started, we need a model for _any_ supersingular curve \(E_{0}/\mathbb{F}_{p^{2}}\). * We find a generator for each subgroup of order \(\ell\) on \(E_{0}\). * For each subgroup of order \(\ell\), we obtain a model for the isogenous curve using Velu's formula. This means we now have \(E_{0}^{\prime},E_{1}^{\prime},\ldots,E_{\ell}^{\prime}\). * We now repeat step 1 for each of \(E_{0}^{\prime},E_{1}^{\prime},..,E_{\ell}^{\prime}\). #### 4.1.1. \(\ell=2\) In practice, the main challenge is finding generators for the subgroups of order \(\ell\). When \(\ell=2\), in the worst case scenario, we have to solve a cubic equation to obtain one of the points of order 2. By using carefully chosen models, we only have to find a root of a cubic polynomial once during the entire algorithm. Every pair \((E,P)\), where \(P\) is a point of order 2, is isomorphic to a pair given by a model of the form: \[y^{2}=x(x^{2}+ax+b)\qquad P=(0,0)\] (E2) The isogenous curve \(E/\left\langle P_{0}\right\rangle\) is given by an equation of the same form: \[y^{2}=x(x^{2}-2ax+(a^{2}-4b))\] Given a curve of this form, one of the 2-torsion subgroups is generated by \((0,0)\), and the other two subgroups are generated by \((x_{0},0)\), where \(x_{0}\) is a root of the quadratic: \[x^{2}+ax+b=0\] Altogether, this means we can obtain the complete 2-isogeny graph using Algorithm 1. ``` 0: Coefficients \((a,b)\) of a model of the form E2 for a supersingular curve Set Edges = \(\{\}\) Set Models = \(\{(a,b)\}\) Set \(J=\{\}\). whileModels \(\neq\emptyset\)do Set NewModels = \(\{\}\). for\((a_{0},b_{0})\in\) Models do Compute \(j_{2}(a_{0},b_{0})\) if\(j_{2}(a_{0},b_{0})\not\in J\)then Add \(j_{2}(a_{0},b_{0})\) to the set \(J\). Find the roots \(x_{1},x_{2}\) of the quadratic \(x^{2}+a_{0}x+b_{0}\). for\(i\in\{1,2\}\)do Compute a new equation for \(E_{0}\) with \((x_{i},0)\) now at \((0,0)\). Compute the coefficients \((a_{i},b_{i})\) in equation for \(E\) with \((x_{i},0)\) at \((0,0)\). Compute the coefficients \((a^{\prime}_{i},b^{\prime}_{i})=(-2a_{i},a^{2}_{i}-4b_{i})\) of the isogenous curve. Add the pair \((a^{\prime}_{i},b^{\prime}_{i})\) to NewModels. 
endfor endif endfor Set Models = NewModels. endwhile ``` **Algorithm 1** Supersingular Isogeny Graphs for \(\ell=2\) #### 4.1.2. \(\ell\in\{3,5,7,13\}\) We can use our models for \(X_{0}(\ell)\) to efficiently mimic what we did for \(\ell=2\), as long as we have the ability to factor polynomials over \(\mathbb{F}_{p^{2}}\). Instead of coefficients \((a,b)\), we use points \(t\in X_{0}(\ell)\) to keep track of isomorphism classes of isogenies. We can obtain \(t^{\prime}\) from \(t\) using the Fricke involution; the only issue is obtaining the points \(t_{1},\cdots,t_{\ell}\) from a point \(t_{0}\), where the \(t_{i}\) represent different subgroups of order \(\ell\) on the same elliptic curve. The key is solving the equation \(j_{\ell}(t)-j_{\ell}(t_{0})=0\). The numerator of this rational function is a polynomial of degree \(\ell+1\) in \(t\), and has \((t-t_{0})\) as a factor. The remaining points \(t_{1},\ldots,t_{\ell}\) are the roots of the remaining factor of degree \(\ell\). For example: \[j_{3}(t_{0})-j_{3}(t)=\frac{(t-t_{0})}{tt_{0}}\cdot\left(t^{3}t_{0}+t^{2}t_{0}^{2}-36t^{2}t_{0}+tt_{0}^{3}-36tt_{0}^{2}+270tt_{0}-729\right)\] Thus, to obtain \(t_{1},t_{2},t_{3}\) on \(X_{0}(3)\), we need to solve: \[\left(t^{3}t_{0}+t^{2}t_{0}^{2}-36t^{2}t_{0}+tt_{0}^{3}-36tt_{0}^{2}+270tt_{0}-729\right)=0\] Note that these polynomials will always split completely over \(\mathbb{F}_{p^{2}}\), as long as \(j_{\ell}(t_{0})\) is supersingular, but finding the roots requires more work than solving a quadratic polynomial. #### 4.1.3. Implementation We implemented Algorithm 1 in Python, and use it to obtain the set of supersingular \(j\)-invariants. The code can be found in: [https://github.com/nhajouji/supsingecs](https://github.com/nhajouji/supsingecs). Furthermore, for \(\ell\in\{2,3,5,7,13\}\), the algorithm can easily be implemented in Sage, as the necessary formulas are already built in.8 Footnote 8: [https://doc.sagemath.org/html/en/reference/arithmetic_curves/sage/schemes/elliptic_curves/isogeny_small](https://doc.sagemath.org/html/en/reference/arithmetic_curves/sage/schemes/elliptic_curves/isogeny_small) ### Parallelizable algorithms Next, we describe simple algorithms for computing the \(\ell\)-isogeny graph for all odd primes \(\ell\leq 13\). While these algorithms are less mathematically sophisticated, they present the following practical advantages: * The algorithms are simple enough that they can be easily implemented in Python, without requiring any additional packages. A Python script that does this can be found at: [https://github.com/nhajouji/supsingecs](https://github.com/nhajouji/supsingecs). * The computations in this algorithm can be done in parallel. In particular, the program can be dramatically sped up if one does the computations on a GPU. #### 4.2.1. \(g(X_{0}(\ell))=0\) To compute the graph for \(\ell\in\{3,5,7,13\}\), we use Algorithm 2. This algorithm assumes that we've already computed the set \(S_{p}\) of supersingular \(j\)-invariants, e.g. using Algorithm 1. As long as we've already obtained this set, all we need is the formulas for \(j_{\ell}\) that we obtained in the previous section (J3,J5,J7,J13), as well as the formula9 for the Fricke involution \(F_{\ell}\). Note that the cusps are at \(0,\infty\), so we can evaluate \(j_{\ell,0}(t)\) at any point in \(\mathbb{F}_{p^{2}}^{\times}\) without having to worry about dividing by \(0\).
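
To make the idea concrete, here is a small, self-contained Python sketch of this loop for \(\ell=3\) (the loop is formalized as Algorithm 2 below). It is our own illustration rather than the code from the repository: for brevity it assumes \(p\equiv 3\pmod{4}\), so that \(\mathbb{F}_{p^{2}}=\mathbb{F}_{p}(i)\) with \(i^{2}=-1\), and it builds \(S_{p}\) by brute-force point counting instead of Algorithm 1, which is only reasonable for very small \(p\).

```
p = 19                      # any prime p = 3 (mod 4), p != 3
q = p*p

# elements of F_{p^2} are pairs (a, b) representing a + b*i, with i^2 = -1
def f_add(x, y): return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)
def f_sub(x, y): return ((x[0] - y[0]) % p, (x[1] - y[1]) % p)
def f_mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % p, (x[0]*y[1] + x[1]*y[0]) % p)
def f_pow(x, n):
    r = (1, 0)
    while n:
        if n & 1:
            r = f_mul(r, x)
        x, n = f_mul(x, x), n >> 1
    return r
def f_inv(x): return f_pow(x, q - 2)
def scal(c):  return (c % p, 0)

ZERO, ONE = (0, 0), (1, 0)
elements = [(a, b) for a in range(p) for b in range(p)]

def chi(x):                       # quadratic character of F_{p^2}
    if x == ZERO:
        return 0
    return 1 if f_pow(x, (q - 1)//2) == ONE else -1

def is_supersingular(j):
    # pick a model with the given j-invariant and count its points over F_{p^2};
    # the curve is supersingular iff the Frobenius trace is 0 mod p
    if j == ZERO:          a, b = ZERO, ONE          # y^2 = x^3 + 1   (j = 0)
    elif j == scal(1728):  a, b = ONE, ZERO          # y^2 = x^3 + x   (j = 1728)
    else:
        k = f_mul(j, f_inv(f_sub(scal(1728), j)))
        a, b = f_mul(scal(3), k), f_mul(scal(2), k)  # y^2 = x^3 + 3k x + 2k
    npts = q + 1 + sum(chi(f_add(f_mul(x, f_add(f_mul(x, x), a)), b)) for x in elements)
    return npts % p == 1

def j3(t):                        # (J3):  j = -(t - 27)(t - 3)^3 / t
    n = f_mul(f_sub(t, scal(27)), f_pow(f_sub(t, scal(3)), 3))
    return f_sub(ZERO, f_mul(n, f_inv(t)))

S = {j for j in elements if is_supersingular(j)}
edges = []
for t in elements:
    if t == ZERO:
        continue
    j1 = j3(t)
    if j1 in S:
        j2 = j3(f_mul(scal(729), f_inv(t)))          # (F3): t -> 729/t
        edges.append((j1, j2))

print(sorted(S))
print(edges)
```

The same skeleton works for \(\ell=5,7,13\) after swapping in the corresponding \(j_{\ell}\) and Fricke constant, with the edge multiplicities at \(j=0,1728\) adjusted as in the warning below.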
Footnote 9: The Fricke involution has the form \(t\mapsto\frac{c}{t}\) for all of these values of \(\ell\), so all we really need to remember is the constant \(c\). Similarly, the denominator of \(j_{\ell}\) is \(t\) so we only need to remember the numerator. ``` \(S_{p}=\{j\in\mathbb{F}_{p^{2}}:j\text{ supersingular}\}\). Set Edges = \(\{\}\) for\(t\in\mathbb{F}_{p^{2}}^{\times}\)do Compute \(j_{\ell,0}(t)\) if\(j_{\ell,0}(t)\in S_{p}\)then Compute \(j_{\ell,0}^{\prime}(t)\). Record the edge \((j_{\ell,0}(t),j_{\ell,0}^{\prime}(t))\). endif endfor ``` **Warning**: We have to be careful when \(p\not\equiv 1\pmod{12}\): we might have multiple isogenies \(E\to E^{\prime}\) that are represented by a single point on \(X_{0}(\ell)\) if \(j(E)\in\{0,1728\}\). * If \(j_{\ell,0}(t)=0\) and \(j^{\prime}_{\ell,0}(t)\neq 0\), the edge \((0,j^{\prime}_{\ell,0}(t))\) should be recorded 3 times. * If \(j_{\ell,0}(t)=1728\) and \(j^{\prime}_{\ell,0}(t)\neq 0\), the edge \((1728,j^{\prime}_{\ell,0}(t))\) should be recorded twice. #### 4.2.2. \(X_{0}(11)\) For \(\ell=11\), we can use the same ideas, but there are additional steps due to the fact that we can't simply list all points on a curve of positive genus the way we can with a curve of genus 0. ``` 0:\(S_{p}=\{j\in\mathbb{F}_{p^{2}}:j\text{ supersingular}\}\). Define a set Edges = \(\{\}\) for\(x_{0}\in\mathbb{F}_{p^{2}}-\{5,16\}\)do Find all points on \((x_{0},y_{i})\in X_{0}(11)(\mathbb{F}_{p^{2}})\). for\((x_{0},y_{i})\in X_{0}(11)(\mathbb{F}_{p^{2}})\)do Compute \(j_{11}(x_{0},y_{i})\). if\(j_{11}(x_{0},y_{i})\in S_{p}\)then Compute \(j_{11}\circ F_{11}(x_{0},y_{i})\). Record the edge \((j_{11}(x_{0},y_{i}),j_{11}\circ F_{11}(x_{0},y_{i}))\). endif endfor endfor ``` **Algorithm 3** Supersingular Isogeny Graphs from \(X_{0}(\ell)\) (\(\ell=11\)) As with \(\ell=3,5,7,13\), we have to count edges with the appropriate multiplicities when \(j=0,1728\). #### 4.2.3. \(X_{0}(\ell)\) general We can adapt Algorithm 3 to work for all values of \(\ell\), as long as we have models for \(X_{0}(\ell)\), together with formulas for \(j_{\ell},F_{\ell}\). Precisely, if \(X_{0}(\ell)\) is the zero set of \(G(z,w)=0\), we can use Algorithm 4. ``` 0:\(S_{p}=\{j\in\mathbb{F}_{p^{2}}:j\text{ supersingular}\}\). Define a set Edges = \(\{\}\) for\(z_{0}\in\mathbb{F}_{p^{2}}\)do Find all roots \(w_{0},w_{1},\ldots\) of \(G(z_{0},w)=0..\) for\((z_{0},w_{i})\in X_{0}(\ell)(\mathbb{F}_{p^{2}})\)do Compute \(j_{\ell}(z_{0},w_{i})\). if\(j_{\ell}(z_{0},w_{i})\in S_{p}\)then Compute \(j_{\ell}\circ F_{\ell}(z_{0},w_{i}))\). Record the edge. endif endfor endfor ``` **Algorithm 4** Supersingular Isogeny Graphs from \(X_{0}(\ell)\) (General \(\ell\)) #### 4.2.4. \(g(X_{0}(\ell)^{+})=0\) We can also compute isogeny graphs by using \(X_{0}(\ell)^{+}\) instead of \(X_{0}(\ell)\). This is not particularly helpful when \(X_{0}(\ell)\) already has genus \(0\)- this algorithm is about as complicated as the algorithm for \(\ell=11\). However, we can use the same version of the algorithm for all supersingular primes \(\ell\)- the only thing that changes when we vary \(\ell\) is the pair of polynomials \(a_{\ell},b_{\ell}\) we use. Furthermore, the polynomials \(R_{\ell}(x,y)\) are easier to obtain via alternative means than \(j_{\ell}\) when \(\ell\) is a supersingular prime and \(X_{0}(\ell)\) has positive genus. Instead of evaluating \(j_{\ell}\), we evaluate the polynomials \(a_{\ell},b_{\ell}\) at each point in \(y_{0}\in\mathbb{F}_{p^{2}}\). 
Each \(y_{0}\) gives us a quadratic \(x^{2}-a_{\ell}(y_{0})x+b_{\ell}(y_{0})\), whose roots are \(\ell\)-isogenous \(j\)-invariants. To compute the isogeny graph, we simply check whether the roots of each quadratic are supersingular, and if they are, we record them in the graph. Now, we have to be careful when using \(X_{0}(\ell)^{+}\) because the models we are using have singularities. For example, the order of vanishing of \(R_{5}(x,y)\) at \((-884736,36)\) is \(2\): * The polynomial \(R_{5}(x,36)\) has a repeated root at \(-884736\). This means there is an isogeny of degree \(5\) from the curve with \(j\)-invariant \(-884736\) to itself. * In fact, there are two distinct isogenies of degree \(5\) from this curve to itself, but they are both represented by the point \(y=36\). To see this, we can factor the polynomial \(R_{5}(-884736,y)\) and check that the factor \((y-36)\) appears with multiplicity \(2\). Fortunately, we can use the factorizations of the polynomials \(\Delta_{\ell}\) to determine the correct number of endomorphisms of degree \(\ell\) for each supersingular curve mod \(p\). This is easy - we just need to compute the roots of each factor of \(\Delta_{\ell}\) in \(\mathbb{F}_{p^{2}}\) and check if they are supersingular. We can simplify this step by grouping together factors whose roots are \(j\)-invariants of elliptic curves that are isogenous in characteristic \(0\). For \(d\in\mathbb{N}\), write \(\mathcal{O}_{d}\) to denote the ring of integers of \(\mathbb{Q}(\sqrt{-d})\) and \(\operatorname{Nrm}_{d}:\mathcal{O}_{d}\to\mathbb{Z}\) for the norm map. Define: \[\operatorname{supp}(\ell)=\{-d:\ell\in\operatorname{Nrm}_{d}(\mathcal{O}_{d})\}\] Note that \(\operatorname{supp}(\ell)\) is finite, e.g. for the values of \(\ell\) we are interested, we have: \[\operatorname{supp}(3) =\{-2,-3,-11\}\] \[\operatorname{supp}(5) =\{-1,-5,-11,-19\}\] \[\operatorname{supp}(7) =\{-3,-6,-7,-19\}\] \[\operatorname{supp}(11) =\{-2,-7,-10,-11,-19,-35,-43\}\] \[\operatorname{supp}(13) =\{-1,-3,-13,-43,-51\}\] The values \(-d\in\operatorname{supp}(\ell)\) coincide with the subscripts that appear in the factors \(H_{-d}\) and \(H_{-dn^{2}}\) of \(\Delta_{\ell}\). For each pair \(\ell,-d\), where \(-d\in\operatorname{supp}(\ell)\), we write \(\Delta_{\ell,-d}(j)\) to denote the product of the factors of \(\Delta_{\ell}\) whose roots are \(j\)-invariants of elliptic curves in the same isogeny class as the elliptic curves whose \(j\)-invariants are roots of \(H_{-d}(j)\). For a pair \(p,\ell\), we set: \[\operatorname{supp}_{p}(\ell) =\left\{-d\in\operatorname{supp}(\ell):\left(\frac{-d}{p}\right)=-1\right\}\] \[\Delta_{\ell,p}(j) =\prod_{-d\in\operatorname{supp}_{p}(\ell)}\Delta_{\ell,-d}(j)\] Algorithm 5 uses \(\Delta_{\ell,p}(j)\) to compute the diagonal of the adjacency matrix of the isogeny graph. ``` 0:\(\Delta_{\ell}(j)=\prod\Delta_{\ell,-d}(j)\). Set Edges = \(\{\}\). for \(-d\in\operatorname{supp}_{p}(\ell)\) do for Each factor \(H_{-dn^{2}}^{e_{n}}\) of \(\Delta_{\ell,-d}\) do Compute the set of roots \(Z(H_{-dn^{2}})\subset\mathbb{F}_{p^{2}}\). for \(j\in Z(H_{-dn^{2}})\) do Add \(e_{n}\) copies of \((j,j)\) to Edges. endfor endfor endfor ``` **Algorithm 5** Supersingular Isogeny Graphs from \(X_{0}(\ell)^{+}\): Diagonal To compute the rest of the isogeny graph, we use Algorithm 6. ``` 0:\(a_{\ell}(y),b_{\ell}(y)\). 0:\(S_{p}=\{j\in\mathbb{F}_{p^{2}}:j\text{ supersingular}\}\). for \(y_{0}\in\mathbb{F}_{p^{2}}\) do Evaluate \(a_{\ell}(y_{0}),b_{\ell}(y_{0})\). Compute the discriminant \(d_{\ell}(y_{0})=a_{\ell}(y_{0})^{2}-4b_{\ell}(y_{0})\).
if \(d_{\ell}(y_{0})\neq 0\) and \(\sqrt{d_{\ell}(y_{0})}\in\mathbb{F}_{p^{2}}\) then Compute the roots \(j_{0},j_{0}^{\prime}\) of \(R_{\ell}(x,y_{0})=0\). if \(j_{0},j_{0}^{\prime}\in S_{p}\) then Add \((j_{0},j_{0}^{\prime}),(j_{0}^{\prime},j_{0})\) to the set of edges. endif endif endfor ``` **Algorithm 6** Supersingular Isogeny Graphs from \(X_{0}(\ell)^{+}\): Off-Diagonal ### Resultants For a prime \(p\), we define: \[ss_{p}(x)=\prod_{j\in S_{p}}(x-j)\] The polynomial \(ss_{p}\) vanishes if and only if \(x\) is a supersingular \(j\)-invariant. For a pair of polynomials \(f(x),g(x)\), we write \(\operatorname{Res}_{x}(f,g)\) to denote the resultant of \(f,g\). Whenever we have a modular curve of genus \(0\), we can obtain a characterization of the set of points on the modular curve that represent supersingular curves by computing the resultant of \(ss_{p}(x)\) with a suitable polynomial. * Let \(\ell\in\{3,5,7,13\}\). Then \(j_{\ell}(t)\cdot t\) is a polynomial in \(t\). To obtain the set of points on \(X_{0}(\ell)\) whose \(j\)-invariants are supersingular, we can compute: \[\operatorname{Res}_{x}\left(ss_{p}(x),t\cdot(j_{\ell}(t)-x)\right)\] The result is a polynomial in \(t\) whose roots are precisely the points on \(X_{0}(\ell)\) that represent supersingular elliptic curves. * We can do something similar for all supersingular primes using the polynomial \(R_{\ell}(x,y)\). Precisely, we compute: \[\operatorname{Res}_{x}\left(ss_{p}(x),R_{\ell}(x,y)\right)\] to obtain a polynomial in \(y\), whose roots are precisely the values \(y_{0}\) for which the roots of \(R_{\ell}(x,y_{0})\) are supersingular. The roots of the resultant will all lie in the image of \(q_{\ell}^{+}(X_{0}(\ell)(\mathbb{F}_{p^{2}}))\), so the resultant will split completely over \(\mathbb{F}_{p^{2}}\).10 To take full advantage of this, we need a way of factoring the resultant over \(\mathbb{F}_{p}\) as a product of polynomials of degree at most \(2\). Fortunately, most computer algebra systems can compute resultants and factor polynomials over \(\mathbb{F}_{p}\). Footnote 10: We discuss this in more detail in 5.2. ## 5. Applications ### Trace Formula Let \(\ell\in\{3,5,7,11,13\}\), let \(p\neq\ell\) be a prime, and let \(\Gamma_{p,\ell}\) be the supersingular \(\ell\)-isogeny graph. We write \(Tr(\Gamma_{p,\ell})\) to denote the trace of the adjacency matrix of \(\Gamma_{p,\ell}\). Algorithm 5 allows us to compute the diagonal of the adjacency matrix directly, without needing to compute the rest of the isogeny graph, so we can use it to deduce a formula for \(Tr(\Gamma_{p,\ell})\). It's clear that \(Tr(\Gamma_{p,\ell})=\deg\,(\Delta_{\ell,p}(j))\). Furthermore, the degree of \(\Delta_{\ell,p}\) is determined by the Jacobi symbols \(\left(\frac{-d}{p}\right)\) for \(-d\in\operatorname{supp}(\ell)\). Thus, we can obtain a formula for the trace as: \[Tr(\Gamma_{p,\ell})=\ell-\sum_{-d\in\operatorname{supp}(\ell)}m_{\ell,-d}\left(\frac{-d}{p}\right)\] for some integers \(m_{\ell,-d}\) that depend on \(\ell\).
For example, when \(\ell\leq 13\), we obtain the following formulae: \[\operatorname{Tr}\,(\Gamma_{p,3}) =3-\left(\left(\frac{-2}{p}\right)+\left(\frac{-3}{p}\right)+\left(\frac{-11}{p}\right)\right)\] \[\operatorname{Tr}\,(\Gamma_{p,5}) =5-\left(2\left(\frac{-1}{p}\right)+\left(\frac{-5}{p}\right)+\left(\frac{-11}{p}\right)+\left(\frac{-19}{p}\right)\right)\] \[\operatorname{Tr}\,(\Gamma_{p,7}) =7-\left(3\left(\frac{-3}{p}\right)+2\left(\frac{-6}{p}\right)+\left(\frac{-7}{p}\right)+\left(\frac{-19}{p}\right)\right)\] \[\operatorname{Tr}\,(\Gamma_{p,11}) =11-\left(\left(\frac{-2}{p}\right)+2\left(\frac{-7}{p}\right)+2\left(\frac{-10}{p}\right)+2\left(\frac{-11}{p}\right)+\left(\frac{-19}{p}\right)+2\left(\frac{-35}{p}\right)+\left(\frac{-43}{p}\right)\right)\] \[\operatorname{Tr}\,(\Gamma_{p,13}) =13-\left(4\left(\frac{-1}{p}\right)+5\left(\frac{-3}{p}\right)+\left(\frac{-13}{p}\right)+\left(\frac{-43}{p}\right)+2\left(\frac{-51}{p}\right)\right)\] We expect that there are similar formulas for all primes \(\ell\), not just the supersingular primes. ### Nakaya's Conjectures The resultants we used at the end of 4.3 play a central role in [12]. Consequently, we can translate many of the questions in that paper into questions about supersingular isogeny graphs. In particular, we will use this perspective to give a simple proof of Conjecture 2 in that paper.11 Footnote 11: We note that some of the conjectures were recently proven in [10] using different methods. The actual object of the conjectures in [12] is the polynomial: \[ss_{p,\ell}^{+}(y)=\operatorname{rad}\left(\operatorname{Res}_{x}(ss_{p}(x),R_{\ell}(x,y))\right)\] Here, \(\operatorname{rad}(-)\) indicates that we are ignoring the multiplicity of each irreducible factor. Now, the factors of \(\operatorname{Res}_{x}(ss_{p}(x),R_{\ell}(x,y))\) correspond to (pairs) of edges on the isogeny graph, so we can think of \(ss_{p,\ell}^{+}(y)\) as representing the underlying undirected graph. This perspective makes it easier to explain some of the phenomena observed in [12]. As a warm-up, we will use Theorem 2.2 to explain why \(ss_{p,\ell}^{+}(y)\) always splits over \(\mathbb{F}_{p^{2}}\). This will answer Question 1 in [12]. Suppose \(y_{0}\in\mathbb{F}_{p^{n}}\) is a root of \(ss_{p,\ell}^{+}(y)=0\). That means that the roots \(x_{1},x_{2}\) of \(R_{\ell}(x,y_{0})=0\) are supersingular \(j\)-invariants that are related by an isogeny of degree \(\ell\), so there exists \(t_{0}\in X_{0}(\ell)(\mathbb{F}_{p^{2}})\) such that \(j_{\ell}(t_{0})=x_{1}\) and \(j_{\ell}\circ F_{\ell}(t_{0})=x_{2}\), and \(y_{0}=q_{\ell}^{+}(t_{0})\). Since \(t_{0}\in\mathbb{F}_{p^{2}}\) by Theorem 2.2, and \(q_{\ell}^{+}\) is defined over \(\mathbb{F}_{p}\), it follows that \(y_{0}\in\mathbb{F}_{p^{2}}\). This proves that \(ss_{p,\ell}^{+}\) splits over \(\mathbb{F}_{p^{2}}\). We can use similar ideas to prove that \(ss_{p,\ell}^{+}\) has a non-linear factor whenever \(p\) is _not_ a supersingular prime. This will allow us to reduce the proof of Conjecture 2 in [12] to a finite computation. Furthermore, we will be able to prove additional results in the same vein. **Theorem 5.1**.: _Let \(\ell\) be a supersingular prime, and let \(p\) be a prime which is_ **not** _supersingular. Then \(ss_{p,\ell}^{+}(y)\) has an irreducible factor of degree \(>1\)._ Proof.: Let \(S_{p,1}=S_{p}\cap\mathbb{F}_{p}\) and \(S_{p,2}=S_{p}\backslash\mathbb{F}_{p}\).
Note that \(S_{p,1}\) is always nonempty, because there are always supersingular curves defined over \(\mathbb{F}_{p}\), and \(S_{p,2}\) is nonempty because \(p\) is not supersingular. By connectedness of the \(\ell\)-isogeny graph, we can find \(j_{1}\in S_{p,1}\) and \(j_{2}\in S_{p,2}\) such that the elliptic curves with those invariants are \(\ell\)-isogenous. This means there exists \(t_{0}\in\mathbb{F}_{p^{2}}\) such that \(j_{\ell}(t_{0})=j_{1}\) and \(j_{\ell}\circ F_{\ell}(t_{0})=j_{2}\). Let \(y_{0}\) be the image of \(t_{0}\) under the map \(X_{0}(\ell)\to X_{0}(\ell)^{+}\). Then the roots of \(R_{\ell}(x,y_{0})\) are \(j_{1},j_{2}\), so \(y_{0}\) is a root of \(Res_{x}(ss_{p}(x),R_{\ell}(x,y))=0\). Furthermore, the roots of \(R_{\ell}(x,y_{0})\) are not Galois conjugate over \(\mathbb{F}_{p}\) and are not both in \(\mathbb{F}_{p}\), so \(a_{\ell}(y_{0})\) and \(b_{\ell}(y_{0})\) cannot both lie in \(\mathbb{F}_{p}\), and hence \(y_{0}\not\in\mathbb{F}_{p}\). This means the minimal polynomial of \(y_{0}\), which has degree \(>1\), divides \(ss_{p,\ell}^{+}(y)\). Consequently, we see that \(ss_{p,\ell}^{+}\) can only split over \(\mathbb{F}_{p}\) if \(p\) is also supersingular. Thus, to determine the complete set of primes \(p\) for which \(ss_{p,\ell}^{+}\) splits over \(\mathbb{F}_{p}\), we just have to compute \(ss_{p,\ell}^{+}\) for the 15 supersingular primes. For \(\ell=5,7,11,13\), we obtained the following sets of primes: \[\ell =5:\quad p\in\{2,3,5,7,11,19\}\] \[\ell =7:\quad p\in\{2,3,5,7,17\}\] \[\ell =11:\quad p\in\{2,3,5,11\}\] \[\ell =13:\quad p\in\{2,3,13\}\] Now, Conjecture 2 predicts that the set of primes for \(\ell=5\) coincides with the set of primes that divide the Harada-Norton group; and it predicts that the set of primes for \(\ell=7\) will coincide with the set of primes that divide the Held group. Our computation shows that these predictions are correct. The correspondence between sporadic simple groups and modular curves \(X_{0}(\ell)\) is explained in Section 9 of [5]. We note that the sporadic group associated to \(X_{0}(11)\) is the Mathieu group \(M_{12}\), which has cardinality \(2^{6}\cdot 3^{3}\cdot 5\cdot 11\). As with \(\ell=5,7\), the set of primes that divide the order of \(M_{12}\) coincides with the set of primes for which \(ss_{p,11}^{+}\) splits over \(\mathbb{F}_{p}\). Finally, we mention that the correspondence between sporadic groups and modular curves is only known for \(\ell\leq 11\).12 Footnote 12: This is due to the fact that \(PSL(2,p)\) only has a subgroup of index \(p\) when \(p\in\{2,3,5,7,11\}\); see [5] for further details. ## 6. Conclusion ### Implementation and tables Tables containing supersingular isogeny graphs (for \(p\leq 512\)) and adjacency matrices (for \(p\leq 2048\)) were computed by implementing the algorithms in this paper using Python. The results and code can be found at: [https://github.com/nhajouji/supsingecs](https://github.com/nhajouji/supsingecs). ### Generalizations The algorithms we have described for primes \(\ell\leq 13\) can be extended to higher values of \(\ell\), as long as we have analogs of some of the formulas derived in Section 3. * The algorithms that use \(X_{0}(\ell)^{+}\) only work for supersingular primes \(\ell\). However, they can be used without modification, as long as we know the polynomials \(a_{\ell},b_{\ell}\). We note that the polynomials \(a_{\ell},b_{\ell}\) can be obtained using analytic methods, although they require knowledge of the Fourier coefficients of a Hauptmodul of \(X_{0}(\ell)^{+}\). See [3].
* The algorithms using \(X_{0}(\ell)\) can also be used for all values of \(\ell\), but we need models of the modular curve, together with formulas for the \(j\)-map and the Fricke involution. The complexity of these algorithms will hinge on how hard it is to find all points on \(X_{0}(\ell)\). For example, if \(X_{0}(\ell)\) is hyperelliptic13, we can essentially mimic Algorithm 3. Footnote 13: Note that \(X_{0}(\ell)\) is hyperelliptic precisely when \(\ell\) is a supersingular prime. However, we can go beyond the set of supersingular primes \(\ell\): for example, if \(X_{0}(\ell)\) has gonality \(\leq 4\),14, we can use the cubic/quartic formula to compute the fibers of \(X_{0}(\ell)\to\mathbb{P}^{1}\), and obtain an algorithm with the same complexity as Algorithm 3. The curve \(X_{0}(43)\) is 3-gonal, and \(X_{0}(\ell)\) has gonality 4 for the following values of \(\ell\)15: Footnote 14: The idea of using gonality of the modular curve as a threshold came from [15]. Footnote 15: See [11]. \[67,73,103,107\] We can also obtain efficient algorithms when \(X_{0}(\ell)\) is bielliptic. The prime values \(\ell\) for which \(X_{0}(\ell)\) is bielliptic are:16: Footnote 16: See [2]. \[37,53,61,79,83,89,101,131\] ### Acknowledgements The author would like to thank Christian Bueno for helpful discussions. ## Appendix A Hilbert Class Polynomials The formulae for \(\Delta_{\ell}\) in Section 3 are expressed as products of polynomials \(H_{-dn^{2}}(j)\). We give explicit formulas for those factors in this section. The polynomials labelled \(H_{-d}(j)\), where \(d\) is a squarefree integer, are the Hilbert class polynomials for \(\mathbb{Q}(\sqrt{-d})\). For example, when \(d\) is a Heegner number, we have: \[H_{-1}(j) =j-1728\] (H1) \[H_{-2}(j) =j-8000\] (H2) \[H_{-3}(j) =j\] (H3) \[H_{-7}(j) =j+3375\] (H7) \[H_{-11}(j) =j+32768\] (H11) \[H_{-19}(j) =j+884736\] (H19) \[H_{-43}(j) =j+884736000\] (H43) \[H_{-67}(j) =j+147197952000\] (H67) \[H_{-163}(j) =j+262537412640768000\] (H163) Note that H67, H163 do not appear as factors of any of the \(\Delta_{\ell}\), but they can be used to obtain a supersingular curve \(E/\mathbb{F}_{p}\) to start Algorithm 1. The remaining Hilbert class polynomials of positive degree that appear in the \(\Delta_{\ell}\) are: \[H_{-5}(j) =j^{2}-1264000j-681472000\] (H5) \[H_{-6}(j) =j^{2}-4834944j+14670139392\] (H6) \[H_{-10}(j) =j^{2}-425692800j+9103145472000\] (H10) \[H_{-13}(j) =j^{2}-6896880000j-567663552000000\] (H13) \[H_{-35}(j) =j^{2}+117964800j-134217728000\] (H35) \[H_{-51}(j) =j^{2}+5541101568j+6262062317568\] (H51) Note that additional Hilbert class polynomials can be obtained in the same way using the modular curves \(X_{0}(\ell)\) for \(\ell=4,6,8,9,10,12\). The following three have odd degree, which means they are guaranteed to have at least one root in \(\mathbb{F}_{p}\) (provided \(-d\) is a nonsquare in \(\mathbb{F}_{p}\)): \[H_{-23}(j) =j^{3}+3491750j^{2}-5151296875j+12771880859375\] (H23) \[H_{-31}(j) =j^{3}+39491307j^{2}-58682638134j+1566028350940383\] (H31) Finally, we note that there are a few factors that appear in \(\Delta_{\ell}\) which are not Hilbert class polynomials. We have labelled these as \(H_{-dn^{2}}\) to indicate that the roots of \(H_{-dn^{2}}(j)=0\) are \(n\)-isogenous to the roots of \(H_{-d}(j)=0\). Note that the roots of \(H_{-dn^{2}}(j)\) are supersingular mod \(p\) if and only if \(-d\) is a quadratic nonresidue mod \(p\).
The polynomials of this form that appear in the \(\Delta_{\ell}\) are: \[H_{-4}(j) =j-287496\] (H1.2) \[H_{-9}(j) =j^{2}-153542016j-1790957481984\] (H1.3) \[H_{-12}(j) =j-54000\] (H3.2) \[H_{-27}(j) =j+12288000\] (H3.3) \[H_{-48}(j) =j^{2}-2835810000j+6549518250000\] (H3.4) \[H_{-44}(j) =j^{3}-1122662608j^{2}+270413882112j-653249011576832\] (H11.2)
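
As noted earlier in this appendix, the linear polynomials above give a convenient way to produce the starting supersingular curve needed for Algorithm 1: whenever \(-d\) is a quadratic nonresidue mod \(p\), the corresponding \(j\)-invariant reduces to a supersingular \(j\)-invariant in \(\mathbb{F}_{p}\). The following short Python sketch is our own implementation of that observation; the names CM_J, legendre and supersingular_seed are ours.

```
# class-number-one d's and the j-invariants of the corresponding CM curves,
# i.e. the roots of H_{-1}, H_{-2}, H_{-3}, H_{-7}, H_{-11}, H_{-19}, H_{-43},
# H_{-67}, H_{-163} listed above
CM_J = {1: 1728, 2: 8000, 3: 0, 7: -3375, 11: -32768, 19: -884736,
        43: -884736000, 67: -147197952000, 163: -262537412640768000}

def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1)//2, p) == 1 else -1

def supersingular_seed(p):
    """Return a supersingular j-invariant in F_p for a prime p > 3, provided
    one of the class-number-one discriminants works (this covers almost all p)."""
    for d, j in CM_J.items():
        if legendre(-d, p) != 1:      # p is inert or ramified in Q(sqrt(-d))
            return j % p
    return None

print(supersingular_seed(101))        # prints one supersingular j-invariant mod 101
```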
2301.09837
Constraint on the accretion of NGC 6946 X-1 using broadband X-ray data
We analyze broadband X-ray data of NGC 6946 X-1 and probe plausible accretion scenarios in this ULX. NGC 6946 X-1 is a persistent soft source with broadband continuum spectra described by two thermal disk components. The cool accretion disk temperature $\rm T_{cool} \sim 0.2$ keV and the presence of $\sim 0.9$ keV emission/absorption broad feature suggests the evidence of optically thick wind due to super-critical accretion. The hot geometrically modified accretion disk has an inner temperature of $\rm T_{hot} \sim 2$ keV with a radial dependent profile $\rm T(r) \propto r^{-0.5}$, expected in a slim disk scenario. Further, the measurement based on a realistic inclination angle of the disk indicates that the mass of the host compact object is comparable to $\rm \sim 6-10 ~M_{\odot}$ non-rotating black hole or the system hosts a moderately magnetized neutron star with $\rm B \lesssim 2 \times 10^{11}$ G magnetic field. Overall, the detected spectral curvature, high luminosity, flux contribution from two thermal disk components, and estimated accretion rate imprint the super-Eddington accretion scenario.
Tanuman Ghosh, Vikram Rana
2023-01-24T06:46:11Z
http://arxiv.org/abs/2301.09837v2
# Constraint on accretion of NGC 6946 X-1 using broadband X-ray data ###### Abstract We analyze broadband X-ray data of NGC 6946 X-1 and probe plausible accretion scenarios in this ULX. NGC 6946 X-1 is a persistent soft source with broadband continuum spectra described by two thermal disk components. The cool accretion disk temperature \(\rm T_{cool}\sim 0.2\) keV and the presence of \(\sim 0.9\) keV emission/absorption broad feature suggests the evidence of optically thick wind due to super-critical accretion. The hot geometrically modified accretion disk has an inner temperature of \(\rm T_{hot}\sim 2\) keV with a radial dependent profile \(\rm T(r)\propto r^{-0.5}\), expected in a slim disk scenario. Further, the measurement based on a realistic inclination angle of the disk indicates that the mass of the host compact object is comparable to \(\sim 6-10\)\(\rm M_{\odot}\) non-rotating black hole or the system hosts a weakly magnetized neutron star with \(\rm B\lesssim 2\times 10^{11}\) G magnetic field. Overall, the detected spectral curvature, high luminosity, flux contribution from two thermal disk components, and estimated accretion rate imprint the super-Eddington accretion scenario. Ultraluminous x-ray sources (2164) -- X-ray binary stars (1811) -- X-ray sources(1822) -- Accretion(14) 0000-0002-4880-2880]Tanuman Ghosh 0000-0002-4882-7888]Vikram Rana ## 1 Introduction Ultraluminous X-ray sources (ULXs; \(\rm L_{x}>10^{39}\) erg s\({}^{-1}\)) are best understood to be the super-Eddington accreting X-ray binaries predominantly abundant outside of our Galaxy. Discovery of the extragalactic neutron star ULXs (Bachetti et al., 2014; Furst et al., 2016; Israel et al., 2017, 2017; Brightman et al., 2018; Carpano et al., 2018; Rodriguez Castillo et al., 2020; Sathyaprakash et al., 2019) firmly established that the Eddington ratio (\(\rm\lambda_{Edd}=L_{x}/L_{Edd}\)) can reach a factor of few hundreds. Broadband spectral analyses have shown that two-component thermal disk emission and an additional component like coronal comptonization or emission from a magnetized accretion column provide the X-ray continuum (see, e.g., Walton et al., 2018, 2020) in most of the sources. The thermal photons work as the seed photons for the Compton up-scattering process, which give harder photons \(\sim 20\) keV in many ULX sources. These different emission components are highly dependent on the geometry of the disk, which determines the hardness of the spectra. The softest of ULXs are assumed to be viewed close to the plane of the disk. Hence, the hot inner regions are obscured out of the line of sight. Due to the high mass accretion rate in the case of super-Eddington accretion, the inner region of the disk geometrically deviates from the standard thin disk. It becomes a slim accretion disk where the disk scale height is comparable to the radius, i.e., \(\rm h\approx r\)(Abramowicz et al., 1988; Sadowski, 2011; Dotan & Shaviv, 2011). Simplistically, such a geometrically modified disk is often ascribed to the powerlaw temperature profile of \(\rm T(r)\propto r^{-p}\), where \(\rm p\sim 0.5\) for a geometrically slim disk (Sadowski, 2011) and at the limit of \(\rm p=0.75\), it takes the form of standard thin Shakura & Sunyaev (1973) disk. Due to super-Eddington accretion, near the spherization radius, wind is generated, and it outflows the material from the inner region of the disk. The two thermal disk components in ULXs have several interpretations. 
For example, in a black hole system, the soft/cool disk component is ascribed to the reprocessed photons from optically thick wind, and the hard counterpart comes from the inner accretion flow (e.g., Walton et al., 2014, 2015; Luangtip et al., 2016). Weakly magnetized (\(\rm B<10^{11}\) G) neutron stars have a similar accretion scenario to that of the super-Eddington black holes (King & Lasota, 2016). One possible ex planation is that the thin Shakura and Sunyaev (1973) disk imprints the cool thermal component, and the optically thick plasma at the boundary layer of the neutron star emits harder spectra (Syunyaev and Shakura, 1986; Koliopanos et al., 2017). Even for highly magnetized neutron stars, due to high luminosity (\(\rm L_{x}\gtrsim 5\times 10^{39}\) erg s\({}^{-1}\)), the material within Alfven radius becomes optically thick and emits hard photons, whereas the cool component comes from the truncated accretion disk (Koliopanos et al., 2017; Mushtukov et al., 2017). However, in these scenarios, it is difficult to explain how the luminosity of a thin accretion disk can be higher than the Eddington luminosity of a neutron star. Hence, it has been widely understood that the soft emission component in ULXs, which resembles the cool accretion disk-like emission, comes from the optically thick wind for both black hole and neutron star systems. The study by Qiu and Feng (2021) on a sample of ULX sources showed that the soft thermal component is a signature of optically thick wind because of the constant blackbody luminosity. In that study, it was established that this correlation between wind scenario and cool blackbody-like emission is plausible in both black holes and pulsar ULXs. The higher blackbody luminosity than the Eddington limit of neutron stars could be explained by the reduced scattering cross-section in the presence of a high magnetic field or increased radiation due to magnetic buoyancy (Qiu and Feng, 2021). The wind or outflow also imprints the emission and absorption lines in the ULX spectra, especially around \(\sim\) 1 keV. These strong line features have been seen in high-quality grating spectral data in several ULXs, e.g., NGC 55 ULX, NGC 247 ULX-1, NGC 1313 X-1, NGC 5408 X-1, NGC 300 X-1, Ho IX X-1, Ho II X-1 (see, e.g., Pinto et al., 2016, 2017, 2020, 2021; Kosec et al., 2018, 2021). Interestingly, it has been found that the soft ULX sources are better candidates for detecting these outflow features compared to the hard sources and can be related to the strong wind/outflow and favorable geometrical occultation of the inner flow of the disk. NGC 6946 galaxy (distance \(\sim\) 7.72 Mpc; Anand et al., 2018) is the host of multiple ULXs (Earnshaw et al., 2019). NGC 6946 X-1 (RA:20:35:00.7, DEC:+60:11:31) is a soft ULX source often referred to as ULX-3 (see Earnshaw et al., 2019 and references therein), has shown emission line signatures (Pinto et al., 2016; Kosec et al., 2021), a similar feature shown by several other soft ULX sources. We study its broadband spectral properties using XMM-Newton and NuSTAR data and provide some crucial constraints on the accretion mechanism in this ULX from the continuum features. In SS 2, we discuss the data utilized in this paper and their extraction processes. SS 3 describes the analysis of the data and the obtained results. Finally, we discuss and conclude our findings in SS 4. ## 2 Data We analyze the simultaneous broadband data of NGC 6946 X-1 jointly observed by XMM-Newton and NuSTAR in 2020-2021. 
The source has been previously analyzed on several occasions by archival XMM-Newton and NuSTAR data (see, e.g., Kajava and Poutanen, 2009; Hernandez-Garcia et al., 2015; Middleton et al., 2015; Pintore et al., 2017; Kosec et al., 2018; Earnshaw et al., 2019; Kosec et al., 2021; Qiu and Feng, 2021) and the readers are directed to these papers for an overview of the previous analyses. Here, we focus on the recent joint observations by XMM-Newton and NuSTAR observed in 2020-2021. Previous joint XMM-Newton and NuSTAR observation data taken in 2017 were analyzed in Earnshaw et al. 2019 in detail. Therefore, we compare the results from 2020-2021 broadband data to those obtained in Earnshaw et al. 2019. The observations used in this paper are tabulated in Table 1. The XMM-Newton data are processed by SASv20.0.0, and the EPIC products are extracted using epproc and emproc tools. In observation 0870830201, the source falls in the chip gap of the pn camera, thus significantly affecting the data. Therefore, for this observation, we do not utilize the pn data for the scientific analysis. In observation 0870830401, MOS1 data are not available. The data are cleaned from background flaring by espfilt task, and evselect tool is used to generate the spectra and light curves from the cleaned event files with PATTERN<=4 for pn and PATTERN<=12 for MOS data. rmfgen creates the redistribution matrix files and arfgen is used to create the ancillary response files for the source spectrum. The source region is selected from a 20 arcsec circular region, and the background is from a 40 arcsec circle in a nearby region from the same chip. We select FLAG==0 criteria for extraction of spectra for all cameras. The spectra are grouped to have 20 counts per bin with an oversampling factor of 3. The NuSTAR data are extracted using HEA-SOFT version 6.31. The raw data are cleaned and pre-processed using nupipeline with saacalc=3, saamode=STRICT and tentacle=yes parameters to maintain a conservative approach while handling the background due to South Atlantic Anomaly. The nuproducts task is further utilized to generate the spectra and lightcurves from the cleaned data. The source extraction region is a 40 arcsec radius circle, and a nearby 60 arcsec radius circle from the same chip is chosen for the background region for all observations. The spectra are grouped to have 20 counts per energy bin. ## 3 Analysis and Results We first individually analyze the joint XMM-Newton and NuSTAR spectra for all four epochs using XSPEC v12.13.0b (Arnaud, 1996). Throughout the paper, we have estimated uncertainties in parameters within a 90% confidence interval unless mentioned otherwise. We have used updated abundance (Wilms et al., 2000) and cross-section (Verner et al., 1996) for the neutral absorption model tbabs. The flux is measured using the convolution model cflux. The XMM-Newton spectra are utilized in the \(0.3-10.0\) keV energy range, whereas the NuSTAR spectra are fitted in the \(3.0-20.0\) keV energy range, beyond which background significantly dominates the spectra. First, we plot the unfolded spectra for all epochs, shown in figure 1. It is visually apparent that all broadband spectra from these four epochs possess similar features. We then implement an absorbed powerlaw model and find that in all observations, there is a broad feature in the residual around \(\sim 0.9\) keV. However, the strength of the feature varies depending on the signal-to-noise ratio (S/N) of the data. 
We include a gaussian model component to justify this soft broad feature following the method by Earnshaw et al. 2019. The cross-calibration constant for MOS2 is fixed to 1 and left free to vary for other detectors. The broadband data indicate the typical powerlaw break in the spectra like other ULXs. In figure 2, we show the residual for an absorbed powerlaw fit for the XN1 epoch. The residual indicates an apparent broad feature around \(\sim 0.9\) keV and a high energy spectral cutoff. Hence, we fit the spectra for all epochs with an additional multiplicative component highecut on top of the powerlaw continuum. Therefore, an absorbed gaussian+highecut*powerlaw model gives a good fit in all cases. We find congruent parameter ranges throughout all observations of 2020-2021, implying that the source does not significantly vary in spectral nature during these observations. However, it is important to note that some parameters are not properly constrained for the XN2 epoch due to the unavailability of pn data but remain consistent within the 90% confidence range of the parameters from other epochs. Hence, we simultaneously fit the spectra for all 2020-2021 epochs with the same model with parameters for all epochs linked. While simultaneously fitting all these spectra with an absorbed powerlaw, we get a \(\chi^{2}/\mathrm{d.o.f}\simeq 858/653\). When we add a gaussian component, the improved fit statistics is \(\chi^{2}/\mathrm{d.o.f}\simeq 705/650\). Nevertheless, an additional highecut component improved the statistics further. However, we find that with the addition of highecut component, the neutral absorption (N\({}_{\mathrm{H}}\)) gives a best-fit value close to the Galactic absorption value \begin{table} \begin{tabular}{c c c c c} \hline \hline Serial No. & Observation ID & Observation ID & Observation start date & Epoch ID & Spectral Exposure time (ks) \\ & XMM-Newton & NuSTAR & & & pn/MOS1/MOS2/FPMA/FPMB \\ \hline 1 & 0870830101 & 50601001002 & 2020-07-08 & XN1 & 12.5/16/16/100/99 \\ 2 & 0870830201 & 50601001004 & 2020-12-13 & XN2 & -/16/16/94/91 \\ 3 & 0870830301 & 50601001006 & 2021-04-02 & XN3 & 8.5/14/14/83/84 \\ 4 & 0870830401 & 50601001008 & 2021-05-25 & XN4 & 12/-/16/88/89 \\ \hline \end{tabular} Note. – pn data are not used for observation 0870830201 and MOS1 data are not available for observation 0870830401. \end{table} Table 1: Observation log of NGC 6946 X-1 Figure 1: Unfolded spectra using powerlaw model of 0 index and arbitrary normalization. For clarity, only MOS2 and FPMA spectra are shown for all four epochs. Visually, it is apparent that all four epochs exhibit overlapping spectral features. Data have been rebinned for visual purposes. \(\rm N_{H}\sim 0.22\times 10^{22}\) cm\({}^{-2}\)(HI4PI Collaboration et al., 2016). This is apparently coming from the fact that NGC 6946 is a face-on galaxy and the local absorption is less compared to the sensitivity of the data; hence, we fix the \(\rm N_{H}\) to the Galactic value throughout the analysis. We find that the cutoff and folding energies are around \(\rm E_{cut}\sim 6.34^{+0.72}_{-0.63}\) keV and \(\rm E_{fold}\sim 4.06^{+1.23}_{-1.06}\) keV with a powerlaw index of \(\Gamma\sim 2.35^{+0.03}_{-0.04}\), portraying it as a soft source. The \(\chi^{2}\)/d.o.f is 556/649 for this simultaneous fit. Finally, we study this source by simultaneously fitting all these four epochs spectra for different spectral models by linking the spectral parameters. 
This helps constrain the individual parameters with much better precision and, in turn, helps constrain the physical parameters we estimate from the analysis. In this paper, we mainly focus on the continuum spectral fitting of 2020-2021 broadband X-ray observations and undertake a simpler approach to fit the \(\sim 0.9\) keV feature by a Gaussian model (see discussion on such a similar feature in other ULXs, e.g.,Middleton et al., 2014, 2015; Ghosh et al., 2022). We investigate the continuum fitting with different models relevant to the physical scenarios expected in ULX systems. A single component diskpbb model does not provide a good fit (\(\chi^{2}/d.o.f\simeq 1340/654\)). There are significant residuals in the soft energy range where typically soft disk blackbody component and the \(\sim 0.9\) keV broad hump-like feature dominate the spectra. Hence, we utilize the well-explored two-component thermal disk models like diskbb+diskpbb to fit the continuum. A Gaussian and this two-component thermal disk continuum provide a statistically acceptable fit (see Table 2). When we fit the tbabs*(gaussian+diskbb+diskpbb) model, we get the cool disk temperature around \(\sim 0.22\) keV and the hot disk temperature around \(\sim 2\) keV. The radial dependence of temperature parameter 'p' in diskpbb component converges to the hard limit of \(0.5\), i.e., the slim-disk limit. Hence, we fix this parameter to \(0.5\). We find that no additional hard component is required for the \(0.3-20.0\) keV spectra in any epoch. The model parameters and flux are described in Table 2. The spectra, models, and residuals are shown in figure 3. The light curves of NGC 6946 X-1 show no significant short-term variability. We utilize the HENDRICS (Bachetti, 2018; Huppenkothen et al., 2019) tool HENacclesearch to search for pulsation in the frequency range of \(0.01-6.8\) Hz in XMM-Newton data and \(0.01-10.0\) Hz in NuSTAR data. However, there is no significant detection of pulsation in any of the observations. ## 4 Discussions and Conclusions We discuss the broadband spectral properties of a soft ULX source NGC 6946 X-1 from the 2020-2021 observations. Previous studies (e.g., Pinto et al., 2016; Kosec et al., 2021) have detected emission lines from the high-resolution grating spectra and confirmed the presence of wind/outflow in the system. Earnshaw et al. 2019 studied its first broadband spectral properties using XMM-Newton, NuSTAR, and Swift data. They found that \begin{table} \begin{tabular}{c c c} \hline \hline Parameter & Unit & Parameter values \\ \hline \(\rm N_{H}\) & \(10^{22}cm^{-2}\) & \(0.22\)(fixed) \\ \(\rm E_{line}\) & keV & \(0.92\pm 0.03\) \\ \(\rm\sigma_{line}\) & keV & \(0.13\pm 0.03\) \\ Norm & \(10^{-5}\) & \(4.84^{+1.78}_{-1.20}\) \\ \(\rm T_{thin}\) & keV & \(0.22^{+0.02}_{-0.03}\) \\ \(\rm Norm_{thin}\) & & \(8.48^{+7.22}_{-2.97}\) \\ \(\rm T_{slim}\) & keV & \(2.03^{+0.13}_{-0.11}\) \\ p & & \(0.5\) \\ \(\rm Norm_{slim}\) & \(10^{-4}\) & \(4.45^{+1.31}_{-1.05}\) \\ \(\chi^{2}\)/d.o.f & & \(555/649\) \\ \(\rm F_{diskbb}^{bol}\) & \(10^{-13}\) erg cm\({}^{-2}\) s\({}^{-1}\) & \(4.17^{+0.61}_{-0.64}\) \\ \(\rm F_{diskpbb}^{bol}\) & \(10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) & \(2.12\pm 0.12\) \\ \hline \end{tabular} \end{table} Table 2: Parameters for the fitted tbabs*(gaussian+diskbb+diskpbb) model of NGC 6946 X-1 broadband spectra. The bolometric flux of the disk components is measured in the \(0.01-100.0\) keV energy range. 
X-1 is a persistent ULX as its flux remained consistent with previous estimates (e.g., Middleton et al., 2015). Interestingly, our analysis finds that even in the 2020-2021 observations, X-1 shows a flux F\({}_{\rm x}\simeq(8.0\pm 0.4)\times 10^{-13}\) erg s\({}^{-1}\) cm\({}^{-2}\) in the \(0.3-10.0\) keV energy range, which is close to the previous findings. Hence, we discern that this source is indeed a persistent and steady ULX.

Figure 2: The residual from the absorbed powerlaw fit of the XN1 broadband data shows the presence of the \(\sim 0.9\) keV broad hump feature and a high energy spectral turnover. Data have been rebinned for visual purposes.

One crucial comparison with the analysis of Earnshaw et al. 2019 for the 2017 XMM-Newton+NuSTAR observation is that the measurement of N\({}_{\rm H}\) is slightly higher in that observation. Even though our choice of Galactic absorption is higher than that used in Earnshaw et al. 2019, we find that the data can constrain an additional local absorption on top of the Galactic absorption in that dataset. Another important result is that the simultaneous fitting of these recent 2020-2021 data properly constrains the presence of spectral curvature in this ULX, which is an important feature of super-Eddington accretion in ULXs. We further discuss the implications of the spectral fittings and their relevance to the physical scenario in the ULX. Notably, these models are phenomenological, and any degeneracy between different model combinations can have a considerable impact on the observational implications in the source spectra. With this caution, we primarily discuss the broadband spectral features and results from the thermal disk continuum models. The \(\sim 0.9\) keV broad emission line feature, studied for several ULXs, has been simply treated here with a Gaussian model. We focus on the continuum properties and discuss some physical aspects related to that. ### Outflow scenario in super-critical accretion There are ULX sources, e.g., NGC 247 ULX1, NGC 55 ULX, NGC 4395 ULX1, NGC 1313 X-1, NGC 5408 ULX1, NGC 300 X-1, Ho IX X-1, Ho II X-1, and NGC 5408 X-1 (Middleton et al., 2015; Pinto et al., 2016, 2017; Earnshaw and Roberts, 2017; Pinto et al., 2021; Ghosh et al., 2022) where the strong \(\sim 0.9\) keV feature is detected. In fact, most of the bright ULXs where strong \(\sim 0.9\) keV line emission/absorption features have been discovered are found to have predominantly softer spectra (Kosec et al., 2021). A similar \(\sim 0.9\) keV feature is seen in NGC 6946 X-1 (see also Middleton et al., 2015; Pinto et al., 2016; Kosec et al., 2021). This feature is assumed to be related to the wind or outflow in the super-Eddington accretion scenario (Takeuchi et al., 2013). Soft sources mostly show these lines because the line-of-sight inclination is close to the disk plane, and we observe the inner hot photons only after they are down-scattered by the wind, so that they eventually appear to us as soft photons. Also, due to such inclination, the inner disk is occulted by the wind clouds, and we receive a higher fraction of line emission/absorption from the winds. The strength and energy of the lines often depend on the wind velocity and the direction of its motion. In NGC 6946 X-1, we see a moderate fraction of both soft and hard photons. Generally, the hard sources are understood to be close to face-on systems, whereas the ultra-soft sources are understood to be viewed close to edge-on. 
Hence, the accretion disk in NGC 6946 X-1 can be interpreted as moderately inclined towards the line of sight (see also Pinto et al., 2017). As discussed earlier, it is widely accepted that the cool disk component in ULXs comes from the optically thick wind due to super-Eddington accretion. We obtain that the bolometric luminosity of the cool disk component is L\({}_{\rm bol}=4\pi\)D\({}^{2}\)F\({}_{\rm bol}\simeq 3\times 10^{39}\) erg s\({}^{-1}\), which is marginally above the Eddington luminosity of a \(\sim 10\) M\({}_{\odot}\) black hole and a super-Eddington luminosity for a neutron star system. Hence, we can interpret that the soft spectral components, like the cool accretion disk and the broad \(\sim 0.9\) keV feature, are related to the emission from the optically thick wind due to accretion close to or above the Eddington accretion rate. The hard spectral component, described by a hot diskpbb model, can originate from the inner accretion flow for a black hole or neutron star ULX system. We discuss these two scenarios in detail.

Figure 3: The spectra, model components, and residuals are shown for the simultaneous fit of all epochs with the tbabs*(gaussian+diskbb+diskpbb) model. Data have been rebinned for visual purposes.

### Black hole model The bolometric luminosity of the hot diskpbb component is around L\({}_{\rm bol}=4\pi\)D\({}^{2}\)F\({}_{\rm bol}\simeq 1.5\times 10^{40}\) erg s\({}^{-1}\). Such a high luminosity is expected to be generated via super-Eddington accretion, further justifying that the temperature profile of the inner region of the disk diverges from that of a standard thin disk and takes the form T(r) \(\propto\) r\({}^{-0.5}\). Here it is important to note that, typically, the L\({}_{\rm bol}=4\pi\)D\({}^{2}\)F\({}_{\rm bol}\) relation is justified for a spherical emitter. However, for accretion disks, it is shown that L\({}_{\rm bol}=(2\pi\)D\({}^{2}/\cos\theta)\) F\({}_{\rm bol}\) is appropriate (Fukue, 2000; Urquhart & Soria, 2016), and the estimated luminosity will depend on the disk inclination angle. Nevertheless, it is common practice to estimate L\({}_{\rm bol}\) from \(4\pi\)D\({}^{2}\)F\({}_{\rm bol}\), which is equivalent to the case of an accretion disk at \(60^{\circ}\) inclination. For super-critical disks, the self-occultation of the disk due to its geometrical thickness at a high inclination angle, together with self-irradiation, further modifies this simple flux-luminosity relation (Fukue, 2000). For simplicity, however, we consider the L\({}_{\rm bol}=(2\pi\)D\({}^{2}/\cos\theta)\) F\({}_{\rm bol}\) relation in our work. We can quantify some physical parameters from the spectral fitting. We calculate the inner radius of the disk from the best-fit normalization of the hot disk component (\(\sim 4.45\times 10^{-4}\)). Here we assume a constant radius of the accretion disk. The inner disk radius is given by R\({}_{\rm in}=\xi\kappa^{2}\)N\({}^{0.5}\)D\({}_{10}(\cos\theta)^{-0.5}\) km, where N is the normalization, \(\theta\) is the disk inclination, D\({}_{10}\) is the distance to the source in units of 10 kpc, \(\xi\) is the geometric correction factor, and \(\kappa\) is the color correction factor (Kubota et al., 1998; Soria et al., 2015). 
The inner radius and the mass of the black hole are related by R\({}_{\rm in}=6\alpha\frac{\rm GM}{c^{2}}\) for a Keplerian orbit, where \(\alpha\) is a function of the spin parameter and takes the value of 1 for a non-rotating black hole, or \(\sim 0.21\) for an extremely rotating (prograde) Kerr black hole with spin parameter \(a^{*}\sim 0.998\) (Bardeen et al., 1972; Thorne, 1974). For the inner hot diskpbb component, using the \(\xi\) and \(\kappa\) factors as 0.353 and 3 (Vierdayanti et al., 2008; Soria et al., 2015), respectively, we get an inner radius of \(\sim 50(\cos\theta)^{-0.5}\) km. This would correspond to a \(\sim 6\) M\({}_{\odot}\) black hole for a non-rotating and face-on system. The disk inclination angle will influence the mass estimate. However, it is generally understood that ultra-soft ULX sources are high-inclination (close to edge-on) systems. On the contrary, the ULXs where a hot inner disk component is visible are mostly low-inclination systems (Gu et al., 2016). Thus, for a realistic disk inclination \(<60^{\circ}\) for NGC 6946 X-1, the mass would be \(<10\) M\({}_{\odot}\) for a non-rotating black hole. In figure 4 (left), we show the dependency of the mass estimate on the disk inclination for different black hole spins. Caution is necessary because this mass estimate depends on the assumption of a Keplerian orbit of constant radius, which might be different in reality depending on the geometry of the disk. ### Neutron star model In a neutron star system, the inner hot disk can be interpreted to be truncated by a high magnetic field at the magnetospheric radius (R\({}_{\rm M}\)). Typically, R\({}_{\rm M}=7\times 10^{7}\Lambda\)m\({}^{1/7}\)R\({}_{6}^{10/7}\)B\({}_{12}^{4/7}\)L\({}_{39}^{-2/7}\) cm, where m = M/M\({}_{\odot}\) is the neutron star mass in solar mass units, B\({}_{12}\) = B/10\({}^{12}\) G, L\({}_{39}\) = L/10\({}^{39}\) erg s\({}^{-1}\), R\({}_{6}\) = R/10\({}^{6}\) cm, and \(\Lambda\sim 0.5\) for disk accretion (Mushtukov et al., 2017). This assumes a dipolar structure of the magnetic field lines around the neutron star. If the hot inner region of the disk comes from the diskpbb component, then we can estimate the typical magnetic field strength by equating the inner radius with R\({}_{\rm M}\). We find that for a 1.4 M\({}_{\odot}\) neutron star with a radius of \(10^{6}\) cm, the magnetic field is B \(\sim 2\times 10^{11}\) G for a \(60^{\circ}\) inclined disk with a bolometric inner disk luminosity of \(\sim 1.5\times 10^{40}\) erg s\({}^{-1}\). We plot the estimated magnetic field strength against the corresponding disk inclination angle in figure 4 (right). In the plot, the bolometric luminosity depends on the angle as \(\sim 1.5\times 10^{40}\) erg s\({}^{-1}/(2\cos\theta)\). Since X-1 is not expected to be an extremely high inclination system, this simple angle dependence of the luminosity is viable. In the realistic case of an inclination \(<60^{\circ}\), the strength of the field is \(B\lesssim 2\times 10^{11}\) G. Hence, if NGC 6946 X-1 is a neutron star system, then it probably hosts a weakly magnetized neutron star core. Several studies have explored different spectral models to justify the neutron star scenario in ULX systems. One characteristic approach is to study the powerlaw model with a high-energy exponential cutoff. Pintore et al. 2017 investigated the pulsator-like spectra in ULXs by characterizing them with a highecut*powerlaw model. NGC 6946 X-1 was studied in that sample using archival XMM-Newton data only. 
Along with an exponentially cutoff powerlaw continuum, a soft blackbody excess and the \(\sim 1\) keV feature were detected. However, we find that the latest broadband data is sufficiently well fitted with a gaussian+highecut*powerlaw model in our analysis. Nevertheless, we study a crucial comparison with the pulsator-like spectral model and estimate the hardness and softness ratio as defined in Pintore et al. 2017. We estimate the total unabsorbed flux in \(6.0-30.0\) keV, \(4.0-6.0\) keV, and \(2.0-4.0\) keV bands. We find that hardness \(\frac{\rm F_{x}(6.0-30.0)}{F_{x}(4.0-6.0)}\sim 1\) and softness \(\frac{\rm F_{x}(2.0-4.0)}{F_{x}(4.0-6.0)}\sim 2\). This result remains similar to the finding in Pintore et al. 2017. Typically neutron star systems are expected to have more hardness and lesser softness values. This study can indicate that if X-1 is a neutron star system, then it is weakly magnetized, consistent with our estimates, thus making the source less hard. Again, the inclination angle of the disk and the presence of optically thick wind also play a role in determining the hardness of the source. ### Accretion onto X-1 The spectral properties of NGC 6946 X-1 provide evidence of super-critical accretion onto a low massive black hole or a weakly magnetized neutron star. Also, an apparent signature of the optically thick wind is expected in such a super-critical accretion scenario. This helps us constrain some physical accretion parameters from a realistic point of view. For a super-critical accretion disk, the total accretion luminosity can be related to the Eddington factor ( \(\dot{\rm m}_{0}=\dot{\rm M}_{0}/\dot{\rm M}_{\rm Edd}\)) by (Shakura & Sunyaev, 1973), \[{\rm L}\simeq{\rm L}_{\rm Edd}[1+\dot{\rm m}_{0}] \tag{1}\] In our estimates, we make some assumptions. The total luminosity we assume from the bolometric unabsorbed luminosity from the inner diskpbb component. To estimate the luminosity, we have assumed a disk inclination angle of \(60^{\circ}\), so that \({\rm L}_{\rm bol}=4\pi{\rm D}^{2}{\rm F}_{\rm bol}=1.5\times 10^{40}\) erg s\({}^{-1}\). However, the important factor which constrains the \(\dot{\rm m}_{0}\) comes from the beaming (King, 2009; King & Lasota, 2016). An approximate beaming relation gives \({\rm b}\simeq 73/\dot{\rm m}_{0}^{2}\) and the Eddington luminosity is given by \({\rm L}_{\rm Edd}=1.5\times 10^{38}{\rm m}_{1}\) erg s\({}^{-1}\)(Poutanen et al., 2007). Then the relation becomes, \[\frac{{\rm m}_{1}}{{\rm L}_{40}}\simeq\frac{4900}{\dot{\rm m}_{0}^{2}(1+\dot{ \rm m}_{0})}, \tag{2}\] where the accretor mass \({\rm m}_{1}={\rm M}/{\rm M}_{\odot}\) and the luminosity determine the accretion rate. The spherization radius \({\rm R}_{\rm sph}\) for a super-Eddington disk is determined by the accretion rate by (Shakura & Sunyaev, 1973; Begelman et al., 2006; King, 2009), \[{\rm R}_{\rm sph}\simeq\frac{27}{4}\dot{\rm m}_{0}\frac{2{\rm GM}}{{\rm c}^{2}} \tag{3}\] For the bolometric inner disk luminosity of \(1.5\times 10^{40}\) erg s\({}^{-1}\), if we assume a neutron star of 1.4 \({\rm M}_{\odot}\), then \(\dot{\rm m}_{0}\simeq 34\) or in the case of a \(\sim 10\)\({\rm M}_{\odot}\)black hole, \(\dot{\rm m}_{0}\simeq 14\). Eddington accretion rate (\(\dot{\rm M}_{\rm Edd}\simeq 2\times 10^{18}{\rm m}_{1}\) g s\({}^{-1}\)) of a neutron star is \(\sim 4.4\times 10^{-8}\)\({\rm M}_{\odot}{\rm yr}^{-1}\). This gives the accretion rate of X-1 to be \(\dot{\rm M}_{0}=1.5\times 10^{-6}\)\({\rm M}_{\odot}{\rm yr}\)\({}^{-1}\), if it is a neutron star. 
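These order-of-magnitude estimates, together with the black hole numbers summarized next, can be reproduced with a few lines of Python. The sketch below is not part of the published analysis: it takes the Eddington factors quoted above (\(\dot{\rm m}_{0}\simeq 34\) and \(14\)), the diskpbb normalization from Table 2, and an assumed distance of \(\approx 7.7\) Mpc to NGC 6946 (a value chosen here only to be consistent with the fluxes and luminosities quoted earlier), and recovers the inner-radius, mass, accretion-rate, and spherization-radius estimates.

```python
import numpy as np

# Physical constants (cgs) and unit conversions
G, C, M_SUN = 6.674e-8, 2.998e10, 1.989e33
YEAR, PC = 3.156e7, 3.086e18                     # seconds per year, cm per parsec

# Inputs taken from the text; the distance is an assumption (~7.7 Mpc)
D_CM = 7.7e6 * PC                                # distance to NGC 6946 in cm
N_SLIM = 4.45e-4                                 # diskpbb normalization (Table 2)
XI, KAPPA = 0.353, 3.0                           # geometric and color correction factors
CASES = {"neutron star (1.4 Msun)": (1.4, 34.0), # (mass in Msun, Eddington factor mdot_0)
         "black hole (10 Msun)":    (10.0, 14.0)}

# Inner disk radius from the diskpbb normalization (face-on) and the implied mass
d10 = D_CM / (1.0e4 * PC)                        # distance in units of 10 kpc
r_in_km = XI * KAPPA**2 * np.sqrt(N_SLIM) * d10  # ~50 km, as quoted in the text
m_bh = r_in_km * 1e5 * C**2 / (6.0 * G * M_SUN)  # non-rotating (alpha = 1), face-on
print(f"R_in(face-on) ~ {r_in_km:.0f} km  ->  M ~ {m_bh:.1f} Msun")

# Eddington accretion rate, inferred accretion rate and spherization radius
for label, (m1, mdot0) in CASES.items():
    mdot_edd = 2e18 * m1 * YEAR / M_SUN          # Mdot_Edd ~ 2e18 m g/s, converted to Msun/yr
    r_sph = (27.0 / 4.0) * mdot0 * 2.0 * G * m1 * M_SUN / C**2
    print(f"{label}: Mdot_Edd ~ {mdot_edd:.1e} Msun/yr, "
          f"Mdot_0 ~ {mdot0 * mdot_edd:.1e} Msun/yr, R_sph ~ {r_sph:.1e} cm")
```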
On the other hand, if it is a 10 \({\rm M}_{\odot}\) black hole, then \(\dot{\rm M}_{\rm Edd}\sim 3.2\times 10^{-7}\) \({\rm M}_{\odot}\,{\rm yr}^{-1}\) and \(\dot{\rm M}_{0}\) for X-1 is \(\sim 4.5\times 10^{-6}\) \({\rm M}_{\odot}\,{\rm yr}^{-1}\). The spherization radius \({\rm R}_{\rm sph}\) is \(\sim 9.5\times 10^{7}\) cm for a neutron star system, and \(\sim 3\times 10^{8}\) cm for a 10 \({\rm M}_{\odot}\) black hole. Now, as we have assumed a disk inclination of \(60^{\circ}\), the inner radius is \({\rm R}_{\rm in}\sim 7\times 10^{7}\) cm. If we assume \({\rm R}_{\rm M}\sim{\rm R}_{\rm in}\), then the magnetospheric radius is close to the \({\rm R}_{\rm sph}\) of the neutron star estimate, i.e., \({\rm R}_{\rm M}\lesssim{\rm R}_{\rm sph}\). However, the non-detection of pulsation in this source could mean that \({\rm R}_{\rm M}\) is much smaller than the estimated \({\rm R}_{\rm in}\), and thus the pulsation is too diluted to be detected (Walton et al., 2018). In that case, the estimated magnetic field in Section 4.3 would be even weaker. In the case of a 10 \({\rm M}_{\odot}\) black hole, \({\rm R}_{\rm in}\), which may be comparable to the inner stable circular orbit radius (\({\rm R}_{\rm ISCO}\)), is much smaller than \({\rm R}_{\rm sph}\). To summarize, NGC 6946 X-1 is found to be a persistent soft ultraluminous X-ray source. The detection of spectral curvature and the high bolometric luminosity of the hot slim accretion disk component favor super-Eddington accretion onto a stellar-mass compact object. If the host is a non-rotating black hole, the mass would be \(<10\) \({\rm M}_{\odot}\), or the ULX can host a weakly magnetized neutron star. The estimates of physical length scales are consistent with a geometry where the disk height is extended down to the spherization radius and gets truncated at the inner radius, either at the inner stable circular orbit of a black hole or at the magnetospheric radius of a neutron star.

Figure 4: Left: Variation of the estimated mass of the black hole as a function of disk inclination angle for three different spins of the black hole. The non-rotating black hole scenario of \(a^{*}=0\), a moderate spin of \(a^{*}=0.5\), and a maximally rotating case of \(a^{*}=0.998\) are shown in the figure. Right: Dependency of the estimated magnetic field strength for a neutron star system on the disk inclination angle.

The scientific results of this article have used archival data (available at the High Energy Astrophysics Science Archive Research Center (HEASARC)) obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA member states and NASA. This research has also utilized archival data (HEASARC) obtained with NuSTAR, a project led by Caltech, funded by NASA, and managed by the NASA Jet Propulsion Laboratory (JPL), and has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Space Science Data Centre (SSDC, Italy) and the California Institute of Technology (Caltech, USA).
2302.10836
nlive: an R Package to facilitate the application of the sigmoidal and random changepoint mixed models
Background: The use of mixed effect models with a specific functional form such as the Sigmoidal Mixed Model and the Piecewise Mixed Model (or Changepoint Mixed Model) with abrupt or smooth random change allows the interpretation of the defined parameters to understand longitudinal trajectories. Currently, there are no interface R packages that can easily fit the Sigmoidal Mixed Model allowing the inclusion of covariates or incorporating recent developments to fit the Piecewise Mixed Model with random change. Results: To facilitate the modeling of the Sigmoidal Mixed Model, and Piecewise Mixed Model with abrupt or smooth random change, we have created an R package called nlive. All needed pieces such as functions, covariance matrices, and initials generation were programmed. The package was implemented with recent developments such as the polynomial smooth transition of the piecewise mixed model with improved properties over Bacon-Watts, and the stochastic approximation expectation-maximization (SAEM) for efficient estimation. It was designed to help interpretation of the output by providing features such as annotated output, warnings, and graphs. Functionality, including time and convergence, was tested using simulations. We provided a data example to illustrate the package use and output features and interpretation. The package implemented in the R software is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=nlive. Conclusions: The nlive package for R fits the Sigmoidal Mixed Model and the Piecewise Mixed: abrupt and smooth. The nlive allows fitting these models with only five mandatory arguments that are intuitive enough to the less sophisticated users.
Ana W Capuano, Maude Wagner
2023-02-21T17:35:12Z
http://arxiv.org/abs/2302.10836v2
nlive: an R Package to facilitate the application of the sigmoidal and random changepoint mixed models ###### Abstract. The use of mixed effect models with a specific functional form such as the Sigmoidal Mixed Model and the Piecewise Mixed Model (or Changepoint Mixed Model) with abrupt or smooth random change allows the interpretation of the defined parameters to understand longitudinal trajectories. Currently, there are no interface R packages that can easily fit the Sigmoidal Mixed Model allowing the inclusion of covariates, or that incorporate recent developments to fit the Piecewise Mixed Model with random change. To facilitate the modeling of the Sigmoidal Mixed Model, and the Piecewise Mixed Model with abrupt or smooth random change, we have created an R package called nlive. All needed pieces such as functions, covariance matrices, and initials generation were programmed. The package was implemented with recent developments such as the polynomial smooth transition of the piecewise mixed model with improved properties over Bacon-Watts, and the stochastic approximation expectation-maximization (SAEM) for efficient estimation. It was designed to help interpretation of the output by providing features such as annotated output, warnings, and graphs. Functionality, including time and convergence, was tested using simulations. We provided a data example to illustrate the package use, output features, and interpretation. The package implemented in the R software is available from the Comprehensive R Archive Network (CRAN) at [https://CRAN.R-project.org/package=nlive](https://CRAN.R-project.org/package=nlive). The nlive package for R fits the Sigmoidal Mixed Model and the Piecewise Mixed Model, both abrupt and smooth. nlive allows fitting these models with only five mandatory arguments that are intuitive enough for less sophisticated users. ## 1. Background Continuous longitudinal data may have a trajectory that is not linear. This is the case in the study of cognitive aging, which presents a faster decline close to death, and of processes in many other fields such as agriculture [1], pharmacology [2] and marketing [3]. Although some less parsimonious models have been proposed to model such longitudinal data, the use of models with a specific functional form such as the Sigmoidal Mixed Model (SMM) [4] and the Piecewise Mixed Model (PMM) [5] with abrupt or smooth change allows the interpretation of the defined parameters. The SMM is currently implemented in SAS using PROC NLMIXED [4], which maximizes the marginal
2306.13610
Quotients, pure existential completions and arithmetic universes
We provide a new description of Joyal's arithmetic universes through a characterization of the exact and regular completions of pure existential completions. We show that the regular and exact completions of the pure existential completion of an elementary doctrine $P$ are equivalent to the $\mathsf{reg}/\mathsf{lex}$ and $\mathsf{ex}/\mathsf{lex}$-completions, respectively, of the category of predicates of $P$. This result generalizes a previous one by the first author with F. Pasquali and G. Rosolini about doctrines equipped with Hilbert's $\epsilon$-operators. Thanks to this characterization, each arithmetic universe in the sense of Joyal can be seen as the exact completion of the pure existential completion of the doctrine of predicates of its Skolem theory. In particular, the initial arithmetic universe in the standard category of ZFC-sets turns out to be the completion with exact quotients of the doctrine of recursively enumerable predicates.
Maria Emilia Maietti, Davide Trotta
2023-06-23T16:51:47Z
http://arxiv.org/abs/2306.13610v3
# A characterization of regular and exact completions of pure existential completions ###### Abstract. The notion of existential completion in the context of Lawvere's doctrines was introduced by the second author in his PhD thesis, and it turned out to be a restriction to faithful fibrations of Peter Hofstra's construction used to characterize Dialectica fibrations. The notions of regular and exact completions of elementary and existential doctrines were brought up in recent works by the first author with F. Pasquali and P. Rosolini, inspired by those done by M. Hyland, P. Johnstone and A. Pitts on triposes. Here, we provide a characterization of the regular and exact completions of (pure) existential completions of elementary doctrines by showing that these amount to the reg/lex and ex/lex-completions, respectively, of the category of predicates of their generating elementary doctrines. This characterization generalizes a previous result obtained by the first author with F. Pasquali and P. Rosolini on doctrines equipped with Hilbert's \(\epsilon\)-operators. Relevant examples of applications of our characterization, quite different from those involving doctrines with Hilbert's \(\epsilon\)-operators, include the regular syntactic category of the regular fragments of first-order logic (and its effectivization) as well as the construction of Joyal's Arithmetic Universes. (c) Maria Emilia Maietti and Davide Trotta, 2023. Permission to copy for private use granted. ###### Contents * 1 Introduction * 2 Elementary and existential doctrines * 3 The pure existential completion * 4 Regular and Exact completions of pure existential completions of elementary doctrines * 5 Characterization of regular and exact completions of pure existential completions * 6 Conclusion ## 1. Introduction The process of completing a category with quotients to produce an exact or regular category introduced in [1, 3] has been widely studied in the literature of category theory, with applications both to mathematics and computer science, see [14, 25, 26, 21]. Specifically, the concepts of exact completion of a finite limit category, also known as ex/lex-completion, and that of the exact completion of a regular category, referred to as ex/reg-completion, are among the most studied instances of this process. The pure existential completion of a doctrine, recalled in Section 3, coincides with a restriction to faithful fibrations of the simple coproduct (or sum) completion of a fibration, a construction that played a significant role in the works by P. 
Hofstra [8, 7]. In this paper we focus on the notion of pure existential completion of elementary doctrines. In detail, we show that for an elementary existential doctrine \(P\) and an elementary full subdoctrine \(P^{\prime}\) of \(P\) on the same base category, the regular completion \(\mathsf{Reg}(P)\) of \(P\) corresponds to the \(\mathsf{reg}/\mathsf{lex}\)-completion \((\mathpzc{Pr}\mathpzc{d}_{P^{\prime}})_{\mathsf{reg}/\mathsf{lex}}\) of the category of predicates \(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}}\) of \(P^{\prime}\) (via an equivalence induced by the canonical embedding of \(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}}\) into \(\mathsf{Reg}(P)\)) if and only if \(P\) is the pure existential completion of \(P^{\prime}\) (Theorem 5.11). Then, by combining this result with the aforementioned decomposition of exact completions, we immediately deduce that for an elementary existential doctrine \(P\) and an elementary full subdoctrine \(P^{\prime}\) on the same base category, the exact completion \(\mathsf{Ex}(P)\) of \(P\) corresponds to the \(\mathsf{ex}/\mathsf{lex}\)-completion \((\mathpzc{Pr}\mathpzc{d}_{P^{\prime}})_{\mathsf{ex}/\mathsf{lex}}\) of the category of predicates \(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}}\) of \(P^{\prime}\) (via an equivalence induced by the canonical embedding of \(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}}\) into \(\mathsf{Ex}(P)\)) if and only if \(P\) is the pure existential completion of \(P^{\prime}\) (Corollary 5.12). Notice that our characterization is a proper generalization of the one presented in [19], as it was proved in [24] that a doctrine is equipped with Hilbert's \(\epsilon\)-operators if and only if it is equivalent to the pure existential completion of itself. The crucial tool we employ to show our main results is given by the intrinsic characterisation of existential completions in terms of _existential free elements_ presented in [24, Thm. 4.16] slightly extended with another equivalent presentation here (Theorem 3.19). This characterisation states that a doctrine \(P\) is an instance of a generalized existential completion with respect to a class \(\Lambda\) if and only if it has _enough existential free elements_ with respect to \(\Lambda\) and existential free elements are _closed under finite conjunctions_. A similar characterization for full existential completions, in terms of \(\exists\)_-prime predicates_ (which play the same role as _full existential free predicates_ in [24]), can be found in the works [6, 5] by J. Frey. In more detail, to fulfil our goals we show that the notion of existential free elements of a doctrine \(P\) is strictly related to that of regular projective objects in the regular completion \(\mathsf{Reg}(P)\). In particular, we prove that for a doctrine \(P\) arising as the pure existential completion of an elementary subdoctrine \(P^{\prime}\), the _comprehensions_ (in the sense of [21]) of the existential free elements of \(P\) in the category of predicates of \(P^{\prime}\) provide a _projective cover_ of the regular completion \(\mathsf{Reg}(P)\). Moreover, the additional requirement that existential free elements are closed under finite conjunctions and \(P^{\prime}\) is elementary enables us to conclude that the projective objects of \(\mathsf{Reg}(P)\) are closed under finite limits, and that every object of \(\mathsf{Reg}(P)\) is a subobject of a regular projective. Our characterization has relevant applications in the completion of doctrines beyond those equipped with Hilbert's \(\epsilon\)-operators. 
Examples include the regular syntactic doctrine of the regular fragments of first-order logic in the sense of [13] and its effectivatization, as well as the construction of Joyal's Arithmetic Universes. Finally, another notable application of our results, which has already been described in [33] employing the main theorem presented here (first presented in [23] with a different proof), regards the exact completion of Godel hyperdoctrines introduced in [33, 32] as an equivalent presentation of the restriction to faithful fibrations of Peter Hofstra's Dialectica fibrations [9]. ## 2. Elementary and existential doctrines The name _doctrine_ followed by some adjectives is often associated to some generalization of the notion of _hyperdoctrine_ introduced by F.W. Lawvere in a series of seminal papers [15, 16]. We recall from _loc. cit._ some definitions which will be useful in the following. The reader can find all the details about the theory of elementary and pure existential doctrine also in [19, 20, 21, 22], and an algebraic analysis of the elementary structure of a doctrine in [4]. In the following we adopt the notation \(fg\) to mean the composition of a morphism \(f:Y\to Z\) with another \(g:X\to Y\) within a category. We indicate with \(\mathsf{Set}\) the category of sets formalizable within the classical axiomatic set theory ZFC. **2.1 Definition**.: [primary doctrine] _A **primary doctrine** is a functor \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) from the opposite of a category \(\mathcal{C}\) with finite products to the category of inf-semilattices._ We will use the notation \(\alpha\wedge\beta\) to denote the binary \(\inf\) of \(\alpha\) and \(\beta\) in \(P(A)\) and \(\top_{A}\) to denote the top element of \(P(A)\). **2.2 Definition**.: [elementary doctrine] _A primary doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) is **elementary** if for every \(A\) in \(\mathcal{C}\) there exists an object \(\delta_{A}\) in \(P(A\times A)\) such that_ 1. _the assignment_ \[\exists_{\langle\mathrm{id}_{A},\mathrm{id}_{A}\rangle}(\alpha):=P_{\pi_{1}}( \alpha)\wedge\delta_{A}\] _for an element_ \(\alpha\) _of_ \(P(A)\) _determines a left adjoint to_ \(P_{\langle\mathrm{id}_{A},\mathrm{id}_{A}\rangle}\colon P(A\times A)\to PA\)_;_ 2. _for every morphism_ \(e\) _of the form_ \(\langle\pi_{1},\pi_{2},\pi_{2}\rangle\colon X\times A\to X\times A\times A\) _in_ \(\mathcal{C}\)_, the assignment_ \[\exists_{e}(\alpha):=P_{\langle\pi_{1},\pi_{2}\rangle}(\alpha)\wedge P_{ \langle\pi_{2},\pi_{2}\rangle}(\delta_{A})\] _for_ \(\alpha\) _in_ \(P(X\times A)\) _determines a left adjoint to_ \(P_{e}\colon P(X\times A\times A)\to P(X\times A)\)_._ **2.3 Example**.: Let \(\mathcal{L}_{=}\) be the \((\top,\wedge,=)\)-fragment of (many sorted) Intuitionistic Logic, called Horn-fragment in [13], and let \(\mathbb{T}\) be a theory in such a fragment. Let us denote by \(\mathcal{V}\) the syntactic category whose objects are contexts (up to \(\alpha\)-equivalence), and arrows are term substitutions. The elementary doctrine \[\mathsf{LT}_{=}^{\mathbb{T}}\colon\mathcal{V}^{\mathrm{op}}\longrightarrow \mathsf{InfSI}\] on \(\mathcal{V}\) is given on a given context \(\Gamma\) by taking \(\mathsf{LT}_{=}^{\mathbb{T}}(\Gamma)\) as the Lindenbaum-Tarski algebra of well-formed formulas of \(\mathcal{L}_{=}\) with free variables in \(\Gamma\). 
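For instance, in the syntactic doctrine \(\mathsf{LT}_{=}^{\mathbb{T}}\) of Example 2.3 the elementary structure of Definition 2.2 is given by the equality formulas (we spell this standard instance out only for illustration): for a context with a single variable \(x\) of sort \(A\), the fibered equality and the left adjoint along the diagonal are \[\delta_{A}=[\,x=x'\,]\qquad\text{and}\qquad\exists_{\langle\mathrm{id}_{A},\mathrm{id}_{A}\rangle}(\alpha)=[\,\alpha(x)\wedge x=x'\,],\] and the adjunction amounts to the familiar equality rules: \(\alpha(x)\wedge x=x'\vdash\beta(x,x')\) holds if and only if \(\alpha(x)\vdash\beta(x,x)\) does.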
**2.4 Definition**.: [pure existential doctrine] _A primary doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) is **pure existential** if, for every object \(A\) and \(B\) in \(\mathcal{C}\) for any product projection \(\pi\colon A\to B\), the functor_ \[P_{\pi}\colon P(B)\to P(A)\] _has a left adjoint \(\exists_{\pi_{i}}\), and these satisfy:_ 1. _[leftmargin=*]_ 2. _Beck-Chevalley condition:_ _for any pullback diagram_ _the canonical arrow_ \[\exists_{\pi^{\prime}}P_{f^{\prime}}(\alpha)\leq P_{f}\exists_{\pi}(\alpha)\] _is an isomorphisms for every element_ \(\alpha\) _of the fibre_ \(P(A)\)_.;_ 3. _Frobenius reciprocity:_ _for any projection_ \(\pi_{A}\colon A\times B\to A\)_, for any object_ \(\alpha\) _in_ \(P(A)\) _and_ \(\beta\) _in_ \(P(X\times A)\)_, the canonical arrow_ \[\exists_{\pi_{A}}(P_{\pi_{A}}(\alpha)\wedge\beta)\leq\alpha\wedge\exists_{\pi _{A}}(\beta)\] _in_ \(P(A)\) _is an isomorphism._ **2.5 Remark**.: _Notice that pure existential doctrines are simply called existential in previous works of both authors, including [31, 21, 22]. In this paper we call them "pure existential" following the notation adopted in the recent work [24], where these kind of doctrines have been considered as a particular case of a more general notion of generalized existential doctrine. In accordance with the notion, we will introduce the pure existential completion in the next section, which is again a particular instance of a more general construction called generalized existential completion and described in [24, 31]._ **2.6 Remark**.: _In a pure existential elementary doctrine, for every map \(f\colon A\to B\) in \(\mathcal{C}\) the functor \(P_{f}\) has a left adjoint \(\exists_{f}\) that can be computed as_ \[\exists_{\pi_{2}}(P_{f\times\mathrm{id}_{B}}(\delta_{B})\wedge P_{\pi_{1}}( \alpha))\] _for \(\alpha\) in \(P(A)\), where \(\pi_{1}\) and \(\pi_{2}\) are the projections from \(A\times B\). However, observe that such a definition guarantees only the validity of the corresponding Frobenius reciprocity condition for \(\exists_{f}\), but it does not guarantee the validity of the Beck-Chevalley condition with respect to pullbacks along \(f\) (see the counterexample in [24, Rem. 6.4]). Primary doctrines, whose base category has finite limits, having left adjoints along every morphisms satisfying BCC and FR are called full existential in [24]._ ### Example Let \(\mathcal{L}_{=,\exists}\) be the \((\top,\wedge,=,\exists)\)-fragment of first-order Intuitionistic Logic (also called regular in [13]), i.e. the fragment with top element, conjunction, equality and pure existential quantifiers, and let \(\mathbb{T}\) be a theory in such a fragment. Then the syntactic doctrine \[\mathsf{LT}_{=,\exists}^{\mathbb{T}}\colon\mathcal{V}^{\mathrm{op}}\longrightarrow \mathsf{InfSI}\] where \(\mathcal{V}\) is the category of contexts and substitutions (as in Example 2.3) and \(\mathsf{LT}_{=,\exists}^{\mathbb{T}}(\Gamma)\) is given by the Lindenbaum-Tarski algebra of well-formed formulas of \(\mathcal{L}_{=,\exists}\) with free variables in \(\Gamma\), is elementary and pure existential. The following examples are discussed in [15, 10]. ### Example [subobjects doctrine] Let \(\mathcal{C}\) be a category with finite limits. 
The functor \[\mathsf{Sub}_{\mathcal{C}}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow \mathsf{InfSI}\] assigns to an object \(A\) in \(\mathcal{C}\) the poset \(\mathsf{Sub}_{\mathcal{C}}(A)\) of subobjects of \(A\) in \(\mathcal{C}\) and, for an arrow \(B\xrightarrow{f}A\) the morphism \(\mathsf{Sub}_{\mathcal{C}}(f)\colon\mathsf{Sub}_{\mathcal{C}}(A)\to\mathsf{ Sub}_{\mathcal{C}}(B)\) is given by pulling a subobject back along \(f\). The fibered equalities are the diagonal arrows. This is a pure existential doctrine and it is elementary if and only if the category \(\mathcal{C}\) is regular. ### Example [weak subobjects doctrine] Consider a category \(\mathcal{D}\) with finite products and weak pullbacks: the doctrine is given by the functor of weak subobjects (or variations) \[\Psi_{\mathcal{D}}\colon\mathcal{D}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\] where \(\Psi_{\mathcal{D}}(A)\) is the poset reflection of the slice category \(\mathcal{D}/A\), whose objects are indicated with \([f]\) for any arrow \(B\xrightarrow{f}A\) in \(\mathcal{D}\), and for an arrow \(B\xrightarrow{f}A\), the homomorphism \(\Psi_{\mathcal{D}}([f])\colon\Psi_{\mathcal{D}}(A)\to\Psi_{\mathcal{D}}(B)\) is given by the equivalence class of a weak pullback of an arrow \(X\xrightarrow{g}A\) with \(f\). This doctrine is a pure existential doctrine, and the pure existential left adjoint are given by the post-composition. The category of primary doctrines \(\mathsf{PD}\) is a 2-category, where: * **0-cell** are primary doctrines; * a **1-cell** is a pair \((F,\mathfrak{b})\) where * \(F\colon\mathcal{C}\to\mathcal{D}\) is a finite product preserving functor; * \(\mathfrak{b}\colon P\to R\circ F\) is a natural transformation. * a **2-cell**\(\theta\colon(F,\mathfrak{b})\to(G,\mathfrak{c})\) is a natural transformation \(\theta\colon F\to G\) such that \[\mathfrak{b}_{A}(\alpha)\leq R_{\theta_{A}}(\mathfrak{c}_{A}(\alpha))\] for every \(\alpha\) of \(P(A)\). We denote by \(\mathsf{ED}\) the 2-full subcategory of \(\mathsf{PD}\) whose objects are pure existential doctrines, and whose 1-cells are those 1-cells of \(\mathsf{PD}\) which preserve the pure existential structure. Similarly, we denote by \(\mathsf{EED}\) the 2-full subcategory of \(\mathsf{PD}\) whose objects are elementary and pure existential doctrines, and whose 1-cells preserve both the pure existential and the elementary structure. We recall from [19] the notion of pure existential doctrine with Hilbert's _\(\epsilon\)-operators_: **2.10 Definition**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) be an elementary pure existential doctrine. An object \(B\) of \(\mathcal{C}\) is equipped with Hilbert's \(\epsilon\)-operator if, for any object \(A\) in \(\mathcal{C}\) and any \(\alpha\) in \(P(A\times B)\) there exists an arrow \(\epsilon_{\alpha}\colon A\to B\) such that_ \[\exists_{\pi_{1}}(\alpha)=P_{\langle\mathrm{id}_{A},\epsilon_{\alpha}\rangle }(\alpha)\] _holds in \(P(A)\), where \(\pi_{1}\colon A\times B\to A\) is the first projection._ **2.11 Definition**.: _We say that an elementary pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) is **equipped with** Hilbert's \(\epsilon\)**-operators** if every object in \(\mathcal{C}\) is equipped with \(\epsilon\)-operator._ We recall from [19, Ex. 5.14] the following example of doctrine equipped with Hilbert's \(\epsilon\)-operators. 
**2.12 Example**.: Let \(\mathsf{Set}_{*}\) be the category of non-empty sets and let \(\xi\) be an ordinal with greatest element, and \(\mathcal{H}:=(\xi,\geq)\) be the frame given by the set \(\xi\) equipped with the reverse order. We consider then the doctrine \[\mathcal{H}^{(-)}\colon\mathsf{Set}_{*}^{\mathrm{op}}\longrightarrow\mathsf{ InfSI}\] that is elementary and pure existential. In particular, we recall that for every \(\alpha\in\mathcal{H}^{A\times B}\), the left adjoint \(\exists_{\pi_{A}}\) is defined as \[\exists_{\pi_{A}}(\alpha)(a)=\bigvee_{b\in B}\alpha(a,b)\] and the equality predicate \(\delta(i,j)\in\mathcal{H}^{A\times A}\) is defined as the top element if \(i=j\), and the bottom otherwise. Moreover, the doctrine \(\mathcal{H}^{(-)}\colon\mathsf{Set}_{*}^{\mathrm{op}}\longrightarrow\mathsf{ InfSI}\) is equipped with \(\epsilon\)-operators. In particular for every element \(\alpha\in\mathcal{H}^{A\times B}\), and for every \(a\in A\) one can consider the (non empty) set \[I_{\alpha}(a)=\{b\in B\mid\alpha(a,b)=\bigvee_{c\in B}\alpha(a,c)\}.\] Then, by the axiom of choice, there exists a function \(\epsilon_{\alpha}\colon A\to B\) such that \(\epsilon_{\alpha}(a)\in I_{\alpha}(a)\). Therefore we have that \[\alpha(a,\epsilon_{\alpha}(a))=\bigvee_{c\in B}\alpha(a,c)=\exists_{\pi_{A}}( \alpha)(a)\] and this proves that \(\mathcal{H}^{(-)}\) is equipped with Hilbert's \(\epsilon\)-operators. ### m-Variational doctrines We start by recalling from [16, 20, 21] the notion of doctrine with (strong) comprehensions. This notion connects an abstract elementary doctrine with that of the subobjects of the base when this has finite limits and provides an abstract algebraic counterpart of the set-theoretic "comprehension axiom". **Definition** **.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) be a primary doctrine and \(\alpha\) be an object of \(P(A)\). A **weak comprehension** of \(\alpha\) is an arrow \(\{\!\!\{\alpha\}\!\}\colon X\to A\) such that \(P_{\{\!\!\{\alpha\}\!\}}(\alpha)=\top_{X}\) and, for every \(f\colon Z\to A\) such that \(P_{f}(\alpha)=\top_{Z}\), there exists a map \(g\colon Z\to X\) such that \(f=\{\!\!\{\alpha\}\!\}\circ g\). If the arrow \(g\) is also the unique arrow such that \(f=\{\!\!\{\alpha\}\!\}\circ g\), we say that \(\{\!\!\{\alpha\}\!\}\colon X\to A\) is a **(strong) comprehension**._ We say that \(P\)**has comprehensions** if every \(\alpha\) has a comprehension, and that \(P\)**has full comprehensions** if, moreover, \(\alpha\leq\beta\) in \(P(A)\) whenever \(\{\!\!\{\alpha\}\!\}\) factors through \(\{\!\!\{\beta\}\!\}\). **Notation:** given an \(\alpha\in P(A)\), we will denote by \(A_{\alpha}\) the domain of the comprehension \(\{\!\!\{\alpha\}\!\}\). We introduce a class of doctrines whose fibre equality turns out to be equivalent to the morphism equality of their base morphisms (see Proposition 2.2 in [19] and the original notion called "comprehensive equalizers" in [21]). 
**Definition**.: _An elementary doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) has **comprehensive diagonals** if the arrow \(\Delta_{A}:=\langle\mathrm{id}_{A},\mathrm{id}_{A}\rangle\colon A\to A\times A\) is the comprehension of the element \(\delta_{A}\in P(A)\) for every object \(A\)._ Recall from [19] that: **Definition**.: _An elementary doctrine is called **m-variational** if it has full comprehensions and comprehensive diagonals._ We summarize some useful properties and results about pure existential m-variational doctrines, and we refer to [19, 24] for all the details. **Lemma**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) be a pure existential m-variational doctrine. Then_ 1. _the category_ \(\mathcal{C}\) _has finite limits;_ 2. _every left adjoint_ \(\exists_{f}\)_, defined as_ \(\exists_{f}(\alpha):=\exists_{\pi_{2}}(P_{f\times\mathrm{id}_{B}}(\delta_{B})\wedge P_{\pi_{1}}(\alpha))\)_, satisfies the Beck-Chevalley condition;_ 3. _comprehensions are monic and stable under pullbacks;_ 4. _an arrow_ \(f\colon A\to B\) _is monic if and only if_ \(P_{f\times f}(\delta_{B})=\delta_{A}\)_;_ 5. _an element_ \(\phi\in P(A\times B)\) _is functional, i.e. it satisfies_ \[P_{\langle\pi_{1},\pi_{2}\rangle}(\phi)\wedge P_{\langle\pi_{1},\pi_{3}\rangle}(\phi)\leq P_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})\] _in_ \(P(A\times B\times B)\)_, if and only if the arrow_ \(\pi_{A}\{\!\!\{\phi\}\!\}\) _is monic, where_ \(\pi_{A}\colon A\times B\to A\) _is the first projection and_ \(\{\!\!\{\phi\}\!\}\) _is the comprehension of_ \(\phi\)_._ ### The internal language of an elementary pure existential doctrine It is known (see for example [27]) that given an elementary and pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) we can define a many-sorted \((\top,\wedge,=,\exists)\)-language \(\mathcal{L}_{P}\) and a \(\mathcal{L}_{P}\)-theory \(\mathbb{T}_{P}\) called the _internal language of_ \(P\). The language \(\mathcal{L}_{P}\) has a sort \(A\) for every object \(A\) of the base category \(\mathcal{C}\), an \(n\)-ary function symbol \(f\colon A_{1},\ldots,A_{n}\to A\) for every morphism \(f\colon A_{1}\times\cdots\times A_{n}\to A\) of \(\mathcal{C}\) and an \(n\)-ary relation symbol \(R:A_{1},\ldots,A_{n}\) for each element of \(P(A_{1}\times\cdots\times A_{n})\). 
The terms and the formulas of \(\mathcal{L}_{P}\) are the ones inductively generated as usual by applying to this signature the first-order symbols admitted by the \((\top,\wedge,=,\exists)\)-fragment of first-order logic. The \(\mathcal{L}_{P}\)-theory \(\mathbb{T}_{P}\) consists of sequents \(\phi\vdash\psi\) (in some context \(A\)) such that \(\phi\leq\psi\) in \(P(A)\), and of term-equalities \(f=g\) (where \(f\) and \(g\) are terms of the same sort) if \(g\) and \(f\) are equal arrows of the category \(\mathcal{C}\). Notice that here the theory \(\mathbb{T}_{P}\) is considered in the sense of [27] because, to obtain a tight correspondence between the internal language and the doctrine, we have to consider also term-equations as axioms of the theory. This notion is then different to that used in [13], where a theory is defined just by a set of sequents. The two notions happen to coincide when the doctrine \(P\) has comprehensive diagonals because, in this case, the equality is extensional. **Theorem**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) be an elementary pure existential doctrine. Then \(P\) is equivalent to the syntactic doctrine \(\mathsf{LT}_{\mathcal{L}_{P}}^{\mathbb{T}_{P}}\colon\mathcal{V}^{\mathrm{op}} \longrightarrow\mathsf{InfSI}\)._ ## 3. The pure existential completion In [31] the second author introduced a free construction, called _existential completion_, that freely adds left adjoints along a given class of morphisms \(\Lambda\) (closed under pullbacks, compositions and isomorphisms) to a given primary doctrine. Such a notion has been renamed _generalized existential completion_ in [24] to distinguish some of its relevant instances. Here we follow the terminology used in [24] and we call _pure existential completion_ the instance of the (generalized) existential completion of a primary doctrine when \(\Lambda\) is the class of product projections of its base. This instance turned out to coincide with the restriction to faithful fibrations of the _simple coproduct (or sum) completion_ of a fibration employed by Peter Hofstra in [9]. Instead, the instance of the existential completion when \(\Lambda\) is the class all the morphisms of its base, called _full existential completion_ in [24], coincides with the \(\exists\)-_completion_ introduced by J. Frey in [6, 5]. In [24] we provided an intrinsic characterization of generalized existential completions with respect to a class \(\Lambda\) in terms of _having enough existential free elements_ with respect to \(\Lambda\). An equivalent characterization for the instance of full existential completions has been independently introduced by J. Frey in [6, 5] naming \(\exists\)_-prime predicates_ what we call _existential free elements_. Here, we present a further version of this characterization based on the notion of _pure existential free objects of an existential doctrine \(P\) relative to a subdoctrine \(P^{\prime}\)_ which slightly generalizes the notion of _pure existential free objects of a doctrine \(P\)_ and it will be useful in the proof of Theorem 5.11. In the following, we recall the details of the construction of the pure existential completion from [31] and its characterization in terms of pure existential free elements from [24], since in this paper we will prove theorems involving just this notion. 
In detail, recall that this construction consists in producing a pure existential doctrine \(P^{\exists}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\), called _pure existential completion_, from a given primary doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\). This assignment extends to a \(2\)-adjunction from the \(2\)-category \(\mathsf{PD}\) of primary doctrines into the \(2\)-category \(\mathsf{ED}\) of pure existential doctrines [31]. **3.1 Definition**.: [pure existential completion [31]] _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a primary doctrine. For every object \(A\) of \(\mathcal{C}\) consider the preorder defined as:_ * _the objects are pairs_ \((A,B,\alpha\in P(A\times B))\)_;_ * \((A,B,\alpha\in P(A\times B))\leq(A,C,\gamma\in P(A\times C))\) _if there exists_ \(w\colon A\times B\to C\) _such that_ \(\alpha\leq P_{\langle\pi_{A},w\rangle}(\gamma)\)_._ _We denote by \(P^{\exists}(A)\) the partial order obtained by identifying two objects when_ \[(A,B,\alpha)\gneqq(A,C,\gamma)\] _in the usual way. With abuse of notation we denote the equivalence class of an element in the same way. Given a morphism \(f\colon A\to C\) of \(\mathcal{C}\), the functor \(P^{\exists}_{f}\colon P^{\exists}(C)\to P^{\exists}(A)\) sends an object \((C,D,\gamma)\) of \(P^{\exists}(C)\) to the object \((A,D,P_{\langle f\pi_{A},\pi_{D}\rangle}(\gamma))\) of \(P^{\exists}(A)\), where \(\pi_{A},\pi_{D}\) are the projections from \(A\times D\)._ **3.2 Theorem**.: _For every primary the doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\), the doctrine \(P^{\exists}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) is pure existential._ Observe that the pure existential completion preserves the elementary structure as shown in [31] and this is necessarily so as shown in [24, Thm. 6.1] which we recall here: **3.3 Theorem**.: [preservation of elementary structure] _The pure existential completion of a primary doctrine is elementary if and only if the generating primary doctrine is elementary._ Pure existential doctrines arising as instances of the pure existential completion have been characterized in [24] in terms of _pure existential free elements_. We briefly recall the main definitions and the characterization. **3.4 Definition**.: [pure existential splitting] _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine. An object \(\alpha\) of the fibre \(P(A)\) is said to be a **pure existential splitting** if for every projection \(\pi_{A}\colon A\times B\to A\) and for every element \(\beta\) of the fibre \(P(A\times B)\), whenever \(\alpha=\exists_{\pi_{A}}(\beta)\) holds then there exists an arrow \(h\colon A\to B\) such that \(\alpha=P_{\langle\mathrm{id}_{A},h\rangle}(\beta)\)_ Notice that nothing guarantees that pure existential splitting objects as defined in Definition 3.4 are closed under reindexing along a given morphism. Therefore, we introduce a stronger notion: **3.5 Definition**.: [pure existential free] _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine. An object \(\alpha\) of the fibre \(P(A)\) is said to be **pure existential free** if for every morphism \(f\colon B\to A\), \(P_{f}(\alpha)\) is a pure existential splitting._ In the following proposition we recall from [24, Prop. 
4.4] a useful equivalent characterization of pure existential splitting elements in terms of a form of Existence Property: **3.6 Proposition**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine, let \(\alpha\) be an element of the fibre \(P(A)\). Then the following conditions are equivalent:_ 1. \(\alpha\) _is pure existential splitting;_ 2. _for every projection_ \(\pi_{A}\colon A\times B\to A\) _and for every element_ \(\beta\) _of the fibre_ \(P(A\times B)\)_, whenever_ \(\alpha\leq\exists_{\pi_{A}}(\beta)\) _holds then there exists an arrow_ \(h\colon A\to B\) _such that_ \(\alpha\leq P_{\langle\mathrm{id}_{A},h\rangle}(\beta)\)_._ **3.7 Definition**.: [Rule of Choice] _For a pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\), we say that \(P\) satisfies the **Rule of Choice**, for short \(\mathbf{RC}\), if whenever_ \[\top_{A}\leq\exists_{\pi_{A}}(\beta)\] _there exists an arrow \(h\colon A\to B\) such that_ \[\top_{A}\leq P_{\langle\mathrm{id}_{A},h\rangle}(\beta).\] **3.8 Remark**.: Observe that a pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) satisfies (RC) if and only if for every object \(A\) of \(\mathcal{C}\), the top element \(\top_{A}\in P(A)\) is a pure existential splitting. **3.9 Definition**.: _Given a pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\), we say that an element \(\alpha\) of the fibre \(P(A)\) is **covered** by an element \(\beta\in P(A\times B)\) if \(\alpha=\exists_{\pi_{A}}(\beta)\)._ **3.10 Definition**.: _We say that a pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) has **enough pure existential free objects** if for every object \(A\) of \(\mathcal{C}\), any element \(\alpha\in P(A)\) is covered by some pure existential free element \(\beta\in P(A\times B)\) for some object \(B\) of \(\mathcal{C}\), namely \(\beta\) is a pure existential free element and_ \[\alpha=\exists_{\pi_{A}}(\beta).\] **3.11 Remark**.: Observe that if all elements of a full subdoctrine \(P^{\prime}\) of a pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) are pure existential splitting for \(P\) then they are also pure existential free for \(P\), being the doctrine \(P^{\prime}\) closed under reindexing. It also holds that, if a doctrine has enough pure existential free elements then every pure existential splitting is pure existential free. We refer to [24, Lem. 4.11] for a proof of this fact. **3.12 Definition**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine equipped with a full subdoctrine \(P^{\prime}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\). We say that \(P^{\prime}\) is a **pure existential cover** of \(P\) if for any object \(A\), every element \(\alpha^{\prime}\) of \(P^{\prime}(A)\) is pure existential splitting for \(P\) (and hence pure existential free) and every element \(\alpha\) of \(P(A)\) is covered by an element of \(P^{\prime}\)._ We summarize in the following proposition some properties useful properties. We refer to [24] for all the details: **3.13 Lemma**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine, and let \(P^{\prime}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a full subdoctrine of \(P\). 
Then:_ * _if_ \(P\) _is the pure existential completion of a primary doctrine_ \(P^{\prime}\) _then_ \(P^{\prime}\) _is a pure existential cover of_ \(P\)_;_ * _if_ \(P^{\prime}\) _is a pure existential cover of_ \(P\)_, then the existential free elements of_ \(P\) _coincides exactly with the elements of_ \(P^{\prime}\)_. Hence if a pure existential cover exists, it is unique._ The previous notions can be generalized by relativizing each concept to a given subdoctrine: **3.14 Definition**.: [pure existential splitting of \(P\) relative to \(P^{\prime}\)] _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine, and let \(P^{\prime}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a full subdoctrine of \(P\). An object \(\alpha\) of the fibre \(P^{\prime}(A)\) is said to be a **pure existential splitting of \(P\) relative to \(P^{\prime}\)** if for every projection \(\pi_{A}\colon A\times B\to A\) and for every element \(\beta\) of the fibre \(P^{\prime}(A\times B)\), whenever \(\alpha=\exists_{\pi_{A}}(\beta)\) holds in \(P(A)\) then there exists an arrow \(h\colon A\to B\) such that \(\alpha=P_{\langle\mathrm{id}_{A},h\rangle}(\beta)\)_ **3.15 Definition**.: [pure existential free relative to \(P^{\prime}\)] _An object \(\alpha\) of the fibre \(P^{\prime}(A)\) is said to be **pure existential free of \(P\) relative to \(P^{\prime}\)** if for every morphism \(f\colon B\to A\), \(P_{f}(\alpha)\) is a pure existential splitting of \(P\) relative to \(P^{\prime}\)._ **3.16 Definition**.: [pure existential relative cover ] _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine equipped with a full subdoctrine \(P^{\prime}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\). We say that \(P^{\prime}\) is a **pure existential relative cover of \(P\)** if for any object \(A\), every element \(\alpha^{\prime}\) of \(P^{\prime}(A)\) is a pure existential splitting element of \(P\) relative to \(P^{\prime}\) and every element \(\alpha\) of \(P(A)\) is covered by an element of \(P^{\prime}\)._ **3.17 Lemma**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine, and let \(P^{\prime}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a full subdoctrine of \(P\). If every element of \(P\) is covered by an element of \(P^{\prime}\), then every pure existential splitting of \(P\) relative to \(P^{\prime}\) is a pure existential splitting._ Proof.: Let \(\alpha\) be an element of \(P^{\prime}(A)\), ans let us suppose that it is a pure existential splitting of \(P\) relative to \(P^{\prime}\). Now suppose that \(\alpha\leq\exists_{\pi_{A}}(\beta)\), with \(\beta\) element of \(P(A\times B)\). By assumption, \(\beta\) can be written as \(\beta=\exists_{\pi_{A\times B}}(\gamma)\) with \(\gamma\) element of \(P^{\prime}(A\times B\times C)\). Hence \[\alpha\leq\exists_{\pi_{A}}(\ \exists_{\pi_{A\times B}}(\gamma)\ )=\exists_{\pi_{A}}(\gamma).\] since left adjoints compose. 
Since \(\alpha\) is a pure existential is a pure existential splitting of \(P\) relative to \(P^{\prime}\), then there exists an arrow \(\langle f,g\rangle\colon A\to B\times C\) such that \[\alpha\leq P_{\langle\mathrm{id}_{A},f,g\rangle}(\gamma).\] Then, since \(\beta=\exists_{\pi_{A\times B}}(\gamma)\), and hence, \(\gamma\leq P_{\pi_{A\times B}}(\beta)\), we deduce that \[\alpha\leq P_{\langle\mathrm{id}_{A},f,g\rangle}(\ P_{\pi_{A\times B}}(\beta) \ )=P_{\langle\mathrm{id}_{A},f\rangle}(\beta).\] Therefore, by Lemma 3.6, we can conclude that \(\alpha\) is a pure existential splitting. Combining this lemma with the definition of pure existential cover, we obtain the following corollary: **3.18**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine, and let \(P^{\prime}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a full subdocrine of \(P\). Then \(P^{\prime}\) is a pure existential cover of \(P\) if and only if \(P^{\prime}\) is a pure existential relative cover of \(P\)_ Proof.: If \(P^{\prime}\) is a pure existential cover of \(P\), then it is in particular a a pure existential relative cover of \(P\) since every pure existential splitting object of \(P\) is obviously a pure existential splitting object of \(P\) relative to \(P^{\prime}\). The converse follows from Lemma 3.17 since every element of \(P^{\prime}\) which is a pure existential free object of \(P\) relative to \(P^{\prime}\) is also a pure existential splitting object for \(P\). Now we are ready to recall the main result from [24]. Notice that, with respect to the original result, here we present an extra equivalent condition, based on Corollary 3.18: **3.19**.: _[characterization of pure existential completions [24]] Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a pure existential doctrine. Then the following are equivalent:_ 1. \(P\) _is isomorphic to the pure existential completion_ \(\left(P^{\prime}\right)^{\exists}\) _of a primary doctrine_ \(P^{\prime}\)_;_ 2. \(P\) _satisfies the following points:_ 1. \(P\) _satisfies the rule of choice RC;_ 2. _for every pure existential free object_ \(\alpha\) _and_ \(\beta\) _of_ \(P(A)\)_, then_ \(\alpha\wedge\beta\) _is a pure existential free._ 3. \(P\) _has enough pure existential free elements;_ 3. \(P\) _has a (unique) pure existential cover;_ 4. \(P\) _has a (unique) pure existential relative cover._ A relevant application of the previous characterisation is given by the following result, originally presented in [24]: **3.20 Theorem**.: _Every elementary pure existential doctrine is equipped with Hilbert's \(\epsilon\)-operators if and only if it is (equivalent to) the pure existential completion of itself._ We conclude this section by recalling the following example from [24]: **3.21 Example**.: Let \(\mathbb{T}_{0}\) be the fragment \(\mathcal{L}_{=,\exists}\) of first-order Intuitionistic Logic, defined in Example 2.7, with no extra-logical axioms on an arbitrary signature and let \(\mathsf{H}_{0}\) be the Horn theory given the Horn fragment \(\mathcal{L}_{=}\) of first-order Intuitionistic Logic, defined in Example 2.3, with no extra-logical axioms on the same signature. 
Then, the elementary pure existential doctrine of \(\mathbb{T}_{0}\) \[\mathsf{L}\mathsf{T}_{=,\exists}^{\mathbb{T}_{0}}\colon\mathcal{V}^{\mathrm{ op}}\longrightarrow\mathsf{InfSl}\] is the pure existential completion of the syntactic elementary doctrine of \(\mathsf{H}_{0}\) \[\mathsf{L}\mathsf{T}_{=}^{\mathsf{H}_{0}}\colon\mathcal{V}^{\mathrm{op}} \longrightarrow\mathsf{InfSl}\] namely \(\left(\mathsf{L}\mathsf{T}_{=}^{\mathsf{H}_{0}}\right)^{\exists}\equiv \mathsf{L}\mathsf{T}_{=,\exists}^{\mathbb{T}_{0}}\). ## 4 Regular and Exact completions of pure existential completions of elementary doctrines The process of completing a category with finite limits to a regular category or exact category has been well-studied and employed in the categorical logic, raising to the so-called notion of reg/lex and ex/lex completions [1, 3]. These notions have been further generalized by relating the constructions to elementary and pure existential doctrines in the works [21, 22, 19]. These were inspired by the tripos-to-topos construction of J.M.E. Hyland, P.T. Johnstone and A.M. Pitts presented in [11, 28]. In this section we first recall well known characterizations of the reg/lex and ex/lex-completions and then we pass to remind notions and results related to the regular and exact completions of elementary pure existential doctrines. Remember from [1, Lem. 5.1] the following characterization of the reg/lex completion: **4.1 Theorem**.: _[_1_]_ _Any regular category \(\mathcal{A}\) is the regular completion of the full subcategory \(\mathcal{P}_{\mathcal{A}}\) of its regular projectives if and only if \(\mathcal{P}_{\mathcal{A}}\) is closed under finite limits in \(\mathcal{A}\) and \(\mathcal{A}\) has enough projectives, and in addition every object of \(\mathcal{A}\) can be embedded in a regular projective._ Then, recall the following decomposition of ex/lex-completion shown by A. Carboni and E. Vitale [3]: **4.2 Theorem**.: _[_3_]_ _For any category \(\mathcal{C}\) with finite limits, the ex/lex-completion of \(\mathcal{C}\) is equivalent to the ex/reg-completion of the reg/lex-completion of \(\mathcal{C}\), namely_ \[(\mathcal{C})_{\mathsf{ex/lex}}\equiv((\mathcal{C})_{\mathsf{reg/lex}})_{ \mathsf{ex/reg}}\] ### Regular and exact completions of pure existential doctrines The notion of elementary pure existential doctrine contains the logical data which allow to describe the concept of relational composition as well as that of functionality and entirety. Given an elementary and pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\), an element \(\alpha\in P(A\times B)\) is said _entire_ from \(A\) to \(B\) if \[\top_{A}\leq\exists_{\pi_{A}}(\alpha).\] Moreover it is said _functional_ if \[P_{\langle\pi_{1},\pi_{2}\rangle}(\alpha)\wedge P_{\langle\pi_{1},\pi_{3} \rangle}(\alpha)\leq P_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})\] in \(P(A\times B\times B)\). Notice that for every relation \(\alpha\in P(A\times B)\) and \(\beta\in P(B\times C)\), the _relational composition of_\(\alpha\) and \(\beta\) is given by the relation \[\exists_{\langle\pi_{1},\pi_{3}\rangle}(P_{\langle\pi_{1},\pi_{2}\rangle}( \alpha)\wedge P_{\langle\pi_{2},\pi_{3}\rangle}(\beta)\] in \(P(A\times B)\), where \(\pi_{i}\) are the projections from \(A\times B\times C\). 
We denote by \(\not{E\!\!f}_{P}\) the category of _entire functional relations_ of \(P\): its objects are the same of \(\mathcal{C}\), and an arrow \(\phi\colon A\to B\) is an entire functional relation \(\phi\) of \(P(A\times B)\). Combining the comprehension completion with the construction of the category of relations of an elementary and pure existential doctrine we can define the _regular completion_ of an elementary and pure existential doctrine [19]: **4.4 Definition**.: [regular completion [19]] _Let \(P\) be an elementary pure existential doctrine. We call the category \(\mathsf{Reg}(P):=\not{E\!\!f}_{P_{\mathit{ex}}}\) the **regular completion** of \(P\)._ **4.5 Remark**.: Notice that, by definition, an arrow of \(\mathsf{Reg}(P)\) from \((A,\alpha)\) to \((B,\beta)\) is given by an element \(\phi\) of \(P(A\times B)\) such that: 1. \(\phi\leq P_{\pi_{1}}(\alpha)\wedge P_{\pi_{2}}(\beta)\); 2. \(\alpha\leq\exists_{\pi_{1}}(\phi)\); 3. \(P_{\langle\pi_{1},\pi_{2}\rangle}(\phi)\wedge P_{\langle\pi_{1},\pi_{3}\rangle }(\phi)\leq P_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B})\). **4.6 Remark**.: Observe that, if \(\psi\) and \(\varphi\) are morphisms of \(\mathsf{Reg}(P)\) from \((A,\alpha)\) to \((B,\beta)\) and \(\psi\leq\varphi\) then \(\psi=\varphi\). We refer to [29] for this remark. **4.7 Remark**.: Recall from [19] that given an elementary and pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) and an arrow \(f\colon A\to B\) of \(\mathcal{C}\), we have that the _graph_\(P_{f\times\mathrm{id}_{B}}(\delta_{B})\) of \(f\) is an a entire and functional relation from \(A\) to \(B\) and this defines the _graph functor_\(G\colon\mathcal{C}\rightarrow\mathsf{Reg}(P)\) which preserves finite products. **4.8 Remark**.: Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine. Notice that the graph functor \(G\colon\mathcal{C}\rightarrow\mathsf{Reg}(P)\) defined in Remark 4.7 extends to a functor from \(\mathcal{G}_{P}\) (and also \(\mathpzc{Pr}d_{P}\)) to \(\mathsf{Reg}(P)\). More in detail, every arrow \(f\colon A\to B\) of \(\mathcal{C}\) such that \(\alpha\leq P_{f}(\beta)\) with \(\alpha\) element of \(P(A)\) and \(\beta\) of \(P(B)\) induces a morphism \(P_{f\times\mathrm{id}_{B}}(\delta_{B})\wedge(P_{\pi_{1}}(\alpha)\ \wedge\ P_{\pi_{2}}(\beta)\ )\colon(A,\alpha) \rightarrow(B,\beta)\) of \(\mathsf{Reg}(P)\). 
Motivated by the previous remark, we introduce the following notation for the restriction of the graph functor to the category of predicates of a given subdoctrine: **4.9 Definition**.: [graph functor] _Given an elementary pure existential doctrine \(P\) and an elementary subdoctrine \(P^{\prime}\) we can define an embedding, called **graph functor**_ \[G_{|_{P^{\prime}}}\colon\mathpzc{Pr}d_{P^{\prime}}\rightarrow\mathsf{Reg}(P)\] _by mapping \((A,\alpha)\) of \(\mathpzc{Pr}d_{P^{\prime}}\) into \((A,\alpha)\) of \(\mathsf{Reg}(P)\) and an arrow \([f]\colon(A,\alpha)\rightarrow(B,\beta)\) of \(\mathpzc{Pr}d_{P^{\prime}}\) into the arrow \(G_{|_{P^{\prime}}}([f])=P_{f\times\mathrm{id}_{B}}(\delta_{B})\wedge\ (P_{\pi_{1}}(\alpha)\ \wedge\ P_{\pi_{2}}(\beta)\ )\) from \((A,\alpha)\) to \((B,\beta)\) of \(\mathsf{Reg}(P)\)._ **4.10 Remark**.: The graph functor of an elementary pure existential doctrine \(P\) with an elementary subdoctrine \(P^{\prime}\) can be also defined as \(G_{|_{P^{\prime}}}([f])=\exists_{\langle\mathrm{id}_{A},f\rangle}(\alpha)\) because \(P\) has left adjoints along arbitrary arrows. In using such a presentation one has just to pay attention to the fact that \(\exists_{\langle\mathrm{id}_{A},f\rangle}\) does not necessarily satisfy the Beck-Chevalley condition unless it is considered as an object of the doctrine \(P_{cx}\) (see Lemma 2.17). We refer to [19] for the following results: **4.11 Proposition**.: _The previous assignments provide a well-defined functor \(G_{|_{P^{\prime}}}\colon\mathpzc{Pr}d_{P^{\prime}}\rightarrow\mathsf{Reg}(P)\), and it preserves finite limits. Moreover, it is faithful, and it induces a regular functor \(G_{|_{P^{\prime}}}^{\mathsf{reg}}\):_ Proof.: The first part follows by [19, Thm. 3.2] (and from the fact that \(\mathpzc{Pr}d_{P^{\prime}}\) is lex, see Remark 2.20), while the existence of the regular functor \(G_{|_{P^{\prime}}}^{\mathsf{reg}}\) follows by the universal property of \(\mathsf{reg}/\mathsf{lex}\)-completions in [1]. The universal properties of the regular completion of a doctrine are studied in [19, Thm. 3.3]. We recall here the main result: **4.12 Theorem**.: [regular completion of an elementary pure existential doctrine [19]]. _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine. Then the category \(\mathsf{Reg}(P)\) is regular, and the assignment \(P\mapsto\mathsf{Reg}(P)\) extends to a 2-functor_ \[\mathsf{Reg}(-)\colon\mathsf{EED}\rightarrow\mathsf{RegCat}\] _which is left biadjoint to the inclusion of the 2-category \(\mathsf{RegCat}\) of regular categories in the 2-category \(\mathsf{EED}\) of elementary and pure existential doctrines._ #### Example The regular completion \(\mathsf{Reg}(\Psi_{\mathcal{D}})\) of the weak subobjects doctrine \(\Psi_{\mathcal{D}}\colon\mathcal{D}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) presented in Example 2.9 coincides with the regular completion \((\mathcal{D})_{\mathsf{reg}/\mathsf{lex}}\) of the lex category \(\mathcal{D}\), in the sense of [3]. We refer to [19] for more details. 
#### Example The regular completion \(\mathsf{Reg}(\mathsf{LT}_{=,\exists}^{\mathbb{T}})\) of the syntactic doctrine \(\mathsf{LT}_{=,\exists}^{\mathbb{T}}\colon\mathcal{V}^{\mathrm{op}}\longrightarrow \mathsf{InfSl}\) defined in Example 2.7 provides exactly the syntactic category denoted \(\mathcal{C}_{\mathbb{T}}^{\mathsf{reg}}\) associated with the theory of the regular fragment of first-order logic called _regular theory_ in [13, Sec. D1.4]. Moreover, observe that a regular theory \(\mathbb{T}\) provides only _the internal language_ of its syntactic doctrine \(\mathsf{LT}_{=,\exists}^{\mathbb{T}}\) as in section 2.22 and not of its regular completion. Indeed, the internal language of regular categories can be described via a dependent type theory a la Martin-Lof as shown in [17]. Combining the regular exact completion of an elementary pure existential doctrine with the exact completion \((-)_{\mathsf{ex/reg}}\) of a regular category [1, 3] one can define the so-called _exact completion of an elementary and pure existential doctrine_ as pointed out in [22, Sec. 3]: **4.15 Definition** (exact completion [22]): _Let \(P\) be an elementary pure existential doctrine. We call the category \(\mathsf{Ex}(P):=(\mathsf{Reg}(P))_{\mathsf{ex/reg}}\) the **exact completion** of \(P\)._ The universal properties of the exact completion of an elementary and pure existential doctrine can be deduced by combining the universal properties of the regular completion of a doctrine (see Theorem 4.12) with the universal properties of the \(\mathsf{ex/reg}\)-completion, see [22, Cor. 3.4]. We recall here the main result: **4.16 Theorem** (exact completion of an elementary pure existential doctrine [22]): _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine. Then the category \(\mathsf{Ex}(P)\) is exact, and the assignment \(P\mapsto\mathsf{Ex}(P)\) extends to a 2-functor_ \[\mathsf{Ex}(-)\colon\mathsf{EED}\rightarrow\mathsf{ExCat}\] _which is left biadjoint to the inclusion of the 2-category \(\mathsf{ExCat}\) of exact categories in the 2-category \(\mathsf{EED}\) of elementary and pure existential doctrines._ **4.17 Example** (): _The exact completion \(\mathsf{Ex}(\Psi_{\mathcal{D}})\) of the weak subobjects doctrine \(\Psi_{\mathcal{D}}\colon\mathcal{D}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) presented in Example 2.9 coincides with the \(\mathrm{ex}\) completion \((\mathcal{D})_{\mathsf{ex/lex}}\) of the lex category \(\mathcal{D}\), in the sense of [3]. We refer to [22, Ex. 4.4] for more details._ **4.18 Example** (): _The exact completion \(\mathsf{Ex}(\mathsf{LT}_{=,\exists}^{\mathbb{T}})=(\mathsf{Reg}(\mathsf{LT}_{=,\exists}^{\mathbb{T}}))_{\mathsf{ex/reg}}\) of the syntactic doctrine \(\mathsf{LT}_{=,\exists}^{\mathbb{T}}\colon\mathcal{V}^{\mathrm{op}} \longrightarrow\mathsf{InfSl}\) provides the exact category called the _effectivization_ the syntactic category \(\mathcal{C}_{\mathbb{T}}^{\mathsf{reg}}\) in [13, pp. 
849-850] and denoted by \(\mathcal{E}_{\mathbb{T}}:=\mathsf{Eff}(\mathcal{C}_{\mathbb{T}}^{\mathsf{reg}})\)._ **4.19 Example** (): _The exact completion of a subobjects doctrine \(\mathsf{Sub}_{\mathcal{C}}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow \mathsf{InfSl}\) coincides with the well known construction of the exact on regular category \((\mathcal{C})_{\mathsf{ex/reg}}\) in [19]._ Observe that, since the exact completion of an elementary pure existential doctrine is defined as the \(\mathsf{ex}/\mathsf{reg}\)-completion of its regular completion, from Proposition 4.11 we deduce the existence of the following embedding: **4.20 Proposition**.: _The functor \(G_{|_{P^{\prime}}}\colon\mathcal{P}\mathsf{r}\mathsf{r}\mathsf{f}_{P^{\prime}} \to\mathsf{Reg}(P)\) obviously extends to a well-defined functor \(G_{|_{P^{\prime}}}\colon\mathcal{P}\mathsf{r}\mathsf{f}_{P^{\prime}}\to \mathsf{Ex}(P)\) which preserves finite limits. Moreover, it is faithful, and it induces an exact functor \(G_{|_{P^{\prime}}}^{\mathsf{ex}}\)._ Proof.: It follows by the definition of exact completion of \(P\), from Proposition 4.11 and by the universal properties of the \(\mathsf{ex}/\mathsf{reg}\)-completion. ## 5. Characterization of regular and exact completions of pure existential completions In this section we present our main results characterizing the regular and exact completions of pure existential completions (Theorem 5.11 and Corollary 5.12). In particular, we first show that a doctrine \(P\) is the pure existential completion of an elementary doctrine \(P^{\prime}\) if and only if the canonical embedding of \(\mathcal{P}\mathsf{r}\mathsf{f}_{P^{\prime}}\) into \(\mathsf{Reg}(P)\) gives rise to an equivalence \(\mathsf{Reg}(P)\equiv(\mathcal{P}\mathsf{r}\mathsf{f}_{P^{\prime}})_{\mathsf{ reg}/\mathsf{lex}}\). To this aim, we show that within the regular completion \(\mathsf{Reg}(P)\) of the pure existential completion \(P\) of an elementary subdoctrine \(P^{\prime}\), the comprehension \((A,\alpha)\) of a pure existential free object \(\alpha\) of \(P(A)\) is regular projective. To this purpose, we recall a standard, but useful lemma holding in every regular category. We refer to [34, Sec. 4.3]. **5.1 Lemma**.: _In a regular category \(\mathcal{C}\), an arrow \(f\colon A\to B\) is a regular epi if and only if the sequent \(\top\vdash_{\{x:B\}}\exists y:A.(f(y)=x)\) holds in the internal language of the doctrine \(\mathsf{Sub}_{\mathcal{C}}\colon\mathcal{C}^{\mathrm{op}}\longrightarrow \mathsf{InfSI}\)._ This lemma, combined with the definition of arrows in \(\mathsf{Reg}(P)\), allows us to provide a simple description of the regular epimorphisms of the regular completion \(\mathsf{Reg}(P)\) of an elementary pure existential doctrine \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\). We refer to [29, Sec. 2.5] and [34] for more details. **5.2 Lemma**.: _A morphism \(\phi\colon(A,\alpha)\to(B,\beta)\) of \(\mathsf{Reg}(P)\) is a regular epimorphism if and only if \(\beta=\exists_{\pi_{B}}(\phi)\) in \(P(B)\)._ Now, we are going to show that pure existential splitting elements of a pure existential completion \(P\) gives rise to regular projective objects in its regular completion \(\mathsf{Reg}(P)\). 
To this purpose, we first prove the following useful lemma: ### Lemma _If \(\phi\colon(A,\alpha)\to(B,\beta)\) is an arrow of \(\mathsf{Reg}(P)\) and \(\alpha\) is a pure existential splitting element of \(P\), then there exists an arrow \(f\colon A\to B\) such that \(\alpha=P_{\langle\mathrm{id}_{A},f\rangle}(\phi)\), with \(\alpha\leq P_{f}(\beta)\). Moreover, for every arrow \(g\colon A\to B\) with such a property, we have that \(\alpha\leq P_{\langle f,g\rangle}(\delta_{B})\)._ Proof.: Let \(\phi\colon(A,\alpha)\to(B,\beta)\) be an arrow of in \(\mathsf{Reg}(P)\). By definition of arrows \(\mathsf{Reg}(P)\), we have that \(\alpha=\exists_{\pi_{1}}(\phi)\), and then, by the universal property of pure existential splittings, we can conclude that there exists an arrow \(f\colon A\to B\) such that \(\alpha=P_{\langle\mathrm{id}_{A},f\rangle}(\phi)\). Moreover, since \(\phi\leq P_{\pi_{2}}(\beta)\), we can conclude that \(\alpha\leq P_{f}(\beta)\). Now let us consider another arrow \(g\colon A\to B\) such that \(\alpha=P_{\langle\mathrm{id}_{A},g\rangle}(\phi)\). By definition, we have that \(\phi\) is functional in \(P_{cx}\), and hence \(\phi\) is functional in \(P\) (see Remark 4.5), namely \[P_{\langle\pi_{1},\pi_{2}\rangle}(\phi)\wedge P_{\langle\pi_{1},\pi_{3} \rangle}(\phi)\leq P_{\langle\pi_{2},\pi_{3}\rangle}(\delta_{B}).\] Then we can apply \(P_{\langle\mathrm{id}_{A},f,g\rangle}\) to both sides of this inequality, obtaining \[P_{\langle\mathrm{id}_{A},f\rangle}(\phi)\wedge P_{\langle\mathrm{id}_{A},g \rangle}(\phi)\leq P_{\langle f,g\rangle}(\delta_{B})\] that is \[\alpha\leq P_{\langle f,g\rangle}(\delta_{B}).\] ### Remark Notice that, by Lemma 5.3, we have that every arrow \(\phi\colon(A,\alpha)\to(B,\beta)\) of \(\mathsf{Reg}(P)\) with \(\alpha\) pure existential splitting induces a unique arrow in \(\mathcal{P}\!\mathsf{r}\!\mathsf{d}_{P}\). Now we show that every object \((A,\alpha)\) of the category \(\mathsf{Reg}(P)\) with \(\alpha\) pure existential splitting is a regular projective. **5.5** Proposition**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine. Then every object \((A,\alpha)\) where \(\alpha\) is pure existential splitting is regular projective in \(\mathsf{Reg}(P)\)._ Proof.: Let us consider the following diagram (1) with \(\psi\) regular epi in \(\mathsf{Reg}(P)\). By Lemma 5.2 we know that \(\beta=\exists_{\pi_{2}}(\psi)\), so now we are going to use the fact that \(\alpha\) is pure existential splitting to show that there exists a morphism \(\xi\) of \(\mathsf{Reg}(P)\) such that the diagram (1) commutes. From \(\beta=\exists_{\pi_{2}}(\psi)\) and \(\alpha\leq\exists_{\pi_{1}}(\phi)\leq\exists_{\pi_{1}}(\phi\wedge P_{\pi_{2}}( \beta))\) we can deduce (combining these with BCC and FR) that \[\alpha\leq\exists_{\pi_{1}}(P_{\langle\pi_{1},\pi_{3}\rangle}\phi)\wedge P_{ \langle\pi_{2},\pi_{3}\rangle}(\psi)).\] Therefore, since \(\alpha\) is existential splitting, there exists a morphism \(\langle\operatorname{id}_{A},h_{1},h_{2}\rangle\colon A\to A\times C\times B\) such that \[\alpha\leq P_{\langle\operatorname{id}_{A},h_{2}\rangle}(\phi)\wedge P_{ \langle h_{1},h_{2}\rangle}(\psi). \tag{2}\] Hence we can define \[\xi:=G([h_{1}])=P_{h_{1}\times\operatorname{id}_{C}}(\delta_{C})\wedge(P_{\pi_{ 1}}(\alpha)\wedge P_{\pi_{2}}(\gamma))\] with \([h_{1}]\colon(A,\alpha)\to(C,\gamma)\) arrow of \(\mathcal{H}\!\!\operatorname{\mathit{fd}}_{P}\). 
Notice that this arrow is well-defined since \(\alpha\leq P_{\langle h_{1},h_{2}\rangle}(\psi)\leq P_{h_{1}}(\gamma)\). Now we show that \(\psi\circ\xi=\phi\) (where \(\psi\circ\xi\) denotes the composition of morphisms in \(\operatorname{\mathsf{Reg}}(P)\)). The fact that predicates are descent objects for the equality allows us to deduce that \[\psi\circ\xi=\exists_{\langle\pi_{1},\pi_{3}\rangle}(P_{\langle\pi_{1},\pi_{2} \rangle}(G([h_{1}]))\wedge P_{\langle\pi_{2},\pi_{3}\rangle}(\psi))\leq P_{ \pi_{1}}(\alpha)\wedge P_{h_{1}\times\operatorname{id}_{B}}(\psi)\] and by functionality of \(\psi\), together with \(\alpha\leq P_{\langle h_{1},h_{2}\rangle}(\psi)\) by (2), we can deduce \[P_{\pi_{1}}(\alpha)\wedge P_{h_{1}\times\operatorname{id}_{B}}(\psi)\leq P_{ \langle h_{2},\pi_{2}\rangle}(\delta_{B}).\] Therefore, since \(\alpha\leq P_{\langle\operatorname{id}_{A},h_{2}\rangle}(\phi)\) by (2), we can conclude that \[\psi\circ\xi\leq\phi\] Hence, by Remark 4.6, we can conclude that \(\phi=\psi\circ\xi\), i.e. that the diagram (1) commutes in \(\operatorname{\mathsf{Reg}}(P)\). This concludes the proof that \((A,\alpha)\) is a regular projective. Recall that in the context of regular categories, we say that an object \(A\) is _covered_ by a regular projective \(B\) if there exists a regular epi \(e\colon B\to A\). In the following lemma, we show that in the regular completion \(\operatorname{\mathsf{Reg}}(P)\) of a doctrine \(P\colon\mathcal{C}^{\operatorname{op}}\longrightarrow\mathsf{InfSl}\) having enough pure existential free objects (see Definition 3.10), every object is covered by a regular projective (defined by a pure existential free element): **5.6 Lemma**.: _Let \(P\colon\mathcal{C}^{\operatorname{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine with enough pure existential free objects. Then every object \((B,\beta)\) of \(\operatorname{\mathsf{Reg}}(P)\) is covered by a regular projective object \((A\times B,\alpha)\), with \(\alpha\) pure existential splitting._ Proof.: By definition of doctrine with enough pure existential free elements, for any element \(\beta\) of \(P(B)\) there exists a pure existential free element \(\alpha\) of \(P(A\times B)\) such that \(\beta=\exists_{\pi_{B}}(\alpha)\). Thus, for every object \((B,\beta)\) of \(\operatorname{\mathsf{Reg}}(P)\), we can define in \(\operatorname{\mathsf{Reg}}(P)\) the arrow \[G([\pi_{B}])\colon(A\times B,\alpha)\to(B,\beta)\] which is well-defined by Remark 4.8 and is a regular epi of \(\operatorname{\mathsf{Reg}}(P)\) since \(G([\pi_{B}])=\exists_{\langle\operatorname{id}_{A\times B},\pi_{B}\rangle}(\alpha)\) by Remark 4.10 implies \(\exists_{\pi_{3}}(G([\pi_{A}]))=\exists_{\pi_{3}}((\exists_{\langle \operatorname{id}_{A\times B},\pi_{B}\rangle}(\alpha))=\exists_{\pi_{2}}( \alpha)=\beta\). Finally, since every pure existential free element is in particular a pure existential splitting element, by Proposition 5.5 we conclude that \((A\times B,\alpha)\) is a regular projective object of \(\operatorname{\mathsf{Reg}}(P)\) and it covers \((B,\beta)\). **5.7 Lemma**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSI}\) be an elementary and pure existential doctrine. 
If \(P\) satisfies the rule of choice then every object of \(\mathsf{Reg}(P)\) is a subobject of a regular projective._ Proof.: If \(P\) satisfies the rule of choice, then we have that every top element \(\top_{A}\) is a pure existential splitting (see Remark 3.8), and hence \((A,\top_{A})\) is a regular projective of \(\mathsf{Reg}(P)\) by Proposition 5.5. Hence, every object \((A,\alpha)\) is a subobject of \((A,\top)\) in \(\mathsf{Reg}(P)\) via \(G([\mathrm{id}_{A}])\) with \([\mathrm{id}_{A}]\colon(A,\alpha)\to(A,\top_{A})\), i.e. \(\exists_{\langle\mathrm{id}_{A},\mathrm{id}_{A}\rangle}(\alpha)\). Observe that we can embed the doctrine of weak subobject of \(\Psi_{\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}}\) into the doctrine of \(P_{cx}\) since the category of predicates \(\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\) of \(P^{\prime}\) is a subcategory of the category of predicates \(\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\) of \(P\): **5.8 Lemma**.: _Let \(P\) an elementary pure existential doctrine, and let \(P^{\prime}\) be an elementary subdoctrine of \(P\). There exists a morphism \((I,\iota)\colon\Psi_{\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}}\to P_{cx}\) of elementary pure existential doctrines defined as follows:_ * _the functor_ \(I\colon\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} 
\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} \mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P} 
\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\mathpzc{P}\mathpzc{P}\mathpzc{P}\mathpzc{PP}\ Proof.: Notice that \(\mathsf{Reg}(I,\iota)\) is well-defined since \(\mathsf{Reg}(P_{cx})\cong\mathsf{Reg}(P)\) by [19, Prop. 3.5]. Hence \(\mathsf{Reg}(I,\iota):\mathsf{Sub}_{(\mathcal{H}\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Now we show that every element \(\alpha\) of \(P^{\prime}(A)\) is a pure existential splitting element of \(P\) relative to \(P^{\prime}\) (see Definition 3.14). Suppose that \(\alpha=\exists_{\pi_{A}}(\beta)\) in \(P(A)\) with \(\alpha\) element of \(P^{\prime}(A)\) and \(\beta\) element of \(P^{\prime}(A\times B)\). 
Then, observe that \(G_{|_{P^{\prime}}}([\pi_{A}])\colon(A\times B,\beta)\to(A,\alpha)\) is a well defined arrow in \(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}}\) and that the arrow \(G_{|_{P^{\prime}}}([\pi_{A}])\colon(A\times B,\beta)\to(A,\alpha)\) is a surjective epi and hence a regular epimorphism in \(\mathsf{Reg}(P)\) (as in the proof of Lemma 5.6). Since \((A,\alpha)\) is regular projective, being in the image of \(G_{|_{P^{\prime}}}^{\mathsf{reg}}\), there exists an arrow \(\phi\) such that the diagram (3) commutes in \(\mathsf{Reg}(P)\). Then, by fullness of \(G_{|_{P^{\prime}}}^{\mathsf{reg}}\) since \(\beta\) is an element of \(P^{\prime}\), there exists a unique arrow \([\langle f_{1},f_{2}\rangle]:(A,\alpha)\to(A\times B,\beta)\) of \(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}}\) such that \(\phi=G_{|_{P^{\prime}}}([\langle f_{1},f_{2}\rangle])\) and also \[\alpha\leq P_{\langle f_{1},f_{2}\rangle}(\beta)\] Hence we have that (4) commutes in \(\mathsf{Reg}(P)\) and by faithfulness of \(G_{|_{P^{\prime}}}\) we conclude \[\alpha\leq P_{\langle f_{1},\mathrm{id}_{A}\rangle}(\delta_{A}).\] Combining this with \(\alpha\leq P_{\langle f_{1},f_{2}\rangle}(\beta)\) by the properties of equality we conclude \[\alpha\leq P_{\langle\mathrm{id}_{A},f_{2}\rangle}(\beta).\] This ends the proof that any \(\alpha\) of \(P^{\prime}\) is a pure existential splitting element of \(P\) relative to \(P^{\prime}\). Furthermore, since any object of \(P\) is (existentially) covered by an element of \(P^{\prime}\), we get that \(P\) is a pure existential relative cover for \(P\) and by Theorem 3.19 we finally conclude that \(P\) is the pure existential completion of \(P^{\prime}\). **5.12 Corollary**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine. Then \(P\) is the pure existential completion of one of its full elementary doctrine \(\mathcal{T}_{\!{}_{0}}\) if and only if the functor \(G^{\mathsf{ex}}_{|_{P^{\prime}}}\colon(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}})_{ \mathsf{ex}/\mathsf{lex}}\to\mathsf{Ex}(P)\) provides an equivalence \(\mathsf{Ex}(P)\equiv(\mathpzc{Pr}\mathpzc{d}_{P^{\prime}})_{\mathsf{ex}/ \mathsf{lex}}\)._ Proof.: It follows from by combining the definition of \(\mathsf{Ex}(P)\) (see Definition 4.15) with the decomposition of the \(\mathsf{ex}/\mathsf{lex}\)-completion in Theorem 4.2 and Theorem 5.11. The characterization of the regular and exact completions of doctrines equipped with Hilbert's \(\epsilon\)-operators presented in[19, Thm. 6.2 (ii)] can be seen now as a particular case of Theorem 5.11 and Corollary 5.12. In fact, combining Theorem 3.20 with these results we obtain the following corollary: **5.13 Corollary**.: _Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be an elementary and pure existential doctrine. 
Then the following are equivalent:_ * \(P\) _is equipped with Hilbert's_ \(\epsilon\)_-operators;_ * _the functor_ \(G^{\mathsf{reg}}\colon(\mathpzc{Pr}\mathpzc{d}_{P})_{\mathsf{reg}/\mathsf{ lex}}\to\mathsf{Reg}(P)\) _provides an equivalence_ \(\mathsf{Reg}(P)\equiv(\mathpzc{Pr}\mathpzc{d}_{P})_{\mathsf{reg}/\mathsf{ lex}}\)_;_ * _the functor_ \(G^{\mathsf{ex}}\colon(\mathpzc{Pr}\mathpzc{d}_{P})_{\mathsf{ex}/\mathsf{ lex}}\to\mathsf{Ex}(P)\) _provides an equivalence_ \(\mathsf{Ex}(P)\equiv(\mathpzc{Pr}\mathpzc{d}_{P})_{\mathsf{ex}/\mathsf{ lex}}\)_._ Proof.: It follows from Theorem 5.11 and Corollary 5.12, since \(P\) is isomorphic to the pure existential completion of itself \(P^{\exists}\) by Theorem 3.20. Another corollary of our main results regards the presentation of the syntactic category \(\mathcal{C}^{\mathsf{reg}}_{\mathbb{T}_{0}}\) and of its effectivization \(\mathcal{E}_{\mathbb{T}_{0}}\) associated to a regular theory \(\mathbb{T}_{0}\) given by the fragment with true constant, binary conjunction, equality and existential quantifiers of first-order Intuitionistic Logic with no extra-logical axioms, as defined in [13] (see Examples 4.14 and 4.18). **5.14 Corollary**.: _Let \(\mathbb{T}_{0}\) be a regular theory in the sense of [13] given by the fragment \(\mathcal{L}_{=,\exists}\) of first-order Intuitionistic Logic and no extra-logical axioms on a generic signature. Let \(\mathsf{H}_{0}\) be the Horn theory given by the corresponding fragment \(\mathcal{L}_{=}\) with no extra-logical axioms on the same signature._ _The syntactic category \(\mathcal{C}^{\mathsf{reg}}_{\mathbb{T}_{0}}\) of \(\mathbb{T}_{0}\) is equivalent to the \(\mathsf{reg}/\mathsf{lex}\)-completion \((\mathpzc{Pr}\mathpzc{d}_{\mathsf{LT}^{\mathsf{H}_{0}}_{=}})_{\mathsf{reg}/ \mathsf{lex}}\) of the category of predicates of the syntactic doctrine \(\mathsf{LT}^{\mathsf{H}_{0}}_{=}\) of \(\mathsf{H}_{0}\). Hence, also its effectivization \(\mathcal{E}_{\mathbb{T}_{0}}\) is the \(\mathsf{ex}/\mathsf{lex}\)-completion \((\mathpzc{Pr}\mathpzc{d}_{\mathsf{LT}^{\mathsf{H}_{0}}_{=}})_{\mathsf{ex}/ \mathsf{lex}}\) of the category of predicates of \(\mathsf{LT}^{\mathsf{H}_{0}}_{=}\)._ Proof.: This follows from from Theorem 5.11 and Corollary 5.12 after recalling from Examples 4.14 and 4.18 that \(\mathcal{C}^{\mathsf{reg}}_{\mathbb{T}_{0}}=\mathsf{Reg}(\mathsf{LT}^{\mathbb{ T}_{0}}_{=,\exists})\) and that \(\mathsf{LT}^{\mathbb{T}_{0}}_{=}\) is the pure existential completion of \(\mathsf{LT}^{\mathsf{H}_{0}}_{=}\) as observed in Example 3.21. Now, we apply our main results to the categorical setting of _Joyal's arithmetic universes_ reported in [18]. Here we refer to [18] for the definition of _primitive recursive predicates_ on a _Skolem theory_ and of Joyal's arithmetic universes. **5.15 Definition**.: _Let \(\mathcal{S}\) be a Skolem theory. 
We define the elementary doctrine of \(\mathcal{S}\)-primitive recursive predicates the functor \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) sending an object \(\mathsf{Nat}^{n}\) into the poset \(\mathsf{R}(\mathsf{Nat}^{n})\) of primitive recursive predicates over \(\mathsf{Nat}^{n}\), namely the arrows \(P:\mathsf{Nat}^{n}\rightarrow\mathsf{Nat}\) of the Skolem theory such that \(P\cdot P=P\) where \(\cdot\) is the multiplication of predicates (defined pointwise with the multiplication of natural numbers), and where \(P\leq Q\) is defined as the pointwise order induced by natural numbers._ **5.16 Remark**.: _The category \(\mathcal{Prd}_{\mathcal{S}}\) in [18] built by Joyal is a key example of category of predicates introduced in [28]._ **5.17 Remark**.: _Let us call \(\mathcal{S}_{\mathsf{Nat}}\) the embedding of the initial Skolem theory described in [18] within \(\mathsf{Set}\). Hence, in this category every object is isomorphic to a finite product of \(\mathsf{Nat}\) and the arrows of \(\mathcal{S}_{\mathsf{Nat}}\) are precisely the primitive recursive functions. In this case, the fibres of the elementary doctrine \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}_{\mathsf{Nat}}\longrightarrow \mathsf{InfSl}\) of \(\mathcal{S}_{\mathsf{Nat}}\)-primitive recursive predicates are given by:_ \[\mathsf{R}(\mathsf{Nat}^{n}):=\{f\colon\mathsf{Nat}^{n}\rightarrow\mathsf{Nat} \mid f\in\mathcal{S}_{\mathsf{Nat}}(\mathsf{Nat}^{n},\mathsf{Nat})\text{ and }\forall n\in\mathsf{Nat}^{n},\ f(n)=0\text{ or }1\}\] _In the following proposition we summarize some useful properties of the category \(\mathcal{Prd}_{\mathsf{R}}\) associated with the doctrine \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\). We refer to [18, Prop. 4.7] for more details._ **5.18 Proposition**.: _Given a Skolem theory \(\mathcal{S}\) and its elementary doctrine \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) of \(\mathcal{S}\)-primitive recursive predicates, the category \(\mathcal{Prd}_{\mathsf{R}}\) is regular and every epimorphism of image factorizations splits._ Now recall the constructions of Joyal's arithmetic universes: **5.19 Definition**.: _[_18_]_ _Given a Skolem theory \(\mathcal{S}\), the category \((\mathcal{Prd}_{\mathsf{R}})_{\mathsf{ex/reg}}\) is a **Joyal's arithmetic universe**._ Then, combining Corollary 5.12 with Proposition 5.18 we obtain the following result: **5.20 Corollary**.: _Every Joyal's arithmetic universe on a Skolem theory \(\mathcal{S}\) is equivalent to the exact completion \(\mathsf{Ex}(\mathsf{R}^{\exists})\) of the pure existential completion \(\mathsf{R}^{\exists}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{ InfSl}\) of the elementary doctrine \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) of \(\mathcal{S}\)-primitive recursive predicates._ Proof.: First, notice that since in the category \(\mathcal{Prd}_{\mathsf{R}}\) we have that epimorphisms of image factorizations split by Proposition 5.18, then \((\mathcal{Prd}_{\mathsf{R}})_{\mathsf{ex/reg}}\equiv(\mathcal{Prd}_{\mathsf{ R}})_{\mathsf{ex/lex}}\). Therefore we can apply Corollary 5.12 to conclude that this arithmetic universe is equivalent to the exact completion of the pure existential completion \(\mathsf{R}^{\exists}\) of the elementary doctrine \(\mathsf{R}\): \[(\mathcal{Prd}_{\mathsf{R}})_{\mathsf{ex/reg}}\equiv\mathsf{Ex}(\mathsf{R}^{ \exists}).\] **5.21**.: _Corollary. 
Joyal's arithmetic universe \((\mathcal{P}\!\mathsf{r}\!\mathsf{d}_{\mathsf{R}})_{\mathsf{ex/reg}}\) on the initial Skolem theory embedded in \(\mathsf{Set}\) is equivalent to the exact completion \(\mathsf{Ex}(\mathsf{R}^{\exists})\) where the elements of the fibre \(\mathsf{R}^{\exists}(\mathsf{Nat})\) are exactly the recursive enumerable subsets of \(\mathsf{Nat}\) in \(\mathsf{Set}\)._ Proof.: By Corollary 5.20 we have that \[(\mathcal{P}\!\mathsf{r}\!\mathsf{d}_{\mathsf{R}})_{\mathsf{ex/reg}}\equiv \mathsf{Ex}(\mathsf{R}^{\exists}).\] Then, observe that the fibres of the pure existential completion \(\mathsf{R}^{\exists}\colon\mathcal{S}^{\mathrm{op}}_{\mathsf{Nat}}\longrightarrow \mathsf{InfSl}\) of the elementary doctrine \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}_{\mathsf{Nat}}\longrightarrow \mathsf{InfSl}\) are exactly the _recursively enumerable predicate_ because, by Theorem 3.19, every element of \(\mathsf{R}^{\exists}(\mathsf{Nat})\) can be written as a pure existential quantifier of a primitive recursive predicate. This provides exactly the recursive enumerable predicates since it is known that every r.e. predicate can be proved to be presented as a pure existential quantifier of a primitive recursively enumerable predicate, for example, from [30, Thm II.1.8, Thm. I.3.3, Ex I.2.8]. **5.22**.: _Remark. Notice that the previous result suggests that we can define the notion of recursive enumerable predicates relative to an arbitrary Skolem theory \(\mathcal{S}\) as the objects of the pure existential completion \(\mathsf{R}^{\exists}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{ InfSl}\) of the doctrine \(\mathsf{R}\colon\mathcal{S}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) of \(\mathcal{S}\)-primitive recursive predicates._ Finally, a last relevant example of doctrines arising as pure existential completions is that of the so-called Godel hyperdoctrines presented in [33], arising in context of Dialectica interpretation. The original observation, for the more general case of fibrations, that a Dialectica fibration can be obtained combining the simple product and simple coproduct completions (i.e. the pure universal and pure existential completions in the case of doctrines) is due to Hofstra [9]. Therefore, by Corollary 5.12, we have that also every exact completion of a Godel hyperdoctrine is an instance of the \(\mathsf{ex}/\mathsf{lex}\)-completion a category of predicates: **5.23**.: _Corollary. Let \(P\colon\mathcal{C}^{\mathrm{op}}\longrightarrow\mathsf{InfSl}\) be a Godel hyperdoctrine (as defined in [33]). Then we have the equivalences_ * \(\mathsf{Reg}(P)\equiv(\mathcal{P}\!\mathsf{r}\!\mathsf{d}_{P^{\prime}})_{ \mathsf{reg/lex}}\)_;_ * \(\mathsf{Ex}(P)\equiv(\mathcal{P}\!\mathsf{r}\!\mathsf{d}_{P^{\prime}})_{ \mathsf{ex/lex}}\)_;_ _where \(P^{\prime}\) is the elementary full subdoctrine of \(P\) given by the pure existential free elements of \(P\)._ ## 6. Conclusion We have provided a characterization of the regular and exact completions of elementary pure existential completions, extending a previous characterization proved in [19] for doctrines equipped with Hilbert's \(\epsilon\)-operators. In particular, we have proved that for this kind of doctrine, their regular and exact completions happen to be equivalent to the reg/ex and ex/lex-completions, respectively, of the category of predicates associated with the subdoctrine of pure existential free elements. 
To reach this goal, we took advantage of the intrinsic characterization of doctrines arising as pure existential completions presented in [24], slightly extended with another equivalent presentation here. Then, we have described various examples of applications of the mentioned characterization including the syntactic category associated with the regular fragment of fist-order logic as presented in [13] (and its effectivization), the construction of Joyal's arithmetic universes, and the regular and exact completion of a Godel hyperdoctrine [33, 32]. In the future we aim to extend our results to regular and exact completions of other classes of doctrines obtained as generalized existential completions, including the case of the _full existential completion_ of primary doctrines in the sense of [24], as initiated in [23], with applications to sheaf theory. ### Acknowledgements We acknowledge fruitful conversations with Samuele Maschio, Fabio Pasquali and Pino Rosolini on topics presented in this paper.
2307.13539
Model Calibration in Dense Classification with Adaptive Label Perturbation
For safety-related applications, it is crucial to produce trustworthy deep neural networks whose prediction is associated with confidence that can represent the likelihood of correctness for subsequent decision-making. Existing dense binary classification models are prone to being over-confident. To improve model calibration, we propose Adaptive Stochastic Label Perturbation (ASLP) which learns a unique label perturbation level for each training image. ASLP employs our proposed Self-Calibrating Binary Cross Entropy (SC-BCE) loss, which unifies label perturbation processes including stochastic approaches (like DisturbLabel), and label smoothing, to correct calibration while maintaining classification rates. ASLP follows Maximum Entropy Inference of classic statistical mechanics to maximise prediction entropy with respect to missing information. It performs this while: (1) preserving classification accuracy on known data as a conservative solution, or (2) specifically improves model calibration degree by minimising the gap between the prediction accuracy and expected confidence of the target training label. Extensive results demonstrate that ASLP can significantly improve calibration degrees of dense binary classification models on both in-distribution and out-of-distribution data. The code is available on https://github.com/Carlisle-Liu/ASLP.
Jiawei Liu, Changkun Ye, Shan Wang, Ruikai Cui, Jing Zhang, Kaihao Zhang, Nick Barnes
2023-07-25T14:40:11Z
http://arxiv.org/abs/2307.13539v2
# Model Calibration in Dense Classification with Adaptive Label Perturbation ###### Abstract For safety-related applications, it is crucial to produce trustworthy deep neural networks whose prediction is associated with confidence that can represent the likelihood of correctness for subsequent decision-making. Existing dense binary classification models are prone to being over-confident. To improve model calibration, we propose Adaptive Stochastic Label Perturbation (ASLP) which learns a unique label perturbation level for each training image. ASLP employs our proposed Self-Calibrating Binary Cross Entropy (SC-BCE) loss, which unifies label perturbation processes including stochastic approaches (like DisturbLabel), and label smoothing, to correct calibration while maintaining classification rates. ASLP follows Maximum Entropy Inference of classic statistical mechanics to maximise prediction entropy with respect to missing information. It performs this while: (1) preserving classification accuracy on known data as a conservative solution, or (2) specifically improves model calibration degree by minimising the gap between the prediction accuracy and expected confidence of the target training label. Extensive results demonstrate that ASLP can significantly improve calibration degrees of dense binary classification models on both in-distribution and out-of-distribution data. The code is available on [https://github.com/Carlisle-Liu/ASLP](https://github.com/Carlisle-Liu/ASLP). ## 1 Introduction Binary segmentation aims to differentiate foreground areas from the background in images. Its tasks include Salient Object Detection [86], Camouflaged Object Detection [13], Smoke Detection [79], _etc_. Performance in these tasks has been significantly advanced using the strong representation powers of Deep Neural Networks (DNNs). However, with complex structures and a tremendous number of parameters, DNNs are prone to over-fitting to training data and producing over-confident predictions in the real world [18]. Such issues can render the model predictions unreliable in decision making or utilisation in downstream tasks. Recently, a growing body of literature has been proposed to address model mis-calibration problems in DNNs. They can be roughly categorised as: (1) post-hoc operations, such as temperature scaling [18], Platt scaling [54], _etc._, (2) training objective approaches [27], like MMCE [30], soft calibration objective [27], focal loss [45, 17], and (3) data/label augmentation techniques, _e.g_. label smoothing [46] and mixup [85]. We propose an Adaptive Label Perturbation which learns a unique label perturbation level for each training image. As illustrated in Fig. 1, training with Adaptive Stochastic Label Perturbation, a form of ALP, can effectively moderate incorrect predictions and highlight them with high entropy values. Adaptive Label Perturbation employs our proposed Self-Calibrating Binary Cross Entropy (SC-BCE) loss, which unifies label perturbation processes including stochastic approaches (like DisturbLabel [75]), and label smoothing [60] to correct calibration while maintaining classification accuracy. SC-BCE loss is equivalent to a factored combination of (i) a BCE loss w.r.t. groundtruth label, and (ii) a BCE loss w.r.t. a uniform binary categorical distribution. The former enhances dense binary classification performance and the latter improves the model calibration degree. 
Our method can be connected to Maximum Entropy Inference [23] of classic statistical mechanics, to maximise prediction Figure 1: Applying Adaptive Label Perturbation during training can effectively moderate predictions at incorrect areas, highlighting them with high entropy values (red). \(Z_{t}(x,y)\) is a sample-wise Bernoulli variable, parameterized by \(\alpha\), at the \(t^{\text{th}}\) iteration. After \(k\) iterations, update \(\alpha\) using Eq. (5) to adjust the likelihood (or level) of label perturbation to increase entropy for incorrect predictions and so correct model calibration. The Perturbed Label (shown inverted) replaces the Groundtruth Label with probability \(\alpha\). entropy with respect to missing information while preserving the classification accuracy on known data. The proposed Adaptive Label Perturbation (ALP) can approximate Maximum Entropy Inference [23] to maximise prediction entropy while preserving the ideal dense classification performance on known data. This represents a conservative solution that adopts classification accuracy as a proxy for known data and assumes maximum disorder on unknown data. We also present an alternative ALP solution that, instead, takes model calibration degree as a proxy for known data, using a calibration regulariser which constrains the expected confidence of individual supervision signal to not drop below the ideal accuracy on the validation set. This effectively minimises the gap between the distributions of prediction confidence and prediction accuracy, which is the source of model mis-calibration. Our contributions can be summarised as: (i) We propose Adaptive Stochastic Label Perturbation that learns a sample-wise label perturbation level to improve model calibration; (ii) We present a Self-Calibrating Binary Cross Entropy loss that unifies label perturbation processes including stochastic approaches and label smoothing; (iii) Following Maximum Entropy Inference [23] we show that Adaptive Stochastic Label Perturbation (ASLPME), can maximise the prediction entropy while preserving the ideal dense classification accuracy, and (iv) We present an alternative Adaptive Stochastic Label Perturbation (ASLPMC) solution to maximise model calibration degree, which achieves state-of-the-art performance in terms of model calibration degree on both in-distribution and out-of-distribution data. We thoroughly evaluate our method on Salient Object Detection and demonstrate its effectiveness for Camouflaged Object Detection, Smoke Detection and Semantic Segmentation. ## 2 Related Works **Model Calibration:** Model calibration methods can be roughly divided into three categories of approach: (1) post hoc processing (2) training object, and (3) input/label augmentation. The first category utilises a validation set to align the prediction confidence distribution with prediction accuracy distribution. It includes histogram binning [82], isotropic regression [83], Platt scaling [54, 49], Bayesian binning into quantiles [47], Dirichlet scaling [28], mix-n-match [87] and temperature scaling [18, 81]. The second category focuses on designing training objectives that (in)directly improve model calibration degree. Some methods address the in-continuity of expected calibration error, a widely adopted model calibration measure, and propose trainable calibration objectives like maximum mean calibration error [30], soft calibration objective [27]. Other works discover that certain existing training objects are beneficial to model calibration, _e.g_. 
Brier loss [4, 10], confidence penalty [52] and focal loss [45, 17]. The third category employs data or label augmentation techniques to regularise the prediction confidence distribution. Mixup [85, 62] explores the neighbourhood of the training data through random interpolation of input images and associated labels to improve the model calibration degree. Label smoothing [46] augments the one-hot training labels with softer versions to prevent the model from being over-confident. **Salient Object Detection:** Inspired by pioneering work [22], traditional Salient Object Detection (SOD) methods rely on various heuristic priors with handcrafted features to explore low-level cues [1, 25, 26, 38, 66]. However, these methods cannot cope with complex scenes because of the limited representation ability of handcrafted features [3]. Recently, deep learning based SOD methods broke the bottleneck of traditional methods thanks to the powerful capability of neural networks, achieving improved performance [6, 24, 55, 76, 37, 64, 68]. Early deep SOD methods use multi-layer perceptrons to predict a map with a pixel-wise score for each image [71, 8]. These approaches rely on fixed fully connected layers and thus severely limit the ability to extract spatial information. Later methods address this issue by using fully convolutional networks (FCNs) [40]. Most contemporary SOD methods are designed based on FCNs with various schemes to improve performance. One of the most popular strategies is to fuse multi-scale information extracted from different layers and aggregate it in the network [91, 74, 16, 51, 96, 90]. Attention modules are also applied to capture powerful multi-scale features by exploring relationships between local and global information [37, 53, 95, 21]. Training SOD networks using auxiliary supervision is also a popular strategy [70, 61]. For example, the body map and detail map are utilised by [70] to help the network focus on centre areas and edges, respectively. Skeleton [35] and uncertainty [61] information have also been incorporated into the training process. **Noisy Label:** Noisy labels refer to incorrect ground truth classes/values in classification/regression tasks. They arise from data collection or annotation processes, and exist commonly in real-world datasets [2]. Efforts have been made to identify noisy labels and exclude them from network training in various computer vision tasks, deeming their incorporation harmful. The work in [88] proposes a framework that learns from noisy labels, taken as a collection of predictions from classic SOD methods. The framework approximates the noise distribution in order to recover clean labels for model training. Differently from data augmentation techniques, which are applied simultaneously to training samples and corresponding labels to generate more training data, one may artificially corrupt the label alone. We refer to this category of approaches as _label perturbation_, which includes label smoothing [60] and DisturbLabel [75]. In image classification, Xie _et al_. [75] show that randomly replacing training labels with a prior distribution leads to a regularising effect, preventing overfitting. Our work differs from [75] in that it employs labels corrupted to different scales to enhance the model calibration degrees for both in-distribution and out-of-distribution data. Further, in performing this, we assume noise that varies with different samples, making our method more adaptable.
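To make the distinction concrete, the sketch below contrasts the two perturbation mechanisms on a toy binary mask; it is a minimal illustration in which the uniform replacement target and the function names are our own assumptions rather than the cited implementations.

```python
import torch

def smooth_label(y, beta):
    # Label smoothing: soften every pixel deterministically,
    # e.g. foreground 1 -> 1 - beta/2 and background 0 -> beta/2.
    return (1.0 - beta) * y + beta / 2.0

def disturb_label(y, alpha):
    # DisturbLabel-style stochastic perturbation: with probability alpha the
    # whole training label is replaced by an uninformative (uniform) target.
    if torch.rand(()) < alpha:
        return torch.full_like(y, 0.5)
    return y

# toy binary mask (1 = foreground, 0 = background)
y = torch.tensor([[0.0, 1.0], [1.0, 0.0]])
print(smooth_label(y, beta=0.1))    # deterministic soft targets 0.05 / 0.95
print(disturb_label(y, alpha=0.1))  # original mask about 90% of the time
```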
## 3 Proposed Method We first lay out the task setting in Sec. 3.1. Then we introduce our proposed Self-Calibrating Binary Cross Entropy loss in Sec. 3.2 and prove its connection to Maximum Entropy Inference [23] in Sec. 3.3. Lastly, we detail our major contribution - Adaptive Label Perturbation in Sec. 3.4. ### Task Setting Binary segmentation problems aim to differentiate between foreground object(s) and background. They can be formulated as a pixel-wise binary classification problem. Given an independent and identically distributed (i.i.d) training dataset \(\mathcal{D}_{\text{TR}}=\{x_{i},y_{i}\}_{i=1}^{N}\) drawn from an unknown joint distribution of training images and groundtruth labels \(P(\mathcal{X},\mathcal{Y})\), a neural network model parameterised by \(\theta\) is employed to predict labels for an input image \(x\in\mathcal{X}\): \(f_{\theta}(x)\in(0,1)^{1\times H\times W}\). We use \(\hat{y}\) and \(P_{\hat{y}}\) to denote the winning class and its associated probability respectively. The groundtruth label \(y\in\{0,1\}^{1\times H\times W}\) represents the foreground pixels with "1" and background with "0". In the following equations, we omit the spatial indexes \(H\) and \(W\) for simplicity. A perfectly calibrated model has \(\mathrm{P}(\hat{y}=y|P_{\hat{y}})=P_{\hat{y}},\,\forall P_{\hat{y}}\in(0,1)\). That is, in the entire range of prediction probabilities, prediction with probability \(P_{\hat{y}}\) has exactly \(P_{\hat{y}}\) chance to be correct. The calibration degree of a model \(f_{\theta}(\cdot)\) over a distribution \(\mathcal{D}\) is quantified with Expected Calibration Error (ECE), defined as \(\mathbb{E}_{f_{\theta}(x)}[\mathrm{P}(\hat{y}=y|f_{\theta}(x))-f_{\theta}(x)|]\). ### Self-Calibrating Binary Cross Entropy Loss We propose a Self-Calibrating Binary Cross Entropy (SC-BCE) loss that unifies Label Smoothing [60], DisturbLabel [75] and Stochastic Label Perturbation as: \[\begin{split}&\mathcal{L}_{\text{SC-BCE}}(\theta,X,Y,\alpha, \beta)\\ =&\mathbb{E}_{x,y\in X,Y}\Big{[}(1-Z_{t}(x,y))\cdot \mathcal{L}_{\text{BCE}}(\theta,x,y)\\ &\qquad\qquad+Z_{t}(x,y)\cdot\mathcal{L}_{\text{BCE}}(\theta,x,p( y,\beta))\Big{]}\\ &\beta\in[0,2],\quad\alpha\in[0,\frac{1}{\beta}),\end{split} \tag{1}\] where \(Z_{t}(x,y)\sim B(1,\alpha)\) follows a Bernoulli distribution with \(\alpha\) probability to be \(1\) and \(1-\alpha\) chance to be \(0\), \(t\) denotes the training epoch, \(\alpha\in[0,\frac{1}{\beta})\) and \(\beta\in[0,2]\) are Label Perturbation Probability (LPP) and Label Perturbation Strength (LPS) respectively, \(p(y,\beta)=(1-\beta)\cdot y+\frac{\beta}{2},\beta\in[0,2]\) is a perturbed label and \(\mathcal{L}_{\text{BCE}}(\theta,x,y)\)1 is a Binary Cross Entropy (BCE) loss computed for training pair \((x,y)\). For \(\alpha=1\), the label perturbation equation follows the label smoothing equation for a binary label [60]. In the proposed SC-BCE loss, different label perturbations can be applied by setting (i) Label Smoothing [60]: \(\alpha=1\) and \(\beta\in[0,1)\), (2) DisturbLabel: \(\beta=1\) and \(\alpha\in(0,1)\), and (3) Stochastic Label Perturbation (SLP): \(\beta\in(0,2]\) and \(\alpha\in(0,\frac{1}{\beta})\). For example, Hard Inversion (HI) that inverts the label category as shown in Fig. 2, can be stochastically applied by setting \(\beta=2\) and \(\alpha\in(0,\frac{1}{2})\). 
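A minimal PyTorch-style sketch of the SC-BCE loss of Eq. (1) for a single image-label pair is given below; the helper names are ours, and the simple mean reduction over pixels is an assumption that may differ from the official implementation.

```python
import torch

def bce(pred, target, eps=1e-7):
    # Pixel-wise binary cross entropy, averaged over the image.
    pred = pred.clamp(eps, 1.0 - eps)
    return -(target * pred.log() + (1 - target) * (1 - pred).log()).mean()

def sc_bce_loss(pred, y, alpha, beta):
    # Self-Calibrating BCE (Eq. 1): with probability alpha the groundtruth
    # label y is replaced by the perturbed label p(y, beta) = (1 - beta) * y + beta / 2.
    z = torch.bernoulli(torch.tensor(alpha))      # Z_t(x, y) ~ B(1, alpha)
    perturbed = (1.0 - beta) * y + beta / 2.0
    return (1.0 - z) * bce(pred, y) + z * bce(pred, perturbed)

# example: DisturbLabel setting (beta = 1) with a 3% perturbation probability
pred = torch.rand(1, 4, 4)
y = (torch.rand(1, 4, 4) > 0.5).float()
loss = sc_bce_loss(pred, y, alpha=0.03, beta=1.0)
```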
Footnote 1: \(\mathcal{L}_{\text{BCE}}(\theta,x,y)=-y\cdot\log(f_{\theta}(x))-(1-y)\cdot \log(1-f_{\theta}(x))\) In the implementation of SLP, the supervision for an individual training image in each epoch is sampled by drawing from a Bernoulli distribution. That is, the individual supervision can take the form of the groundtruth label or perturbed label in each training iteration. The overall function of SLP can be connected to that of a smoothed label by taking expectation of the Bernoulli variable: \(\mathbb{E}_{Z_{t}}[(1-Z_{t}(y))\cdot Y+Z_{t}(y)\cdot p(y,\beta)],\,\forall y \in Y\). Taking the expectation over the Bernoulli variation in each iteration is too expensive to implement in model training. Instead, following [75], the expectation of stochastically perturbed label is approximated by taking expectation across training epochs: \(\mathbb{E}_{t\in T}[(1-Z_{t}(y))\cdot Y+Z_{t}(y)\cdot p(y,\beta)],\,\forall y \in Y\), where \(T\) is the total number of training epochs and \(Z_{t}(y)\) is a variable drawn from a Bernoulli distribution for the tth epoch. ### Maximum Entropy Inference Maximum Entropy Inference (MEI), assuming minimum distribution commitment in respect to missing information, was initially proposed by Jaynes [23]. That is, the probability distribution should have maximum Shannon entropy subject to the partially available information. Thus, in the complete absence of information, Shannon entropy for a binary prediction defined in Eq. (2) should be maximised: \[\begin{split}\mathbb{H}(f_{\theta}(X))=\mathbb{E}_{x\in X}\Big{[} &-f_{\theta}(x)\cdot\log f_{\theta}(x)-\\ &\big{(}1-f_{\theta}(x)\big{)}\cdot\log\big{(}1-f_{\theta}(x) \big{)}\Big{]}\end{split} \tag{2}\] For salient object detection and other binary segmentation problems, maximising Eq. (2) can be achieved with a binary uniform categorical distribution. The proposed SC-BCE loss can be transformed into a factored combination of a BCE loss w.r.t. groundtruth label (the constraints of the data) and a BCE loss w.r.t. a binary uniform categorical distribution (See Appendix A.2 for derivation) as: \[\begin{split}&\mathcal{L}_{\text{SC-BCE}}(\theta,X,Y,\alpha,\beta) \\ =&\mathbb{E}_{x,y\in X,Y}\Big{[}(1-\beta Z_{t}(x,y)) \cdot\mathcal{L}_{\text{BCE}}(\theta,x,y)\\ &\qquad\qquad+\beta Z_{t}(x,y)\cdot\mathcal{L}_{\text{BCE}}( \theta,x,u)\Big{]},\end{split} \tag{3}\] where \(Z_{t}(x,y)\sim B(1,\alpha)\), \(u\) is a binary uniform categorical distribution, and minimising the second term pushes the prediction distribution towards a uniform binary categorical distribution, equivalently maximising the inference entropy. Therefore, our proposed SC-BCE loss, a combination of a regular BCE loss and a BCE loss with a perturbed label, effectively performs a type of MEI. That is, the regular BCE loss component improves the model's binary classification accuracy in the presence of information while the perturbed label maximizes prediction entropy with respect to missing information in order to close the gap between the available training data and the entire data distribution. ### Adaptive Label Perturbation Stochastic Label Perturbation (SLP) uses a single label perturbation probability and perturbation strength for the entire training dataset. However, this approach cannot adapt to predictive error that varies for different input images. To address this, we propose an Adaptive Stochastic Label Augmentation (ASLP) method to adjust the label augmentation probability for individual training samples. 
That is, we allow the variable in Eq. (1) to be drawn from a per-training-image Bernoulli distribution with a sample-specific label perturbation probability as: \[Z_{t}(x,y)\sim B(1,\alpha_{x,y}),\,\forall x,y\in X,Y \tag{4}\] where \(\alpha_{x,y}\) is the label perturbation probability for the sample (image-label) pair \((x,y)\). Initially, we set all label perturbation probabilities to \(\{\alpha_{i}=0\}_{i=1}^{N}\) and train a model with a regular BCE loss without label augmentation techniques, which is equivalent to \(\mathcal{L}_{\text{SC-BCE}}(\theta,X,Y,\alpha=0,\beta=0)\). The trained model has weights \(\theta_{\text{lm}}\) and its accuracy on the validation set, \(\mathbb{A}(\theta_{lm},\mathcal{D}_{\text{VAL}})\), is held as an ideal performance. Subsequently, we select a label perturbation technique and continue to train the model with the SC-BCE loss, together with a learning rule that updates the label perturbation probability of each training sample. We propose a learning rule for \(\alpha\) (\(\text{ASLP}_{\text{MEI}}\)) to approximate maximum entropy inference. The rule has two components: (1) \(\nabla_{\alpha_{i}}=(2/\beta)\cdot\partial\mathbb{E}_{Z_{t}(x_{i},y_{i})}\big{[}\mathcal{L}_{\text{SC-BCE}}(X,Y,\theta,\{\alpha_{i}\}_{i=1}^{N},\beta)\big{]}/\partial\alpha_{i}\), the derivative of the expectation of SC-BCE over the Bernoulli variable w.r.t. \(\alpha_{i}\). We divide this by \(\beta/2\) to ensure that different perturbation techniques (varying \(\beta\) values) have the same convergence speed (See derivation in Appendix A.3), and (2) an Accuracy Regularisation to encourage maintenance of the prediction accuracy. The rule is: \[\begin{split}&\alpha_{i}^{n+1}=\ \alpha_{i}^{n}+\eta\cdot(\nabla_{\alpha_{i}}+\lambda\cdot\text{Reg}_{\text{A}}),\ \ \text{for}\ i=1,\ldots,N,\\ &\nabla_{\alpha_{i}}=\frac{2\cdot\big{(}\mathcal{L}_{\text{BCE}}(\theta,x_{i},p(y_{i},\beta))-\mathcal{L}_{\text{BCE}}(\theta,x_{i},y_{i})\big{)}}{\beta},\\ &\text{Reg}_{\text{A}}=\min\Big{(}\frac{\mathbb{A}(\theta,\mathcal{D}_{\text{VAL}})-\mathbb{A}(\theta_{lm},\mathcal{D}_{\text{VAL}})}{\mathbb{A}(\theta_{lm},\mathcal{D}_{\text{VAL}})},0\Big{)},\end{split} \tag{5}\] where \(\eta\) and \(\lambda\) are hyperparameters controlling the updating pace of the label perturbation probability and the regularisation strength respectively, and \(\mathbb{A}(\theta,\mathcal{D}_{\text{VAL}})\) and \(\mathbb{A}(\theta_{lm},\mathcal{D}_{\text{VAL}})\) denote the current and ideal accuracy on the validation set, respectively. \(\nabla_{\alpha_{i}}\) aims to increase the label perturbation probability of confident, correctly classified samples and to decrease it for incorrectly classified samples. For example, it returns a large positive value for correct predictions with a small BCE loss value w.r.t. the groundtruth label \(y_{i}\) and a large BCE loss value w.r.t. the perturbed label \(p(y_{i},\beta)\). The "Accuracy Regularisation" (\(\text{Reg}_{\text{A}}\)) is designed to reduce the overall perturbation probability if the accuracy on the validation set drops below that of the baseline local minimum \(\theta_{lm}\). It returns 0 if there is no accuracy drop on the validation set, while a large decrease will overwhelm the \(\nabla_{\alpha_{i}}\) value and reduce the sample's label perturbation probability. Intuitively, \(\text{ASLP}_{\text{MEI}}\) aims to construct a model that preserves the ideal classification accuracy while otherwise maximising the entropy [23]. Note that which particular examples are classified correctly may change, but the accuracy is constrained to remain the same. Intuitively, having a model that better captures ignorance may lead to changes in the treatment of test examples that are distant from the training distribution. Note, however, that adopting classification accuracy as a proxy for known data and otherwise maximising entropy is a conservative strategy, and we find that it results in the model being significantly under-confident. The model mis-calibration arises from the distribution mismatch between prediction confidence and prediction accuracy [45]. We offer an alternative solution, \(\text{ASLP}_{\text{MC}}\), that uses the model calibration degree as a proxy for known data and maximises the prediction entropy with respect to unknown data. The learning rule replaces the "Accuracy Regularisation" in Eq. (5) with a "Calibration Regularisation" (\(\text{Reg}_{\text{C}}\)) as: \[\text{Reg}_{\text{C}}=\min\Bigl{(}\bigl{(}1-\frac{\beta\cdot\alpha_{i}^{n}}{2}\bigr{)}-\mathbb{A}(\theta_{lm},\mathcal{D}_{\text{VAL}}),0\Bigr{)}, \tag{6}\] where \(1-(\beta\cdot\alpha_{i}^{n}/2)\) denotes the expected confidence of the perturbed label (Derivation in Appendix A.4). For example, a foreground label "1" with a 5% chance of being inverted to "0" has an expected confidence of 0.95. The "Calibration Regularisation" constrains the expected confidence of the perturbed label of each sample to not drop below the ideal classification accuracy on the validation set, preventing the model from becoming under-confident. Note that we can also have an updating rule \(\text{ALS}_{\text{MC}}\) to learn a per-image label perturbation strength (adaptive \(\beta\) and fixed \(\alpha=1\)) (See Appendix A.5).

Figure 2: The method overview is comprised of the model implementation (top) and the Adaptive Stochastic Label Perturbation (ASLP) learning rule (bottom). In each iteration, the model uses a Bernoulli variable to sample a supervision, which can be a groundtruth label \(y_{i}\) or a perturbed label \(p_{i}\), and computes a sample-specific \(\nabla_{\alpha_{i}}\) based on the prediction. Regularisation is computed with Eq. (5) (\(\text{ASLP}_{\text{MEI}}\)) or Eq. (6) (\(\text{ASLP}_{\text{MC}}\)) on a validation set after each training epoch. The ASLP learning rule combines Grad-\(\alpha_{i}\) and a factored regularisation to update the label perturbation probability \(\alpha_{i}\) for each individual sample.
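The learning rules of Eqs. (5) and (6) can be condensed into a single update over the per-image perturbation probabilities, sketched below; the bookkeeping of the per-image BCE terms and of the validation accuracy is assumed to happen elsewhere in the training loop, and the function is our own illustrative formulation.

```python
import torch

def update_alphas(alphas, grad_terms, val_acc, ideal_acc, beta,
                  eta=0.002, lam=2000.0, mode="MC"):
    """One ASLP update of the per-image perturbation probabilities (Eqs. 5-6).

    alphas:     current alpha_i, one entry per training image
    grad_terms: 2 * (BCE(pred_i, p(y_i, beta)) - BCE(pred_i, y_i)) / beta,
                accumulated for each image during the last epoch
    val_acc:    current accuracy on the validation set
    ideal_acc:  accuracy of the unperturbed baseline model (theta_lm)
    """
    if mode == "MEI":
        # Accuracy regularisation (Eq. 5): active only when accuracy drops;
        # the same scalar value is applied to every sample.
        reg = min((val_acc - ideal_acc) / ideal_acc, 0.0)
    else:
        # Calibration regularisation (Eq. 6), evaluated per sample.
        reg = torch.clamp(1.0 - beta * alphas / 2.0 - ideal_acc, max=0.0)
    alphas = alphas + eta * (grad_terms + lam * reg)
    # keep every alpha_i inside the admissible range [0, 1/beta)
    return alphas.clamp(0.0, 1.0 / beta - 1e-6)
```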
## 4 Experiments and Results We verify the proposed method primarily on Salient Object Detection, and also apply it to Camouflaged Object Detection, Smoke Detection and Semantic Segmentation, reporting those results in the Appendices. ### Implementation Details **Evaluation Metrics:** We use Equal-Width Expected Calibration Error (\(\text{ECE}_{\text{EW}}\)) [18] and Equal-Width Over-confidence Error (\(\text{OE}_{\text{EW}}\)) [62] with 10 bins (\(B=10\)) to evaluate the model calibration degrees. Additionally, we adopt \(\text{ECE}_{\text{EM}}\) [48], \(\text{ECE}_{\text{DEBIAS}}\) [29] and \(\text{ECE}_{\text{SWEEP}}\) [56] to corroborate the results of \(\text{ECE}_{\text{EW}}\) (See Appendix C). **Datasets:** The proposed methods are trained with the DUTS-TR [63] training dataset. It is divided into a training set \(|\mathcal{D}_{\text{TR}}|=9,553\) and a validation set \(|\mathcal{D}_{\text{VAL}}|=1,000\). We use six testing datasets, including DUTS-TE [63], DUT-OMRON [80], SOD [44], PASCAL-S [34], ECSSD [78] and HKU-IS [33], to evaluate the model calibration degree. **Compared Methods:** We compare with both SOD models and model calibration methods in terms of model calibration degrees.
The SOD models include: MSRNet [32], SRM [65], Amulet [92], BMPM [91], DGRL [67], PAGR [93], PiCANet [37], CPD [73], BASNet [55], EGNet [94], AFNet [15], PoolNet [36], GCPANet [7], MINet [51], F\({}^{3}\)Net [69], EBMGSOD [89], ICON [97], EDN [72]. We evaluate ECE on their published prediction results, or on results produced with their released model weights. We also compare with model calibration methods, including: Temperature Scaling (TS) [18], Brier Loss [4], MMCE [30], Label Smoothing [46], Mixup [62], Focal Loss [45] and AdaFocal [17], implemented on our baseline model. **Baseline Model Structure:** We implement our method in the Pytorch framework. Our model has a simple U-Net structure, comprising a ResNet50 encoder [20] and a decoder, where the former is initialised with ImageNet-pretrained weights and the latter by default. We also experiment with VGG16 [58] and Swin transformer [39] encoders and report their results in Appendix I. **Label Perturbation Techniques:** We experiment with four different label perturbation strategies: (1) Hard Inversion (HI) \(T(Y,\alpha,\beta=1.0)\), (2) Soft Inversion (SI) \(T(Y,\alpha,\beta=0.75)\), (3) Moderation (M) \(T(Y,\alpha,\beta=0.5)\) and (4) Dynamic Moderation (DM) \(T(Y,\alpha,\beta=0.5)\) with additional Gaussian noise. See Appendix H.1 for implementation details. **Training Details:** Each model is trained with an Adam optimiser for 30 epochs. The learning rate is initialised to \(2.5\times 10^{-5}\), and decays by a factor of 0.9 for each epoch after the \(10^{\text{th}}\) epoch. All training images are scaled to \(384\times 384\). Basic data augmentation techniques, including random flipping, random translation and random cropping, are applied. **Hyperparameters:** The reported model calibration results associated with ASLP are obtained by setting \(\eta=0.002\) and \(\lambda=2,000\). We set the number of bins to \(B=10\) for the ECE and OE evaluation metrics. ### Model Calibration Degree Performance Tab. 1 presents the calibration degree of existing SOD models, existing model calibration methods and our proposed technique on the six SOD testing datasets. Our proposed \(\text{ASLP}_{\text{MC}}\), designed to optimise the model calibration degree, achieves the best ECE performance on all testing datasets. \(\text{ASLP}_{\text{MEI}}\), in turn, follows Maximum Entropy Inference, assuming minimum distribution commitment with respect to missing information; that is, in the presence of limited training data, it maximises the prediction entropy while maintaining the prediction accuracy for in-distribution data.
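For reference, a minimal implementation of the equal-width ECE reported in Tab. 1 might look as follows, treating every pixel of a dense binary prediction as one sample; this is an illustrative sketch rather than the evaluation code used for the reported numbers, and details such as the bin edges may differ.

```python
import numpy as np

def ece_equal_width(probs, labels, n_bins=10):
    """Equal-width Expected Calibration Error for dense binary predictions.

    probs:  foreground probabilities in (0, 1), any shape (pixels are samples)
    labels: groundtruth {0, 1} values with the same shape
    """
    probs = np.asarray(probs).ravel()
    labels = np.asarray(labels).ravel()
    conf = np.maximum(probs, 1.0 - probs)            # winning-class confidence
    correct = ((probs > 0.5).astype(float) == labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)        # standard bins over [0, 1]
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```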
Table 1: ECE and OE (in %, lower is better) on the six SOD testing datasets (DUTS-TE [63], DUT-OMRON [80], PASCAL-S [34], SOD [44], ECSSD [78], HKU-IS [33]) for the listed SOD models, the compared model calibration methods implemented on our baseline, and the proposed ASLP variants.

Fig. 3 presents the joint distribution of prediction confidence and prediction accuracy of some of the best calibrated methods and our proposed technique on the DUTS-TE dataset (See Appendix D for other testing datasets and other methods). Existing SOD methods produce extremely confident predictions whose confidence scores are nearly 100% for the majority of samples. On the contrary, the prediction accuracy is on average lower than the prediction confidence, resulting in the models being over-confident. On the other hand, existing model calibration methods are generally better calibrated than the SOD methods, which in general do not strive to improve the model calibration degree. Our proposed \(\text{ASLP}_{\text{MC}}\) produces the most calibrated model, whose joint distribution is closer to the oracle than those of the compared calibration methods and SOD methods. AdaFocal [17] produces the second most calibrated model on DUTS-TE. However, the majority of its joint distribution (blue high-density area) lies slightly to the bottom right of the oracle line, making the model slightly less calibrated, whereas the high-density area of our joint distribution is well aligned with the oracle line, showing that \(\text{ASLP}_{\text{MC}}\) is more calibrated. Despite the small difference on the plot, \(\text{ASLP}_{\text{MC}}\) improves over AdaFocal by more than 10% in terms of ECE scores on the DUTS-TE dataset. We can also observe that \(\text{ASLP}_{\text{MEI}}\) is significantly under-confident for in-distribution data, with its joint distribution lying to the top left of the oracle line. Its prediction confidences are limited to between 70% and 80%, while the prediction accuracies are generally above 90%. ### Model Calibration Degrees on Out-of-Distribution Dataset We compare our proposed method with existing model calibration methods in terms of model calibration degrees on Out-of-Distribution (OoD) data. We consider texture images, where salient objects are completely absent, as OoD samples for the SOD task. We use the Describable Texture Dataset [9] to evaluate the model calibration degrees on OoD samples. Fig. 4 shows the joint distribution of prediction confidence and prediction accuracy of various model calibration methods and our proposed techniques. It can be seen that the baseline model produces extremely confident predictions for OoD data. However, its accuracy is only 41.88%, worse than a uniform prior in a binary classification task. We also observe that Temperature Scaling does not calibrate the model under data distribution shift, in accordance with the literature [50, 45]. Our \(\text{ASLP}_{\text{MC}}\), being the most calibrated for in-distribution data, is also more calibrated on the OoD samples than the existing model calibration methods by a large margin, as shown in Tab. 2. Our \(\text{ASLP}_{\text{MEI}}\) is even more successful in handling OoD data. It is the most calibrated on OoD data, with a larger proportion of the distribution aligned with the oracle line.
As shown in Tab. 2, it outperforms existing model calibration methods in terms of both ECE and OE by significant margins. This can be attributed to its minimum distribution assumption in the presence of limited training data.

\begin{table} \begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Evaluation (\%)} \\ & ECE \(\downarrow\) & OE \(\downarrow\) & ACC \(\uparrow\) \\ \hline Baseline & 52.36 & 51.05 & 41.88 \\ \hline Brier Loss [4] & 38.85 & 37.18 & 53.62 \\ Temperature Scaling [18] & 51.95 & 50.46 & 41.59 \\ Label Smoothing [46] & 37.22 & 35.48 & 55.41 \\ MMCE [30] & 40.64 & 39.67 & 54.39 \\ Mixup [62] & 31.07 & 29.10 & 58.71 \\ Focal Loss [45] & 40.01 & 38.43 & 49.71 \\ AdaFocal [17] & 27.55 & 25.07 & 55.39 \\ \hline \(\text{ASLP}_{\text{MC}}\) & 18.31 & 16.37 & 61.93 \\ \(\text{ASLP}_{\text{MEI}}\) & **13.43** & **8.40** & **62.47** \\ \hline \hline \end{tabular} \end{table} Table 2: Model calibration methods and our \(\text{ASLP}_{\text{MC}}\) and \(\text{ASLP}_{\text{MEI}}\) are evaluated on the Out-of-Distribution dataset, Describable Texture Dataset [9], in terms of \(\text{ECE}_{\text{EW}}\) and \(\text{OE}_{\text{EW}}\) with 10 bins, and Accuracy (ACC).

Figure 4: Joint distribution of prediction confidence (horizontal axis) and prediction accuracy (vertical axis) on the Describable Texture Dataset [9], of compared model calibration methods and our proposed \(\text{ASLP}_{\text{MC}}\) and \(\text{ASLP}_{\text{MEI}}\).

### Discussion **Adaptive Stochastic Label Perturbation:** We study the effect of ASLP on ECE and OE and present the experimental results in Tab. 3. It shows that the ASLP variants significantly outperform the baseline model, "B", which does not adopt any model calibration measures. In addition, the ASLP variants also outperform their static counterparts (SLPs), which use a single \(\alpha\) for the entire dataset. This can be attributed to the approach modelling the variance of noise with the input image. **Generalisation of Adaptive Label Smoothing:** We generalise the proposed AALP to label smoothing, developing an Adaptive Label Smoothing (ALS) that fixes the label perturbation probability to 100%, akin to the label smoothing technique [46], and tunes a smoothing factor for each training sample. As shown in Tab. 3, ALS effectively reduces the ECE and OE scores over its static version LS\({}^{\beta=0.03}\), and achieves performances similar to the \(\text{ASLP}_{\text{MC}}\) approaches on the six testing datasets. This shows that our proposed AALP can be generalised to other label perturbation techniques as a measure to calibrate SOD models. **Compatibility with SOTA SOD Models:** We retrain EBMGSOD [89], ICON [97] and EDN [72] with the proposed \(\text{ASLP}_{\text{MC}}\) and find significant improvements in terms of model calibration degrees without compromising their classification abilities (See Appendix E). **Compatibility with Different Backbones:** We demonstrate that our proposed method is also compatible with VGG16 [58] and Swin transformer [39] backbones. See Appendix I for details. **Effectiveness in Other Dense Binary Classification Tasks:** Experiments on Smoke Detection [79] and Camouflaged Object Detection [14] demonstrate that our method can be generalised to dense binary classification tasks to improve model calibration degrees. See Appendix F. **Generalisation to Multi-Class Segmentation task:** Experiments on Semantic Segmentation [11] demonstrate that our method can also be generalised to dense multi-class classification tasks.
See Appendix G. ### Hyperparameters **Static Stochastic Label Perturbation:** Tab. 3 shows that, under a small label perturbation probability, the four label perturbation techniques can alleviate the model over-confidence issues of the baseline model, "B", on the six testing datasets. They also achieve similar results to Label Smoothing [46], setting \(\beta=0.03\) and \(\alpha=1\). Each SLP has a wide range of effective label perturbation probabilities that improves model calibrations (See Appendix H.2 Tab. 13), and these improvements do not sacrifice the model's classification performance (see Appendix H.3). Larger values of the label perturbation probability eventually lead to increasing ECE scores as the model transitions to being under-confident (see Appendix H.2). **Updating Rate \(\eta\):**\(\text{ASLP}_{\text{MC}}\) models trained with \(\eta\in[0.0002,0.005]\) are generally stable, producing similar calibration degrees and classification performances. Values smaller than 0.001 require longer training and high values lead to sub-optimal results (See Appendix J). **Regularisation Strength \(\lambda\):**\(\lambda\) spanning between 500 and 10,000 are optimal. Very high values for \(\lambda\) can lead to oscillation resulting in poor performance (See Appendix J). ## 5 Conclusion This work first introduces a Self-Calibrating Binary Cross Entropy loss that unifies label perturbation processes including stochastic approaches and label smoothing to improve model calibration while preserving classification accuracy. We further propose an Adaptive Stochastic Label Perturbation that learns a unique label perturbation level for individual training image. Following Maximum Entropy Inference, AALP adopts classification / calibration as proxy for known data and maximises the prediction entropy with respect to missing data. The proposed \(\text{ASLP}_{\text{MC}}\) improves model calibration degrees on both in-distribution samples and out-of-distribution samples, without negatively impacting classification performance. The approach can be easily applied to different models, which we demonstrate with several SOTA models. It is also demonstrated to be effective on a semantic segmentation task and other binary tasks. Acknowledgments.This research was in-part supported by the ANU-Optus Bushfire Research Center of Excellence. 
\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Perturbation Params} & \multicolumn{2}{c|}{DUTS-TE [63]} & \multicolumn{2}{c|}{DUT-OMRON [80]} & \multicolumn{2}{c|}{PASCAL-S [34]} & \multicolumn{2}{c|}{SOD [44]} & \multicolumn{2}{c|}{ECSSD [78]} & \multicolumn{2}{c}{HKU-IS [33]} \\ & \(\alpha\) & \(\beta\) & e & ECE \(\downarrow\) & OE \(\downarrow\) & ECE \(\downarrow\) & OE \(\downarrow\) & ECE \(\downarrow\) & OE \(\downarrow\) & ECE \(\downarrow\) & OE \(\downarrow\) & ECE \(\downarrow\) & OE \(\downarrow\) & OE \(\downarrow\) \\ \hline Baseline (“B”) & 0 & 0 & 0 & 3.48 & 3.29 & 4.17 & 4.02 & 4.60 & 4.41 & 7.42 & 7.17 & 1.93 & 1.86 & 1.64 & 1.59 \\ \hline \(\text{SLP}_{\text{m}}^{\beta=0.01}\) & 0.01 & 1.0 & ✗ & 2.21 & 1.84 & 2.96 & 2.78 & 3.11 & 2.82 & 6.09 & 5.80 & 1.03 & 0.68 & 1.01 & 0.53 \\ \(\text{SLP}_{\text{m}}^{\beta=0.02}\) & 0.02 & 0.75 & ✗ & 2.25 & 2.05 & 3.00 & 2.82 & 3.05 & 2.83 & 6.40 & 6.09 & 0.93 & 0.84 & 0.87 & 0.60 \\ \(\text{SLP}_{\text{m}}^{\beta=0.03}\) & 0.03 & 0.5 & ✗ & 2.24 & 2.03 & 3.17 & 2.97 & 3.41 & 3.20 & 6.26 & 5.97 & 0.83 & 0.77 & 0.96 & 0.81 \\ \(\text{SLP}_{\text{m}}^{\beta=0.03}\) & 0.03 & 0.5 & ✓ & 2.29 & 2.09 & 3.00 & 2.83 & 3.47 & 3.24 & 6.72 & 6.43 & 1.13 & 1.04 & 0.96 & 0.80 \\ \(\text{LS}_{\text{s}}^{\beta=0.03}\) & 1.0 & 0.03 & ✗ & 2.20 & 1.99 & 3.09 & 2.91 & 3.24 & 3.03 & 6.27 & 5.99 & 1.03 & 0.78 & 0.92 & 0.67 \\ \hline \(\text{ASLP}_{\text{MC}}^{\beta}\) & \(\alpha_{\text{sh}}\) & 1.0 & ✗ & 1.40 & 1.22 & 1.99 & 1.83 & 2.31 & 2.10 & 5.50 & 5.17 & 0.48 & 0.20 & 0.79 & 0.17 \\ \(\text{ASLP}_{\text{MC}}^{\beta}\) & \(\alpha_{\text{sh}}\) & 0.75 & ✗ & 1.51 & 1.29 & 2.14 & 1.95 & 2.29 & 2.07 & 5.12 & 4.80 & 0.61 & 0.34 & 0.84 & 0.22 \\ \(\text{ASLP}_{\text{MC}}^{\beta}\) & \(\alpha_{\text{sh}}\) & 0.5 & ✗ & 1.47 & 1.27 & 1.87 & 1.80 & 2.37 & 2.13 & 5.63 & 5.29 & 0.51 & 0.23 & 0.80 & 0.20 \\ \(\text{ASLP}_{\text{MC}}^{\beta}\) & \(\alpha_{\text{sh}}\) & 0.5 & ✓ & 1.64 & 1.20 & 1.94 & 1.75 & 2.03 & 1.81 & 4.14 & 3.84 & 0.80 & 0.42 & 0.87 & 0.42 \\ ALS & 1.0 & \(\beta_{\text{sh}}\) & ✗ & 1.46 & 1.25 & 2.07 & 1.87 & 2.30 & 2.10 & 5.44 & 5.18 & 0.61 & 0.25 & 0.81 & 0.32 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation: Effect of Stochastic Label Perturbation (SLP) and Adaptive Stochastic Label Perturbation (ASLP) with different label perturbation techniques on the model calibration degrees evaluated on Expected Calibration Error (ECE) and Over-confidence Error (OE). The proposed AALP is generalised to an Adaptive Label Smoothing (ALS) technique that adaptively tunes the label softening scale (\(\beta_{\text{add}}\)).
2310.17390
Validation of SOLPS-ITER Simulations against the TCV-X21 Reference Case
This paper presents a quantitative validation of SOLPS-ITER simulations against the TCV-X21 reference case and provides insights into the neutral dynamics and ionization source distribution in this scenario. TCV-X21 is a well-diagnosed diverted L-mode sheath-limited plasma scenario in both toroidal field directions, designed specifically for the validation of turbulence codes [D.S. Oliveira, T. Body, et al 2022 Nucl. Fusion 62 096001]. Despite the optimization to reduce the impact of the neutral dynamics, the absence of neutrals in previous turbulence simulations of TCV-X21 was identified as a possible explanation for the disagreements with the experimental data in the divertor region. This motivates the present study with SOLPS-ITER that includes kinetic neutral dynamics via EIRENE. Five new observables are added to the extensive, publicly available TCV-X21 dataset. These are three deuterium Balmer lines in the divertor and neutral pressure in the common and private flux regions. The quantitative agreement metric is combined with the conjugate gradient method to approach the SOLPS-ITER input parameters that return the best overall agreement with the experiment. A proof-of-principle of this method results in a modest improvement in the level-of-agreement; shortcomings of the method and how to improve it are discussed. Alternatively, a scan of the particle and heat diffusion coefficients shows an improvement of 10.4% beyond the agreement level achieved by the gradient method. The result is found for an increased transport coefficient compared to what is usually used for TCV L-mode plasmas, suggesting the need for accurate self-consistent turbulence models for predictive boundary simulations. The simulations indicate that ~65% of the total ionization occurs in the SOL, motivating the inclusion of neutrals in future turbulence simulations towards improved agreement with the experiment.
Y. Wang, C. Colandrea, D. S. Oliveira, C. Theiler, H. Reimerdes, T. Body, D. Galassi, L. Martinelli, K. Lee, TCV team
2023-10-26T13:39:17Z
http://arxiv.org/abs/2310.17390v1
# Validation of SOLPS-ITER Simulations against the TCV-X21 Reference Case ###### Abstract This paper presents a quantitative validation of SOLPS-ITER simulations against the TCV-X21 reference case and provides insights into the neutral dynamics and ionization source distribution in this scenario. TCV-X21 is a well-diagnosed diverted L-mode sheath-limited plasma scenario in both toroidal field directions, designed specifically for the validation of turbulence codes [D.S. Oliveira, T. Body, et al 2022 Nucl. Fusion **62** 096001]. Despite the optimization to reduce the impact of the neutral dynamics in this scenario, the absence of neutrals in previous turbulence simulations of TCV-X21 was identified as a possible explanation for the observed disagreements with the experimental data in the divertor region. This motivates the present study with SOLPS-ITER that includes kinetic neutral dynamics via EIRENE. Five new observables are added to the extensive, publicly available TCV-X21 dataset. These are three deuterium Balmer lines in the divertor and neutral pressure measurements in the common and private flux regions. The quantitative agreement metric used in the validation is combined with the conjugate gradient method to approach the SOLPS-ITER input parameters that return the best overall agreement with the experiment. A proof-of-principle test of this method results in a modest improvement in the level-of-agreement; the shortcomings impacting the result and how to improve the methodology are discussed. Alternatively, a scan of the particle and heat diffusion coefficients shows an improvement of 10.4% beyond the agreement level achieved by the gradient method. The result is found for an increased transport coefficient compared to what is usually used for TCV L-mode plasmas, suggesting the need for accurate self-consistent turbulence models for predictive boundary simulations. The simulations indicate that \(\sim 65\%\) of the total ionization occurs in the SOL, motivating the inclusion of neutrals in future turbulence simulations on the path towards improved agreement with the experiment. ## I Introduction The power exhaust problem is one of the key challenges faced by the magnetic confinement fusion community. Significant progress has been achieved with the introduction of diverted magnetic geometries such as the single-null configuration planned for ITER[1]. In a single-null plasma, a poloidal magnetic null (X-point) is introduced in the plasma boundary, localizing the hot core plasma some distance away from the vacuum vessel wall and directing the "open" magnetic field lines and thus the majority of the heat flux to specially designed plates called divertor targets. However, it will still be challenging to keep the heat load deposited onto the target plates of future devices such as ITER[2] and SPARC[3], and even more so of a DEMO[4] within technological limits. To address this issue, it is necessary to have reliable modeling of the transport and the underlying physical process governing the "open" field line region called the scrape-off layer (SOL). The investigation of the plasma dynamics in the SOL is difficult due to the limited diagnostic access and the complexity of the most complete theoretical models. Validation of numerical simulations against experimental observables stands as a suitable methodology to test and improve the capabilities of the current models[5; 6]. 
In this approach, as many experimental observables as possible are quantitatively compared to the simulation results, and the overall level of agreement is quantified by an agreement metric, i.e. a single numerical value denoted by \(\chi\). Such a validation provides a robust framework to assess the current agreement and the effect of model improvements and, when the agreement is judged satisfactory, provides confidence in the code outputs and their predictive capabilities. The modeling of the tokamak boundary plasma can be carried out by edge plasma transport codes such as SOLPS-ITER[7], SOLEDGE2D[8] and UEDGE[9], or by edge turbulence codes, as in the recent multi-code validation involving GBS[10], GRILLIX[11], and TOKAM3X[12], which presented the first full-tokamak-size edge turbulence simulations[13] of a diverted single-null plasma in the Tokamak à Configuration Variable (TCV)[14]. The scenario simulated by the turbulent edge codes in this study, referred to as the TCV-X21 reference case, is an L-mode plasma in sheath-limited conditions, chosen to minimize the effect of the neutral dynamics in the divertor volume, as these effects were not included in the turbulence simulations. Instead, the ionization source was prescribed as an input and assumed to be localized in the outer region of the confined plasma. However, a non-negligible effect of neutrals was suggested as a possible cause of the relatively low level of agreement in the divertor region and at the divertor targets in the simulation-experiment comparison[13]. In this work, we validate the SOLPS-ITER code against the TCV-X21 scenario, as a first step to understand the role of the neutrals in this case, with kinetic neutrals simulated with EIRENE. For this purpose we extend the publicly available [13] TCV-X21 dataset (Tab. 1) with two new observables for the neutral dynamics: Balmer line intensities measured by the Divertor Spectroscopy System (DSS) [15] and neutral pressure measurements from the Baratron gauges [16]. We conduct a quantitative validation similar to that of Ref. [13], using the methodology of Ref. [17] (briefly reviewed in Sec. IV). With this, we expect to help guide future edge turbulence simulations of the TCV-X21 reference case including neutral dynamics. We also provide proof-of-principle tests of different approaches to determine the SOLPS-ITER input parameters that result in the best simulation-experiment agreement, in particular, the conjugate gradient method applied to the validation metric. The paper is organized as follows: the TCV-X21 experimental reference case and the associated dataset are introduced and extended in Sec. II. The SOLPS-ITER simulations are described in Sec. III. Then, in Sec. IV, we present the qualitative and quantitative validation results for simulations with the standard upstream-matching approach and the systematic methodology to determine the input parameters based on the quantitative validation result. In Sec. V, we analyze and discuss the results of the validation and the effect of neutrals in the TCV-X21 scenario. Finally, the conclusions are presented in Sec. VI. ## II TCV-X21 experimental dataset and its extension In this work we use the experimental dataset of the TCV-X21 reference case, which is publicly available at [https://github.com/SPCData/TCV-X21](https://github.com/SPCData/TCV-X21).
This scenario is a lower single null L-mode Ohmic plasma in the TCV tokamak [14] with a toroidal field of \(B_{\phi,axis}\simeq 0.95\)T, a plasma current of \(I_{p}\simeq 165\)kA and an electron line-average density \(\langle n_{e}\rangle\sim 2.5\times 10^{19}\)m\({}^{-3}\) measured by the far infrared camera (FIR, Fig. 1(a), vertical cyan line). TCV-X21 includes data in both toroidal field directions to study the effect of drifts, with the convention that "forward" (Fowr) denotes the field direction where the ion \(\nabla B\) drift (\(\textbf{B}\times\nabla B\)) points downwards, from the core towards the X-point and "reversed" (Rev) when it points upwards [13]. Tab. 1 lists the diagnostics, observables and their respective hierarchies (used in the validation metric) of the TCV-X21 dataset. Fig. 1(a) shows the position of the listed diagnostics. The TCV-X21 dataset includes mean and fluctuation profiles of observables covering the divertor targets and volume, the divertor entrance, and the outward midplane. In this work, we only consider the mean profiles because SOLPS-ITER does not predict fluctuation quantities. Detailed information about the observables and diagnostics can be found in Ref. [13]. In this work we add new observables to the dataset, namely the divertor neutral pressure measurements and deuterium Balmer lines, which will be introduced in the following subsections. ### Baratron Pressure Gauges The baratron pressure gauges (BAR) considered here provide measurements of neutral pressure at the TCV floor (\(p_{\text{div}}\)) and in the turbo pump duct (\(p_{\text{mpp}}\) - see Fig. 1(a)). The gauges are installed at the end of extension tubes to protect them from the magnetic field of the tokamak. Therefore, we need a model to relate the measurement to the in-vessel pressure [16]. The energetic atomic and molecular divertor neutrals flowing into the tube undergo thermalization through collisions with the walls and the atoms recombine to form molecules. At the end of the tube, after several bends, the pressure is determined solely by the molecular density at wall temperature, \(T_{\text{wall}}=300\)K. For the experiment-simulation comparison, we use the OD model discussed in Ref. [19] and Ref. [20] to determine \(p_{\text{div}}\) and \(p_{\text{mpp}}\) from SOLPS-ITER outputs. To compensate for the pressure drop in \(p_{\text{mpp}}\) induced by the turbo pump, the experimental data of \(p_{\text{mpp}}\) is multiplied by a factor of \(1.5\)[21]. The measured pressure is averaged over 0.8s of the flattop phase of several, repeat discharges of the TCV-X21 scenario. The main source of uncertainty of the \(p_{\text{div}}\) and \(p_{\text{mpp}}\) measurements is \(\Delta e_{rep}\), the uncertainty related to reproducibility [13], estimated from repeat discharges. 
\begin{table} \begin{tabular}{c c c c c} \hline \hline Diagnostic & Observable & \multicolumn{3}{c}{Hierarchy} \\ & \(h_{Esp}\) & \(h_{Sint}\) & \(H_{J}\) \\ \hline \multirow{4}{*}{Wall Langmuir Probes (LP)} & \(n_{e}\), \(T_{e}\), \(V_{pl}\) & 2 & 1 & 1 / 2 \\ & \(J_{\text{sat}}\), \(\sigma(J_{\text{sat}})\) & 1 & 2 & 1 / 2 \\ & \(\text{skew}(J_{\text{sat}})\), \(\text{kurt}(J_{\text{sat}})\) & 1 & 2 & 1 / 2 \\ & high-field-side targets & \(J_{\parallel}\), \(\sigma(J_{\parallel})\) & 1 & 1 & 1 \\ & \(V_{H}\), \(\sigma(V_{H})\) & 1 & 2 & 1 / 2 \\ \hline Infrared camera (IR) & \multirow{2}{*}{\(q_{\parallel}\)} & \multirow{2}{*}{2} & \multirow{2}{*}{2} & \multirow{2}{*}{1 / 3} \\ & for low-field-side target & & & & \\ \hline \multirow{4}{*}{Reciprocating divertor} & \(n_{e}\), \(T_{e}\), \(V_{pl}\) & 2 & 1 & 1 / 2 \\ & \(M_{\parallel}\) & 2 & 2 & 1 / 3 \\ \cline{1-1} & probe array (RDPA) & \(J_{\text{sat}}\), \(\sigma(J_{\text{sat}})\) & 1 & 2 & 1 / 2 \\ \cline{1-1} & for divertor volume & \(\text{skew}(J_{\text{sat}})\), \(\text{kurt}(J_{\text{sat}})\) & 1 & 2 & 1 / 2 \\ \cline{1-1} & \(V_{H}\), \(\sigma(V_{H})\) & 1 & 2 & 1 / 2 \\ \hline Thomson scattering (TS) & \multirow{2}{*}{\(n\), \(T_{e}\)} & \multirow{2}{*}{2} & \multirow{2}{*}{1} & \multirow{2}{*}{1 / 2} \\ & for divertor entrance & & & & \\ \hline Fast horizontally- & \multirow{2}{*}{\(M_{\parallel}\)} & \multirow{2}{*}{2} & \multirow{2}{*}{1} & \multirow{2}{*}{1 / 2} \\ reciprocating probe (FHRP) & & & & & \\ for outboard midplane & \(\text{skew}(J_{\text{sat}})\), \(\text{kurt}(J_{\text{sat}})\) & 1 & 2 & 1 / 2 \\ \cline{1-1} & \(V_{H}\), \(\sigma(V_{H})\) & 1 & 2 & 1 / 2 \\ \hline Divertor spectroscopy & \multirow{2}{*}{\(D_{5\to 2}\), \(D_{6\to 2}\), \(D_{7\to 2}\)} & \multirow{2}{*}{2} & \multirow{2}{*}{3} & \multirow{2}{*}{1 / 4} \\ system (DSS) & & & & \\ \hline Baratron gauges (BAR) & \multirow{2}{*}{\(p_{\text{tmp}}\)} & \multirow{2}{*}{2} & \multirow{2}{*}{1 / 3} \\ & & & & \\ \cline{1-1} & \(p_{\text{div}}\) & 1 & 2 & 1 / 2 \\ \hline \hline \end{tabular} \end{table} Table 1: **Observables and comparison hierarchies for validation.** Observables constituting the TCV-X21 dataset but not used in the present validation are shown in gray. The hierarchy weighting, \(H_{j}\), is a constant value depending on the number of assumptions and/or models used to obtain the observable, \(H_{j}=[h_{sim}+h_{exp}-1]^{-1}\), see Sec. IV and Appendix A. The data from the Divertor Spectroscopy System (DSS) and the Baratron gauges (BAR) are newly added into the TCV-X21 dataset. ### Divertor Spectroscopy System The Divertor Spectroscopy System (DSS) installed in TCV (Fig. 1(a)) provides measurements of the line-integrated visible radiation at different wavelengths, corresponding to different atomic processes [22]. The DSS system has 30 chords along which it is possible to measure the wavelength spectra with a spectral resolution of up to 0.02 nm [15]. The measurement is a line integral of the emission along a given chord. For each discharge, the time traces are averaged over 0.22s, providing \(D_{n\to 2}\) as a function of the chord number. The final average emission profile is obtained by averaging the profiles from different shots. In this work, we consider three deuterium Balmer lines, \(D_{5\to 2}\), \(D_{6\to 2}\), and \(D_{7\to 2}\). 
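To compare such measurements with simulations, a synthetic brightness can be obtained by integrating the simulated emissivity along each viewing chord. The sketch below is a schematic illustration of this step only: the regular grid, straight-chord geometry and simple grid-point sampling are simplifying assumptions and do not reproduce the actual DSS forward model or the SOLPS-ITER post-processing.

```python
import numpy as np

def chord_brightness(emissivity, r, z, p0, p1, n_steps=200):
    """Line-integrate a 2-D emissivity field along a straight viewing chord.

    emissivity: 2-D array with shape (len(z), len(r)), e.g. in W m^-3 sr^-1
    r, z:       1-D, monotonically increasing grid coordinates in metres
    p0, p1:     chord end points (r, z) in metres
    Returns an approximate brightness (W m^-2 sr^-1).
    """
    t = np.linspace(0.0, 1.0, n_steps)
    rs = p0[0] + t * (p1[0] - p0[0])
    zs = p0[1] + t * (p1[1] - p0[1])
    # simple grid-point lookup (right-neighbour index) at each sample point
    ir = np.clip(np.searchsorted(r, rs), 0, len(r) - 1)
    iz = np.clip(np.searchsorted(z, zs), 0, len(z) - 1)
    samples = emissivity[iz, ir]
    dl = np.hypot(p1[0] - p0[0], p1[1] - p0[1]) / (n_steps - 1)
    return np.trapz(samples, dx=dl)
```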
The main sources of uncertainty of the line brightnesses are \(\Delta e_{rep}\), which is estimated by comparing different shots, and \(\Delta e_{dia}\), the uncertainty due to inherent characteristics of the diagnostics, which is estimated as 15% of the measured intensity. ## III SOLPS-ITER Simulations SOLPS-ITER (Scrape-Off Layer Plasma Simulation-ITER) is composed of the transport code B2.5, which solves the Braginskii multi-fluid equations, and the kinetic neutral Monte Carlo solver EIRENE [23]. EIRENE is coupled self-consistently with B2.5 to calculate the sources and sinks due to plasma-neutral interactions. The simulations in this work consider a multi-component plasma, including carbon impurities and kinetic neutrals and their main reactions in the plasma [24]. Drifts are not included in these simulations, since convergence in low-density, high-temperature plasmas could not be achieved so far for SOLPS-ITER simulations of TCV plasmas. Unlike in the previous turbulence validation, _ad-hoc_ diffusion coefficients for cross-field heat and particle transport are used. The simulations presented here are a good testbed for the role of the neutrals in the TCV-X21 scenario, in particular, enabling the investigation of the distribution of the ionization profile across the SOL. The absence of drifts in these simulations is a limitation, but may help disentangle the origin of the flows in the divertor, i.e., whether they are drift- or transport-driven [25]. Fig. 1(b) shows the computational grid used in this work. The radial particle diffusion and heat conduction are described using spatially uniform anomalous diffusion coefficients. For the simulations presented in Sec. IV.1, we choose a particle diffusivity of \(D_{\perp}=0.2\) m\({}^{2}\)/s and thermal diffusivities of \(\chi_{e,\perp}=1.0\) m\({}^{2}\)/s and \(\chi_{i,\perp}=1.0\) m\({}^{2}\)/s, which were found in previous works to result in a good upstream match for TCV L-mode plasmas [7]. A deuterium gas puff rate of \(\Gamma_{\rm D_{2}}=6.8\times 10^{19}\)/s is used for a close upstream density match, determined after a scan of the gas puff \(\Gamma_{\rm D_{2}}=\{4.5,5.6,6.8,7.2,7.6,8.0,8.4\}\times 10^{19}\)/s. The chemical sputtering coefficient of carbon impurities on the wall is assumed to be \(Y_{\rm chem}=3.5\%\), and the particle recycling coefficient is set to \(R=0.99\). At the divertor targets, sheath Bohm boundary conditions are applied. At the core boundary, the power transferred from the core to the edge is set to be comparable to the experimental value (160 kW). Neutrals crossing the core boundary are returned as fully ionized particles. More details about the boundary conditions used can be found in Ref. [24]. In Sec. IV.2, an iterative method using the overall-agreement metric and the conjugate gradient is used to adapt the input parameters \(\Gamma_{\rm D_{2}}\), \(D_{\perp}\) and \(\chi_{e,\perp}\) to achieve a better overall agreement. In Sec. IV.3, \(D_{\perp}\) and \(\chi_{e,\perp}\) are scanned to find a better overall agreement.
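Such an iterative adaptation of the input parameters can be sketched as follows. This is only a schematic illustration: the agreement metric is replaced by a toy quadratic surrogate (in practice a full SOLPS-ITER run plus validation would be evaluated instead), and a plain steepest-descent update is shown, whereas Sec. IV.2 employs the conjugate gradient method.

```python
import numpy as np

def chi_of(params):
    # Toy stand-in for the expensive step: run a converged SOLPS-ITER case
    # with inputs (Gamma_D2, D_perp, chi_e_perp) and evaluate the overall
    # agreement metric chi against the TCV-X21 dataset.  Here an arbitrary
    # quadratic surrogate is used purely for illustration.
    target = np.array([7.0, 0.3, 1.2])
    return 0.7 + 0.01 * np.sum(((params - target) / target) ** 2)

def finite_difference_grad(params, rel_step=0.05):
    # One-sided finite differences in the inputs; each component costs one run.
    base = chi_of(params)
    grad = np.zeros_like(params)
    for k in range(len(params)):
        trial = params.copy()
        trial[k] *= 1.0 + rel_step
        grad[k] = (chi_of(trial) - base) / (params[k] * rel_step)
    return base, grad

# steepest-descent loop on chi; a conjugate-gradient variant would additionally
# mix each new search direction with the previous one (e.g. Fletcher-Reeves).
params = np.array([6.8, 0.2, 1.0])   # Gamma_D2 [1e19 s^-1], D_perp, chi_e_perp [m^2/s]
for _ in range(5):
    chi, grad = finite_difference_grad(params)
    direction = -grad / (np.linalg.norm(grad) + 1e-12)
    params = params * (1.0 + 0.05 * direction)   # 5% relative step along -grad
```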
Figure 1: (a) Poloidal view of the magnetic reconstruction (dark blue lines) of the TCV-X21 reference case from LIUQE reconstruction [18]. Also shown are: the Langmuir probes (LP, blue circles), the Reciprocating Divertor Probe Array (RDPA, black L-shaped structure) and its covered area (transparent black rectangle), the Thomson Scattering system (TS, red square array), the Fast Horizontal Reciprocating Probe (FHRP, purple rectangle) and its covered area (transparent purple), the Far InfraRed interferometer (FIR, vertical cyan line), the area of sight of the Vertical Infrared Camera (IR, green transparent patched area), the position of the top gas fueling valve (magenta rectangle), the ports of the baratron pressure gauges (dark green boxes on the wall), and the DSS viewing chords (radial orange lines in the divertor). (b) Computational grid used for the SOLPS-ITER plasma model. Regions marked by different colors are 1⃝: the core; 2⃝: the low-field-side (LFS) divertor region; 3⃝: the high-field-side (HFS) divertor region; and 4⃝: the remaining regions of the SOL.

## IV Validation results The validation of the SOLPS-ITER simulations in this work follows the same methodology as used in Ref. [13]. The details of the mathematical model can be found in Ref. [17], and the basic concept of this methodology is summarized as follows: The level of agreement between simulation and experiment is evaluated using a large set of observables and is quantified by the _overall agreement metric_ \(\chi\), where \(\chi=0\) and \(\chi=1\) mean, respectively, perfect agreement and complete disagreement. The fundamental quantity used to calculate \(\chi\) is the _normalised simulation-experimental distance_ \(d_{j}\), which, for each observable \(j\), is defined as: \[d_{j}=\left[\frac{1}{N_{j}}\sum_{i=1}^{N_{j}}\frac{(s_{j,i}-e_{j,i})^{2}}{\Delta s_{j,i}^{2}+\Delta e_{j,i}^{2}}\right]^{1/2} \tag{1}\] where \(e_{j,i}\), \(\Delta e_{j,i}\), \(s_{j,i}\), and \(\Delta s_{j,i}\) denote, respectively, the experimental values, their uncertainties, the simulation values, and their uncertainties, defined on a series of discrete data points \(i\in\{1,2,...,N_{j}\}\). \(d_{j}\to 0\) means a perfect agreement between simulation and experiment for the observable \(j\). Due to the difficulty of providing a rigorous estimate of the simulation uncertainty, we set \(\Delta s_{j,i}=0\) as in Ref. [13]. Other important quantities used in the validation are: the _sensitivity_ \(S_{j}\) of an observable \(j\), which takes values between 0 and 1, approaching 1 for very small relative uncertainties (high precision) of the experimental and simulation observables; and the _hierarchy weighting_ \(H_{j}\), as given in Tab. 1, a value associated with each observable \(j\) that is smaller the higher the number of model assumptions and/or measurement combinations needed to determine the observable. Based on \(d_{j}\) and the weighting factors \(S_{j}\) and \(H_{j}\), a metric \(\chi\) is then evaluated to indicate the overall simulation-experiment agreement over all (or a subset of) the observables, see Appendix A for more information. In addition, the _quality_ \(Q\) is evaluated, which denotes the quality of the validation. This value is higher when a higher number of more directly computed, higher precision observables are included in the validation, see Appendix A.
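A minimal numerical sketch of this metric is given below; the distance follows Eq. (1), while the saturating function that maps each \(d_{j}\) to a level of disagreement and the weighted combination are only schematic stand-ins for the exact definitions of Ref. [17], with illustrative constants.

```python
import numpy as np

def d_j(sim, exp, err_exp, err_sim=0.0):
    # Normalised simulation-experiment distance, Eq. (1).
    sim, exp, err_exp = map(np.asarray, (sim, exp, err_exp))
    return np.sqrt(np.mean((sim - exp) ** 2 / (err_sim ** 2 + err_exp ** 2)))

def chi_overall(d, S, H, d0=1.0, lam=0.5):
    # Schematic overall agreement metric: each d_j is mapped to [0, 1] by a
    # smooth step and combined using the sensitivity S_j and hierarchy H_j.
    d, S, H = map(np.asarray, (d, S, H))
    R = 0.5 * (np.tanh((d - d0) / lam) + 1.0)
    return np.sum(R * S * H) / np.sum(S * H)

# toy example: one observable sampled at 10 radial positions with 15% errors
rng = np.random.default_rng(0)
sim = np.linspace(1.0, 0.2, 10)
exp = sim * (1.0 + 0.1 * rng.standard_normal(10))
print(d_j(sim, exp, err_exp=0.15 * exp))
```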
Some selected profiles showing simulation-experiment comparisons are given in Fig. 2, where the radial coordinate \(r-r_{\rm sep}\) denotes the distance from the separatrix, after mapping along the magnetic flux surface to the outboard midplane. The comparisons at the outboard midplane and the divertor entrance are shown for the electron density \(n_{e}\) and the electron temperature \(T_{e}\) in Fig. 2(a) to 2(d). An appreciable match is found, as expected from the values in Tab. 2 and since the simulations are tuned for a good upstream match. As \(J_{sat}\) is estimated from \(n_{e}\) and \(T_{e}\) in the simulations, its good outboard midplane agreement in Tab. 2 is also expected. The good agreement observed for the upstream \(V_{pl}\) in Tab. 2 is mostly a consequence of large experimental error bars, as can be seen from the lower value of \(S_{j}\) compared to that of \(n_{e}\) and \(T_{e}\). On the other hand, the floating potential \(V_{fl}\) and Mach number \(M_{\parallel}\) at the outboard midplane show considerable discrepancy between simulation and experiment, as indicated by the \(d_{j}\) values in Tab. 2. The simulated value of \(M_{\parallel}\) is approximately zero, different from the experiment and from the simulations in Ref. [13] (not shown). This is attributed to the absence of drifts, which are the basic mechanism of the Pfirsch-Schlüter flows at the outboard midplane [25; 26]. In the divertor volume (Fig. 2(e) to 2(h)), the simulation roughly reproduces the radial shape of the experimental \(n_{e}\) and \(T_{e}\) profiles. However, the simulated \(n_{e}\) profile peaks at the target, while in the experiments it peaks at the X-point. For \(T_{e}\), both simulation and experimental peaks are close to the X-point, with the simulation showing a stronger poloidal gradient. The experimental \(n_{e}\) and \(T_{e}\) profiles show overall similar trends in this region in both field directions, although the density profile is shifted more towards the private flux region in forward field. For the Mach number in the divertor volume (Fig. 2(k) and 2(l)), the SOLPS simulation predicts high values in the region \(r-r_{\rm sep}>0\), where RDPA measured Mach numbers close to zero. At the low-field-side (LFS) target (Fig. 2(i) and 2(j)), the simulation reproduces the peak magnitude of \(T_{e}\), but gives a larger peak \(n_{e}\) value. The overestimation of \(n_{e}\) was also observed in the reversed field direction in previous SOLPS-ITER simulations of TCV L-mode plasma at 1.45 T [7]. The simulated \(n_{e}\) and \(T_{e}\) profiles in Fig. 2(i)-2(j) are narrower than the experimental ones in both field directions. Figure 2: **Comparison of the profiles in the standard approach of matching upstream profiles.** The 1-D experimental profiles in forward field (blue lines) and reversed field (purple lines) and in the SOLPS-ITER simulation (orange lines) for electron density (left column) and electron temperature (right column). The measurements are from the FHRP (see Fig. 1(a)) at the outboard midplane ((a) and (b)), from TS at the divertor entrance ((c) and (d)), and from LPs at the low-field-side divertor target ((i) and (j)). Also shown are the 2-D profiles of electron density (e) and temperature (f) from the simulation and the corresponding reversed field experiments ((g) and (h)) measured by RDPA in the divertor volume. (k) and (l) show the 2-D Mach number measured in the experiment (reversed field) and from the SOLPS simulations.
It is worth noting that in the forward field direction, the \(n_{e}\) experimental profile shows a double peak structure not present in the simulations. Such profile shape is usually attributed to the effect of drifts [27], which are not included in this simulation. Figure 4: **Baratron neutral pressure.** The experimental baratron measurements (blue bars) and the simulation results using the synthetic diagnostic from Ref. [19] (green bars) at (a) the divertor floor (\(p_{\rm div}\)) and (b) the turbo pump (\(p_{\rm turbo}\)). In each subfigure we show the experimental measurements in forward (Frow) and reversed (Rev) field directions, and the simulation value in the standard approach (standard, Sec. IV.1) and for the best agreement cases in the conjugate gradient method (gradient, Sec. IV.2) and the \(\chi_{e,\perp}\) scan (\(D_{\perp},\chi_{e,\perp}\) scan, Sec. IV.3). Figure 3: **Comparison of the DSS Balmer line profiles.** The 1-D experimental profiles of forward field (blue lines) and reversed field (purple lines) and the SOLPS-ITER simulation profiles (orange lines) for three deuterium Balmer line intensities (a) D\({}_{5\to 2}\), (b) D\({}_{6\to 2}\), and (c) D\({}_{7\to 2}\) measured by DSS. Fig. 3 shows the comparison of the Balmer line intensity profiles measured by DSS, where the DSS chords, see Fig. 1 a), are labeled with increasing number from the bottom up to the X-point. The measured intensity in the forward field direction is systematically larger than the reversed field direction. For all three Balmer lines, the simulation successfully reproduces the profile shape, with the two peaks of the intensity, corresponding to the LFS target and the X-point/high-field-side (HFS) target. In the simulation, the location of the second peak (with the larger chord number) matches well with the experiment, while the first peak is shifted. Generally, the simulation underestimates the intensity, especially in the region between the two peaks, i.e., along the leg. At the first peak, the underestimation is small for \(D_{5\to 2}\) and for \(D_{6\to 2}\), while large for \(D_{7\to 2}\). At the second peak, the value in the simulation is much smaller than that in the forward field measurement, while closer to that measured in the reversed field experiments. Fig. 4 presents the experimental and simulation results for the neutral pressure in the two locations in the divertor. Both \(p_{\rm div}\) and \(p_{\rm imp}\) in the forward field direction are higher than in the reversed field direction. The simulation result for \(p_{\rm div}\) in the standard approach is within the range of uncertainty of the forward field case, while the simulated \(p_{\rm imp}\) matches within experimental uncertainty of the reversed field case. Compared to the full-field TCV SOLPS simulations studied in Ref. [7], where the simulated \(p_{\rm div}\) systematically exceeded the measured value by a factor \(\sim 400\%\), here in the reversed field, \(p_{\rm div}\) is overestimated only by \(\sim 50\%\) at most. In summary, the validation in this case gives an overall agreement metric \(\chi=0.770\) for the forward field direction, and \(\chi=0.807\) for the reversed field direction. Good agreement is found for the \(n_{e}\), \(T_{e}\), \(J_{sat}\), and \(V_{pl}\) profiles at the outboard midplane and the \(n_{e}\) and \(T_{e}\) profile in the divertor entrance, for both field directions. 
\(V_{fl}\) in the midplane and \(p_{\rm div}\) in the forward field direction, and \(p_{\rm imp}\) in the reversed field direction, also show good quantitative agreement. Poorer matches are found in \(M_{\parallel}\) at the outboard midplane, \(M_{\parallel}\) and \(V_{fl}\) in the divertor volume, \(q_{\parallel}\) at the low-field-side target, and \(V_{pl}\) at the high-field-side target, in both field directions, and in \(V_{pl}\) and \(V_{fl}\) at the low-field-side target in the reversed field direction. To gain insight into the role of the neutrals in the TCV-X21 scenario, we plot in Fig. 5 the simulated total ionization sources (more precisely, what is shown is the source of electrons due to ionization with the generation of both D\({}^{+}\) and D\({}_{2}^{+}\)). We can observe that in the SOL, most of the ionization happens along the separatrix, especially at the two targets. According to the SOLPS simulation, the ionization in the SOL (regions 2, 3 and 4 in Fig. 1(b)) accounts for \(\sim 65.0\%\) of the total ionization. The HFS divertor region (3 in Fig. 1(b)) accounts for \(\sim 14.9\%\) and the LFS divertor region (2 in Fig. 1(b)) accounts for \(\sim 31.2\%\). Another difference between these SOLPS-ITER simulations and the turbulence simulations in [13] is the inclusion of impurity species. Here the SOLPS simulation includes carbon impurities. Their radiation is relatively weak in this scenario, \(\sim 15\%\) of the total input power. ### Conjugate gradient method We explore here a systematic method to determine SOLPS-ITER input parameter values that lead to an overall improvement of the agreement metric \(\chi\). This is done by minimizing the multi-variable function \(\chi\) (we recall that \(\chi=0\) indicates perfect experiment-simulation agreement): \[\chi(\mathbf{x}),\quad\mathbf{x}=(\Gamma_{\rm D_{2}},D_{\perp},\chi_{e,\perp}), \tag{2}\] considering its dependence on the gas puff rate \(\Gamma_{\rm D_{2}}\), the particle diffusivity \(D_{\perp}\), and the electron thermal diffusivity \(\chi_{e,\perp}\). As a proof of principle, only these three input parameters are used in this test, but one could, in principle, use all input parameters of the simulation subject to tuning. For this task, we apply the conjugate gradient method, an algorithm used to solve unconstrained optimization problems [29, 8]. The main advantage of this method, compared to the gradient descent method, is the avoidance of oscillating behaviors when calculating the gradient directions in the iterative minimization [29]. The algorithm to determine the input parameters can be briefly described as follows: Step 1: The index \(r\) indicates the iteration step number, with \(r\in\{0,1,2,...\}\). In the first iteration, \(r=0\), we compute the gradient \(\mathbf{g}_{0}=\nabla\chi(\mathbf{x}_{0})\) at the starting point \(\mathbf{x}_{0}=(\Gamma_{\rm D_{2},0},D_{\perp,0},\chi_{e,\perp,0})\). This is done using finite differences between \(\chi(\mathbf{x}_{0})\) and three neighbouring simulations. Then, the minimization direction is set as \(\mathbf{s}_{0}=-\mathbf{g}_{0}\). Step 2: For the current \(r\), perform simulations along the direction \(\mathbf{s}_{r}\) using the parameters determined as \(\mathbf{x}^{\prime}_{r+1}=\mathbf{x}_{r}+\lambda\mathbf{s}_{r}\), with the values of \(\lambda\) chosen appropriately. Step 3: Evaluate \(\chi\) for all simulations in Step 2 and determine the local minimum, \(\mathbf{x}_{r+1}\).
Figure 5: **Simulated total ionization source.** In (a) the values at the two targets go up to \(\sim 10^{23}\)m\({}^{-3}\)s\({}^{-1}\), which is saturated in the color code, and are thus shown in the two sub figures (b) and (c). \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{Standard (Forw)} & \multicolumn{2}{c}{Standard (Rev)} & \multicolumn{2}{c}{Gradient (Rev)} & \multicolumn{2}{c}{\(D_{\perp}\)\(\chi_{e,\perp}\) scan(Rev)} \\ Diagnostics & Observables & \(d_{j}\) & \(S_{j}\) & \(d_{j}\) & \(S_{j}\) & \(d_{j}\) & \(S_{j}\) & \(d_{j}\) & \(S_{j}\) \\ \hline & \(n_{e}\) & 0.800 & 0.835 & 1.275 & 0.889 & 0.800 & 0.880 & 0.683 & 0.884 \\ & \(T_{e}\) & 0.359 & 0.733 & 0.907 & 0.791 & 1.151 & 0.799 & 0.567 & 0.792 \\ Fast horizontally & \(J_{sat}\) & 1.523 & 0.905 & 1.845 & 0.894 & 1.780 & 0.890 & 1.559 & 0.891 \\ reciprocating probe & \(V_{pl}\) & 0.797 & 0.695 & 1.386 & 0.734 & 1.248 & 0.743 & 1.179 & 0.745 \\ (FHRP) for outboard & \(V_{fl}\) & 1.749 & 0.703 & 4.217 & 0.835 & 3.851 & 0.846 & 4.091 & 0.830 \\ midplane & \(M_{\parallel}\) & 7.295 & 0.847 & 6.879 & 0.876 & 6.881 & 0.876 & 6.946 & 0.875 \\ \cline{2-10} & (\(\chi\); \(Q\)) & (0.310; 2.218) & (0.501; 2.363) & (0.459; 2.371) & (0.388; 2.362) \\ \hline & \(n_{e}\) & 2.559 & 0.871 & 2.335 & 0.888 & 2.640 & 0.881 & 2.467 & 0.888 \\ & \(T_{e}\) & 2.041 & 0.899 & 1.929 & 0.901 & 1.642 & 0.906 & 1.196 & 0.909 \\ & \(J_{sat}\) & 4.191 & 0.909 & 3.722 & 0.917 & 3.506 & 0.913 & 4.164 & 0.917 \\ Reciprocating divertor & \(V_{pl}\) & 2.588 & 0.869 & 3.748 & 0.884 & 3.367 & 0.890 & 3.087 & 0.892 \\ probe array (RDPA) for divertor volume & \(V_{fl}\) & 0.122 & 0.745 & 2.186 & 0.816 & 21.370 & 0.828 & 22.232 & 0.818 \\ & \(M_{\parallel}\) & 1.944 & 0.898 & 13.329 & 0.901 & 12.870 & 0.899 & 12.374 & 0.895 \\ \cline{2-10} & (\(\chi\); \(Q\)) & (0.979; 2.446) & (0.966; 2.503) & (0.915; 2.509) & (0.829; 2.510) \\ \hline & \(n_{e}\) & 0.848 & 0.876 & 1.148 & 0.898 & 1.393 & 0.891 & 1.153 & 0.894 \\ Thomson scattering & \(T_{e}\) & 0.818 & 0.893 & 1.095 & 0.901 & 1.342 & 0.905 & 0.908 & 0.900 \\ (TS) for divertor & \(\chi\); \(Q\)) & (0.004; 0.884) & (0.045; 0.899) & (0.190; 0.898) & (0.031; 0.897) \\ \cline{2-10} Infrared camera (IR) for low-field-side target & \(q_{\parallel}\) & 0.350 & 0.911 & 6.322 & 0.941 & 6.437 & 0.943 & 6.588 & 0.943 \\ \cline{2-10} & (\(\chi\); \(Q\)) & (1.000; 0.304) & (1.000; 0.314) & (1.000; 0.314) & (1.000; 0.314) \\ \hline & \(n_{e}\) & 3.222 & 0.879 & 2.046 & 0.880 & 1.667 & 0.866 & 1.953 & 0.881 \\ & \(T_{e}\) & 2.170 & 0.891 & 1.907 & 0.866 & 1.599 & 0.876 & 1.293 & 0.880 \\ & \(J_{sat}\) & 3.688 & 0.909 & 2.342 & 0.908 & 2.066 & 0.902 & 2.347 & 0.909 \\ Wall Langmuir probes & \(V_{pl}\) & 3.441 & 0.888 & 4.982 & 0.887 & 4.483 & 0.893 & 4.031 & 0.897 \\ for low-field-side target & \(V_{fl}\) & 2.184 & 0.639 & 4.356 & 0.732 & 4.237 & 0.741 & 4.278 & 0.733 \\ & \(J_{\parallel}\) & 3.451 & 0.746 & 3.015 & 0.768 & 3.028 & 0.767 & 2.903 & 0.755 \\ \cline{2-10} & (\(\chi\); \(Q\)) & (0.985; 2.849) & (0.954; 2.904) & (0.842; 2.906) & (0.841; 2.906) \\ \hline & \(n_{e}\) & 3.360 & 0.886 & 4.269 & 0.901 & 3.046 & 0.888 & 3.602 & 0.901 \\ & \(T_{e}\) & 3.136 & 0.922 & 2.750 & 0.892 & 2.248 & 0.899 & 1.531 & 0.902 \\ & \(J_{sat}\) & 2.079 & 0.878 & 3.702 & 0.900 & 3.054 & 0.892 & 2.992 & 0.900 \\ Wall Langmuir probes & \(V_{pl}\) & 5.234 & 0.915 & 4.756 & 0.884 & 4.490 & 0.890 & 3.972 & 0.895 \\ for high-field-side & \(V_{fl}\) & 2.907 & 0.673 & 3.583 & 0.687 & 3.537 & 0.696 & 3.730 & 
0.687 \\ target & \(J_{\parallel}\) & 3.617 & 0.788 & 3.648 & 0.781 & 3.643 & 0.780 & 3.846 & 0.777 \\ \cline{2-10} & (\(\chi\); \(Q\)) & (0.987; 2.925) & (0.999; 2.913) & (0.994; 2.912) & (0.904; 2.920) \\ \hline & \(D_{5\to 2}\) & 3.460 & 0.869 & 3.014 & 0.870 & 3.240 & 0.862 & 2.830 & 0.876 \\ Divertor spectroscopy & \(D_{6\to 2}\) & 2.218 & 0.753 & 2.627 & 0.825 & 2.774 & 0.818 & 2.481 & 0.835 \\ system (DSS) for divertor volume & \(D_{7\to 2}\) & 3.467 & 0.818 & 2.896 & 0.826 & 3.053 & 0.821 & 2.641 & 0.835 \\ \cline{2-10} & (\(\chi\); \(Q\)) & (0.986; 0.610) & (0.997; 0.630) & (0.998; 0.625) & (0.993; 0.637) \\ \hline & \(P_{\rm div}\) & 0.469 & 0.877 & 3.336 & 0.941 & 1.193 & 0.933 & 2.871 & 0.940 \\ gauges (BAR) for divertor volume & \(P_{\rm tmp}\) & 3.685 & 0.940 & 0.845 & 0.936 & 2.099 & 0.931 & 1.512 & 0.933 \\ \cline{2-10} & (\(\chi Step 4: If an asymptotic convergence is not achieved, the scan continues using the new gradient \(\mathbf{g}_{r+1}=\nabla\chi(\mathbf{x}_{r+1})\), and the new scan direction (the conjugate gradient direction) is set by \[\mathbf{s}_{r+1}=-\mathbf{g}_{r+1}+\frac{g_{r+1}^{2}}{g_{r}^{2}}\mathbf{s}_{r}. \tag{3}\] Step 5: Once \(\mathbf{x}_{r+1}\) and \(\mathbf{s}_{r+1}\) are determined in Step 3 and Step 4, restart from Step 2 with \(r=r+1\) for the next iteration. In this way, \(\mathbf{x}_{r}\) is expected to converge to the value of \(\mathbf{x}\) that minimizes \(\chi\), i.e., that maximizes the agreement. Fig. 6 shows the results of performing two iterations of the conjugate gradient method. The agreement metric is computed with respect to the reversed field case. In this demonstration, the parameters were first normalized to their values at the starting point, such that all quantities have a similar order of magnitude, and then the conjugate gradient method was applied. Finally, the parameters were denormalized back to real values. In each step, in order to find an approximate minimum along \(\mathbf{s}_{r}\), we performed simulations for three positive values of \(\lambda\) and treated the one with minimum \(\chi\) as the minimum point. The gradient needed to determine \(\mathbf{s}\) was calculated numerically, using the metric differences between the point of interest and three neighboring points. The starting point was chosen from the simulation in Sec. IV.1, with \(\Gamma_{\mathrm{D}_{2}}=6.8\times 10^{19}\)/s, \(D_{\perp}=0.2\ \mathrm{m^{2}/s}\), and \(\chi_{e,\perp}=1.0\ \mathrm{m^{2}/s}\). In the first iteration (Fig. 6, red line), the metric decreases first, then increases, revealing a minimum. We select this lowest point as the starting point of the second iteration (Fig. 6, magenta line), where we observe an alternating increase and decrease, without exhibiting a clear asymptotic convergence or improvement of the agreement. Overall, the best case (lowest \(\chi\)) is obtained in the first step, where \(\Gamma_{\mathrm{D}_{2}}=5.7\times 10^{19}\)/s, \(D_{\perp}=0.22\ \mathrm{m^{2}/s}\), and \(\chi_{e,\perp}=1.06\ \mathrm{m^{2}/s}\). The procedure is stopped after these two iterations, as no improvement was achieved in the second step and due to the involved simulation costs (12 simulations were needed in total for these two minimization steps). In Tab. 2, the \(d_{j}\) and \(\chi\) values for the simulations with the smallest overall metric \(\chi\) found with the gradient method are presented in the column "Gradient (Rev)". In general, the two-step demonstration gives an overall improvement of the agreement by 5.5%.
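For reference, the iterative procedure described above can be condensed into the following minimal Python sketch of a Fletcher-Reeves conjugate-gradient search over the normalized inputs \((\Gamma_{\rm D_{2}},D_{\perp},\chi_{e,\perp})\). The function `chi_metric` is a hypothetical stand-in: in the actual workflow every evaluation corresponds to a converged SOLPS-ITER run followed by the validation post-processing, which is why only a few coarse step lengths per iteration are affordable.

```python
import numpy as np

def chi_metric(x):
    """Stand-in for chi(x); in practice each call is a converged SOLPS-ITER
    run plus the validation post-processing. Quadratic bowl for illustration."""
    return 0.8 + 0.1 * np.sum((x - np.array([0.85, 1.10, 1.05])) ** 2)

def grad_fd(f, x, h=0.05):
    """Forward-difference gradient: one extra simulation per input parameter."""
    f0, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

def conjugate_gradient_search(f, x0, n_iter=2, lambdas=(0.1, 0.2, 0.4)):
    x = np.asarray(x0, dtype=float)
    g = grad_fd(f, x)          # Step 1: gradient at the starting point
    s = -g                     # first minimization direction
    for _ in range(n_iter):
        # Steps 2-3: coarse 1-D search along s with a few positive step lengths.
        x = min((x + lam * s for lam in lambdas), key=f)
        g_new = grad_fd(f, x)  # Step 4: new gradient and conjugate direction
        s = -g_new + (g_new @ g_new) / (g @ g) * s
        g = g_new              # Step 5: continue with the next iteration
    return x

# Inputs normalized to the Sec. IV.1 starting point (Gamma_D2, D_perp, chi_e_perp).
x_best = conjugate_gradient_search(chi_metric, [1.0, 1.0, 1.0])
print("best normalized parameters:", x_best, " chi =", chi_metric(x_best))
```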
The majority (21 out of 32) of the observables have been improved as indicated by the decrease of the corresponding \(d_{j}\). Among them, 11 observables for the gradient method show a decrease of the \(d_{j}\) value larger than 10%. Several observables have been significantly improved (from disagreement to agreement), for example, the \(n_{e}\) and \(T_{e}\) profiles measured at the LFS target by the LPs, \(T_{e}\) of RDPA, and \(p_{\mathrm{div}}\). ### \(D_{\perp}\) and \(\chi_{e,\perp}\) scans As an alternative of the conjugate gradient method, we also conduct here a scan of \(D_{\perp}\) and \(\chi_{e,\perp}\), on the grids spanned by \(D_{\perp}=\{0.15,\ 0.2,\ 0.25\}\ \mathrm{m^{2}/s}\) and \(\chi_{e,\perp}=\{1.0,\ 1.5,\ 2.0\}\ \mathrm{m^{2}/s}\), which are plotted as blue triangles in Fig. 6. For these nine simulations, the deuterium gas puff is fixed to be \(\Gamma_{\mathrm{D}_{2}}=6.8\times 10^{19}\ \mathrm{/s}\). In Fig. 6, we find that the simulation with the largest \(D_{\perp}\) and \(\chi_{e,\perp}\) value, \(D_{\perp}=0.25\ \mathrm{m^{2}/s}\) and \(\chi_{e,\perp}=2.0\ \mathrm{m^{2}/s}\) gives the best agreement. In Tab. 2, the \(d_{j}\) and \(\chi\) values for the simulation with the smallest overall metric \(\chi\) in the \(D_{\perp}\) and \(\chi_{e,\perp}\) scan are given in column "\(D_{\perp}\)\(\chi_{e,\perp}\) scan (Rev)". The overall increase of agreement given by \(\chi\) is \(\sim 10.4\)%. 23 out of 32 observables have been improved as indicated by the decrease of their \(d_{j}\). Among them, 14 observables show a decrease of their \(d_{j}\) value by more than 10%. Several observables have been significantly improved (from disagreement to agreement), for example, the \(T_{e}\) profile from LPs at the LFS and HFS targets, and \(T_{e}\) from RDPA. ## V Discussion In Sec. IV.1, a SOLPS-ITER simulation without drifts and with constant particle and energy diffusivity \(D_{\perp}\) and \(\chi_{e/i,\perp}\), was validated against the TCV-X21 reference case. The standard TCV L-mode values for \(D_{\perp}\), \(\chi_{e,\perp}\) and \(\chi_{i,\perp}\) were used [7] and the gas puff rate was manually tuned to have a good upstream density match. The simulation shows good agreement with experimental data at the outboard midplane and the divertor entrance, with the exception of the parallel Mach number. The agreement is especially good for the electron density \(n_{e}\) and temperature \(T_{e}\). When approaching the divertor targets, a less good agreement is found, but the simulation captures the shape and order of magnitude of the \(n_{e}\) and \(T_{e}\) profiles. The quantitative match with the floating potential \(V_{fl}\), plasma potential \(V_{pl}\), parallel Mach number \(M_{\parallel}\), and parallel heat flux \(q_{\parallel}\), are not satisfactory. The large experimental-simulation distance for \(q_{\parallel}\) can be mainly attributed to the small experimental uncertainties and a shift of the profile peak positions (Not shown). The mismatch of \(V_{fl}\) and \(V_{pl}\) with experimental data is mainly attributed to the omission of drifts in the simulations. In Sec. IV.2 and Sec. IV.3, using two different methods, we obtained improved quantitative agreement compared with the standard approach in Sec. IV.1. The best agreement case in the conjugate gradient method of Sec. IV.2 features a decrease of gas puff, and slight increases in the two transport coefficients. From Fig. 
7(a), we can observe that the significant improvement in the \(d_{j}\) of the outer target \(n_{e}\) is mainly due to the decrease of the peak value, as a result of a reduced gas puff. The improved match of \(p_{\mathrm{div}}\) in the divertor region for the reversed field case is also related to the decrease of the gas puff, resulting in a decrease of neutral pressure (Fig. 4). The best agreement case in the \(D_{\perp}\) and \(\chi_{e,\perp}\) scan features a significant increase of the transport coefficients, which leads to an increase of the fall-off length of \(n_{e}\) and \(T_{e}\) at both targets and improves the match with the experiment profile. Similarly, \(T_{e}\) in the divertor volume displays a shallower radial decrease, which agrees better with the RDPA measurements. This may indicate that the experimental case has a higher perpendicular transport compared to what is assumed in the simulation with the standard approach. The conjugate gradient method does not give the overall best agreement (lowest metric \(\chi\)) although it does improve the result marginally. This can be attributed to several possible factors. First, the conjugate gradient method does not guarantee the increase of the agreement level in every iteration step. As shown in an example in Fig. 8, which aims to reproduce a similar behavior as observed in Sec. IV.2, two iteration steps are not enough to reach the minimum point and non-monotonic behaviors can be found in the second iteration. To go towards the minimum point, more iteration steps may be needed. Second, the finite difference method used for the gradient calculation might introduce non-negligible errors. In the calculation of the gradient in iteration 2, we assumed the gradient along the first direction of the 1-D search to be zero, which is also an approximation. Third, the step length of each 1-D search is limited by the numerical cost of each simulation. Therefore, the 1-D minimization can only be estimated approximately, with an error of the order of the step length. Possible solutions to these problems could be using other methods for the numerical differentiation, for example the central difference method, to get a better estimate of the gradient; or trying other minimization methods independent of the evaluation of the gradients, for example, the multi-dimensional simplex method [29]. Compared to the improvement it brought, the conjugate gradient method tested here is found to be a computationally too expensive approach. The ionization profile given by the SOLPS-ITER simulation (Fig. 5) is clearly different to what was assumed in the turbulence simulations in Ref. [13], where the ionization sources were assumed to be localized in the outer region of the confined plasma. This motivates further studies to add more realistic neutral particle profiles, or self-consistent inclusion of neutrals, to the turbulence simulations. To explore the effect of neutrals in the flows in the divertor region, we plotted the parallel Mach number profile along a magnetic flux surface at \(r-r_{\rm sep}=0.5\) cm in Fig. 9. The GBS turbulence simulation without neutrals from Ref. [13] and the SOLPS simulation results are of the same order of magnitude. 
The simulated parallel flows point towards the target (positive Mach number) and reveal a significant flow all along the divertor leg, being somewhat weaker in SOLPS-ITER. Instead, the Mach number measured with RDPA is much smaller, close to zero. Both GBS and SOLPS simulations show an increasing Mach number when approaching the divertor target, while the RDPA measurements feature a flat profile. Comparing the GBS simulations in reversed and forward toroidal field directions, we find that the effect of drifts is small compared with the difference between simulation and RDPA measurement. Ref. [13] suggested that the difference between the GBS simulations and the RDPA measurements could be due to the ionization source along the divertor leg, potentially located primarily just in front of the target. In this study, using the SOLPS simulation with neutrals, the Mach number is lower than in GBS, but the flows in the simulations are still significant along the entire divertor leg and considerably higher than in the experiment. This raises questions about the Mach number measurements with RDPA in these conditions, or about the model used for their interpretation. Further investigation is needed in order to disentangle the differences between the flow velocities presented here. Figure 6: **Validation metric of simulations using the gradient method and parameter grid scan.** The \(\chi\) of each simulation is plotted as a function of the gas puff \(\Gamma_{\rm D_{2}}\), the particle diffusivity \(D_{\perp}\), and the electron thermal diffusivity \(\chi_{e,\perp}\). The gas puff scan of Sec. IV.1 is shown by the green circles, while the \(D_{\perp}\) and \(\chi_{e,\perp}\) scan is plotted as blue triangles. The simulations of the first and second iteration of the conjugate gradient method are plotted as red and magenta squares, respectively. The arrows denote the directions of searching. Figure 7: **Comparison of profiles from the standard simulation in Sec. IV.1 (starting point), the best case from the gradient method, and the best case from the \(D_{\perp}\) and \(\chi_{e,\perp}\) scan.** We show the 1-D experimental profiles of reversed field (purple lines) and the SOLPS-ITER simulation profiles for (a) electron density and (b) electron temperature measured by the LPs at the low-field-side divertor target, and (c) electron density and (d) electron temperature measured by the LPs at the high-field-side divertor target. We also show (e) the 2-D profile of temperature measured by RDPA, and (f)-(h) the corresponding simulation results. ## VI Conclusion SOLPS-ITER transport fluid simulations without drifts and with uniform particle and energy cross-field transport coefficients were qualitatively compared and quantitatively validated against the TCV-X21 experimental reference case from Ref. [13]. Three Balmer lines measured across the outer divertor leg by DSS and two divertor neutral pressure measurements from the Baratron gauges were added to the publicly available TCV-X21 dataset. In the standard approach, where SOLPS-ITER input parameters are tuned to match upstream quantities, qualitative comparisons of profiles from upstream to the divertor were carried out. As expected, in this standard approach, agreement between simulation and experiment is found in the outer midplane and at the divertor entrance, with the exception of parallel Mach flows. Reduced agreement is found in the divertor volume and at the divertor targets, for quantities such as density, temperature, and plasma potential profiles.
Despite TCV-X21 being a (near) sheath-limited plasma designed to minimize the effect of neutrals in the SOL/divertors, the simulation still finds \(\sim 65.0\%\) of the ionization happening in the SOL, with \(\sim 14.9\%\) in the HFS divertor region and \(\sim 31.2\%\) in the LFS divertor region. The simulation also shows \(\sim 15\%\) of the input power to be radiated by carbon impurities. Using the quantitative validation metric \(\chi\), in a proof-of-principle test, we use the conjugate gradient method and \(D_{\perp}\) and \(\chi_{e,\perp}\) scans to improve the agreement level, resulting in a 5.5% and 10.4% improvement, respectively, compared to the results achieved using the standard approach. This suggests that a reduced gas puff and increased particle and energy transport coefficients, compared to what is used when exclusively trying to match upstream profiles, lead to a better match with the experimental case, mainly via decreasing the peak target density and broadening the density and temperature profiles. While the performance achieved here with the conjugate gradient method is rather modest, this method may be improved by using finer steps and better numerical differentiation methods. Other algorithms, such as the multidimensional simplex method, may, however, be a better solution for the iterative determination of the input parameters that result in an optimal match with the experiment. Figure 8: **A simple example that could give a similar behavior for the conjugate gradient method as in our study.** For illustration, we consider a 2-D minimization problem with the filled contour plot. The first and second 1-D searches are denoted by the red and magenta dashed lines. The red arrow represents the direction of the first gradient (opposite to the first 1-D search direction), while the magenta arrow represents the second gradient. Figure 9: **The Mach number along the separatrix to the LFS target.** These 1-D plots consider the parallel Mach number along a flux surface (at \(r-r_{\text{sep}}=0.5\text{ cm}\)) in the SOL near the LFS separatrix. The RDPA measurements are plotted by solid lines with shaded errorbars. The SOLPS simulations from Sec. IV.1 with three gas puff values are given together with GBS simulations in Ref. [13]. The SOLPS simulations with neutrals show a significant portion of neutral ionization to occur in the SOL, a major difference compared with the assumption used in the first turbulence code validation in the TCV-X21 validation case in Ref. [13]. The parallel flows in the divertor observed in SOLPS and in GBS turbulence simulations from Ref. [13] are similar in shape, with the GBS divertor flows systematically larger in comparison to the SOLPS flows. This suggests some flow reduction in the divertor by the neutrals. The parallel Mach numbers from SOLPS-ITER are, however, still substantially larger than those measured with RDPA, raising questions on the latter that will be further explored in the future. The results in this work provide useful information for future turbulence simulations of TCV with neutrals, while suggesting that the contribution of the neutrals to the flow velocity and, therefore, to the parallel heat flow towards the targets, should be further investigated. ## VII Acknowledgement This work has been carried out within the framework of the EUROfusion Consortium, partially funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion).
The Swiss contribution to this work has been funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission or SERI. Neither the European Union nor the European Commission nor SERI can be held responsible for them. This work was supported in part by the Swiss National Science Foundation. ## Appendix A Definition of the quantities in the validation methodology This appendix summarizes the quantities used in the validation procedure, following the same method as in Ref. [13]. The sensitivity \(S_{j}\), for the observable \(j\), is given by \[S_{j}=\exp\left(-\frac{\sum_{i}\Delta e_{j,i}+\sum_{j}\Delta s_{j,i}}{\sum_{i} \left|e_{j,i}\right|+\sum_{i}\left|s_{j,i}\right|}\right), \tag{10}\] where \(e_{j,i}\), \(\Delta e_{j,i}\), \(s_{j,i}\), and \(\Delta s_{j,i}\) denote, respectively, the experimental values, their uncertainties, the simulation values, and their uncertainties, defined on a series of discrete data points \(i\in\{1,2,...,N_{j}\}\). The hierarchy weighting \(H_{j}\) is defined as \[H_{j}=(h_{sim}+h_{exp}-1)^{-1}, \tag{11}\] where \(h_{sim}\) and \(h_{exp}\) are the simulation and experimental primacy hierarchy level for an observable, being higher the higher number of assumptions and/or measurement combinations used to obtain the observable. The overall agreement metric \(\chi\) and the quality \(Q\) of a set of observables are obtained by: \[\chi=\frac{\sum_{j}R_{j}\left(d_{j}\right)H_{j}S_{j}}{\sum_{j}H_{j}S_{j}}, \qquad Q=\sum_{j}H_{j}S_{j}, \tag{12}\] where _the level-of-agreement function_\(R_{j}(d_{j})\) is an increasing function of the normalised simulation-experimental distance \(d_{j}\) (eq. 1), defined as \[R_{j}\left(d_{j}\right)=\frac{\tanh\left[\left(d_{j}-1/d_{j}-d_{0}\right)/ \lambda\right]+1}{2}. \tag{13}\] In this work we set \(d_{0}=1\), \(\lambda=0.5\), as in Ref. [17]. \(R_{j}\) takes values between 0 and 1. It is used to unify the distance to a level of agreement with fixed range, from perfect agreement \((0)\) to complete disagreement within errorbars \((1)\).
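As a compact reference for the quantities defined above, the following Python sketch evaluates \(d_{j}\), \(S_{j}\), \(H_{j}\), \(R_{j}\), and the composite \(\chi\) and \(Q\) of Eqs. (1) and (10)-(13); the observable arrays used here are illustrative placeholders, not data from this work.

```python
import numpy as np

def distance(e, de, s, ds=None):
    """Normalised simulation-experiment distance d_j (Eq. 1); ds = 0 as in the text."""
    ds = np.zeros_like(s) if ds is None else ds
    return np.sqrt(np.mean((s - e) ** 2 / (de ** 2 + ds ** 2)))

def sensitivity(e, de, s, ds=None):
    """Sensitivity S_j: approaches 1 for small relative uncertainties."""
    ds = np.zeros_like(s) if ds is None else ds
    return np.exp(-(de.sum() + ds.sum()) / (np.abs(e).sum() + np.abs(s).sum()))

def level_of_agreement(d, d0=1.0, lam=0.5):
    """R_j(d_j): maps the distance to a level of agreement in [0, 1]."""
    return 0.5 * (np.tanh((d - 1.0 / d - d0) / lam) + 1.0)

def overall_metric(observables):
    """chi and Q over a list of dicts with keys e, de, s, h_sim, h_exp."""
    R, W = [], []
    for ob in observables:
        d = distance(ob["e"], ob["de"], ob["s"])
        S = sensitivity(ob["e"], ob["de"], ob["s"])
        H = 1.0 / (ob["h_sim"] + ob["h_exp"] - 1.0)   # hierarchy weighting
        R.append(level_of_agreement(d))
        W.append(H * S)
    R, W = np.array(R), np.array(W)
    return (R * W).sum() / W.sum(), W.sum()            # (chi, Q)

# Illustrative example with two fake observables (placeholder numbers only).
obs = [
    {"e": np.array([1.0, 2.0, 3.0]), "de": np.array([0.2, 0.2, 0.3]),
     "s": np.array([1.1, 1.8, 3.3]), "h_sim": 1, "h_exp": 1},
    {"e": np.array([5.0, 4.0]), "de": np.array([1.0, 1.0]),
     "s": np.array([7.0, 6.5]), "h_sim": 1, "h_exp": 2},
]
chi, Q = overall_metric(obs)
print(f"chi = {chi:.3f}, Q = {Q:.3f}")
```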
2304.13069
In-Lab X-ray Particle Velocimetry for Multiphase Flows: Design Principles and Demonstration of $O$(1 kHz) XPV
We combine X-ray-specific tracer particles, a photon counting detector, and a liquid metal jet anode X-ray source to achieve $O$(1 kHz) X-ray imaging speeds in the laboratory, 15$\times$ faster than previous comparable studies with $O$(50 $\mu$m) tracers. To examine the limits of this measurement technique we conduct three experiments: 2D and 3D X-ray particle velocimetry (XPV) of Poiseuille pipe flow, 3D XPV of flow around a Taylor bubble, and 3D scalar mixing with a laminar jet. These experiments demonstrate the performance improvement achievable by combining the aforementioned elements, the applicability to multiphase flows and deforming systems, and the potential to capture scalar and vector quantities simultaneously. Most importantly, these experiments are conducted with a laboratory-scale system, showing that in-lab X-ray particle velocimetry techniques are now becoming usable for a wider range of flows of interest. Furthermore, the design of XPV experiments is discussed to clarify the trade offs between achievable imaging speed, domain size and spatiotemporal resolution.
Jason T. Parker, Till Dreier, Daniel Nilsson, Simo A. Mäkiharju
2023-04-25T18:02:52Z
http://arxiv.org/abs/2304.13069v2
# \(O(1\) kHz) In-Lab X-ray Particle Velocimetry for Multiphase Flows ###### Abstract We combine X-ray-specific tracer particles, a photon counting detector, and a liquid metal anode X-ray source to achieve \(\mathbf{O}(1\) kHz) X-ray imaging speeds in the laboratory, \(15\times\) faster than previous comparable studies. To demonstrate the capabilities of these imaging speeds, we conduct three experiments: 2D and 3D X-ray particle tracking velocimetry (XPTV) of Poiseuille pipe flow, 3D XPTV of flow around a Taylor bubble, and 3D scalar mixing with a laminar jet. These experiments demonstrate the performance improvement of combining the aforementioned elements, the applicability to multiphase flows and deforming systems, and the potential to capture scalar and vector quantities simultaneously. Most importantly, these experiments are conducted in the laboratory, showing that in-lab X-ray particle velocimetry techniques are now usable for a wider range of flows of interest. X-ray, Particle Velocimetry, Flow Visualization, Multiphase, Mixing ## 1 Introduction Particle tracking velocimetry and particle image velocimetry are commonly used in contemporary fluid dynamics experiments. One of the major limitations of these techniques, though, is their inability to measure flows that are opaque to visible wavelengths of light. The opacity could be due to either the surrounding media, the fluid itself, or multiple refractive interfaces. X-rays, on the other hand, have an index of refraction near unity, and many materials are transparent to X-rays. Recently, researchers have started to combine X-ray imaging and particle-based velocimetry to study multiphase flows (Aliseda and Heindel (2021); Makiharju et al (2017, 2013); Ganesh et al (2016); Heindel et al (2008); Seeger et al (2001); Rodriguez and Makiharju (2023)), porous media flows (Makiharju et al (2022); Bultreys et al (2022); Kingston et al (2015); Gollin et al (2017)), biological flows (Antoine et al (2013); Jamison et al (2012); Kim and Lee (2006); Park et al (2016)), and many others that are difficult to observe with visible light image-based measurement techniques. So far, however, the temporal resolution of X-ray velocimetry and other X-ray imaging-based techniques is insufficient to resolve many flows of interest. Wider availability of X-ray particle tracking velocimetry (XPTV) and particle image velocimetry (XPIV) could offer previously unobtainable insight into many of these flows. Unlike their visible light counterparts, 2D and tomographic XPTV and XPIV - all of which comprise a class of measurement we call X-ray particle velocimetry (XPV) - are not currently capable of reaching frame rates \(O(1\)-\(100\) kHz) needed for many in-lab applications (i.e., outside of synchrotrons). Although continuously improving, non-scintillator based X-ray imaging detectors are orders of magnitude slower than visible light high-speed cameras. Even if X-ray imaging detectors were fast enough, the relatively modest polychromatic photon flux generated by most in-lab X-ray sources imposes another limitation. A brighter X-ray source enables shorter exposure times while retaining usable image quality. As a result, most prior work in XPV has been done at synchrotrons, which can achieve photon fluxes order of \(O(10^{17})\) ph/m\({}^{2}\)/sec (MacDowell et al (2012)). Laboratory X-ray sources are orders of magnitude dimmer than a synchrotron. Lee and Kim (2003) captured the first XPIV measurement at the Pohang Light Source. 
They generated contrast by attenuation, with edge enhancement due to refraction. Their experiment measured the 2D-projection of Poiseuille flow. Im et al (2007) used phase contrast imaging at the Advanced Photon Source. Their study also used tomography to reconstruct the velocity field of pipe flow in 3D. Measurements of simple, low speed flows eventually gave way to more complex, high speed flows such as the cavitating sheet experiment recently performed by Ge et al (2022) at the Advanced Photon Source synchrotron. Synchrotrons, while incredibly bright, impose practical limitations on the types of experiments one can run. Typically, synchrotron beam widths are on the order of a few millimeters, making large experiment geometries impossible. Multiphase flows in particular are notoriously challenging to scale because complete similarity is rarely achieved in scaled experiments. Furthermore, synchrotron beam time is in finite supply and high demand. For most users, using a synchrotron requires costly, inconvenient travel. When one does get time on a synchrotron, it is often for a few hours to a few days, making long term or complicated experiments impractical to conduct. The limitations posed by synchrotrons motivate research to make XPV techniques practical for laboratory use. Prior work has demonstrated that XPV is feasible in a laboratory setting with currently available equipment (Parker and Makiharju (2022); Parker et al (2022); Makiharju et al (2022); Lappan et al (2020)). However, most of these experiments use exposure times \(O\)(10-100 ms) - too slow for most flows of interest. Many of these systems also use large, high Stokes number particles (Heindel et al (2008); Lappan et al (2020); Drake et al (2012)). While they may be appropriate for the selected application, large particles are not suitable for every flow. For energetic or low-speed flows they may not be accurate flow tracers due to inertial and buoyancy effects, respectively. The three primary limitations for in-lab XPV currently are detector performance, tracer contrast, and source brightness (Makiharju et al (2022)). The means to address these shortfalls work best in combination: "noiseless" photon counting detectors (PCDs) (Russo (2018)) with new, high contrast, neutrally buoyant flow tracers (Parker et al (2022)), and brighter in-lab sources such as liquid metal jet (LMJ) anode sources (Hemberg et al (2003)). Previous work has demonstrated the effectiveness of the former two approaches (Parker et al (2022)). This study combines for the first time a LMJ X-ray source with X-ray imaging-specific tracer particles and a PCD. Laboratory-scale LMJ X-ray sources can be orders of magnitude brighter than typical laboratory solid target sources for a given focal spot size \(O\)(10 um). A jet of liquid metal acts as the anode, overcoming the thermal limitations of stationary or rotating anode sources. That is, LMJ sources can achieve orders of magnitude higher output power density, achieving greater brightness without defocusing the focal spot. For this study, we use an Excillum MetalJet E1+ X-ray source to capture up to 1 kHz X-ray footage of _i_) laminar pipe flow, _ii_) Taylor bubble flow, and _iii_) laminar jet scalar mixing. We demonstrate that LMJ sources, coupled with a PCD and contrast-optimized, nominally neutrally buoyant \(O\)(50 um) tracer particles, enable particle tracing in \(O\)(10 mm) domains with exposure times roughly \(15\times\) lower than previously achieved.
1 kHz frame rates enable domain-resolving full 360 degree computed tomography (CT) up to 3 Hz. These advancements make resolving flow with speeds \(O\)(100 mm/s) possible with an \(O\)(50 um) tracer particle. Such flow speeds are well within the range of human blood flow (Klarhofer et al (2001)), for example. Many porous media flows fall into this category as well (Makiharju et al (2022); Bultreys et al (2022)). In effect, this study demonstrates that laboratory XPV is currently usable for studying previously inaccessible flows of interest. This paper is organized as follows: section 2 details the flow experiment designs, X-ray source, tracer particles, and detector; section 3 discusses the experiment results; finally, section 4 provides the conclusions and future outlook for laboratory XPV. Additional details on image processing and particle tracking algorithms are provided in the appendices. ## 2 Experiment Design ### Flow Setup Three experiments are conducted with one setup to explore what is achievable with a brighter source in combination with improved flow tracers and a PCD. As discussed in section 1, many of the early synchrotron XPV experiments that focused on technique development measured developed pipe flow. It is straightforward to compare to theory; we replicate that experiment in the laboratory as the first of the XPV experiments for this study. This also enables clear comparison to Parker et al (2022), which used an identical pipe, fluid, and nearly identical tracer particles. This comparison usefully elucidates the performance improvement due to the brighter source. We study developed pipe flow with both 2D-projected and tomographic XPTV (TXPTV). The second experiment considers flow around a Taylor bubble, the results from which can be compared to Makiharju et al (2022), which used a similar setup to study a Taylor bubble with a traditional solid-target X-ray source and silver coated hollow tracers particles developed for visible light PIV. Lastly, we study the 3D mixing of a potassium iodide (KI)-water solution jet being injected into glycerine with high-speed CT. Figure 1 depicts the X-ray imaging setup. For these experiments, the source-to-object distance (SOD) is 56.5 mm; the source-to-detector distance (SDD) is 521.5 mm. We measure the former from the focal spot to the center of the pipe; the latter from the focal spot to the face of the detector pixel plane. The field of view is 22.2 mm \(\times\) 2.8 mm due to geometric magnification. Figure 2 shows the flow experiment setup. A plastic 6.35 mm (0.25 in.) inner diameter, 9.53 mm (0.375 in.) outer diameter polycarbonate pipe is held at the top to linear motion stages (Optics Focus MOX-06-150, MOX-06-200, and MOX-06-100-B) and a LAB Motion Systems RT100S air bearing rotation stage. The RT100S air bearing stage has a maximum rotation speed of 3 Hz, which constrains maximum the CT scan rate in these experiments. The pipe is connected to a T-joint that enables particle and fluid seeding, as well as pumping with a Harvard Apparatus Pump 33 Dual Drive System (HAP33DDS). The valve to the syringe pump is closed during seeding while the valve to the seeding syringe is open, and vice versa during experiment runs. The HAP33DDS pump is able to drive two syringes independently and simultaneously. The first syringe contains pumping fluid (in this case glycerine); the other contains air for the Taylor bubble or KI-water solution for the jet. 
The HAP33DDS is accurate to 0.25% of the flow rate and can achieve flow rates ranging from (1.02 pL/min to 106 mL/min) depending on the syringe size. Our experiments use glycerine and range from \(Re=8.7\times 10^{-4}\) for the pipe flow TXPTV experiments to \(Re=486\) for the KI jet. A 3.18 mm (0.125 in.) diameter brass pipe is inserted concentrically and terminates below the plastic pipe in the T-joint as seen in figure 1(b). For the Taylor bubble experiments, this pipe is connected to the second syringe, which is full of air. The Taylor bubble is released from this pipe into the plastic pipe test section. For the KI jet experiments, this pipe is connected to a syringe full of KI-water solution, which is then ejected from the brass pipe into the test section. Figure 1: The X-ray imaging setup. The flow setup is placed in front of the X-ray source aperture. A LAB Motion RT100S air bearing rotation stage controls rotation of the pipe. Figure 2: (a) Schematic of the flow setup where the test section is connected to supply lines via a swivel joint that rotates freely. (b) The inside of the flow setup shows a brass pipe mounted concentrically within the plastic pipe. The Taylor bubble and KI jet are injected via this brass pipe. For these experiments, we use pure glycerine as the working fluid. The glycerine temperature is nominally constant at 27.6degC, the density is 1260 kg/m\({}^{3}\), and the dynamic viscosity is 0.73 Pa\(\cdot\)s. Glycerine has an entrance length two orders of magnitude shorter than water for the same velocity. This allowed us to test higher velocities while retaining a fully-developed pipe flow. Glycerine also retarded the rise of the Taylor bubble, making it possible to reconstruct its morphology with the 3 Hz full angle CT acquisition rate. The glycerine is seeded with hollow carbon tungsten-coated microsphere (CW) tracer particles. ### Experiment Parameters For the 2D-projected pipe flow experiments the glycerine-tracer mixture is pumped at 42.75 mL/min, yielding a center line speed of 45 mm/s (\(Re=0.40\)). Images are captured at 1 kHz. A much slower flow is used for the TXPTV experiments to accommodate the lower temporal resolution. For the pipe flow experiments, the pipe is rotated at 720 degrees per second (dps) for a full 360deg CT temporal resolution of 2 Hz. The fluid is pumped at 0.0760 mL/min, resulting in a 0.08 mm/s center line flow speed (\(Re=8.7\times 10^{-4}\)). At 720 dps, the centripetal acceleration is \(0.05g\) at the pipe wall, where \(g\) is the acceleration due to gravity on earth. At 720 dps, then, we can safely neglect the effect of centripetal acceleration on the flow. Before image acquisition begins the pipe is allowed to rotate for at least 5 seconds so that liquid will be experiencing solid body rotation when the data is captured. The characteristic time scale for flow inside a rotating cylinder is given by \(\tau=R^{2}\nu^{-1}\), where \(\nu\) is the kinematic viscosity of glycerine and \(R\) is the pipe radius. At the temperature we operate at, \(\tau=0.02\) s - shorter than the duration we wait before capturing data. At low \(Re\), the swirling flow from entering the pipe should not have a significant effect (Yao and Fang (2012)). Our TXPTV pipe flow measurements do not detect azimuthal velocity components. In the Taylor bubble experiments 0.5 mL of air is injected through the ejection pipe within the T-joint. The CW-laden glycerine is already seeded into the viewing section. 
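The rotation-related numbers quoted here and in the Taylor bubble discussion below (the spin-up time scale and the centripetal acceleration at the pipe wall) follow directly from the stated fluid properties and geometry; a short sketch of the arithmetic, using only values given in the text, is shown below.

```python
import numpy as np

# Values stated in the text.
R = 6.35e-3 / 2          # pipe inner radius (m)
rho, mu = 1260.0, 0.73   # glycerine density (kg/m^3) and dynamic viscosity (Pa s)
g = 9.81                 # gravitational acceleration (m/s^2)

nu = mu / rho            # kinematic viscosity (m^2/s)
tau_spinup = R**2 / nu   # characteristic spin-up time of liquid in the pipe (s)

for dps in (720.0, 1080.0):            # rotation rates used for the CT scans
    omega = np.deg2rad(dps)            # angular speed (rad/s)
    a_wall = omega**2 * R              # centripetal acceleration at the inner wall
    print(f"{dps:.0f} dps: a_wall = {a_wall / g:.2f} g")
print(f"spin-up time tau = {tau_spinup:.3f} s")
# -> roughly 0.05 g at 720 dps, 0.11 g at 1080 dps, and tau ~ 0.02 s, as quoted.
```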
There is no pump-induced co-flow; the only flow is due to the Taylor bubble motion. The bubble rises at approximately 1.9 mm/s (\(Re=0.014\)). Images are captured at a 1 kHz frame rate; the pipe is rotated at 1080 dps (3 Hz). Although for the Taylor bubble \(\tau=0.65\) s (assuming air fills the pipe), we do note a slight azimuthal component in our results. At this rotation speed the centripetal acceleration in the pipe at the wall is \(0.11g\). This could begin to distort the Taylor bubble shape. As shown later, though, this centripetal acceleration is not a major concern for the tracers since they are nominally density matched with the liquid. To increase the CT acquisition rate without concern for centripetal acceleration effects, one could rotate the source-detector pair as opposed to the flow loop, or utilize multiple source-detector pairs. In the KI jet experiments, the KI-water solution is nearly saturated at 1.4 g/mL of KI. We inject the solution at 37.21 mL/min. \(Re=486\) based on a nozzle ejection speed of 78.1 mm/s, the density of the KI-water solution (1650 kg/m\({}^{3}\)), the nozzle diameter, and the dynamic viscosity of water at 27.6\({}^{\circ}\)C. The jet is observed roughly 19 nozzle diameters downstream. Images are captured at 500 Hz and the pipe is rotated at 720 dps (2 Hz). For finer temporal interrogation of the jet evolution, we reconstruct the CT scans by overlapping the projections used for each reconstruction by 90%. While this does not increase the true temporal resolution (which would mitigate motion artifacts) it leverages data contained in the 500 projections per scan to show the jet evolution in finer temporal detail. Velocimetry data is not captured in this experiment. Instead, we measure the evolution of the KI-water solution concentration in 3D. The parameters for all of the experiments can be found in table 1. ### Tracer Particles Tracer particle properties determine how quantitative a particle-based velocimetry measurement can be. Ideally, tracer particles should be neutrally buoyant, small, and have low inertia. Neutral buoyancy ensures that there are no bias errors in the vertical velocity component measurement or due to centripetal acceleration. Small particles maximize the spatial resolution of the measurement. Smaller particles can also perform better in presence of steep velocity gradients (Westerweel (2008)). Lastly, low inertia (for which being a small particle helps) means that the particle will more readily accelerate with the surrounding fluid. The parameter used to determine how well particles trace the flow is the Stokes number, \(St\), defined by equation 1. When \(St\ll 1\), the particles can be said to be accurate flow tracers. \[St\equiv\frac{\tau_{p}}{\tau_{f}} \tag{1}\] In equation 1, \(\tau_{p}\) is the characteristic response time of the particle to an impulse uniform acceleration and \(\tau_{f}\) is a characteristic time of the flow. Typically \(\tau_{f}\) is taken to be a length scale divided by a characteristic velocity. For a particle that approaches neutral buoyancy, the response time can be calculated by \[\tau_{p}=d_{p}^{2}\frac{\Delta\rho}{18\mu}. \tag{2}\] \begin{table} \begin{tabular}{l c c c} Experiment & Rotation Speed (dps) & Reynolds Number & Max. 
Flow Speed (mm/s) \\ \hline \hline 2D Pipe Flow & 0 & 8.7\(\times 10^{-4}\) & 45 \\ 3D Pipe Flow & 720 & 0.40 & 0.08 \\ 3D Taylor Bubble & 1080 & 0.014 & 0.5 \\ 3D KI Jet & 720 & 486 & 78.1\({}^{*}\) \\ \end{tabular} \end{table} Table 1: The experiment parameters for each of the three experiments that are conducted. \({}^{*}\)At the nozzle exit; we measure roughly 19 nozzle diameters downstream. where \(d_{p}\) is the particle diameter, \(\Delta\rho\) is the difference between the fluid and particle densities, and \(\mu\) is the dynamic viscosity of the fluid. From equations 1 and 2 it is clear that a small, neutrally buoyant tracer particle is necessary to take an accurate measurement. #### CW Tracers Nominally neutrally buoyant 45-53 um tungsten-coated hollow carbon microsphere (CW) tracers are used in this study. The CW tracer particles, designed with tools developed in Parker and Makiharju (2022), were shown in Parker et al (2022) to have higher contrast and better localizability than similarly sized silver-coated hollow glass tracer particles, so they are used for the high-speed measurements in these experiments. The CW particles are manufactured by Ultramet in California. We use a Monte Carlo calculation to approximate the settling speed of the tracer particles. Inherent to our calculations are the assumptions that 1) the tracer particles are spherical and 2) the relative velocity between the settling or rising particles is low enough that the particles are in the Stokes flow regime. These particles are still in the prototype stage, however, so they exhibit strong polydispersity and uneven coating thicknesses. We assume a 200 nm mean tungsten coating thickness that is Gaussian-distributed with a standard deviation of 20 nm. Based on nanofocus X-ray images of the particles, we measure the particle diameter distribution to use in the Monte Carlo calculations. An example nanofocus image can be seen in figure 3; the particle size distribution based on all such batches is shown in figure 4. We find the particles have an average diameter of 50 um, as designed. The standard deviation is large, as expected given the polydispersity, at 13 um. In figure 5 we show the settling velocity distribution of the CW particles in glycerine and water based on Monte Carlo calculations with \(10^{7}\) trials. Table 2 lists the settling speed as a ratio of the characteristic flow speed and the Stokes number of CW particles in both water and glycerine. In glycerine, each of these values is much less than unity, so we can take the CW particles to be ideal flow tracers. \begin{table} \begin{tabular}{c c c} Fluid & \(|\langle u\rangle_{St}|/U\) & \(St\times 10^{6}\) \\ \hline \hline Glycerine & 6.8\(\times 10^{-4}\) & 2.8 \\ Water & 0.019 & 2300 \\ \end{tabular} \end{table} Table 2: The mean settling speed to center line speed (\(U\)) ratio and Stokes number of the CW tracer particles that we use in these experiments. For the speed ratio, we compare to the slowest flow speed we measured (\(U=0.08\) mm/s); for the Stokes number we use the highest flow speed we measured (\(U=45\) mm/s). This is to ensure our Stokes number and buoyancy measurement bias are low for all of our flow conditions. ### The MetalJet X-ray Source The critical element that enables \(O\)(1 kHz) frame rates is the MetalJet E1+ X-ray source from Excillum (2023). This source is an order of magnitude brighter than a traditional solid anode microfocus X-ray source. 
For example, the MetalJet E1+ model used is roughly 20\(\times\) brighter for the same focal spot size and source acceleration voltage than the solid anode source used in Parker et al (2022). The apparent X-ray focal spot size is approximately 30 um at the settings used in this study. We operate the MetalJet E1+ LMJ at a 160 kV source acceleration voltage and 4.375 mA for an output power of 700 W. The spectra for the MetalJet E1+ and a traditional solid anode microfocus tube used in Parker and Makiharju (2022) and Parker et al (2022) can be seen in figure 6 for a rough comparison of a LMJ source to a typical solid anode microfocus source. Here, brightness is defined as the total number of photons emitted per steradian per second. While convenient, this definition does not account for different photon energy spectra between sources. Different spectra may affect the image quality improvement depending on the materials in the field of view. LMJ X-ray sources use a liquid metal jet target - in this case, a gallium-indium mixture - instead of a solid tungsten target. The benefit of using a liquid target is that the heat generated from electrons hitting the target is rapidly advected away and the anode regenerates such that pitting and material vaporization are not a concern like they would be for a solid anode target. By removing heat from the focal spot rapidly, the target can withstand greater power from the electron beam before vaporization is a concern. In these experiments, we run the LMJ source at 160 kV, 4.375 mA (700 W) compared to 55 kV, 0.5 mA (25.5 W) for the source in Parker et al (2022), resulting in a nominally 83\(\times\) brighter source compared to that study. When matching the focal spot size and source acceleration voltage, the MetalJet E1+ source is 20\(\times\) brighter than the YXLON source in Parker et al (2022). Such a bright source can improve the image signal-to-noise ratio (SNR), decrease exposure time, or both. How much a brighter polychromatic source helps depends on the materials in the experiment geometry and on which photon energies they most attenuate. Figure 3: (a) Multiple batches, such as this one, of CW particles are examined with a nanofocus X-ray source (NanoTube N3, Excillum AB). (b) The prototype nature of these particles is evident from the various coating thicknesses, broken particles, and polydispersity. Figure 4: The measured and fit particle size distribution. We fit a log-normal distribution to the particles, which exhibit clear polydispersity. Figure 5: The settling speed distribution of the CW tracer particles in water and glycerine. Using glycerine dramatically reduces the spread in settling speeds compared to water. ### Image Processing We collect data images with an XC-Thor FX20.1.256 PCD. For this study the flat field corrections are applied by the detector software prior to any custom processing of data. The flat field correction map is generated by the detector software with images taken after placing 0, 2, 7, and 21 mm thick plates of aluminum in front of the source one-by-one and collecting flat images at the source settings to be used. For 2D-projected XPTV processing, the flat field corrected images are inverted, then imported to LaVision DaVis 8.4. We use standard DaVis algorithms. With TXPTV, we reconstruct the field of view using the ASTRA Toolbox (van Aarle et al (2015); Aarle et al (2016)). Then, we segment the particles from the fluid in MATLAB to extract the particle centroid locations.
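The segmentation step just described is conceptually simple; a minimal Python/scipy sketch of thresholding a reconstructed volume and extracting particle volume centroids is given below. The actual processing in this work was done in MATLAB, and the threshold and minimum size used here are arbitrary placeholders.

```python
import numpy as np
from scipy import ndimage

def particle_centroids(volume, threshold, min_voxels=5):
    """Label connected bright regions in a reconstructed volume and return
    their volume centroids (in voxel coordinates) and sizes."""
    mask = volume > threshold                        # placeholder global threshold
    labels, n = ndimage.label(mask)                  # connected-component labelling
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1   # drop specks below the minimum size
    centroids = ndimage.center_of_mass(volume, labels, index=keep)
    return np.array(centroids), sizes[keep - 1]

# Synthetic demo volume with two bright blobs standing in for tracer particles.
vol = np.zeros((40, 40, 40))
vol[10:13, 10:13, 10:13] = 1.0
vol[25:29, 30:33, 20:24] = 1.0
centroids, sizes = particle_centroids(vol, threshold=0.5)
print(centroids, sizes)
```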
Figure 6: The X-ray flux from the MetalJet E1+ LMJ source used (Excillum (2023)) versus a traditional solid anode microfocus X-ray source (Poludniowski et al (2009); Poludniowski and Evans (2007); Poludniowski (2007)). A LMJ is able to withstand greater focal spot power density. As a result the spectrum is 20\(\times\) brighter for the same focal spot size and source acceleration voltage.

A particle tracking code in MATLAB was developed to trace the particle movement through multiple reconstructions. Our workflows for the pipe flow, Taylor bubble flow, and KI jet flow are detailed in appendices. #### Calibration and Uncertainty for 2D XPTV We define the pixel-to-mm calibration by counting the pixels required to span the known pipe outer diameter - 9.5 mm (0.375 in.). As discussed in Parker et al (2022), the actual pixel-to-mm scaling changes as a function of particle distance from the source due to geometric magnification. However, because we collect 2D-projected images, this depth information is lost, so a single calibration value is used for the entire experiment depth. A good rule of thumb is to keep the relative error associated with geometric magnification, termed the relative magnification error (RME), below 10%. Parker et al (2022) derives the equation for the RME, shown to be \[\epsilon(y)=\frac{|\Delta z-\Delta Z|}{\Delta Z}=1-\left(1+\frac{|y|}{SOD} \right)^{-1}, \tag{3}\] where \(\Delta z\) and \(\Delta Z\) are the measured tracer particle motion and the true particle motion, respectively; \(y\) is the depth-wise location of the tracer particle measured from the center of the experiment. In order to keep the maximum RME below 10%, the magnification aspect ratio (\(MAR\)) should be kept below 0.2. The \(MAR\) is defined in equation 4, where \(\delta\) is the experiment depth in the beam direction. \[MAR=\frac{\delta}{SOD} \tag{4}\] Equation 4 assumes symmetry in the depth-wise direction (\(\delta=2\ y_{max}\)). For all \(y<y_{max}\), the RME will then be less than 10%. For these experiments, the \(MAR=0.11\). Another source of measurement variance that is unique to 2D-projected data is the depth-averaged velocity profile (DAVP). In 2D-projected data, particles at the back wall, center, and front wall can all appear at the same horizontal location across the pipe. As a result, when collecting particle tracking data, one is actually sampling from the velocity distribution in the depth-wise direction at a given horizontal location. In flows with strong velocity gradients, this can be problematic. Even in flows with relatively shallow gradients, such as laminar pipe flow, the measured velocity profile is depressed compared to the radial cross section velocity profile. For laminar pipe flow, where there is an analytical solution, the DAVP can be calculated by taking a depth-wise average, shown in equation 5. \[\left\langle u(x,y)\right\rangle_{y}=\frac{4Q}{3\pi R^{2}}\left[1-\left(\frac{x }{R}\right)^{2}\right]=\frac{2}{3}u(r=x) \tag{5}\] It is to this profile that we compare our data. Here, \(\left\langle\cdot\right\rangle_{i}\) denotes mean in the \(i\) direction, \(Q\) is the volumetric flow rate, \(R\) is the pipe radius, and \(x\) is the horizontal coordinate (aligned here with the projected radial coordinate). Measuring from a distribution of velocities that ranges from \(u=0\) to center line speed, \(U\), means that there will inevitably be a wide spread in the velocity measurement.
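As a quick numerical check of equation 5, one can average the analytical Poiseuille profile over the depth-wise chord at each horizontal position and compare with the \(\frac{2}{3}u(r=x)\) closed form. The pipe radius and discretisation below are arbitrary illustrative choices, not the experiment's dimensions.

```python
import numpy as np

R, U = 4.0e-3, 45e-3                       # assumed inner radius (m) and center line speed (m/s)
x = np.linspace(-0.99 * R, 0.99 * R, 101)  # horizontal (projected) coordinate

davp = np.empty_like(x)
for i, xi in enumerate(x):
    y_max = np.sqrt(R**2 - xi**2)          # depth-wise chord half-length at this x
    y = np.linspace(-y_max, y_max, 2001)
    davp[i] = np.mean(U * (1 - (xi**2 + y**2) / R**2))   # <u(x, y)>_y

analytic = (2 / 3) * U * (1 - (x / R)**2)  # equation 5
print(np.max(np.abs(davp - analytic)) / U) # ~0, up to discretisation error
```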
As with the RME, issues with depth-averaged profiles can be alleviated with 3D tomographic or stereo XPTV measurements. The final source of uncertainty is buoyancy, which is discussed in section 2.3. Buoyancy effects in vertically aligned flows can create a velocity measurement bias. We showed in section 2.3 that the flow speeds are sufficiently high compared to the particle settling speeds that buoyancy error can be neglected. In many previous proof-of-concept XPTV experiments, this was not the case (Lappan et al (2020); Lee et al (2009); Seeger et al (2001)). #### 2.5.2 Calibration and Uncertainty for TXPTV For TXPTV we define a voxel-to-mm\({}^{3}\) calibration similar to the 2D calibration. We identify the outer diameter of the pipe - known to be 9.5 mm - and count the number of pixels required to traverse it. Since the pixels are square, a voxel volume is simply the pixel width cubed. TXPTV is subject to unique sources of uncertainty compared to visible light tomographic PTV. Particles are segmented in a reconstructed volume; that is, they are identified based on their voxel intensity. The particle volume centroid is taken to be its location. The particles and Taylor bubble are moving during the CT scan, though, which can result in a number of artifacts that may smear or warp the particle and bubble reconstruction. In order for the particles and bubble to appear static in the reconstruction, it is crucial that the scan time is much faster than the particle and bubble motion. If the scan time is too fast, though, the particles and bubble may be blurred in the projections by rotational motion. Balancing these effects is important for getting an accurate CT scan. Makiharju et al (2022) discusses some typical reconstruction artifacts for moving particles. Table 3 shows the expected maximum particle rotation blur and particle motion blur during scanning. These blurring phenomena create the motion artifacts in the reconstruction. We use the maximum expected velocities, the rotation speed, and the maximum particle radial distance to calculate the values in table 3. Based on these values, we would expect to see some blurring and warping. In the experiments, we find that a sufficient number of particles are not significantly blurred or warped, making particle tracking feasible. \begin{table} \begin{tabular}{l c c} Experiment & Particle Rotation Blur (\(d_{p}\)) & Particle Motion Blur (\(d_{p}\)) \\ \hline \hline 3D Pipe Flow & 1.6 & 0.8 \\ 3D Taylor Bubble & 1.2 & 3.33 \\ \end{tabular} \end{table} Table 3: The expected maximum particle blur due to pipe rotation and particle motion in units of nominal particle diameter, \(d_{p}\) (50 \(\mathrm{\SIUnitSymbolMicro m}\)). Maximum expected flow velocities and radial distances are used to calculate these values. Values greater than one indicate that some blurring is possible. Figure 7 shows a vertical cross section through a Taylor bubble flow reconstruction. The bubble exhibits a motion blur artifact because it is rising too quickly for our scan speed. Although some particles may appear blurred, there are numerous particles that do not exhibit blurring and warping in the reconstruction, making tracking feasible. In order to mitigate the error introduced by particle distortions (appendix of Makiharju et al (2022)), we introduced a radius match to the particle tracking cost function in order to make sure particle volumes of the same size are tracked as one particle. 
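One way to picture this cost function is the association sketch below, which links centroids between two consecutive reconstructions. It uses a globally optimal assignment rather than the study's greedy nearest-neighbour search, and the volume-mismatch weighting is an illustrative assumption rather than the authors' exact cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_particles(pos_a, vol_a, pos_b, vol_b, w_vol=1.0, max_cost=np.inf):
    """Link particle centroids between two reconstructions.

    pos_a, pos_b: (N, 3) and (M, 3) centroid coordinates; vol_a, vol_b: reconstructed
    volumes. The cost adds a relative volume-mismatch term to the centroid distance,
    so that only particles of similar reconstructed size are matched; links whose
    cost exceeds max_cost (e.g. badly warped particles) are discarded.
    """
    dist = np.linalg.norm(pos_a[:, None, :] - pos_b[None, :, :], axis=-1)
    mismatch = np.abs(vol_a[:, None] - vol_b[None, :]) / (vol_a[:, None] + vol_b[None, :])
    cost = dist + w_vol * mismatch
    rows, cols = linear_sum_assignment(cost)
    keep = cost[rows, cols] < max_cost
    return rows[keep], cols[keep]
```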
Particles that experience a large change in reconstructed volume are not tracked, mitigating error introduced from particle warping. ## 3 Results and Discussion ### Poiseuille Pipe Flow #### 3.1.1 2D-projected Measurements of Poiseuille Pipe Flow Figure 8 shows the DAVP as measured by CW tracer particles in a 45 mm/s center line speed flow (\(Re=0.40\)). The data agree reasonably well with the analytical solution for the depth-averaged velocity profile. This profile is averaged over 850 frames to gather converged particle tracking statistics and assess the 2D-projected variance.

Figure 8: The depth-averaged velocity profile of a pipe flow with a 45 mm/s center line speed (\(Re=0.40\)) agrees well with the expected profile. The gray region depicts the variation of the velocity measurement between the \(16^{th}\) and \(84^{th}\) percentiles to approximate the \(2\sigma\) error bounds for a non-Gaussian-distributed variable. The data are normalized by the theoretical center line speed.

While 2D measurements from a single source-detector pair are of limited value, multiple source-detector pairs can triangulate the location of particles in 3D in what is known as stereo-XPTV (SXPTV). SXPTV would not require the rotation of source-detector pairs or the flow experiment and the temporal resolution would only be limited by the detector frame rate. In other words, 1 kHz SXPTV is possible if multiple sources and detectors are used.

Figure 7: A \(5\times 5\times 5\) median-filtered vertical slice through the central plane from a reconstructed Taylor bubble experiment. Only the pipe inner diameter is shown here for clarity. The Taylor bubble exhibits some motion blur manifested as a roughly 200 μm thick “halo” around the bubble, while in reality the interface is sharp. Fast moving particles may similarly warp, but many particles do not. There is a sufficient number of unwarped particles to successfully track them.

#### TXPTV of Poiseuille Pipe Flow Figure 9 shows the average velocity profile from six reconstructions captured at 2 Hz. This rotation speed is 4.5\(\times\) faster than previously demonstrated in Makiharju et al (2022). The measured profile shows good agreement with the theoretical Poiseuille pipe flow profile, demonstrating that TXPTV can accurately measure fluid flow. Although Poiseuille pipe flow is a simple, well-understood flow, it can serve as a useful benchmark for steadily improving XPV measurements. The relatively large spread in the data is likely due to particle motion artifacts in the reconstruction. Makiharju et al (2022) explains in detail the root cause of these artifacts. The relative rotation between the moving particles and the source-detector pair smears the particles in the reconstruction, making it difficult to locate their true centroids. Reducing rotation time relative to the flow velocity reduces the impact of these artifacts. ### TXPTV of Flow Around a Rising Taylor Bubble One benefit of fast TXPTV is that it becomes possible to reconstruct the flow morphology in addition to capturing the flow velocity. For example, a full CT reconstruction of the Taylor bubble can be seen in figure 10 with the pathlines of tracked particles flowing around it. These data are captured at 3 Hz, which is 9\(\times\) faster than previously achieved by Makiharju et al (2022). Furthermore, due to the higher frame rate we use here, we can capture 333 frames per rotation as opposed to the 200 frames per rotation used in Makiharju et al (2022).
Using more frames improves reconstruction spatial resolution and reduces noise, making particle detection more accurate and ultimately enabling the use of smaller particles. A near order of magnitude improvement in the temporal resolution over the previous state of the art exemplifies the rapid development of TXPTV specifically and XPV in general. As new, brighter X-ray sources become available alongside faster detectors, TXPTV will become increasingly useful as a technique for studying optically opaque fluid flows.

Figure 9: The TXPTV radial flow profile from a single scan shows good agreement with the theoretical Poiseuille profile. The 16\({}^{th}\) and 84\({}^{th}\) percentiles are shown to approximate 2\(\sigma\) error bounds for a non-Gaussian-distributed variable. Unlike 2D-projected XPTV, TXPTV can measure the true radial flow profile. The data are normalized by the theoretical center line speed, \(U=0.08\) mm/s.

Figure 10: The particle pathlines flowing around the Taylor bubble moving through the field of view. The bubble-glycerine interface is shown in blue. A black circle on the bottom indicates the theoretical pipe interface.

### Scalar Mixing of a KI Jet Figure 11 shows the evolution of the KI jet as it passes through the center slice of the reconstructed volume. The KI-water solution attenuates X-rays more than glycerine, so the KI jet appears darker. Initially, a more diffuse solution becomes visible; after 0.25 s the jet is more sharply visible. In fact, we can calculate the concentration of KI-water solution in glycerine from the voxel intensity. According to the Beer-Lambert law for a mixture, \[I=I_{0}\exp\left(-\sum\sigma_{i}n_{i}(c)x_{i}\right) \tag{6}\] the photon intensity \(I\) is a function of the source intensity \(I_{0}\), the attenuation cross section of material \(i\), \(\sigma_{i}\), the atomic number density \(n_{i}\) as a function of concentration \(c_{i}\), and the material thickness \(x_{i}\). For a given material - in this case KI-water mixture - the attenuation cross-section and material thickness (i.e., voxel size) are constant. The source intensity is assumed to be effectively constant. Flickering, from either the source or the detector, is accounted for with a correction factor in post-processing. Thus, the detected photon intensity for a given voxel is a function only of the concentration in said voxel. Atomic number density is proportional to concentration, so we can calculate from equation 6 the concentration \[c=A\ln(I)+B, \tag{7}\] where \(A\) and \(B\) are constants that are determined by examining the voxel intensity with known concentrations. We can identify the voxel intensity at zero concentration by assuming no solution is in the first four reconstructions. Then, by assuming the solution is well mixed in the four final reconstructions, we can identify the intensity at the well mixed concentration. The well mixed concentration, based on a control volume of 1.9 mL and the injected volume in 4 sec of 2.48 mL, is 2.15 g/mL. The control volume here is taken to be the pipe length from the nozzle injection point to the viewing section of the pipe. Data is collected over 4 sec. We find the coefficients \(A=-116\) and \(B=1216\). Figure 11 shows the concentration of the KI-water solution in the reconstruction as a function of time. As the jet develops in the field of view, the concentration of the solution increases. The qualitative data in each reconstruction is transformed into quantitative data in figure 11.
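A compact sketch of this two-point calibration and its application to a reconstructed volume is given below; the function mirrors equation 7, and the intensity inputs are simply whatever the reconstructions provide (they are not specified further here).

```python
import numpy as np

def concentration_from_intensity(volume, I_zero, I_mixed, c_mixed):
    """Map reconstructed voxel intensity to KI-solution concentration via c = A ln(I) + B.

    I_zero: mean voxel intensity at zero concentration (early reconstructions);
    I_mixed: mean voxel intensity at the well-mixed concentration c_mixed (late
    reconstructions). Returns a concentration volume the same shape as `volume`.
    """
    A = c_mixed / (np.log(I_mixed) - np.log(I_zero))
    B = -A * np.log(I_zero)
    return A * np.log(volume) + B
```

With the intensities averaged over the first and last reconstructions, this produces coefficients analogous to the \(A=-116\) and \(B=1216\) reported above.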
Figure 11 demonstrates the potential of high speed CT scans for measuring the time evolution of scalar mixing in 3D. Although not done here, these data could be combined with TXPTV measurements to simultaneously capture the evolution of fluid velocity and concentration fields. ## 4 Conclusions Measurements of Poiseuille pipe flow, a rising Taylor bubble, and a laminar jet demonstrate the improved capabilities of in-lab high speed 2D XPTV, TXPTV, and CT thanks to the combination of a LMJ source, X-ray imaging-specific tracer particles, and a PCD. The 2D and 3D Poiseuille pipe flow measurements demonstrate the improved accuracy of the technique compared to our prior work. 2D-projected Poiseuille pipe flow data is acquired at 1 kHz, showing that \(O\)(1 kHz) acquisition rates are possible in the laboratory with the requisite hardware. The Taylor bubble and KI jet measurements demonstrate the applicability of TXPTV to multiphase flows and the ability of this technique to reconstruct the flow morphology in 3D. The laminar KI jet experiments demonstrate the potential for high speed CT to capture scalar mixing in 3D alongside 3D flow velocity measurements. The measurements captured in this study are orders of magnitude faster than the previous state of the art thanks to the combined usage of a PCD, X-ray imaging-specific \(O\)(50 um) tracer particles, and a LMJ X-ray source. This combination enabled imaging frame rates of 1 kHz, which is 15\(\times\) faster than previous comparable studies. Additionally, the bright LMJ source allows us to capture full 360 degree data for TXPTV reconstructions at nearly an order of magnitude higher speed than previous studies. These acquisition speeds bring more diverse fluid dynamic applications - from biological flows, to soil mechanics, to boiling flows - into the realm where in-lab XPV may be used. Many of the limitations in this study are practical in nature, and are easily overcome. The rotation stage used here, for example, is limited to 1080 dps, meaning 3 Hz CT was the fastest achievable scan speed due to the rotation stage. Additionally, there is only a single, static source-detector pair. More source-detector pairs would improve the temporal resolution by requiring fewer angles to be traversed to capture a full CT scan. The detector used in this study also has a relatively small buffer size, which limited the number of frames that could be continuously acquired. This is to say that there is ample room for easy improvement. A more involved improvement to the X-ray imaging setup could include implementing SXPTV, which uses multiple static source-detector pairs. For example, three source-imager pairs would enable particle or feature tracking at imaging frame rates, i.e., \(O\)(1 kHz). Such a system would theoretically only be temporally limited by the detector frame rate and source brightness. In this case, as demonstrated by the 2D-projected pipe flow measurements, that would allow 1 kHz SXPTV. Source and detector technology is rapidly improving, and the 2D-projected pipe flow measurements are so future source-detector pairs will likely be able to go to even faster speeds. SXPTV is also possible with a single source-detector pair by using limited-angle reconstruction, although that is not attempted in this study and is an area for future work. XPV is rapidly improving. Already these techniques could be useful for slow flows such as porous media flows and biological flows. 
The authors expect that the technique will see a dramatic increase in data acquisition speed over the coming years as PCDs become faster, sources become brighter, and newer, better X-ray tracer particles are developed. This study shows that the combination of these three elements yields orders of magnitude improvement in acquisition speed, thereby increasing the usefulness of XPV. Acknowledgments. We gratefully acknowledge the support of NSF EAGER award #1922877 program managers Ron Joslin and Shahab Shojaei-Zadeh and the additional support provided by the Society of Hellman Fellows Fund. This work was also partially supported by AFRI Competitive Grant no. 2020-67021-32855/project accession no. 1024262 from the USDA National Institute of Food and Agriculture. This grant is being administered through AIFS: the AI Institute for Next Generation Food Systems. [https://aifs.ucdavis.edu](https://aifs.ucdavis.edu). ## Appendix A Image Processing Algorithm ### 2D Pipe Flow For the 2D pipe flow experiments, the following algorithm was applied in LaVision DaVis version 8.4: 1. Apply a mask such that only the pipe inner diameter is visible. 2. Subtract the average intensity of all the frames from each frame. 3. Apply a strict sliding minimum filter with a scale length of 5 pixels. 4. Subtract a constant 0.01 from each frame. 5. Set all pixel values below zero to zero. 6. Apply a median filter with a \(5\times 5\) kernel size. 7. Multiply each frame by a constant 100. 8. Conduct PIV in two iterations: 1. One pass of \(64\times 64\) pixel window size with 50% overlap. 2. Two passes of \(48\times 48\) pixel window size with 50% overlap. 3. Apply vector removal based on pixel displacement with the following limits: \(V_{x}\in[-5,5]\), \(V_{y}\in[-12,0]\). 9. Conduct PTV with particle tracking assisted with PIV. The particle size range is 1 - 8 pixels; the intensity threshold is 0 counts; the correlation window size is 32 pixels; the allowed vector range relative to reference is 20 pixels. 10. Export to MATLAB to apply calibration and plot results. Average the profile over 850 frames since PTV requires statistical convergence to calculate a profile. ### 3D Pipe Flow and Taylor Bubble Flows In the tomographic, 3D flow experiments, the following algorithm was used. First, for the CT reconstruction: 1. Convert flat field corrected images to integrated attenuation values by dividing by the empty field of view (FOV) pixel value. 2. Identify the center of rotation. 3. Use normalization ring artifact removal with a standard deviation of 15 pixels. 4. Use ASTRA Toolbox (van Aarle et al (2015); Aarle et al (2016)) to reconstruct the volume. A standard Ram-Lak filter is used.
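For readers without the exact toolchain, the reconstruction step can be approximated per detector row with a parallel-beam filtered back-projection and a ramp (Ram-Lak) filter, for example with scikit-image in place of the ASTRA Toolbox. This is an illustrative substitution, not the cone-beam workflow actually used in the study.

```python
import numpy as np
from skimage.transform import iradon

def reconstruct_row(attenuation, angles_deg):
    """Filtered back-projection of a single detector row.

    attenuation: (n_angles, n_det) integrated attenuation values for one detector
    row (projections already flat-field corrected and converted to line integrals);
    angles_deg: the projection angles in degrees. Ring-artifact removal and
    centre-of-rotation correction (steps 2-3 above) are omitted in this sketch.
    """
    sinogram = attenuation.T                        # iradon expects (n_det, n_angles)
    return iradon(sinogram, theta=angles_deg, filter_name="ramp", circle=True)
```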
For the segmentation, MATLAB is used to segment the reconstructed data based on the voxel intensity in the following manner. For each reconstructed volume: 1. Normalize the reconstructed volume to grayscale. 2. 3D median filter the volume with a \(5\times 5\times 5\) kernel and replicating boundary conditions. 3. Binarize the volume using imbinarize. The threshold is adaptive, the sensitivity is 55%, and the foreground polarity is dark. 4. Invert the binarized image. 5. Use conncomp to identify 6-connected components volumes. 6. Use regionprops3 to measure the volume and centroid location of the volumes. Weight the centroid location with voxel intensity from the normalized reconstructed volume. For particle tracking, a MATLAB code was developed based on a nearest-neighbor algorithm. The code will be publicly available at the FLOW Lab website: flow.berkeley.edu. ### KI Jet Processing The KI jet is captured with high speed CT. These scans were reconstructed using ASTRA Toolbox (van Aarle et al (2015)). Each reconstruction consisted of 250 projections. For better temporal interrogation, we reconstruct each volume with a 90% overlap in the projections used. For example, the first reconstruction is done with projections 1 to 250; the second reconstruction is done with projections 25 to 275, and so on. Post-reconstruction, MATLAB is used to correct the source flicker and calculate the solution concentration. The algorithm is as follows: 1. The average intensity of the middle slice of each volume is normalized by the average intensity of the middle slices for all volumes. This provides the flicker correction factor. 2. Each volume is divided by the correction factor, then 3D Gaussian-filtered with an \(11\times 11\times 11\) kernel with a standard deviation of 7 pixels. 3. The first five and the final five volumes are averaged to calculate the zero concentration and fully mix concentration voxel intensities. The average inner pipe voxel intensity is taken to be the intensity for its respective concentration. The coefficients in equation 7 are then calculated. 4. Equation 7 is applied to each Gaussian-filtered volume to calculate the concentration in each voxel.
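Returning to the 3D particle segmentation steps listed above, a rough Python equivalent using scipy in place of the MATLAB routines could look as follows; the adaptive-threshold rule and window size are illustrative stand-ins for imbinarize's behaviour rather than a faithful port.

```python
import numpy as np
from scipy import ndimage

def segment_particles(volume, window=25, sensitivity=0.55, min_voxels=5):
    """Segment dark (attenuating) tracer particles from a normalized reconstruction.

    A voxel is foreground when it is darker than a fraction of its local mean
    intensity; connected components are extracted with 6-connectivity and their
    centroids are weighted by (inverted) voxel intensity, mirroring steps 1-6 above.
    """
    filtered = ndimage.median_filter(volume, size=5)            # 5x5x5 median filter
    local_mean = ndimage.uniform_filter(filtered, size=window)  # crude adaptive threshold
    mask = filtered < sensitivity * local_mean                   # dark foreground polarity
    structure = ndimage.generate_binary_structure(3, 1)          # 6-connected neighbourhood
    labels, n = ndimage.label(mask, structure=structure)
    idx = np.arange(1, n + 1)
    sizes = ndimage.sum(mask, labels, index=idx)
    centroids = np.array(ndimage.center_of_mass(1.0 - filtered, labels, index=idx))
    keep = sizes >= min_voxels
    return centroids[keep], sizes[keep]
```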
2303.11801
SACPlanner: Real-World Collision Avoidance with a Soft Actor Critic Local Planner and Polar State Representations
We study the training performance of ROS local planners based on Reinforcement Learning (RL), and the trajectories they produce on real-world robots. We show that recent enhancements to the Soft Actor Critic (SAC) algorithm such as RAD and DrQ achieve almost perfect training after only 10000 episodes. We also observe that on real-world robots the resulting SACPlanner is more reactive to obstacles than traditional ROS local planners such as DWA.
Khaled Nakhleh, Minahil Raza, Mack Tang, Matthew Andrews, Rinu Boney, Ilija Hadzic, Jeongran Lee, Atefeh Mohajeri, Karina Palyutina
2023-03-21T12:35:12Z
http://arxiv.org/abs/2303.11801v1
SACPlanner: Real-World Collision Avoidance with a Soft Actor Critic Local Planner and Polar State Representations ###### Abstract We study the training performance of ROS local planners based on Reinforcement Learning (RL), and the trajectories they produce on real-world robots. We show that recent enhancements to the Soft Actor Critic (SAC) algorithm such as RAD and DrQ achieve almost perfect training after only 10000 episodes. We also observe that on real-world robots the resulting _SACPlanner_ is more reactive to obstacles than traditional ROS local planners such as DWA. ## I Introduction We study the efficacy of Reinforcement Learning (RL) algorithms for obstacle avoidance and local planning in ROS-based robotics systems. RL algorithms are able to learn optimal actions based on a current state and a reward function. The purpose of the ROS local planner is to adhere to a global path to the current robot goal while avoiding local obstacles (which may be dynamic). The RL paradigm is attractive for such a problem since the behavior of an RL agent does not have to be explicitly programmed for every possible scenario. In the RL framework, we specify the reward function, state space, and permissible actions the robot can take. The goal is to obtain a near-optimal planning policy given sufficient training samples. RL agents can potentially exhibit more complex (and hence more responsive) behavior than traditional local planners such as the Dynamic Window Approach (DWA) to Collision Avoidance [1]. RL has recently seen many advances due to the emergence of Deep RL, where the actions are chosen from a policy parametrized by a Deep Neural Network (DNN). One notable success of Deep RL is in learning policies for game environments (e.g. Atari games) modeled as Markov Decision Processes (MDPs) and standardized as _OpenAI Gym_ environments [2]. As a result of this success, multiple authors have examined how Deep RL can be applied to robot control [3, 4, 5, 6]. However, these works raise a number of questions that we address in our study. First, they typically measure performance via an _episodic success_ criterion, e.g. does the robot reach the goal, does it suffer any collisions etc? We are also interested in the _quality_ of the trajectory. Is it smooth? How does it back off from an obstacle? Second, many of these papers address challenging environments where success rates are significantly below 90%. We believe such performance is unacceptable for practical deployments. Therefore, we are interested in how to achieve near 100% success rates even in complex scenarios. Third, there are alternative obstacle-avoidance algorithms that do not use RL and we would like to quantify the benefits and drawbacks of using an RL-based approach. Lastly, we would like to know which specific RL techniques produce the best performance. We follow [7, 3] and use ROS together with a _waypoint generator_ that specifies a _next_ waypoint based on the current robot location and a global plan to the goal. The task of the RL local planner is to reach this next waypoint without hitting any static or dynamic obstacles. Our RL state is an image representation of the obstacles and the next waypoint in polar coordinates. It mimics the image states used in the OpenAI gym environments for Atari games. We train our agents in a simulator with sample maps, and then upload the trained agents onto the robot for testing in the real world. 
With this setup, we list our contributions as follows: \(\bullet\) We show that modern variants of the Soft Actor-Critic (SAC) RL algorithm such as Reinforcement Learning with Augmented Data (RAD) [8] and Data-regularized Q (DrQ) [9] give significantly improved performance compared to earlier RL algorithms and implementations, and achieve success rates close to 100% after only 10,000 episodes. We refer to the resulting local planner as _SACPlanner_. \(\bullet\) We demonstrate that polar image state representations outperform natural alternatives. \(\bullet\) We analyze the trajectories produced by SACPlanner on real-world robots. (Prior work mostly limited trajectory analysis to simulations with perfect localization etc.) We compare with trajectories produced by DWA and a shortest-path based local planner. In all cases with an unexpected or dynamic obstacle, SACPlanner is much more reactive and hence performs better. The trade-off is a less smooth trajectory when the local planner simply has to follow the global plan. ## II Training and Validation Framework We use a standard ROS stack in which the robot knows its position up to the accuracy of the localization system. The robot has a 2D map for fixed, known obstacles and it detects dynamic and unknown static obstacles using a LiDAR sensor. From the raw obstacle information the robot constructs a costmap in the form of an Occupancy Grid using the approach of Lu et al. [10]. The costmap window size for the local planner is \(8m\times 8m\). We integrate RL into the robot navigation stack using the framework pioneered by Guldenring et al. [7, 3]. When a new goal is specified the global planner creates a path from the current position to the goal (Fig.1). In this work we use without change the standard ROS _NavFn_ planner based on the Dijkstra search algorithm. The path is found based on the obstacles in the map together with any obstacles seen by the LiDAR at the time of path creation. Whenever a path is created by the global planner, a _waypoint generator_ breaks it up into a sequence of waypoints. At all times the local planner maintains a list of 8 waypoints, starting with the one after the waypoint that is closest to the robot. (The method of [7] sometimes starts the list with the closest waypoint to the robot, but we found that could create excessive "pingponging" in the eventual choice of waypoint). From this list of 8, the local planner chooses the first on the list that is not too close to an obstacle. The aim of the RL agent is to move towards the selected waypoint while not hitting any obstacles, _including obstacles that appeared after the global plan was computed_. ### _RL Environment_ The RL environment is defined by a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), and a reward function \(R(\cdot,\cdot)\). When the RL agent takes action \(a\in\mathcal{A}\) in state \(s\in\mathcal{S}\), it gains reward \(R(s,a)\) and moves to a new state \(s^{\prime}\) according to some state-transition distribution \(s^{\prime}\sim p(\cdot|s,a)\). The actions are linear/angular velocity pairs \((v,\omega)\). The state space is defined by the positions of the next waypoint and the local obstacles relative to the current robot position. We represent the state with an image since this allows us to utilize the convolutional deep RL architectures that have worked well for visually-rich environments such as Atari video games and some robot control tasks. 
In addition, using such game-like image states is a convenient way to merge the information from the waypoint position, the static objects from the map, and the dynamic obstacles sensed by the LiDAR. Specifically, our RL state is an image that we refer to as the _polar costmap_. (See Fig. 1.) It is generated by converting the Occupancy Grid representation of the ROS costmap and the next waypoint to polar coordinates. The horizontal axis represents distance from the robot and the vertical axis represents angle. Obstacles are presented in red and the next waypoint is a white square. The motivation for using a polar representation is that it matches the linear/angular velocities that form the action. The state transition naturally follows from the robot movement after an action is taken. It remains to define the reward function \(R(s,a)\) for taking action \(a\) in state \(s\). We employ a mix of both dense and sparse rewards. For a given state \(s\), let \((d_{\mathrm{old}},\theta_{\mathrm{old}})\) be the distance and bearing to the next waypoint in state \(s\), let \(s^{\prime}\) be the new state after taking action \(a\), and let \((d_{\mathrm{new}},\theta_{\mathrm{new}})\) be the distance and bearing in state \(s^{\prime}\). Here the bearing is defined to be the difference between the angle to the waypoint and the current yaw. We define: \[\begin{array}{l}R(s,a)=(d_{\mathrm{old}}-d_{\mathrm{new}})\cdot(1\mbox{ if }d_{\mathrm{old}}-d_{\mathrm{new}}\geq 0,\mbox{ else }2)\\ +(|\theta_{\mathrm{old}}|-|\theta_{\mathrm{new}}|)\cdot(1\mbox{ if }|\theta_{ \mathrm{old}}|-|\theta_{\mathrm{new}}|\geq 0,\mbox{ else }2)\\ -R_{\mathrm{max}}\cdot(1\mbox{ if collision, else }0)\\ +R_{\mathrm{max}}\cdot(1\mbox{ if }d_{\mathrm{new}}=0,\mbox{ else }0)\\ -G(s^{\prime}),\end{array}\] where \(R_{\mathrm{max}}\) is a fixed reward/penalty for reaching the waypoint and colliding with an obstacle, respectively, and \(G(s^{\prime})\) is the product of a truncated Gaussian kernel centered at the robot location and the Occupancy Grid in state \(s^{\prime}\). The first two terms of \(R(s,a)\) incentivize getting closer to the waypoint both in terms of distance and bearing. Note that the penalty for moving away from the waypoint (both in distance and bearing) is double the reward for moving towards it. Hence there is a net penalty for moving away from the waypoint and then back towards it. We have found that this "doubling the penalty for negative progress" has a significant effect on encouraging the agent to move directly to the waypoint if there are no obstacles in the way. The final Gaussian term penalizes movement towards an obstacle. We find that it is more efficient to train our RL agents on a "dummy" training environment that does not require the full complexity of ROS or a detailed physics simulation. For this dummy training environment we place a robot start position and a single waypoint in an environment with obstacles as shown in Fig.2. The robot is the blue square, the waypoint is the red square, and the larger green square around the robot is the support of the truncated Gaussian kernel. For each episode in the RL training, we pick an obstacle configuration and then use the above reward to encourage the RL agent to move towards the waypoint without hitting obstacles. Once the agent is trained we can run it directly in our ROS environment (either a Gazebo simulation or on real robots) since the state definition is the same in all cases. 
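A direct transcription of this reward into code might look as follows. This is a sketch only: the value of \(R_{\mathrm{max}}\) and the helper supplying the truncated-Gaussian/occupancy-grid product \(G(s^{\prime})\) are illustrative assumptions, not the authors' implementation.

```python
def step_reward(d_old, d_new, th_old, th_new, collided, reached_waypoint, G_new,
                R_max=20.0):
    """Per-step reward for the local-planner agent, following the definition above.

    d_old/d_new: distance to the next waypoint before/after the action;
    th_old/th_new: bearing (angle to waypoint minus current yaw) before/after;
    G_new: truncated-Gaussian * occupancy-grid obstacle penalty at the new state.
    """
    dd = d_old - d_new
    dth = abs(th_old) - abs(th_new)
    reward = dd * (1.0 if dd >= 0 else 2.0)     # double penalty for moving away
    reward += dth * (1.0 if dth >= 0 else 2.0)  # double penalty for turning away
    if collided:
        reward -= R_max
    if reached_waypoint:                        # i.e. d_new == 0
        reward += R_max
    return reward - G_new
```

The doubled coefficient on negative progress is what yields the net penalty for moving away from the waypoint and back again, as noted above.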
We remark, however, that the specific obstacle configurations on which we do the training are _not_ the same as the configurations on which we do our eventual experiments, since we want trained agents that generalize to any unseen obstacle configuration.

Fig. 1: ROS framework with global map, and polar costmap. The black square represents an obstacle that appeared after the global plan was computed.

Fig. 2: Dummy training environment (left) with polar costmap (right).

## III Previous Work and Comparison Algorithms The canonical local planner algorithm for ROS is the Dynamic Window Approach [1]. At each instant, DWA calculates a set of achievable \((v,\omega)\) pairs based on the current velocities and achievable acceleration characteristics of the robot. For each velocity pair DWA calculates a score based on how closely the arc follows the global plan, and on how far the arc is from any obstacle. It then chooses the best velocity pair based on this score. Multiple recent papers have investigated how well local planner behavior can be learned via RL. Guldenring et al. [7] developed a framework that has been followed by many subsequent papers in which the global plan is partitioned into waypoints and the task of the RL agent is to get to the next waypoint. Patel et al. [4] combines the DWA and RL approaches. The resulting DWA-RL algorithm calculates a cost for each potential velocity pair, but then uses RL to select the best pair based on the full spectrum of costs, rather than just picking the lowest cost pair. The work of Kastner et al. [5] distinguishes between humans, robots and static objects and uses an RL state that is a combination of the raw LiDAR input, the distance/angle to the goal, the position of nearby humans and the position of nearby robots. A follow-up paper [11] looks at different methods for choosing the next waypoint, and compares the fixed partition of Guldenring et al. [7, 3], with alternative methods that choose the waypoint more dynamically. The work of Liu et al. [6] uses a similar RL state. The main difference is that they represent pedestrian and robot movement using the CrowdNav algorithm of [12], and they represent the LiDAR information via both the raw LiDAR values and an Occupancy Grid. In many of these papers the success rate of the trained agent is significantly under 100%. For example, the agent of [7] converges to a rate less than 70%. Moreover, this prior work typically provides trajectory plots from a Gazebo simulation. Our goal is to train an agent with close to 100% success, and then analyze trajectories from a real-world deployment (with the associated imperfections in sensing and localization). We also observe that if the goal is to get to the next waypoint, then an alternative is to repeatedly calculate a shortest path in the Occupancy Grid. We have found that modern python implementations of Dijkstra's algorithm can do this sufficiently fast, and so we also compare against a local planner that uses the next segment of the shortest path to define the robot velocities. Note however that the shortest path will change over time as the robot and obstacles move. ## IV Soft Actor Critic Algorithm The objective in RL is to maximize the expected sum of rewards that the agent will receive in the future: \(G=\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})]\), where the expectation is taken over the agent policy \(a_{t}\sim\pi(\cdot|s_{t})\) and the state transition function \(s_{t+1}\sim p(\cdot|s_{t},a_{t})\).
The parameter \(\gamma\in(0,1]\) is a discount factor used to reduce the weight given to future rewards. Continuous control problems, such as the local navigation task considered in this paper, are often approached using actor-critic algorithms that learn two functions called the actor and the critic. The actor is a policy function \(a\sim\pi_{\theta}(\cdot|s)\) with parameters \(\theta\). The critic \(Q_{\phi}(s,a)\) with parameters \(\phi\) estimates the action-value function \(Q^{\pi}(s,a)=\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})|s_{0}=s, a_{0}=a]\) of policy \(\pi\), which is the expected cumulative reward after taking action \(a\) in state \(s\) and following policy \(\pi\) after that. In this work, we use a state-of-the-art off-policy actor-critic algorithm called Soft Actor-Critic (SAC) [13, 14]. It is based on the maximum entropy RL framework which augments the standard RL objective with an entropy maximization objective: \(G=\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}(R(s_{t},a_{t})+\alpha\mathcal{H}( \pi(\cdot|s_{t})))]\), where \(\alpha\) is a learnable temperature parameter that balances the importance of both objectives. The entropy maximization motivates the agent to succeed at the task while acting as randomly as possible, aiding exploration. In SAC, the actor and critic functions are parameterized as deep neural networks. The actor is a Gaussian policy with the mean and diagonal covariance parameters produced by the neural network. The actor and critic networks are updated by sampling minibatches of \((s_{t},a_{t},r_{t},s_{t+1},d_{t})\) transitions from a replay buffer \(\mathcal{D}\), where \(d_{t}\) is a terminal signal denoting the end of the episode. The parameters for the critic network \(Q_{\phi}\) are trained to minimize the soft Bellman residual: \[J_{Q}(\phi)=\mathbb{E}_{(s_{t},a_{t},r_{t},s_{t+1},d_{t})\sim\mathcal{D}}\left[ Q_{\phi}(s_{t},a_{t})-y_{t}\right]^{2}\,,\] where the learning target \(y_{t}\) is \[y_{t}=r_{t}+\gamma(1-d_{t})V(s_{t+1})\,,\] and the soft value function \[V(s_{t})=E_{a_{t}\sim\pi(\cdot|s_{t})}\left[Q_{\bar{\phi}}(s_{t},a_{t})-\alpha \log\pi(a_{t}|s_{t})\right] \tag{1}\] is approximated using a Monte Carlo estimate of the policy \(\pi_{\theta}\) and a target Q network \(Q_{\bar{\phi}}(s_{t},a_{t})\) whose parameters \(\bar{\phi}\) is maintained as the exponentially moving average of the Q network parameters \(\phi\). SAC also makes use of clipped double Q-learning [15], where the Q estimates are computed as the minimum value of an ensemble of two critic networks with different initializations trained on the same data. This helps prevent overestimation bias in Q-learning with non-linear function approximators. The parameters of the actor/policy network \(\pi_{\theta}\) are updated to maximize the maximum entropy RL objective: \[J_{\pi}(\theta)=\mathop{\mathbb{E}}_{s_{t}\sim\mathcal{D}}\left[\mathop{ \mathbb{E}}_{a_{t}\sim\pi_{\theta}(\cdot|s_{t})}\left[\alpha\log\pi_{\theta}(a _{t}|s_{t})-Q_{\phi}(s_{t},a_{t})\right]\right]\,.\] The learnable temperature parameter \(\alpha\) can be automatically updated such that the policy network satisfies a minimum expected entropy constraint. See [14] for more details. While SAC often performs well on continuous control tasks with low-dimensional observations, learning a mapping from high-dimensional states (images) to continuous actions (linear and angular velocities) typically requires massive amounts of robot-environment interactions. 
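In code, the critic's learning target above reduces to a few lines. The fragment below is a schematic PyTorch sketch; the actor's `sample` method (returning an action and its log-probability) and the two target critic networks are assumed interfaces, not part of a specific released implementation.

```python
import torch

def soft_critic_target(batch, actor, target_q1, target_q2, alpha, gamma=0.99):
    """y_t = r_t + gamma*(1 - d_t)*[min_i Q_target_i(s_{t+1}, a) - alpha*log pi(a|s_{t+1})]."""
    with torch.no_grad():
        a_next, log_pi = actor.sample(batch["next_obs"])        # a ~ pi(.|s_{t+1})
        q_next = torch.min(target_q1(batch["next_obs"], a_next),
                           target_q2(batch["next_obs"], a_next))  # clipped double Q-learning
        v_next = q_next - alpha * log_pi                          # soft value, Eq. (1)
        return batch["reward"] + gamma * (1.0 - batch["done"]) * v_next
```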
This is because the agent must learn to extract the right information from the images to successfully perform the task at hand. SAC with a convolutional encoder can be used to learn low-dimensional representations of image observations, which are then provided to the actor and critic networks. However, this often fails. Sample-efficient learning of SAC agents from image observations requires additional supervision such as input reconstruction [16], contrastive representation learning [17], or image augmentations [8, 9]. In this work, we consider the recently proposed RAD [8] and DrQ [9] methods that apply image augmentations for sample-efficient learning of continuous control policies from image observations. In RAD and DrQ, the image observations are transformed with a random shift before each forward pass on the convolutional encoder. DrQ further proposes to average the Q-learning targets in Eq. 1 over \(K\) image transformations. This reduces the variance in the learning targets of the critic, improving the stability and efficiency of learning. We apply random shift image augmentation (by \(\pm 4\) pixels) to the costmap observations. The augmented images are passed to a convolutional encoder consisting of four \(3\times 3\) convolutional layers with 32 filters and a stride of 1 followed by ReLU activation. The output of the final convolutional layer is flattened and passed to a fully connected layer, followed by layer normalization and tanh activation to yield a 50-dimensional state representation. The actor and critic networks use the same MLP architecture with 4 fully connected layers of 1024 hidden units. The actor predicts the mean and diagonal covariance of the Gaussian policy based on the encoded state vector. The critic networks predict the scalar state-action values based on the encoded state vector and an action vector. Following previous works [16, 17, 8, 9], we train the convolutional encoder network using only the critic loss and then detach the network parameters from the actor loss for improved training stability. ## V Performance of RL Training We now evaluate the training performance of RAD, DrQ and other baseline RL methods in our dummy training environment. We train the RL agents on our polar costmap environment from Section II-A, and compare against Cartesian costmap environments (similar to [7]) where we do not convert to polar coordinates before generating the image. We train for 10,000 episodes with the hyper-parameter values listed in Table I, The trained agents are evaluated over 1000 episodes in the training environment. We define the success rate as the percentage of episodes in which the agent reached the goal. The collision rate is defined as the percentage of episodes where the agent collides with the obstacles. An episode can be neither a success nor a collision if the robot stops and the episode times out. In Table II, we compare the polar and Cartesian costmaps using the RAD version of SAC. While the information regarding the robot's orientation is implicit in the polar costmap, this information is missing in the Cartesian costmap. We explored three ways to represent this: (i) rotating the Cartesian costmap by the robot orientation angle, (ii) drawing an arrow at the center of the costmap to denote the robot orientation, or (iii) appending an extra channel to the costmap with the robot orientation angle. The agent with polar costmap observations significantly outperforms those with Cartesian costmap observations. 
We hypothesize that this is because the polar costmaps better match the action space of the robot and also implicitly represent the robot orientation information, which allows for better generalization. We use the better performing polar costmap environment in the rest of our experiments. We next compare the performance of RAD, DrQ (with K=2), and the following RL baselines in Table III: \(\bullet\)**DQN**. To evaluate if discrete control is easier to learn, we discretize the action space of the robot with six possible linear/angular velocity pair combinations and train a standard DQN agent from the stable baselines library [18]. \(\bullet\)**PPO**. To evaluate if the SAC agents perform better than other actor-critic algorithms, we also compare against the popular PPO agent from the stable baselines library [18]. \(\bullet\)**SAC from raw LiDAR observations**. To evaluate the importance of image-based game-like states, we compare against a SAC agent trained on raw LiDAR observations (similar to [5, 6]). For this agent, the actor and critic networks receive state vectors consisting of the raw LiDAR readings and the coordinates of the next waypoint. \(\bullet\)**DWA-RL with SAC**. To evaluate if it is beneficial to combine the standard DWA planner with RL, we implement the observation space and reward function of the DWA-RL method [4] and train our SAC agent on this hybrid setup. The DrQ method achieves the highest success rate (\(>99\%\)) with the fewest collisions. We also experimented with stacking four consecutive frames as observations to the DrQ method but observed that these agents tend to have trouble navigating around obstacles, reducing the success rate to \(94.9\%\). We note that the success rates we obtain with the baseline algorithms are lower than those observed in the literature [7, 4, 5, 6]. We believe this is partly because we only run for 10,000 episodes (which corresponds to \(<500000\) steps). However, this is sufficient for training the DrQ agent and demonstrates the sample-efficiency of this variant of SAC. Another potential reason is that our training environment contains challenging scenarios requiring tight turns (see Fig. 2), but this is necessary to obtain agents that will work for the real-world cases described below. ## VI Design of Robot Experiments We now describe our experiments for testing the local planners on a physical robot. We use a ClearPath Robotics Jackal robot [19] equipped with LiDAR, set to a scanning frequency of 5Hz. The experiments cover a range of scenarios that an autonomous robot would encounter in the physical world. A failed traversal translates into a robot's collision with a static obstacle (e.g. wall), or a dynamic obstacle (e.g. pedestrian). Moreover, if the planner fails to complete the global plan, then the robot fails that scenario. In addition to simply measuring success/failure, we are also interested in the nature of the trajectory produced by each approach. Is it smooth? How does the robot react to an obstacle? **Test cases.** The experiments were conducted in a facility that includes an open room and a maze component with tight corners and narrow doorways shown in Fig.3. We refer to the maze shown in the first two images of Fig.3 as the _UNIX maze room_ (named after letters that make up obstacles in four separate rooms). We describe four test cases: \(\bullet\) **(C1) Room I to room N through doorway:** shown in Fig.3 (left). Here the robot's task is to travel through a narrow doorway while making a 180-degree turn. 
In this case, all the obstacles are fixed and included in the global map, and so the only job of the local planner is to follow the global plan (which will be a collision-free path from start to goal) as closely as possible. However, in order to make the turn smoothly the planner must maintain a small turn radius (the ratio between linear and angular velocity). \(\bullet\) **(C2) Room I to room X with "unexpected" static obstacle:** shown in Fig.3 (mid). In this experiment, the robot goal is selected before the obstacle is in place. After the goal is selected and the global plan is computed, a static obstacle (a cardboard cutout of a person) is placed in the robot's global plan. As the robot nears the obstacle, the next eligible waypoint will be beyond the obstacle and the local planner will need to navigate round the obstacle. \(\bullet\) **(C3) Avoiding a walking pedestrian on a straight path:** shown in Fig.3 (right). Here the robot must traverse a straight path while a pedestrian is walking towards the robot. This case tests the local planner's ability to detect and navigate around a moving object. For this experiment it would always be possible to generate an "unavoidable collision" by having the pedestrian walk quickly at high speed into the robot. To avoid this we ask the pedestrian to stop when they are right in front of the robot. The desired behavior is then for the robot to back up or turn round the pedestrian. The undesired behavior is to keep on moving forward into the pedestrian. \(\bullet\) **(C4) Pedestrian crossing the robot path:** We extend the previous test case (C3) by asking the pedestrian to perpendicularly cross the robot's global path. The desired behavior is for the robot to wait and then continue after the pedestrian has crossed.

Fig. 3: Robot experiment test cases.

**Local planners.** We test with the DrQ variant of SAC since it had the best training performance of all the RL algorithms in Section V. We log the trajectories for the resulting _SACPlanner_ and compare against the Dynamic Window Approach (DWA), as well as the Shortest Path (SP) planner discussed in Section III that always tries to get to the next waypoint using a shortest path in the Occupancy Grid. ## VII Experimental Results The robot trajectories for each of (C1)-(C4) are shown in Fig.4. We denote the start and goal along with the collision points. For (C1)&(C2) we swap the direction of travel for half the runs. The color of the trajectory represents linear velocity. We also show the Occupancy Grid values in gray (taken from the map and the LiDAR). For (C3)&(C4) with a dynamic obstacle the gray shading captures all the positions of the obstacle over time. The 3 local planners have qualitatively different behavior which we now describe in detail for each case. \(\bullet\) **(C1):** DWA (which generates circular arcs) has the smoothest trajectory through the door. However, when starting at the top it miscalculated the best turning radius and aborted next to the 'N' obstacle each time. The SP planner never collided with an obstacle and (not surprisingly since it was running shortest paths on a grid) it traveled in a series of straight lines (whose endpoints are denoted with green dots). SACPlanner was also successful in all cases. However, it had to "back off" multiple times (denoted by the blue parts of the trajectory) before aligning correctly with the doorway.
Although DWA tried to deviate from the global plan, it did not do so enough, and therefore collided with the obstacle every time. The SP planner was successful when starting from the bottom. When starting from the top, the shortest path around the obstacle alternated between "going left" and "going right". This indecision led to some collisions. SACPlanner often backed off multiple times when confronted with the obstacle. However, it eventually made it round the obstacle every time. \(\bullet\)**(C3):** Both DWA and SP were unable to deal with the fact that the pedestrian obstacle was approaching and hence the "correct" trajectory kept changing. Even though the pedestrian stopped right in front of the robot, both DWA and SP kept going and caused a collision. SACPlanner went backwards when the pedestrian got close and then directed the robot to take a wide berth in the available open space. **Quantitative metrics:** In Table IV we show the mean travel time (\(s\)), mean travel distance (\(m\)), mean speed (\(m/s\)), and collision rate for the 3 local planners on (C1)-(C3) across all runs. For DWA on (C1) we only consider the non-aborted runs. For (C2)-(C3) we remove the obstacle after each collision and so the robot will still reach the goal. We note that the "backing off" behavior of SACPlanner leads to greater distances/times than DWA and SP, but this is how it is able to achieve a much lower collision rate. \(\bullet\)**(C4)** When the pedestrian switches to walking across the robot's path rather than walking towards it, the results are similar to (C3). Both DWA and SP are not reactive enough and collide every time. However, SACPlanner backs off when the pedestrian is close, and then resumes traveling towards the goal after the pedestrian has passed through. ### _Trajectory Analysis_ In order to understand more deeply the difference in behavior of DWA and SACPlanner, Fig.5 depicts a single run from test case (C3). Fig.5(a) shows the trajectory, Fig.5(b) plots the linear velocity, Fig.5(c) shows the distance to the nearest 'front obstacle' (within \(\pm\frac{\pi}{4}\)rad range from the current yaw), and Fig.5(d) plots the angular velocity. The key feature of these plots is that when the pedestrian is close, DWA slows down and turns a little, whereas SACPlanner goes into reverse (note the blue color in Fig.5(b)&(c)) and turns a lot so as to go around the pedestrian. This "reactiveness" to obstacles also manifests in more turning even when the robot can go in a straight line. ## VIII Conclusions and Future Work In this work, we have examined how training for RL-based local planners can be improved by using polar costmaps and regularization on top of the SAC algorithm to achieve success rates close to 100% after only 10,000 episodes. In addition, we have done a detailed trajectory analysis to show how the resulting SACPlanner is more robust and more responsive to dynamic obstacles than non-RL algorithms. For future work, we would like to improve the smoothness of SACPlanner when there are no unexpected obstacles, and we plan to develop a cooperative version of SACPlanner for when two or more robots are in close proximity.

Fig. 4: Trajectory comparison between DWA, Shortest Path (SP) vs. SAC agent for each test case.

Fig. 5: Trajectory comparison between DWA and SACPlanner based on logs from the test case (C3).
2307.12627
An individual-based model to explore the impact of psychological stress on immune infiltration into tumour spheroids
In recent in vitro experiments on co-culture between breast tumour spheroids and activated immune cells, it was observed that the introduction of the stress hormone cortisol resulted in a decreased immune cell infiltration into the spheroids. Moreover, the presence of cortisol deregulated the normal levels of the pro- and anti-inflammatory cytokines IFN-{\gamma} and IL-10. We present an individual-based model to explore the interaction dynamics between tumour and immune cells under psychological stress conditions. With our model, we explore the processes underlying the emergence of different levels of immune infiltration, with particular focus on the biological mechanisms regulated by IFN-{\gamma} and IL-10. The set-up of numerical simulations is defined to mimic the scenarios considered in the experimental study. Similarly to the experimental quantitative analysis, we compute a score that quantifies the level of immune cell infiltration into the tumour. The results of numerical simulations indicate that the motility of immune cells, their capability to infiltrate through tumour cells, their growth rate and the interplay between these cell parameters can affect the level of immune cell infiltration in different ways. Ultimately, numerical simulations of this model support a deeper understanding of the impact of biological stress-induced mechanisms on immune infiltration.
Emma Leschiera, Gheed Al-Hity, Melanie S. Flint, Chandrasekhar Venkataraman, Tommaso Lorenzi, Luis Almeida, Chloe Audebert
2023-07-24T08:58:33Z
http://arxiv.org/abs/2307.12627v2
An individual-based model to explore the impact of psychological stress on immune infiltration into tumour spheroids ###### Abstract In recent _in vitro_ experiments on co-culture between breast tumour spheroids and activated immune cells, it was observed that the introduction of the stress hormone cortisol resulted in a decreased immune cell infiltration into the spheroids. Moreover, the presence of cortisol deregulated the normal levels of the pro- and anti-inflammatory cytokines IFN-\(\gamma\) and IL-10. We present an individual-based model to explore the interaction dynamics between tumour and immune cells under psychological stress conditions. With our model, we explore the processes underlying the emergence of different levels of immune infiltration, with particular focus on the biological mechanisms regulated by IFN-\(\gamma\) and IL-10. The set-up of numerical simulations is defined to mimic the scenarios considered in the experimental study. Similarly to the experimental quantitative analysis, we compute a score that quantifies the level of immune cell infiltration into the tumour. The results of numerical simulations indicate that the motility of immune cells, their capability to infiltrate through tumour cells, their growth rate and the interplay between these cell parameters can affect the level of immune cell infiltration in different ways. Ultimately, numerical simulations of this model support a deeper understanding of the impact of biological stress-induced mechanisms on immune infiltration. _Keywords:_ Numerical simulations; Immune infiltration; Psychological stress; Individual-based models; Tumour-Immune interactions ## 1 Introduction The ability of psychological stress to induce immune suppression is widely recognised, but the mechanisms underlying the effects of psychological stress on the adaptive immune response during tumour progression are not completely understood. There has been increasing interest in detailing the mechanistic role that psychological stress may play in the context of initiation and progression of cancer. In particular, it has been reported that psychological stress positively influences carcinogenesis through mechanisms that promote proliferation, angiogenesis and metastasis, as well as mechanisms that protect tumour cells from apoptosis [1, 2]. The negative role played by psychological stress on the immune system has also been documented. Using a pre-clinical mouse model, in [3] the authors have shown that psychological stress has a negative impact on T cell numbers and activation, as evidenced by a decrease in the numbers of CD8+ and CD3+CD69+ T cells. In [4], the authors developed a 3D _in vitro_ model to explore the effects of the stress hormone cortisol on immune cell infiltration into tumour spheroids. Using two independent image-based algorithms, they quantified the effects of cortisol on immune infiltration, which was assessed by counting the number of immune cells within the tumour spheroid boundary. The results from this model recapitulated the conclusions of [3], by showing that cortisol triggered a reduction in immune infiltration levels. The mixture of cytokines produced in the tumour-microenvironment plays a key role in tumour progression [5]. Pro-inflammatory cytokines that are released in response to infection can inhibit tumour development and progression. Alternatively, tumour cells can produce anti-inflammatory cytokines that promote growth, attenuate apoptosis and facilitate metastasis. 
In the experiments reported in [4], cortisol downregulated IFN-\(\gamma\) and upregulated IL-10. IFN-\(\gamma\) is a pro-inflammatory cytokine that stimulates immune response, through T cell trafficking in the tumour-microenvironment and infiltration [6, 7], whereas IL-10 is an anti-inflammatory cytokine that inhibits immune response by reducing T cell proliferation [8, 9]. From a biological and medical perspective, it is difficult to investigate the connection between psychological stress, immune infiltration and the underlying molecular and cellular processes. The challenge lies in integrating theoretical and empirical knowledge to achieve a deeper understanding of the mechanisms and factors that contribute to inhibition of the anti-tumour immune response. In this context, mathematical models provide easy and cheap tools towards identifying dependencies between different biological phenomena and how these may affect the efficacy of the immune response on much shorter timescales than laborious and expensive experiments. Different aspects of the interaction dynamics between immune and tumour cells have been studied using different deterministic continuum models formulated as ordinary differential equations (ODEs) [10, 11, 12, 13], integro-differential equations (IDEs) [14, 15, 16] and partial differential equations (PDEs) [17, 18, 19, 20]. These models usually describe the evolution of tumour and immune cell densities that depend on one or more independent variables, usually time and/or space. Such models are defined on the basis of cell population-level phenomenological assumptions, which may limit the amount of biological detail incorporated into the model. By using computational models, such as cellular-automaton (CA) models [21, 22], hybrid PDE-CA models [23, 24] and individual-based models (IBMs) [25, 26, 27, 28], a mathematical representation of biological phenomena that are challenging to include in purely continuum models can be achieved. In fact, these models can be posed on a spatial domain, where cells spatially interact with each other according to a defined set of rules, which can collectively generate global emergent behaviours of tumour-immune cell competition. In [24], we proposed an IBM that describes the earliest stages of tumour-immune competition. In this model, we included cytotoxic T lymphocytes (CTLs) and tumour cells, which interact in a two-dimensional domain under a set of rules describing cell division, migration via chemotaxis, cytotoxic killing of tumour cells by CTLs and immune evasion. However, the model in [24] does not consider the role played by psychological stress in immune infiltration and the influence of pro- and anti-inflammatory cytokines on tumour progression. These aspects are addressed in the present work. In light of these considerations, and motivated by _in vitro_ experimental observations in co-culture between cancer spheroids and immune cells [4], in this paper we develop an IBM to study the effect of psychological stress on immune infiltration. The model builds on our previous work [24] and is calibrated to qualitatively reproduce, _in silico_, the experimental results presented in [4]. As mentioned earlier, in this study the authors found that the introduction of cortisol in the co-culture resulted in a decrease in immune cell infiltration into tumour spheroids, as well as in the alteration of IFN-\(\gamma\) and IL-10 levels. 
In our model, we assume that cells are exposed to psychological stress, and that this deregulates IFN-\(\gamma\) and IL-10 levels. We explore the processes underlying the emergence of different levels of immune infiltration, with particular focus on biological mechanisms regulated by IFN-\(\gamma\) and IL-10. Based on one of the two image-based algorithms developed in [4] to quantify immune infiltration, in our numerical simulations we compute a score to quantify the effects of psychological stress on immune infiltration. ## 2 Methods ### Summary of the experimental protocol employed in [4] A summary of the experimental protocol employed in [4] is provided below (further details can be found in [4]). The mathematical model and numerical simulations will then be implemented accordingly, in order to facilitate comparison with the experiments. A schematic summarising the experimental procedure employed in [4] is displayed in Fig.1. **Growth of the spheroids.** Spheroids from a murine triple negative 66CL4 breast cancer cell line were generated. It took 4 days for the spheroids to fully form, during which their area increased over time. The cell line was seeded at different densities. The seeding density with the largest area, and with the least variation over 4 days of growth post culturing cells into spheroids, was chosen (_cf._ Group-**1** in Fig.1). This was done to ensure that the size of the spheroids remained stable and that changes in the spheroids were due to the infiltration of immune cells. Figure 1: **Schematic summarising the experimental protocol employed in [4].** **Introduction of splenocytes.** After full spheroid generation, immune cells (splenocytes), containing activated T lymphocytes, were co-cultured with spheroids. Cortisol was added to the co-culture and spheroids were later split into different groups. The groups relevant to our study are: Group-**1** containing spheroids only, Group-**2** comprising spheroids and splenocytes, and Group-**3** containing spheroids, splenocytes and cortisol. **Trafficking index measuring infiltration levels of immune cells.** To test whether cortisol caused a reduction in immune infiltration in the co-culture, each group was imaged daily for 4 days. From two image-based algorithms, two trafficking indices were computed every day to measure immune cell infiltration into the spheroids [4] (_i.e._ to quantify the number of immune cells within the boundary of the tumour spheroids). Below we report on the implementation of the trafficking index that inspired the development of the score proposed in the present study to measure infiltration levels. Further details can be found in [4]. The _classification-based trafficking index_ (TIC) is computed by first using a machine learning algorithm to classify each pixel in the image into one of three classes: background, tumour cell or immune cell. The TIC is then based on the number of pixels classified as immune cells that are completely surrounded by pixels classified as tumour cells, divided by the total count of pixels surrounded by tumour cells. The resulting statistic yields a number in the interval \([0,1]\), with a larger TIC indicating a greater level of trafficking. **Investigating the effects of cortisol on immune infiltration.** Over 4 days, the co-cultures in Group-**2** and Group-**3** were imaged and the TIC was computed from the corresponding images.
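To make the pixel-counting step of the TIC concrete, the sketch below computes it for a 2D label image. This is a minimal illustration rather than the published pipeline: it assumes a classifier has already produced an integer label image (0 = background, 1 = tumour cell, 2 = immune cell is an assumed encoding), and it reads "completely surrounded by tumour pixels" as "all eight neighbouring pixels are tumour pixels".

```python
import numpy as np

def trafficking_index(labels: np.ndarray) -> float:
    """Classification-based trafficking index (TIC) for a 2D label image.

    `labels` is an integer array with 0 = background, 1 = tumour cell and
    2 = immune cell (an assumed encoding; the upstream machine-learning
    pixel classifier is not reproduced here).
    """
    tumour = labels == 1

    # A pixel is "surrounded by tumour" if all 8 of its neighbours are tumour
    # pixels; pixels on the image border are excluded because their
    # neighbourhood is incomplete.
    surrounded = np.zeros_like(tumour)
    surrounded[1:-1, 1:-1] = (
        tumour[:-2, :-2] & tumour[:-2, 1:-1] & tumour[:-2, 2:]
        & tumour[1:-1, :-2] & tumour[1:-1, 2:]
        & tumour[2:, :-2] & tumour[2:, 1:-1] & tumour[2:, 2:]
    )

    total_surrounded = surrounded.sum()
    if total_surrounded == 0:
        return 0.0
    immune_surrounded = np.sum(surrounded & (labels == 2))
    return float(immune_surrounded / total_surrounded)
```

By construction the returned value lies in \([0,1]\), with larger values indicating deeper immune trafficking, matching the interpretation of the TIC above.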
It was found that, compared to the group with untreated spheroids and splenocytes only (_i.e._ Group-**2**), the introduction of cortisol significantly reduced immune cell infiltration into the spheroids in Group-**3**. Moreover, cortisol significantly reduced IFN-\(\gamma\) levels and increased IL-10 levels. ### Modelling framework Building upon our previous work [24], to reproduce the _in vitro_ results presented in [4], we consider two cell types: tumour cells and immune cells. Although activated immune cells have been considered in the experiments, here we focus on CTLs only. We use a Cellular Potts (CP) approach and the CompuCell3D open-source software [29] to model the interactions between these two cell types. Adhesive interactions between cells may affect the physical capability of CTLs to infiltrate through tumour cells. In this context, the choice of a CP model is of particular interest, since adhesive interactions between neighbouring cells are represented through specific parameters, which describe the net adhesion/repulsion between cell membranes [29] (see Supp.Mat.A for a detailed description of the implementation of our CP model). Moreover, the CompuCell3D software easily allows for the visualisation of the results of numerical simulations. To carry out numerical simulations of the model, we consider both a 2D square spatial domain and a 3D cubic spatial domain. At each time step, the states of the cells are updated according to the rules described below. **Growth of tumour cells.** We denote by \(N_{T}(t)\) the number of tumour cells in the system at time \(t\) and we label each cell by an index \(n=1,\ldots,N_{T}(t)\). In the experimental setup described in Sec. 2.1, spheroids are cultured for a sufficient time until they attain a stable size. To reproduce such dynamics, we allow a certain time for the tumour cells to grow to a maximal reachable size corresponding to the carrying capacity of the population. At the initial time of simulations, we assume a certain number of tumour cells to be tightly packed in a circular configuration positioned at the centre of the domain. At each time-step, tumour cells grow at a rate drawn from a uniform distribution. When a tumour cell grows to a critical size, it divides through mitosis. We refer the reader to our previous paper [24] for a detailed description of the modelling strategy employed to represent cell division. Tumour cells can die due to intra-population competition (_i.e._ competition between tumour cells for limited space and resources), at a rate proportional to the total number of tumour cells. If tumour cells exhaust their lifespan (which is drawn from a uniform distribution when cells are created) then they die. Dead tumour cells are removed from the domain. **Introduction of CTLs.** We denote by \(N_{C}(t)\) the number of CTLs in the system at time \(t\), and we label each of them by an index \(m=1,...,N_{C}(t)\). When introduced, CTLs are randomly distributed at the border of the spatial domain. Once in the domain, CTLs grow, divide through mitosis and die due to intra-population competition (_i.e._ competition between CTLs for limited space and resources) and natural death according to rules similar to those used for tumour cells. Tumour cells at the border of the tumour secrete a chemoattractant (which models the effects of IFN-\(\gamma\)) which triggers the movement of CTLs towards tumour cells. A detailed description of the chemoattractant dynamics is given in Sup.Mat.A.
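Before turning to CTL movement and killing, the growth/division/death bookkeeping described above can be summarised in a minimal, non-spatial sketch. This is not the Cellular Potts model itself (there is no lattice, adhesion or chemotaxis here), and all numerical defaults are purely illustrative; it only shows how per-step growth drawn from a uniform distribution, division at a critical size, a finite lifespan and competition-dependent death act on a list of cells.

```python
import random

def step_cells(cells, growth_range=(0.4, 0.6), critical_size=50.0,
               competition_rate=1e-4, lifespan_range=(300.0, 500.0)):
    """One bookkeeping step for a list of cells (tumour cells or CTLs).

    Each cell is a dict with keys 'size', 'age' and 'lifespan'.  All numbers
    are illustrative and in arbitrary units; the spatial Cellular Potts
    dynamics (adhesion, pixel copies, chemotaxis) are deliberately left out.
    """
    n = len(cells)
    updated = []
    for cell in cells:
        # growth at a rate drawn anew from a uniform distribution
        cell["size"] += random.uniform(*growth_range)
        cell["age"] += 1

        # death by intra-population competition (probability proportional to
        # the current population size) or by exhaustion of the lifespan;
        # dead cells are simply not carried over, i.e. removed from the domain
        if random.random() < competition_rate * n or cell["age"] > cell["lifespan"]:
            continue

        # mitosis once the critical size is reached: here the cell is replaced
        # by two daughters of half that size with freshly drawn lifespans
        # (how division is actually represented on the lattice follows [24])
        if cell["size"] >= critical_size:
            for _ in range(2):
                updated.append({"size": critical_size / 2.0, "age": 0,
                                "lifespan": random.uniform(*lifespan_range)})
        else:
            updated.append(cell)
    return updated
```

Because the competition term grows with the population size while division proceeds at a roughly constant per-cell rate, repeated application of such a step produces the logistic-type saturation reported for the spheroids in Sec. 3.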
Once in the domain, CTLs move up the gradient of the chemoattractant towards tumour cells. According to the experiments, CTLs are activated against tumour cells. Therefore, we suppose that, upon contact, CTLs can induce tumour cell death with a certain probability. We refer to this probability as the "immune success rate". If tumour cells satisfy the conditions to be eliminated, then they die. **Infiltration score.** Similarly to the TIC proposed in [4] to measure immune infiltration, in our work we define an 'infiltration score'. This score allows us to quantify the level of CTL infiltration into the tumour. Provided that there are tumour cells in the domain, we define the infiltration score as the number of CTLs surrounded by tumour cells, divided by the number of tumour cells and CTLs surrounded by tumour cells; that is, \[I(t):=\frac{\sum_{m=1}^{N_{C}(t)}\delta_{m\in N_{CS}(t)}}{\sum_{m=1}^{N_{C}(t)}\delta_{m\in N_{CS}(t)}+\sum_{n=1}^{N_{T}(t)}\delta_{n\in N_{TS}(t)}}. \tag{2.1}\] In (2.1), \(\delta_{m\in N_{CS}(t)}=1\) if \(m\in N_{CS}(t)\), and \(\delta_{m\in N_{CS}(t)}=0\) otherwise, where \(N_{CS}(t)\) denotes the set of indices of CTLs surrounded by tumour cells at time \(t\). The function \(\delta_{n\in N_{TS}(t)}\) is defined in a similar way, where \(N_{TS}(t)\) denotes the set of indices of tumour cells surrounded by tumour cells at time \(t\). In CompuCell3D, these terms are handled by using specific functions which track the neighbours of every cell (further details can be found in Sup.Mat.A). Note that \(0\leq I(t)\leq 1\). **Investigating the effects of psychological stress on immune infiltration.** Over 4 days, CTLs move via chemotaxis towards the tumour and infiltrate into it. Through numerical simulations, we investigate the effects of three parameters associated with IFN-\(\gamma\) and IL-10, which we expect to play a key role in determining the infiltration of CTLs into the tumour. These three parameters are: the secretion rate of the chemoattractant by tumour cells, the "tumour cell-CTL adhesion strength" and the growth rate of CTLs. Below we detail how these three parameters are associated with IFN-\(\gamma\) and IL-10. It has been shown that IFN-\(\gamma\) stimulates various chemokines (_e.g._ CXCL9/10/11) which drive the chemotactic movement of CTLs towards the tumour [6, 30, 31]. Therefore, in our study, the role of IFN-\(\gamma\) is investigated by varying the secretion rate of the chemoattractant by tumour cells. Moreover, IFN-\(\gamma\) induces the expression of cellular adhesion molecules (_e.g._ E-cadherin or ICAM-1), which enhance the infiltration of CTLs into the tumour [32, 33]. Therefore, the role of IFN-\(\gamma\) is also investigated by varying the "tumour cell-CTL adhesion strength" (TC-CTL adhesion strength). This parameter refers to the CP parameter associated with the adhesion between tumour cell and CTL membranes. In our model, the TC-CTL adhesion strength regulates CTL ability to infiltrate through tumour cells. In particular, high values of this parameter facilitate the infiltration of CTLs through tumour cells, while low values lead the CTLs to accumulate at the margin of the tumour, without infiltrating into it. More details on the calibration of this parameter are provided in Sup.Mat.B. Finally, IL-10 is an immunoregulatory cytokine that can attenuate inflammatory responses by suppressing CTL production and proliferation [8, 9]. Therefore, the effect of IL-10 is investigated by varying the growth rate of CTLs.
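Returning briefly to the infiltration score before the simulated scenarios are introduced: Eq. (2.1) is simply a ratio of counts once each cell has been flagged as "surrounded by tumour cells" or not. The sketch below assumes such flags are already available (in the actual model they come from the CompuCell3D neighbour-tracking functions described in Sup.Mat.A); the per-cell record layout used here is an illustrative assumption.

```python
def infiltration_score(cells):
    """Evaluate Eq. (2.1) from per-cell records.

    `cells` is an iterable of dicts with keys 'kind' ("CTL" or "tumour") and
    'surrounded_by_tumour' (bool); this data layout is assumed purely for
    illustration, since in the CP model the flags are derived from each
    cell's neighbour list.
    """
    ctl_surrounded = sum(1 for c in cells
                         if c["kind"] == "CTL" and c["surrounded_by_tumour"])
    tumour_surrounded = sum(1 for c in cells
                            if c["kind"] == "tumour" and c["surrounded_by_tumour"])
    denominator = ctl_surrounded + tumour_surrounded
    return ctl_surrounded / denominator if denominator > 0 else 0.0
```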
In this work, we explore different scenarios. We suppose that in non-stressed conditions IFN-\(\gamma\) levels are high, IL-10 levels are low, and CTLs infiltrate into the tumour. In stressed conditions instead, we suppose that IFN-\(\gamma\) levels decrease and IL-10 levels increase, leading to a decreased CTL infiltration. By considering a range of values of these three parameters, we explore their impact on tumour-immune dynamics independently and together, assessing their influence on immune infiltration in a controlled manner. ## 3 Preliminary results in 2D and 3D In this section, the results of preliminary numerical simulations of the model in 2D and 3D are presented, which will be used to guide the simulations leading to the main results presented in Sec. 4. All the results we present in this section and in Sec. 4 are obtained by averaging over \(5\) simulations, which were carried out using the parameter values reported in Tab.1 and Tab.2. Full details of model implementation and model parameterisation are provided in Sup.Mat.A and Sup.Mat.B, respectively. Files to run a simulation example of the model with Compucell3D [29] are available at [https://plmlab.math.cnrs.fr/leschiera/roleofstress](https://plmlab.math.cnrs.fr/leschiera/roleofstress). ### Tumour development in the absence of CTLs We first establish a preliminary scenario where tumour cells grow, divide and die according to the rules described in Sec. 2.2, in the absence of CTLs. At the initial time of simulations, 35 tumour cells are placed in the domain. More details about the definition of the model initial conditions are given in Sup.Mat.B. We carry out numerical simulations for \(11\) days (which we count from day -7 to day 4). The plots in Fig.2**(a1)-(b1)** show the time evolution of the tumour cell number in 2D and 3D, while Fig.2**(a2)-(a3)** and Fig.2**(b2)-(b3)** display samples of initial and final spatial distributions of tumour cells 2D and 3D, respectively. The tumour growth is of logistic type, as expected due to the rules that govern tumour cell division and death. In more detail, as shown by Fig.2**(a1)-(b1)**, the number of tumour cells increases from day -7 to day 0. Around day 0, it reaches the carrying capacity. The number of tumour cells at carrying capacity is similar to the seeding density chosen in [4] (_cf._ Fig.1 for 66CL4 spheroids in [4]). From day 0 to day 4, the tumour cell number fluctuates around the carrying capacity. These simulations allowed us to calibrate the model parameters related to tumour cells in order to qualitatively reproduce the growth of the spheroids obtained in the experiments. The other simulations were carried out keeping the values of these parameters fixed and equal to those used for these simulations. ### Control scenario: CTL infiltration in non-stressed conditions In the experimental results presented by [4], in the absence of cortisol, immune cells are able to infiltrate the tumour spheroids. Here we verify the ability of our model to reproduce such dynamics by exploring the infiltration of CTLs into the tumour over \(4\) days. For these simulations, the initial number of tumour cells is set at carrying capacity, whereas 150 CTLs are introduced in the domain. The values of the parameters related to CTLs are chosen so as to qualitatively reproduce the interaction dynamics between spheroids and immune cells in non-stressed conditions presented in [4]. 
The parameters related to IFN-\(\gamma\) and IL-10 levels are set to baseline values (_i.e._ non-stressed conditions). In particular, we let tumour cells secrete the chemoattractant at a high rate, CTLs grow at their normal rate and display a high capability to infiltrate through tumour cells (_i.e._ we consider a sufficiently high value for the TC-CTL adhesion strength). In order to gain a deeper understanding of the effects produced by the three aforementioned parameters on immune infiltration, for the moment we simplify our model by assuming that CTLs are not able to eliminate tumour cells (_i.e._ the immune success rate is set equal to 0). The full model with an immune success rate greater than 0 will be considered in Sec. 4.3. The plots in Figs.3**(a1)-(b1)** show the time evolution of the number of tumour cells and CTLs in 2D and 3D, while Figs.3**(a2)-(a3)** and Figs.3**(b2)-(b3)** display samples of initial and final spatial distributions of tumour cells and CTLs in 2D and 3D, respectively. The choice of parameter values corresponding to these figures results in the infiltration of CTLs into the tumour. The plots in Figs.3**(c)-(d)** display, respectively, the corresponding average value of the infiltration score, computed via (2.1), and the average number of infiltrated CTLs over 4 days. Both in 2D and 3D, as soon as CTLs are introduced in the domain, they move towards the tumour and infiltrate it. Fig.3**(c)** indicates that the infiltration score increases over time, both in 2D and 3D. In 2D its value tends to saturate between day 3 and day 4. Moreover, the value of the infiltration score in the 3D setting is larger than in the 2D case. Note that, in 2D, the mean value of the infiltration score obtained at day 4 of simulations is similar to the mean value of the TIC computed in [4], when cortisol was not introduced in the co-culture (_cf._ Fig.5 in [4]). Fig.3**(d)** demonstrates that, in 2D, most of the CTLs infiltrate the tumour already at day 1, as the average number of infiltrated CTLs increases only slightly between day 1 and 4. On the other hand, in 3D, CTLs seem to be slightly slower in moving towards the tumour. However, the average number of infiltrated CTLs at the end of numerical simulations is similar in the two settings. Finally, as shown by Figs.3**(a1)-(b1)**, and as expected on the basis of the rules that govern tumour cell and CTL growth and death, both in 2D and 3D, over time the number of tumour cells fluctuates around the carrying capacity, while CTL number increases until it reaches a saturation value. This result indicates that the changes in the tumour surface and volume observed in Fig.3**(a3)** and Fig.3**(b3)** are due to the infiltration of CTLs into the tumour. Figure 2: **Tumour development in the absence of CTLs. Panels (a1) and (b1) display the time evolution of the tumour cell number in the absence of CTLs in 2D and 3D, respectively. These results correspond to the average over 5 simulations and the shaded areas indicate \(+/-\) standard deviation. Panels (a2)-(a3) and (b2)-(b3) display samples of the initial and final spatial distributions of tumour cells in 2D and 3D, respectively.** ## 4 Main results In this section we explore the effects of psychological stress on immune infiltration. To do so, first we decrease the secretion rate of the chemoattractant and the TC-CTL adhesion strength. These two parameters are associated with decreased levels of IFN-\(\gamma\).
Next, for each scenario considered, we decrease the growth rate of CTLs, which is associated with increased levels of IL-10. The initial number and position of tumour cells and CTLs are kept equal to those used in the control scenario. Exploiting the good agreement between the results of 2D and 3D simulations presented in the previous subsections, we now carry out 2D simulations only, since they require much less computational time than the corresponding 3D simulations. In this section, we report on results obtained by varying the values of the three aforementioned parameters while the other parameters are kept equal to the values used in the previous section. For each scenario, the infiltration score is computed via (2.1). ### Decreasing the secretion rate of the chemoattractant and the TC-CTL adhesion strength reduces the infiltration of CTLs into the tumour To investigate how immune infiltration is affected by IFN-\(\gamma\) levels in the domain, we start by comparing the control scenario of Sec. 3.2 with scenarios in which the values of the secretion rate of the chemoattractant and of the TC-CTL adhesion strength are reduced (_cf._ Tab.1 and Tab.2). Fig.4**(a)** displays the average value of the infiltration score at different times of the simulations, for high, intermediate and low values of the secretion rate of the chemoattractant and the TC-CTL adhesion strength. This figure shows that both parameters affect the infiltration of CTLs into the tumour, as the infiltration score decreases as soon as one of the two parameters is reduced. In addition, when the TC-CTL adhesion strength is sufficiently high, decreasing the secretion rate of the chemoattractant considerably reduces the infiltration score. On the other hand, for sufficiently low values of the TC-CTL adhesion strength, decreasing the secretion rate of the chemoattractant does not have an impact on the infiltration score, as its value is already small. Taken together, these results suggest that the secretion rate of the chemoattractant has an impact on T cell infiltration only when CTLs display a sufficiently high capability to infiltrate through tumour cells. Then, we analyse the spatial cell distributions observed at the end of simulations. Figs.4**(c)-(d)** show samples of the final spatial distributions of tumour cells and CTLs for intermediate and low values of the TC-CTL adhesion strength. Figs.4**(e)-(f)** show similar plots for intermediate and low values of the secretion rate of the chemoattractant. These plots are to be compared with the one in Fig.4**(b)**, which displays the final spatial distributions of tumour cells and CTLs obtained in the control scenario. In particular, Figs.4**(b)-(d)** show that decreasing the TC-CTL adhesion strength leads to scenarios in which CTLs accumulate around the tumour, because the secretion rate of the chemoattractant is high, but they do not infiltrate into it. CTLs around the tumour are not included in the cell count underlying the computation of the infiltration score defined via (2.1), because they are not surrounded by tumour cells. Therefore, the infiltration score decreases.
On the other hand, Figs.4**(b)-(e)-(f)** indicate that decreasing the secretion rate of the chemoattractant leads to scenarios in which CTLs away from the tumour are not sensitive to the gradient of the chemoattractant and, therefore, do not move towards the tumour. The more CTLs fail to sense the chemoattractant and to infiltrate the tumour, the lower the infiltration score becomes. Taken together, these results qualitatively reproduce key experimental findings presented in [4], which indicated that cortisol reduced IFN-\(\gamma\) levels and also reduced immune infiltration. The modelling assumption underlying these computational results may provide the following theoretical explanation for such behaviour. Since IFN-\(\gamma\) may affect both CTL movement and infiltration capabilities, deregulation of IFN-\(\gamma\) levels inhibits CTL ability to migrate towards the tumour and to infiltrate through tumour cells. The interplay between these mechanisms results in a progressive reduction of immune infiltration levels. Figure 3: **Control scenario: CTL infiltration in non-stressed conditions.** Panel **(a1)-(a2)** display, respectively, the time evolution of the numbers of tumour cells (in green) and CTLs (in red) in 2D and 3D for a choice of parameter values that results in the infiltration of CTLs into the tumour. These results correspond to the average over 5 simulations and the shaded area indicates \(+/-\) standard deviation. Panels **(a2)-(a3)** and **(b2)-(b3)** display samples of the initial and final spatial distribution of tumour cells (in green) and CTLs (in red) in 2D and 3D, respectively. Panel **(c)** displays the corresponding average value of the infiltration score, computed via (2.1), at different times of the 2D and 3D simulations. The error lines represent the standard deviation between 5 simulations. Panel **(d)** displays the ratio between the corresponding average number of infiltrated CTLs and the total number of CTLs at the end of 2D and 3D simulations. The error lines represent the standard deviation between 5 simulations. Figure 4: **Decreasing the secretion rate of the chemoattractant and the TC-CTL adhesion strength reduces the infiltration of CTLs into the tumour.** Panel **(a)** displays the average value of the infiltration score, computed via (2.1), for different values of the secretion rate of the chemoattractant and the TC-CTL adhesion strength, at different times of the simulations. The error lines represent the standard deviation between 5 simulations. Panel **(b)** displays a sample of the final spatial distribution of tumour cells (in green) and CTLs (in red) for the parameter values considered in the control scenario. Panels **(c)-(d)** display similar plots for 2 different values of the TC-CTL adhesion strength. Panels **(e)-(f)** display similar plots for 2 different values of the secretion rate of the chemoattractant. ### Decreasing the growth rate of CTLs reduces the number of infiltrated CTLs We further investigate the effects of psychological stress on immune infiltration by exploring the role played by IL-10. For these simulations, we consider the same parameter values used in the previous subsection but we reduce the value of the growth rate of CTLs (_cf._ Tab.1 and Tab.2).
Figs.5**(a)-(b)** show a comparison between the infiltration score obtained when the effects of IL-10 are not considered (_i.e._ for a normal value of the CTL growth rate), and the one obtained when the effects of IL-10 are considered (_i.e._ when the CTL growth rate is reduced). Figs.5**(c)-(d)** also compare the number of tumour cells and CTLs at the end of numerical simulations (_i.e._ at day 4 of the experiments) for the two scenarios considered. Comparing the results of Fig.5**(a)** with those displayed in Fig.5**(b)**, we see that, similarly to the results observed in the previous subsection, decreasing the growth rate of CTLs reduces the infiltration score only when the TC-CTL adhesion strength is sufficiently high. However, when the TC-CTL adhesion strength is sufficiently low, decreasing the growth rate of CTLs does not have an impact on the infiltration score, as its value is already small. As shown by Figs.5**(c)-(d)**, decreasing the growth rate of CTLs leads to a decreased number of CTLs at the end of simulations, while the final number of tumour cells remains similar in the two scenarios. If we assume that high levels of IL-10 inhibit CTL growth, the outputs of our model indicate that, as expected, decreasing the proliferation rate of CTLs diminishes the number of CTLs in the domain. Moreover, if CTLs display a sufficiently high capability to infiltrate through tumour cells, we observe a reduction in the number of infiltrated CTLs (_i.e._ the infiltration score decreases). On the other hand, if CTLs have a low capability to infiltrate through tumour cells, decreasing the proliferation rate of CTLs does not affect the infiltration score, as the number of infiltrated CTLs is already low. This suggests that high levels of IL-10 decrease immune infiltration only when CTLs display a sufficiently high capability to infiltrate through tumour cells, that is, when IFN-\(\gamma\) levels are sufficiently high. ### Increasing the immune success rate has an impact on the infiltration score only when the TC-CTL adhesion strength is sufficiently large So far, we have investigated with our model the effects of psychological stress on immune infiltration in the case where CTLs are not able to eliminate tumour cells. However, in [4] it is reported that immune cells are activated against the spheroids, although the cytotoxic effect of immune cells on tumour cells is not particularly pronounced. Motivated by these considerations, we now investigate tumour-immune dynamics and the effects of psychological stress on immune infiltration in the case where CTLs can eliminate tumour cells with a small probability. Figs.6**(a)-(b)** show a comparison between the infiltration score obtained with the parameter values considered in Sec. 4.1, assuming that CTLs are able, or not able, to eliminate tumour cells (_i.e._ the immune success rate is either zero or different from zero - _cf._ Tab.2). Figs.6**(c)-(d)** show a comparison between the number of tumour cells and CTLs at the end of simulations (corresponding to day 4 of the experiments) for the two scenarios considered. Comparing the results of Fig.6**(a)** with those displayed in Fig.6**(b)**, we see that, when the TC-CTL adhesion strength is sufficiently high, increasing the immune success rate decreases the infiltration score. This is probably due to the fact that, when CTLs can infiltrate through tumour cells, they are more likely to come into contact with tumour cells, thus increasing the chance for CTLs to eliminate them.
Since dead tumour cells are cleared from the domain, this in turn diminishes the number of CTLs surrounded by tumour cells, leading to a reduced infiltration score. However, when the TC-CTL adhesion strength is sufficiently low, increasing the immune success rate does not have an impact on the infiltration score. In fact, in this scenario, CTLs accumulate around the tumour, decreasing their probability to come into contact with tumour cells. This reduces their chance to eliminate tumour cells. Analogous considerations hold for the case in which lower growth rates of CTLs are considered (results not shown). As shown by Figs.6**(c)-(d)**, increasing the immune success rate leads to a slightly decreased number of tumour cells at the end of simulations only when sufficiently large values of the TC-CTL adhesion strength and the secretion rate of the chemoattractant are considered. On the other hand, for intermediate and sufficiently small values of these two parameters, increasing the immune success rate does not have an impact on the final number of tumour cells. Figure 5: **Decreasing the growth rate of CTLs reduces the number of infiltrated CTLs.** Plots in panels **(a)-(b)** display the average value of the infiltration score, computed via (2.1), for different values of the secretion rate of the chemoattractant and the TC-CTL adhesion strength, and at different times of the simulations. In panel **(a)** CTLs grow at their normal rate, while their growth rate is decreased in panel **(b)**. The error lines represent the standard deviation between 5 simulations. Panels **(c)-(d)** display the corresponding number of tumour cells and CTLs at the end of simulations (corresponding to day 4 of the experiments). Figure 6: **Increasing the immune success rate has an impact on the infiltration score only when the TC-CTL adhesion strength is sufficiently large.** Panel **(a)-(b)** displays the average value of the infiltration score, computed via (2.1), for different values of the secretion rate of the chemoattractant and the TC-CTL adhesion strength, and at different times of the simulations. In panel **(a)** CTLs are assumed not to be able to eliminate tumour cells, while they are assumed to be able to eliminate them in panel **(b)**. Panels **(c)-(d)** display the corresponding number of tumour cells and CTLs at the end of simulations (corresponding to day 4 of the experiments). ## 5 Discussion, conclusions and research perspectives The _in vitro_ co-culture experiments presented in [4] are performed in an isolated and relatively homogeneous environment and involve only a few constituents: tumour spheroids, activated immune cells, culture medium and cortisol. Furthermore, each experiment has clear observables, namely the confocal images of the co-culture, the trafficking indices and the levels of IFN-\(\gamma\) and IL-10, which make these experiments highly suitable to be studied through a mathematical model. In this paper, we have presented an IBM to describe the interaction dynamics between CTLs and tumour cells, to reproduce qualitative aspects presented in [4] and evaluate immune cell trafficking into tumour cells under normal and stressed conditions.
The results of numerical simulations qualitatively reproduce, both in 2D and 3D, the growth of the tumour spheroids prior the introduction of immune cells and the tumour-immune dynamics in non-stressed conditions. The tumour growth is of logistic type. In the control scenario, _i.e._ the scenario in which the secretion rate of the chemoattractant, the TC-CTL adhesion strength and the CTLs growth rate are set at their baseline values, CTLs are able to infiltrate into the tumour. We then have investigated the effects of psychological stress on immune infiltration. First, the results of our model support the idea that reducing the secretion rate of the chemoattractant and the TC-CTL adhesion strength, which are associated to a decrease in IFN-\(\gamma\) levels, reduces the infiltration of CTLs into the tumour. These results also suggest that the secretion rate of the chemoattractant is more likely to have an impact on T cell infiltration when CTLs display a sufficiently high capability to infiltrate through tumour cells. We have also studied the effects of psychological stress on immune infiltration by reducing the growth rate of CTLs, which is associated to increased IL-10 levels. Decreasing the growth rate of CTLs reduces the number of CTLs in the domain. This leads to a significant reduction in the infiltration score only when the TC-CTL adhesion strength is sufficiently large. Finally, we have performed numerical simulations by letting CTLs eliminate tumour cells with a small probability - _i.e._ when the immune success rate is greater than 0. In the scenario in which CTLs are able to infiltrate into the tumour, increasing the immune success rate leads to a reduced infiltration score, as tumour cells in contact with CTLs are eliminated. This in turn leads to a slightly decreased number of tumour cells at the end of simulations. In summary, the results of numerical simulations of our model indicate that the interplay between IFN-\(\gamma\) and IL-10 plays a key role in determining the effects of psychological stress on immune infiltration reported in [4], as both cytokines contribute to decrease immune infiltration. Moreover, our results shed light on the impact of three biological stress-induced mechanisms on immune infiltration. In particular, they support the idea that a high infiltration score can be obtained only when the secretion rate of the chemoattractant and the TC-CTL adhesion strength are large, provided that the growth of CTLs is not inhibited. On the other hand, reducing the value of these parameters can lead to a reduced immune infiltration in different ways. For example, we found that the parameter having the strongest impact on immune infiltration is the TC-CTL adhesion strength, which is associated with the physical capability of CTLs to infiltrate through tumour cells. In this regard, the development of abnormal structural features that inhibit the ability of CTLs to penetrate tumour sites is a hallmark of cancer progression [30]. Evidence is emerging that glucocorticoids act on adhesion of immune cells by inhibiting adhesion molecules (integrins and selectins) [34, 35]. The deregulation of adhesion molecules may act as barriers to T cell migration and infiltration. In this context, the results of this study support the idea that new glucocorticoid receptor antagonists should be developed to target cell adhesion molecules in order to enhance immune infiltration. 
The results of numerical simulations also support the idea that an efficient anti-tumour immune response can occur only in highly infiltrated tumours. This is a key result because it indicates that therapeutic strategies promoting the infiltration of CTLs into tumours may be a promising approach against cancer. In particular, our findings suggest that a synergistic effect can be achieved by combining glucocorticoid receptor antagonists, which facilitate CTL infiltration, with immune checkpoint therapies, which enhance the effectiveness of _in situ_ anti-tumour immune response [30]. The current version of our model can be developed further in several ways. Firstly, due to the high computational cost in simulating the three dimensional version of our model, we carried out 3D simulations only to part of our study. However, by running the simulations on high performance computers, this limitation may be addressed in the future and a larger spectrum of parameter values could be tested. To keep the model as simple as possible, we chose to include only mechanisms that were necessary to reproduce part of the experimental results presented in [4]. For example, our current model does not consider the effects of tumour necrosis and hypoxia or CTL exhaustion. These mechanisms can actively contribute to deregulate the normal levels of pro- and anti-inflammatory cytokines, resulting in more aggressive tumours and impaired immune response [30, 36, 37, 38]. From a biological point of view, a natural development of this work would consist in studying the effects of therapeutic strategies which counteract the negative impact of psychological stress on immune infiltration. In fact, in [4] it was found that the administration of glucocorticoid receptor antagonists reversed the effects of cortisol and significantly enhanced immune infiltration in tumour spheroids. The effects of therapeutic strategies could be incorporated into our model by, for example, including a detailed metabolic network at the sub-cellular level that directly influences the dynamics at the cellular level, such as CTL growth and movement. In this regard, we could also investigate the delivery schedule of therapeutic agents (_i.e._ time and dosage) that may make it possible to maximise the number of infiltrated CTLs at the end of the treatment. Despite its relative simplicity, our model provides a novel _in silico_ framework to investigate the impact of biological mechanisms linked to psychological stress on immune infiltration, and may be a promising tool to easily and cheaply explore therapeutic strategies designed to increase immune infiltration and improve the overall anti-tumour immune response. ## Funding E.L. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 740623). T.L. gratefully acknowledges support from the the PRIN 2020 project (No. 2020JWP23) "Integrated Mathematical Approaches to Socio-Epidemiological Dynamics" (CUP: E15F21005420006). T.L. gratefully acknowledge support of the Institut Henri Poincare (UAR 839 CNRS-Sorbonne Universite), and LabEx CARMIN (ANR-10-LABX-59-01). L.A., T.L. and E.L. gratefully acknowledge support from the CNRS International Research Project 'Modelisation de la biomecanique cellulaire et tissulaire' (MOCETIBI). ## Appendix A Details of the individual-based model The individual-based model (IBM) has been numerically simulated using the multicellular modelling environment CompuCell3D [29]. 
This software is an open source solver, which uses a Cellular Potts (CP) model [39] (also known as Glazier-Graner-Hogeweg model). In CP models, biological cells are treated as discrete entities represented as a set of lattice sites, defined as pixels in 2D (or voxels in 3D), each with characteristic values of area and perimeter (or volume and surface in 3D), and intrinsic motility on a regular lattice. Interaction descriptions and dynamics between cells are modelled by means of the effective energy of the system. This determines many characteristics such as cell size, motility, adhesion strength, and the reaction to gradients of chemotactic fields. During a simulation, each cell will attempt to extend its boundaries, through a series of index-copy attempts, in order to minimise the effective energy. The success of the index copy attempt depends on rules which take into account energy changes. Files to run a simulation example of the model with CompuCell3D [29] are available at [https://plmlab.math.cnrs.fr/leschiera/roleofstress](https://plmlab.math.cnrs.fr/leschiera/roleofstress). ### Cell types In CP models, cells are uniquely identified with an index \(\sigma(i)\) on each lattice site \(i\), with \(i\) a vector of integers occupying lattice site \(i\). Each cell in the model has a type \(\tau(\sigma(i))\), which determines its properties, and the processes and interactions in which it participates. Note that, technically, the extracellular medium is also considered as a cell of type medium. In our model, we define 3 cell types: medium, tumour cell and CTL. ### Cellular dynamics The effective energy is the basis for operation of all CP models, including CompuCell3D [29], because it determines the interactions between cells (including the extracellular medium). Configurations evolve to minimise the effective energy \(H\) of the system, defined in a two-dimensional system as \[H=\underbrace{\sum_{i,j}J(\tau(\sigma_{i}),\tau(\sigma_{j}))(1-\delta(\sigma_ {i},\sigma_{j}))}_{\text{boundary energy}}+\underbrace{\sum_{\sigma}\left[\lambda_{area}( \sigma)(a(\sigma)-A_{t}(\sigma))^{2}\right]}_{\text{area constraint}}+ \underbrace{\sum_{\sigma}\left[\lambda_{per}(\sigma)(p(\sigma)-P_{t}(\sigma) )^{2}\right]}_{\text{perimeter constraint}}.\] (A.1) The most important component of the effective energy equation is the boundary energy, which governs the adhesion of cells. The boundary energy \(J(\tau(\sigma_{i}),\tau(\sigma_{j}))\) describes the contact energy between two cells \(\sigma_{i}\) and \(\sigma_{j}\) of types \(\tau(\sigma_{i})\) and \(\tau(\sigma_{j})\). It is calculated by summing over all neighbouring pixels \(i\) and \(j\) that form the boundary between two cells. Moreover, \(\delta(\sigma_{i},\sigma_{j})=1\) if \(\sigma_{i}=\sigma_{j}\), and \(\delta(\sigma_{i},\sigma_{j})=0\) otherwise. Thanks to the term \((1-\delta(\sigma_{i},\sigma_{j}))\), the boundary energy contribution is considered only between lattice sites belonging to two different cells. When considering a two-dimensional domain, the second and third terms represent, respectively, a cell-area and cell-perimeter constraint. In particular, \(a(\sigma)\) and \(p(\sigma)\) are the surface area and perimeter of the cell \(\sigma\), \(A_{t}(\sigma)\) and \(P_{t}(\sigma)\) are the cell's target surface area and perimeter, and \(\lambda_{area}(\sigma)\) and \(\lambda_{per}(\sigma)\) are an area and perimeter constraint coefficients. 
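As a reading aid for Eq. (A.1), the sketch below evaluates the effective energy of a small 2D configuration directly. It is only an illustration of what CompuCell3D computes internally, not CompuCell3D code: the array layout, the function name, the restriction to nearest-neighbour pixel pairs and the uniform constraint coefficients are simplifying assumptions.

```python
import numpy as np

def effective_energy(sigma, cell_type, J, A_t, P_t, lam_area=10.0, lam_per=10.0):
    """Evaluate the 2D effective energy of Eq. (A.1) for a configuration.

    sigma     : integer numpy array of cell indices on the pixel lattice (0 = medium)
    cell_type : 1D numpy array mapping a cell index to its type (0 = medium)
    J         : matrix of boundary energies between types, J[type_i, type_j]
    A_t, P_t  : dicts of target areas and perimeters for each cell index > 0
    Only horizontally/vertically adjacent pixel pairs are used, which is a
    simplification of the neighbour orders available in CompuCell3D.
    """
    H = 0.0
    boundary_len = np.zeros(sigma.max() + 1)

    # Boundary energy: sum J over adjacent pixel pairs belonging to two
    # different cells (the (1 - delta) factor in Eq. (A.1)).
    for a, b in ((sigma[:, :-1], sigma[:, 1:]), (sigma[:-1, :], sigma[1:, :])):
        mask = a != b
        H += J[cell_type[a[mask]], cell_type[b[mask]]].sum()
        # every mismatched pair also adds one unit of boundary length to each
        # of the two cells involved (used below as the cell perimeter)
        boundary_len += np.bincount(a[mask], minlength=boundary_len.size)
        boundary_len += np.bincount(b[mask], minlength=boundary_len.size)

    # Area and perimeter constraints for every cell except the medium.
    for s in range(1, sigma.max() + 1):
        area = np.count_nonzero(sigma == s)
        if area == 0:
            continue  # cell index s no longer occupies any pixel
        H += lam_area * (area - A_t[s]) ** 2
        H += lam_per * (boundary_len[s] - P_t[s]) ** 2
    return H
```

In an index-copy attempt, the quantity that matters is the change \(\Delta H\) between the current configuration and the one obtained after the copy, which is what the acceptance rule discussed next operates on.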
Note that in 3D these two terms represent, respectively, a cell-volume and cell-surface constraint and, therefore, they might assume a different value. The cell configuration evolves through lattice-site copy attempts. To begin an index-copy attempt, the algorithm randomly selects a lattice site to be a target pixel \(i\), and a neighbouring lattice site to be a source pixel \(i^{\prime}\). If the source and target pixels belong to the same cell (_i.e._ if \(\sigma(i)=\sigma(i^{\prime})\)), they do not need to attempt an lattice-site copy and thus the effective energy will not be calculated. Otherwise, an attempt will be made to switch the target pixel as the source pixel, thereby increasing the surface area of the source cell and decreasing the surface area of the target cell. The algorithm computes \(\Delta H=H-H^{\prime}\), with \(H\) being the effective energy of the system and \(H^{\prime}\) being the effective energy if the copy occurs. Then, it sets \(\sigma(i)=\sigma(i^{\prime})\) with probability \(P(\sigma(i)\rightarrow\sigma(i^{\prime}))\) given by \[P(\sigma(i)\rightarrow\sigma(i^{\prime}))=\begin{cases}\quad 1\quad:\quad \Delta H\leq 0\\ \exp^{-\frac{\Delta H}{T_{m}}}\quad:\quad\Delta H>0.\end{cases}\] (A.2) The change in effective energy \(\Delta H\) provides a measure of the energy cost of such a copy and the parameter \(T_{m}\) determines the level of stochasticity of accepted copy attempts. The unit of simulation time is the Monte Carlo step (MCS). ### Subcellular dynamics and chemotaxis In our model we simulate CTL chemotaxis toward tumour cells, defined as the cell motion induced by the presence of a chemical. In CompuCell3D [29], chemotaxis is obtained biasing the cell's motion up or down a field gradient by adding a term \(\Delta H_{chem}\) in the calculated effective-energy change \(\Delta H\) used in the acceptance function (A.2). For a field \(c(i)\): \[\Delta H_{chem}=-\lambda_{chem}(\phi(i)-\phi(i^{\prime})),\] (A.3) where \(\phi(i)\) is the chemical field at the index-copy target pixel \(i\), \(\phi(i^{\prime})\) the field at the index-copy source pixel \(i^{\prime}\), and \(\lambda_{chem}\geq 0\) the strength of chemotaxis. The change in concentration of the chemical field \(\phi\) is obtained by solving a reaction-diffusion equation of the following general form: \[\frac{\partial\phi}{\partial t}=D\Delta\phi-\gamma\phi+S\] (A.4) where \(D\), \(\gamma\) and \(S\) denote the diffusion constant, decay constant and secretion rates of the field, respectively. These three parameters may vary with position and cell-lattice configuration, and thus be a function of cell \(\sigma\) and pixel \(i\). In CompuCell3D, this general form of PDEs can be solved using a number of different PDE solvers. More details about the different PDE solvers can be found in the CompuCell3D Reference Manual. In the main body of the paper, the dynamic of the concentration of the chemoattractant secreted by tumour cells \(\phi\) is governed by the following reaction-diffusion equation: \[\frac{\partial\phi}{\partial t}=D\Delta\phi-\gamma\phi+\alpha\sum_{n=1}^{N_{T }(t)}\delta_{n\in N_{TB}(t)}.\] (A.5) In Eq. (A.5), \(D\) is the diffusion rate of the chemoattractant, \(\gamma\) is the rate of natural decay and \(\alpha\) is the secretion rate. Moreover, \(\delta_{n\in N_{TB}(t)}=1\) if \(n\in N_{TB}(t)\), and \(\delta_{n\in N_{TB}(t)}=0\) otherwise, where \(N_{TB}(t)\) denotes the set of indices of tumour cells in contact with the surrounding medium at time \(t\). 
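To illustrate Eq. (A.5), a single explicit finite-difference update of the chemoattractant field on the 2D pixel lattice could be written as follows. This is a sketch rather than the solver used in the simulations: the time step, the unit grid spacing and the way the border-tumour indicator (the set \(N_{TB}(t)\)) is passed in are assumptions, and the parameter values are copied from Table 2 only for concreteness.

```python
import numpy as np

def step_chemoattractant(phi, border_tumour, D=2.0, gamma=7e-4, alpha=30.0, dt=0.1):
    """One explicit Euler step of Eq. (A.5) with zero-flux boundaries.

    phi           : 2D array, current chemoattractant concentration
    border_tumour : 2D boolean array, True on pixels of tumour cells that are
                    in contact with the medium (the set N_TB(t)); assumed to
                    be supplied by the cell-lattice bookkeeping
    For stability of this explicit scheme dt must satisfy dt <= 1/(4 D),
    i.e. dt <= 0.125 for D = 2.
    """
    # zero-flux boundary conditions: pad by repeating edge values so the
    # normal derivative vanishes at the domain boundary
    p = np.pad(phi, 1, mode="edge")
    laplacian = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
                 - 4.0 * phi)
    secretion = alpha * border_tumour  # source only on border tumour pixels
    return phi + dt * (D * laplacian - gamma * phi + secretion)
```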
This secretion term, handled in CompuCell3D by the DiffusionSolverFE, takes into account the fact that only the tumour cells at the border of the tumour secrete the chemoattractant. We complement Eq. (A.5) with zero-flux boundary conditions and an initial concentration \(\phi_{init}\) at time 0 of the experiments (_i.e._ when CTLs are introduced) which is set to be zero everywhere in the domain but at the border of the tumour (_cf._ Tab.2). ### Infiltration score Building on the TIC proposed in [4] to measure immune infiltration levels, in our model at each time-step we compute the 'infiltration score' via Eq. (2.1). This score allows us to quantify the level of CTL infiltration into the tumour and is defined as the number of CTLs surrounded by tumour cells, divided by the number of tumour cells and CTLs surrounded by tumour cells. Below we detail how the infiltration score is implemented in CompuCell3D. In CompuCell3D, the function \(get\_cell\_neighbor\_data\_list(cell)\) gives access to a list of each cell's neighbours. The neighbour of a cell is defined as an adjacent cell that shares a surface area with the cell in question. In our model, for each cell we loop over all its neighbors and we compute its common surface area with medium, tumour cells and CTLs using the function \(neighbor\_list.common\_surface\_area\_by\_type(cell)\). We then assume that \(\delta_{m\in N_{CS}(t)}=1\) if the common surface area between the \(m^{th}\) CTL and CTLs and medium surrounding it is strictly lower than 4 and that \(\delta_{n\in N_{TS}(t)}=1\) if the common surface area between the \(n^{th}\) tumour cell and CTLs and medium surrounding it is strictly lower than 6. ## Appendix B Initial conditions and values of model parameters The IBM is based on the mathematical model developed in our previous work [24], and has been calibrated to qualitatively reproduce the experimental results presented in [4]. ### Set-up of simulations To carry out numerical simulations in Sec. 3, we used a CP approach both on a 2D spatial domain with a total of \(400\times 400\) lattice sites and on a 3D spatial domain with a total of \(100\times 100\times 100\) lattice sites. The numerical simulations we present in Sec. 4 were carried out on the 2D domain only. In both cases, simulations were performed using the software CompuCell3D [29] on a standard workstation (Intel i7 Processor, 4 cores, 16 GB RAM, macOS 11.2.2). At the initial time point of simulations (_i.e._ on day -7), 35 tumour cells are placed in the centre of the domain (_cf._ Fig.2). First we let tumour cells grow in the absence of CTLs for 11 days, carrying out numerical simulations for \(33000\) time-steps. On day 0, the number of tumour cells is set at carrying capacity (_i.e._ 950 cells) and 150 CTLs are randomly introduced at the border of the domain. We let CTLs grow, move and interact with tumour cells for 4 days, carrying out numerical simulations for \(1200\) time-steps (_cf._ Figs.3-6). In the next subsection we describe the way in which additional components of the model were calibrated leading to the parameter values reported in Tab.1 and Tab.2, to qualitatively reproduce the behaviour of the experimental results presented in [4]. ### Model calibration through parameter exploration The model is calibrated to qualitatively reproduce the experimental results reported in [4].
Due to computational cost, it was not feasible to start with the actual number of cells present within a cell culture (which can reach the order of magnitude \(10^{5}\)) or to simulate the same number of cells as found in a real tumour. With our model, we instead focused on qualitatively capturing the change in immune infiltration levels while varying a certain set of parameters. Some parameters of the model (see Tab.1 and Tab.2) are estimated from the literature and defined on the basis of precise biological assumptions. Other model parameters that could not be based on a literature source, such as the cell death rates due to intra-population competition, are adjusted to qualitatively reproduce the growth of the spheroids and CTLs in non-stressed conditions presented in [4]. Finally, there are some parameters, such as the TC-CTL adhesion strength, the immune success rate and the secretion rate of the chemoattractant, whose values were simply chosen and varied with an exploratory aim to qualitatively reproduce essential aspects of the experimental results obtained in [4]. ### Parameters in the 2D system In the 2D system, the value of the rate of death due to competition between tumour cells is chosen so that the number of tumour cells reaches its carrying capacity after 7 days of proliferation. The number of CTLs introduced in the domain on day 0 and the value of the rate of death due to competition between CTLs are chosen so that the value of the infiltration score computed at the end of the 2D simulations in the control scenario is similar to the value of the TIC algorithm obtained on day 4 in [4], when cortisol is not introduced in the co-culture. The ratio between the energy at the interface between tumour cells and CTLs and the energy at the interface between tumour cells (_i.e._ the values of parameters \(J_{CT}\) and \(J_{TT}\) in Eq. (A.1)) allows us to consider a wide range of biological scenarios corresponding to different degrees of immune infiltration. In particular, if \(J_{CT}<J_{TT}\), then CTLs infiltrate through tumour cells, whereas if \(J_{CT}>J_{TT}\), then CTLs accumulate at the margin of the tumour, without infiltrating it. Therefore, to obtain different degrees of immune infiltration, we fix the value of \(J_{TT}\) and we vary the value of \(J_{CT}\). In the body of the paper we refer to the parameter \(J_{CT}\) as "tumour cell-CTL adhesion strength" (TC-CTL adhesion strength). In the control scenario we suppose that CTLs have a high capability to infiltrate through tumour cells. Therefore, we suppose that \(J_{CT}<J_{TT}\). In stressed conditions instead, we suppose that CTLs have a lower capability to infiltrate through tumour cells. Therefore, we increase the value of \(J_{CT}\) to a value equal to or greater than that chosen for \(J_{TT}\). ### Parameters in the 3D system The numerical simulations shown in Sec. 3 attempted to verify that our model produces similar results both in 2D and 3D. Therefore, in the 3D system we make use of the same values selected in the two-dimensional case (see previous subsection). Due to the slightly different number of cells obtained at the end of the numerical simulations in 3D, we simply adjust the death rate of tumour cells and CTLs due to intra-population competition, in order to obtain a number of cells at the end of the numerical simulations similar to that of the 2D scenario.

| Phenotype | Symbol | Description | Value | Ref. |
| --- | --- | --- | --- | --- |
| **Domain** | Pixel | Lattice site in 2D | \(1\) Pixel \(=3\times 3\) \(\mu m^{2}\) | |
| | Voxel | Lattice site in 3D | \(1\) Voxel \(=3\times 3\times 3\) \(\mu m^{3}\) | |
| | \(\Delta t\) | Time-step | 1 MCS = 0.5 min | |
| **CC3D** | \(J_{MT}\) | Contact energy tumour cells-medium | 50 | |
| | \(J_{MC}\) | Contact energy CTLs-medium | 50 | |
| | \(J_{CT}\) | Contact energy CTLs-tumour cells | high adh.: 5; intermediate | |
| | \(J_{TT}\) | Contact energy tumour cells-tumour cells | 50 | |
| | \(J_{CC}\) | Contact energy CTLs-CTLs | 1000 | |
| | \(d_{T}\) | Tumour cell diameter | 20-40 (\(\mu m\)) | [40] |
| | \(d_{C}\) | CTL diameter | 12 (\(\mu m\)) | |
| | \(A_{0}\) | Initial area constraint (2D) | \(\mathcal{U}_{[25,55]}\) - tumour cells (_pixels_) | |
| | \(V_{0}\) | Initial volume constraint (3D) | \(\mathcal{U}_{[25,55]}\) - tumour cells (_voxels_) | |
| | \(P_{t}\) | Perimeter constraint (2D) | \(4\sqrt{A_{t}}+0.5\) (_pixels_) | |
| | \(S_{t}\) | Surface constraint (3D) | \(6V_{t}^{\frac{2}{3}}\) (_voxels_) | |
| | \(\lambda_{area}\) | Tumour cell and CTL area constraint (2D) | \(10\) | |
| | \(\lambda_{per}\) | Tumour cell and CTL perimeter constraint (2D) | \(10\) | |
| | \(\lambda_{vol}\) | Tumour cell and CTL volume constraint (3D) | \(20\) | |
| | \(\lambda_{surf}\) | Tumour cell and CTL surface constraint (3D) | \(20\) | |
| | \(T_{m}\) | Fluctuation amplitude parameter | \(10\) | |
| | \(\lambda_{chem}\) | Strength and direction of chemotaxis | \(50\) | |

Table 1: Parameter values used to implement the CP model. Energies, temperature and constraints are dimensionless parameters.

| Phenotype | Description | Value | Ref. |
| --- | --- | --- | --- |
| **Tumour** | Initial number | \(N_{T}(0)=36\) | |
| | Index identifier | \(n=1,\ldots,N_{T}(t)\) | |
| | Lifespan | \(\mathcal{U}_{[3,7]}\) (_days_) | [40] |
| | Growth rate | \(\mathcal{U}_{[0.015,0.019]}\) (_pixel or voxel/MCS_) | [4] |
| | Mean cycle time | \(12\) (_hours_) | [4] |
| | Rate of death due to intra-pop. competition | (2D) \(4.6\times 10^{-7}\) (1/MCS); (3D) \(4.4\times 10^{-7}\) (1/MCS) | estim. |
| **CTLs** | Initial number | \(N_{C}(0)=150\) | |
| | Index identifier | \(m=1,\ldots,N_{C}(t)\) | |
| | Growth rate | normal: \(\mathcal{U}_{[0.0038,0.0042]}\) (_pixel or voxel/MCS_); decreased: \(\frac{1}{2}\mathcal{U}_{[0.0038,0.0042]}\) (_pixel/MCS_) | [4] |
| | Mean cycle time | 8-10 (_hours_) | [4, 40] |
| | Rate of death due to intra-pop. competition | (2D) \(1.2\times 10^{-6}\) (1/MCS); (3D) \(1.3\times 10^{-6}\) (1/MCS) | estim. |
| | Lifespan | \(\mathcal{U}_{[2.5,3.5]}\) (_days_) | [40] |
| | Engagement time | 6 (_hours_) | [25] |
| | Immune success rate | Figs 2-5: \(0\); Fig 6: \(0.00005\) | |
| **Chemoattr.** | Concentration | \(\phi\geq 0\) (_mol/pixel or voxel_) | |
| | Diffusion | \(D=2\) (_pixel\({}^{2}\) or voxel\({}^{3}\)/MCS_) | [24] |
| | Secretion | high: \(\alpha=30\); intermediate: \(\alpha=10\); low: \(\alpha=3\) (_mol/MCS/pixel or voxel_) | |
| | Decay | \(\gamma=7\times 10^{-4}\) (1/MCS) | [24] |
| | Initial concentration | (2D) \(\phi^{init}=0.5(280-\sqrt{(x-200)^{2}+(y-200)^{2}})\); (3D) \(\phi^{init}=0.5(280-\sqrt{(x-50)^{2}+(y-50)^{2}+(z-50)^{2}})\) | |

Table 2: Parameter values used in numerical simulations.
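As a small usage note, the contact energies listed in Table 1 can be collected into the boundary-energy matrix expected by the illustrative `effective_energy` sketch in Appendix A. The type ordering (0 = medium, 1 = tumour cell, 2 = CTL) and the zero entry for medium-medium contacts are assumptions of that sketch, and the \(J_{CT}\) entry below is the "high adhesion" value used in the control scenario.

```python
import numpy as np

# Boundary energies from Table 1, arranged for the effective_energy() sketch.
# Type indices: 0 = medium, 1 = tumour cell, 2 = CTL (assumed ordering);
# the medium-medium entry never contributes and is set to 0.
J = np.array([
    [   0.0,  50.0,   50.0],  # medium:      -,    J_MT, J_MC
    [  50.0,  50.0,    5.0],  # tumour cell: J_MT, J_TT, J_CT (high adhesion)
    [  50.0,   5.0, 1000.0],  # CTL:         J_MC, J_CT, J_CC
])
```

Raising `J[1, 2]` and `J[2, 1]` towards or above \(J_{TT}\) reproduces the lower-adhesion (stressed) scenarios described in Appendix B.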
2310.11939
Mixture distributions for probabilistic forecasts of disease outbreaks
Collaboration among multiple teams has played a major role in probabilistic forecasting events of influenza outbreaks, the COVID-19 pandemic, other disease outbreaks, and in many other fields. When collecting forecasts from individual teams, ensuring that each team's model represents forecast uncertainty according to the same format allows for direct comparison of forecasts as well as methods of constructing multi-model ensemble forecasts. This paper outlines several common probabilistic forecast representation formats including parametric distributions, sample distributions, bin distributions, and quantiles and compares their use in the context of collaborative projects. We propose the use of a discrete mixture distribution format in collaborative forecasting in place of other formats. The flexibility in distribution shape, the ease for scoring and building ensemble models, and the reasonably low level of computer storage required to store such a forecast make the discrete mixture distribution an attractive alternative to the other representation formats.
Spencer Wadsworth, Jarad Niemi, Nick Reich
2023-10-18T13:05:55Z
http://arxiv.org/abs/2310.11939v1
# Mixture distributions for probabilistic forecasts of disease outbreaks ###### Abstract Collaboration among multiple teams has played a major role in probabilistic forecasting events of influenza outbreaks, the COVID-19 pandemic, other disease outbreaks, and in many other fields. When collecting forecasts from individual teams, ensuring that each team's model represents forecast uncertainty according to the same format allows for direct comparison of forecasts as well as methods of constructing multi-model ensemble forecasts. This paper outlines several common probabilistic forecast representation formats including parametric distributions, sample distributions, bin distributions, and quantiles and compares their use in the context of collaborative projects. We propose the use of a discrete mixture distribution format in collaborative forecasting in place of other formats. The flexibility in distribution shape, the ease for scoring and building ensemble models, and the reasonably low level of computer storage required to store such a forecast make the discrete mixture distribution an attractive alternative to the other representation formats. Ensemble modeling Proper scoring rules Influenza outbreaks COVID-19 ## 1 Introduction Predicting the outcomes of prospective events is the object of much scientific inquiry and the basis for many decisions both public and private. Because predictions of the future can never be precise, it is usually desirable that a level of uncertainty be attached to any prediction. In recent years, it has become increasingly desirable that forecasts be probabilistic in order to account for uncertainty in predicted quantities or events [1]. Weather forecasting [2], economics [3], and disease outbreaks [4] are some of the areas where probabilistic forecasting is used. A probabilistic forecast is a forecast in which possible outcomes are assigned probabilities. There are a number of ways whereby probabilities or uncertainty may be represented. A common representation is either a continuous or discrete parametric distribution, given as a probability density/mass function. Much of the literature on calibration, sharpness, and scoring of a forecast pertains to parametric distribution forecasts [5, 6, 2]. Other common representations include samples [7], discretized bin distributions [8], and quantile forecasts [9, 10]. Each representation may be more or less appropriate than the others for a given problem, but knowing how to interpret, score, and construct ensemble forecasts for a selected representation is essential when multiple teams collaborate in the same forecasting project. Two collaborative projects on forecasting disease outbreaks for which many separate forecasts are used include the United States Centers for Disease Control (CDC) annual competition for forecasting the influenza outbreak [11] and the COVID-19 Forecast Hub which has continuously operated since the start of the COVID-19 pandemic in the US in early 2020 [12]. ### CDC flu forecasting During the 2013-14 flu season, the CDC began hosting an annual competition for forecasting the timing, peak, and intensity of the year's flu season. The specific events to be forecast were known as _targets_. Forecasts for these different targets included forecasts for one, two, three, and four weeks into the future. National flu data was provided weekly to - academic teams not directly affiliated with the CDC who used that data to construct forecasts using whatever methods they chose. 
Historically, the forecasts were submitted in a discretized bin distribution or a bin distribution format. A _bin distribution_ is a probability distribution represented by breaking the numeric range of an outcome into intervals or _bins_ and directly assigning to each bin the probability that the event falls within the bin. During previous flu seasons the _binning scheme_ or the assignment of bin values was on a numeric scale with a bounded range, and the prediction of a specific target was a set of probabilities assigned to each bin [8]. These forecasts were then evaluated against actual flu activity, and at the end of the season a winning team was declared [11]. The CDC continues to host a flu forecasting project, but since the 2021-22 season the only target for forecasts has weekly hospitalizations and the forecast submission format has been quantile forecasts similar to those described in the following section. Flu forecasting has provided the CDC, competing teams, and other interested parties a chance to collaborate and improve their forecasting from season to season. One proposed way to enhance prediction has been to aggregate the various teams' forecasts into a _multi-model ensemble forecast_[8, 13, 14], or an ensemble forecast. An _ensemble forecast_ is a combination of several component forecast models into one model which often yields better predicting power than the individual models [15]. Such an ensemble made from multiple influenza competition forecasts did in fact outperform the individual component models [14]. ### COVID-19 Forecast Hub In March 2020, at the onset of the COVID-19 pandemic, the United States COVID-19 Forecast Hub was founded. Borrowing from the work done in the CDC flu competition, the COVID-19 Forecast Hub was a central site in which dozens of academic teams collaborated to forecast the ongoing COVID-19 pandemic. Every week relevant pandemic data aws provided to these teams who constructed forecast models to predict the target cases, hospitalizations, and deaths due to COVID-19. Forecasts were made on the US county, state, and national levels and for days, weeks, and months ahead. These forecasts were aggregated into a single ensemble forecast. The model data, forecasts, and the ensemble forecast were passed along to the CDC for its use in official communication [12]. Figure 1 is from an official CDC report from August 2021. It shows forecasts from the COVID-19 Forecast Hub of increment deaths and cumulative deaths due to COVID-19. Though similar to the forecasting in the CDC flu competition, the format of the COVID-19 Forecast Hub has key distinctions. First, this project has been operating continuously since it began, so forecasts have been made for over 100 straight weeks. Second, rather than bin distributions the forecasts are requested as the predictive median and predictive intervals for various nominal levels depending on the target to be predicted [10]. Each value in a predictive interval is a value for a quantile at a specified nominal level. This makes a set of predictive intervals a _quantile forecast_ or a forecast made up of a set of quantiles and corresponding values. Collecting forecasts as quantile forecasts instead of bin forecasts brings with it differences in how to score the forecasts, construct an ensemble forecast, and store the forecasts among other differences. Ray et. 
al show that ensemble forecasts in the COVID-19 Forecast Hub provide precise short-term forecasts which decline in accuracy in longer term forecasts approaching four weeks [16]. ### Outline In the context of collaborative forecasting like that of the CDC flu competition or the COVID-19 Forecast Hub, bin forecasts and quantile forecasts have become important representations. Yet both representations come with their drawbacks. Computer storage for instance might be a concern if many bin distributions are used for forecasting, and scoring methods are limited if forecasts are quantile forecasts. In this paper, we propose the use of finite mixture distributions as a means of forecasting for collaborative projects similar to the CDC flu competition or the COVID-19 Forecast Hub. A finite mixture distribution -which we will refer to as a _mixture distribution_- is a distribution constructed by aggregating a finite collection of other distributions. In this paper, we focus on the case where the collection of distributions are parametric distributions. In Section 2, popular probabilistic forecast representations are defined and reviewed. For each representation, we review methods of scoring, storing, constructing ensembles, and other aspects. Section 3 presents using mixture distributions in a collaborative forecast project and discusses tools for scoring and constructing an ensemble forecast. Section 4 is a retrospective study of the CDC flu competition and COVID-19 Forecast Hub forecasts and an attempt to assess whether forecast models may be approximated by one component mixture distributions. ## 2 Probabilistic forecast representations At least four forecast representations are commonplace in forecasting. In a collaborative setting, certain aspects of each representation should be considered including scoring, computer storage, and how to construct an ensemble forecast. For each representation discussed herein, applications of each of those aspects are also discussed. ### Things to consider in collaborative projects In this section we introduce three topics which ought to be considered in a collaborative forecast project including forecast scoring, computer storage of forecasts, and ensemble forecast construction from multiple models. #### 2.1.1 Scoring Scoring rules are used to numerically evaluate or _score_ a probability forecast. The score is a measure of the accuracy of the forecast and where multiple forecasts exist the score for each may be used to compare forecasts. If a scoring rule is _proper_, then the best possible score is obtained by reporting the true distribution. The rule is strictly proper if the best score is unique. Under proper scoring rules, a forecaster has no incentive to be dishonest in their submission Figure 1: This image, published on www.cdc.gov in August 2021 as official public communication by the CDC, shows forecast models for national new weekly deaths due to COVID-19 in the top row and cumulative deaths on the bottom row. The plots in the left column show prediction intervals from multiple teams whereas the plots on the right show the intervals for an ensemble forecast model with gray lines that are individual model point forecast predictions. [17]. This makes proper scoring rules ideal for evaluating forecasts. We will limit our review of scoring methods to rules which are proper. #### 2.1.2 Storage For a collaborative forecast project where many researchers are involved and many predictions are collected, computer storage may need to be addressed. 
As an example of required computer storage, the repository for the COVID-19 Forecast Hub contained 85 million forecasts as of April 4, 2022 which required more than 11.7 gigabytes of storage. [18]. When determining the goals of a forecast project, there should be consideration of the storage required for different forecast representations. #### 2.1.3 Ensemble model An ensemble model is a statistical model made by combining information from two or more individual statistical models. Private and public decisions are regularly made after combining information from multiple sources. For a given problem, information from one source may provide insight on a subject which other sources fail to capture. Likewise one statistical model may provide insight that another model does not. Thus when multiple models are combined, the resulting ensemble may outperform the individual models. As probabilistic forecasting becomes more commonplace, so too does ensemble modeling. Ensembles have been used extensively in weather and climate modeling [2], and they have been used increasingly in modeling infectious disease outbreaks [4]. Ensembles allow for an incorporation of multiple signals -often from differing data sources- and sometimes individual model biases are canceled out or reduced by biases from other models [14, see references therein]. In several disease outbreak studies, ensemble forecasts have been shown to outperform individual model forecasts [16, 15, see references therein]. Construction of an ensemble may be done by combining individual forecast models using weighted averages. This has been called stacking [19] or weighted density ensembles [20]. Considered the state-of-the-art techniques for combining component distributions into an ensemble distribution are nonhomogeneous regression and ensemble model averaging (MA), both of which are defined by Gneiting and Katz-fuss [1]. In the context of an ensemble made from component models submitted from various sources, MA may be preferable because it does not require that modeling methods for individual components be the same. The general form for an MA ensemble distribution \(p^{E}\) is defined in equation (1), where \(p_{m}\) is the \(m^{th}\) component forecast distribution and \(0\leq w_{m}\leq 1\) is a weight assigned to that component where \(\sum_{m}w_{m}=1\). \[p^{E}(x)=\sum_{m=1}^{M}w_{m}p_{m}(x) \tag{1}\] In MA, the final model does not have to be specified beforehand and the resulting forecast will be a mixture distribution of all component forecasts. Many methods exist for estimating weights. Some of these methods include maximum likelihood estimation [21], Markov chain Monte Carlo (MCMC) sampling [22], Bayesian model averaging, Akaike or AIC weights, [23], minimizing the CRPS of the ensemble [2], and others. ### Probabilistic forecast representations Probabilistic forecast uncertainty can take on many representations. In this section we describe parametric distributions, sample distributions, bin distributions, and predictive intervals as representation forecasts. Following the description of these four representations, mixture distributions are introduced. #### 2.2.1 Parametric distributions A parametric distribution is a discrete or continuous probability distribution described by a known function \(p(x):=p(x|\theta)\). The function \(p(x)\) is called a probability mass function (pmf) if the distribution is discrete and a probability density function (pdf) if the distribution is continuous. 
Here \(\theta\) is a vector of parameters contained in the parameter space of the distribution. For a forecast represented as a parametric distribution with pmf/pdf \(p_{m}(\cdot)\), the accuracy of the forecast may be measured by how likely the realized value \(x^{*}\) is to occur. Commonly used proper scoring rules for parametric distributions include the logarithmic score (LogS), the continuous rank probability score (CRPS) [24][25], and the interval/Brier score (IS) [17] among others. See also [1] Section 3 for more on proper scoring functions. The definitions (2), (3), and (11) are found in the review by Krueger. For a forecast with pdf/pmf \(p(\cdot)\), (2) evaluates the probability of the observed value \(x^{*}\). \[\text{LogS}(p,x^{*})=-\log p(x^{*}) \tag{2}\] The goal for a forecaster is to minimize the LogS, so a forecast \(p^{\prime}(x^{*})\) is considered superior to \(p(x^{*})\) if \(\text{LogS}(p^{\prime},x^{*})<\text{LogS}(p,x^{*})\). The LogS is limited to scoring forecasts with density functions and evaluating those densities only at the point \(x^{*}\). For the cumulative distribution (CDF) \(F\) of a parametric distribution, the CRPS is defined in (3). Here too a smaller score indicates a more accurate forecast. \[\text{CRPS}(F,x^{*})=\int_{-\infty}^{\infty}(F(x)-1_{\{x*\leq x\}})^{2}dx \tag{3}\] Standard practice for constructing an ensemble model from multiple parametric models is to use MA. For selecting distribution weights for the ensemble, minimizing the LogS or CRPS of \(p^{E}(x)\) is common. Because it is evaluated over the whole distribution, minimizing the CRPS has some nice properties, but it can also be difficult to compute. For example, when the forecast is a mixture of a truncated normal distribution (TN) and a truncated lognormal (TL), the CRPS is not available in closed form [2]. Generally computation and evaluation of parametric distribution functions is easy. For most commonly used parametric distributions -normal, lognormal, Poisson, gamma, etc.- there is software readily available to compute density, distribution, and quantile values. A completely defined continuous parametric distribution may be evaluated at a continuously infinite number of values which we call an _infinite resolution_. Requirements for storage are also low compared to other representations that will be discussed since the most common parametric distributions can be fully defined with three or four pieces of information including the distribution family and the corresponding parameters. Table 1 contains enough information to completely define a Lognormal(1,0.4) truncated to the inteval \([0,8]\). The truncation is done here so as to make a direct comparison with the distributions shown later in Tables 2 and 3. A drawback of representing a forecast in a parametric distribution is the lack of flexibility in the model selection. Easy computation and evaluation of these models is limited to what is available in software, so certain distributional shapes may be unattainable. Requiring a parametric forecast also bars the use of some statistical methods which might be used to create a forecast model including some Bayesian methods where a posterior distribution cannot be computed in closed form. #### 2.2.2 Sample distributions Forecasters may want more flexibility in modeling than a parametric distribution can provide. 
Methods that require sampling from a posterior distribution or bootstrap sampling to obtain a forecast distribution are examples where parametric distributions may not be appropriate for modeling because of the lack of flexibility in distribution shape. A sample distribution is composed of a collection of, possibly weighted, random variables \((X_{1},...,X_{n})\) where \(X_{i}\sim D\) and \(D\) is some distribution. From this sample, statistics such as mean, median, variance, and quantiles may be calculated. An empirical cumulative distribution function (ECDF) may also be calculated as in (4). If a sufficiently large sample is generated from a distribution for which an expectation exists, the sample will closely approximate the true distribution. \[\text{ECDF}=F_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}(X_{i}\leq x) \tag{4}\] For common distribution families it is easy to generate large samples using existing functions in R and other programming platforms. For some distributions for which the mathematical formula is unknown or is not in closed form, more sophisticated methods may be required to generate samples. Bayesian analyses may require MCMC sampling to generate a sample. \begin{table} \begin{tabular}{|c c c c c|} \hline family & param1 & param2 & lowerlim & upperlim \\ \hline lnorm & 1 & 0.4 & 0 & 8 \\ \hline \end{tabular} \end{table} Table 1: This is an example of information required to specify a truncated lognormal distribution. Such samples are useful in that the true distribution may be closely approximated without knowing the true mathematical form. Under the sample distribution representation, a researcher has more options for constructing a forecast than if they are asked to submit a parametric distribution, and the range of possible shapes for a distribution is larger. In the last few decades, increased computing power and improvements in MCMC sampling have greatly contributed to growth in the use of sample distributions for forecasting [17][7, see examples listed therein]. To properly score a forecast represented by a sample distribution, both the CRPS and LogS may be used. The CRPS has the advantage here of scoring the sample distribution directly since the CDF in (3) may be replaced with the ECDF in (4). To use the LogS to score a forecast, a density function for the sample may be approximated. Common approximations include a kernel density (KD) or Gaussian approximation (GA) [7, for example]. The KD in (5) is defined by Krueger et. al where \(K\) is a kernel function, and \(h_{n}\) is a suitable bandwidth. The GA is defined in (6) where \(\Phi\) is the standard normal CDF and \(\hat{\mu}_{n}\) and \(\hat{\sigma}_{n}\) are the empirical mean and standard deviation of the sample \((X_{i})\)[7, see also for a comparison of scoring MCMC drawn forecasts between the CRPS and the LogS]. \[\hat{p}_{n}^{KD}(x)=\frac{1}{nh_{n}}\sum_{i=1}^{n}K\left(\frac{x-X_{i}}{h_{n}}\right) \tag{5}\] \[\hat{F}_{n}^{GA}(x)=\Phi\left(\frac{x-\hat{\mu}_{n}}{\hat{\sigma}_{n}}\right) \tag{6}\] To build an ensemble model from sample distribution forecasts, the MA construction from (1) may be used, only replacing \(p_{m}\) with the approximate KD or GA pdf functions -\(\hat{p}_{n_{m}}^{KD}\) or \(\hat{p}_{n_{m}}^{GA}\) respectively-, where \(\hat{p}_{n}^{GA}\) is the pdf corresponding to (6). The optimal weights \(w_{m}\) may be estimated by maximizing the likelihood or minimizing the CRPS.
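As a concrete illustration of the ECDF in (4), the Gaussian approximation in (6), and the scores they permit, the following is a minimal sketch for illustration only (simulated draws, base R, and a crude Riemann sum for the CRPS integral in (3); it is not part of the submission tools described in Section 3):

```
# sample forecast: 1,000 draws standing in for MCMC or bootstrap output
set.seed(1)
x     <- rlnorm(1000, meanlog = 1, sdlog = 0.4)
xstar <- 3                                   # realized value

# ECDF as in (4)
Fn <- ecdf(x)

# CRPS of the sample forecast: Riemann-sum approximation of (3)
grid      <- seq(0, 20, by = 0.001)
crps_ecdf <- sum((Fn(grid) - as.numeric(xstar <= grid))^2) * 0.001

# Gaussian approximation (6) and the corresponding LogS
mu_hat  <- mean(x); sd_hat <- sd(x)
logs_ga <- -dnorm(xstar, mu_hat, sd_hat, log = TRUE)

c(CRPS = crps_ecdf, LogS_GA = logs_ga)
```

The approximated density evaluated in this way is also what would be plugged into (1) when weights for an ensemble of sample forecasts are estimated by maximum likelihood.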
If the desire is that the ensemble has uniform weights, a random sample from \((X_{n})_{m}\) with probability \(w_{m}\) can be obtained. A potentially large issue with using sample distributions is the amount of storage it may require. For example when making MCMC draws from a posterior distribution, the final sample distribution can have a sample size of thousands or tens of thousands. Maybe not all distributions would require such a large sample size, but sizes of at least dozens or hundreds would be required for each forecast prediction. For any project of the size of the CDC flu competition or the COVID-19 Forecast Hub, the storage required would be large and potentially expensive. #### 2.2.3 Bin distributions An alternative to parametric distributions and sample distributions, which allows for higher flexibility in distribution shape than a parametric distribution and will usually require less storage space than samples, is the bin distribution. A bin distribution may be constructed over a set \(A=[a,b)\) by partitioning \(A\) into a set of \(K\) bins \(\{B_{i}\}_{i=1}^{K}\) where \(B_{i}=[b_{i-1},b_{i})\) and \(\cup_{i=1}^{K}B_{i}=A\). Based on the problem to be forecast, researchers will determine the possible range \(A\) and select the number of bins and the sizes for each bin. It may be the case that a collaborative project will set the widths of all bins to be equal so that \(\Delta=b_{i}-b_{i-1}\) is the same width for all \(i\)[8]. To complete the construction, a probability \(p_{i}\) is assigned to each \(B_{i}\) where \(\sum p_{i}=1\). These probabilities are determined by the forecasters. This representation with a given bin and assigned probability may be treated like a discrete distribution with a pmf in that the calculation of the cumulative distribution is done similarly to that of a discrete parametric distribution. The cumulative distribution may be calculated as in (7). Here \(p_{i}\) is the probability for the bin \(B_{i}\) where \(x\in B_{i}\). \[P(X\leq x)=\sum_{i=1}^{n:x\in B_{n}}p_{i} \tag{7}\] If the value to be forecast takes on discrete values, a common discrete distribution, such as a Bernoulli or Poisson distribution, may sometimes be used to assign probabilities to each of the bins. When the values to be forecast are continuous, a forecaster may need to employ a method of discretization to a forecast distribution. There are a number of possible ways to do this including those outlined by Chakraborty and Subrata [26]. For the first several seasons when the CDC hosted the flu competition, the forecast representation used was the bin distribution. The CDC has also used the bin distribution representation for other disease outbreak forecast projects. In that context it has become the standard representation [27]. Much work has been done in evaluating and constructing ensemble models on influenza forecasts represented by discretized bins [8, 13, 28]. Because a bin distribution can be viewed as a pmf, methods for proper scoring already discussed -LogS and CRPS- are usable and MA is a valid method for ensemble construction. Reich et. al used MA to combine multiple forecasts from the flu competition. They constructed and compared ensemble models with different weighting schemes including equally weighted components, \(w_{m}=1/M\), and estimating weights according to the model specification. To estimate weights they used the expectation maximization (EM) algorithm [14, see supplementary material within for details].
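To make the construction concrete, the following minimal sketch (for illustration only, with a hypothetical equal-width binning scheme in base R) discretizes the truncated Lognormal(1, 0.4) forecast from Table 1 into bins, anticipating the scheme formalized in (8) below, and scores the resulting bin forecast with the LogS:

```
# discretize a Lognormal(1, 0.4) truncated to [0, 8] into bins of width 0.2
breaks <- seq(0, 8, by = 0.2)                  # bin edges b_0, ..., b_K
Z <- plnorm(8, 1, 0.4) - plnorm(0, 1, 0.4)     # normalizing constant of the truncation

# probability assigned to each bin B_i
p_bin <- (plnorm(breaks[-1], 1, 0.4) -
          plnorm(breaks[-length(breaks)], 1, 0.4)) / Z

# CDF of the bin forecast as in (7)
F_bin <- cumsum(p_bin)

# LogS of the bin forecast, cf. (2), for a hypothetical realized value
xstar <- 2.3
i_obs <- findInterval(xstar, breaks)           # bin containing xstar
-log(p_bin[i_obs])
```

With the bin probabilities in hand, the MA construction in (1) applies exactly as for any other pmf.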
The exact amount of information required for a bin forecast will vary depending on the permitted range of the forecast and the desired resolution. In the CDC flu contest, a forecast might have 131 bins between 0% and 13% -bins having increments of 0.1 or 0.01%- with corresponding probabilities in each. This makes 262 pieces of information per prediction. For any binning scheme of more than two or three bins, the information requirement for a bin distribution will be higher than for a parametric distribution. Table 2 illustrates what the bin distribution discretized from a Lognormal(1,0.4) distribution truncated over \([0,8]\) looks like in 41 equally spaced bins. The discretization was done such that the probabilities \(p_{i}\) are calculated as in (8) where \(p^{TL}\) is the pdf of a truncated Lognormal(1,0.4). This is similar to Methodology-IV from Chakraborty [26, 29]. The truncation here is done because in practice a bin forecast will generally have a finite support. \[p_{i}=\int_{b_{i-1}}^{b_{i}}p^{TL}(x)dx \tag{8}\] Submitted as a forecast prediction, the distribution illustrated in Table 2 includes 82 values. For parts of the CDC influenza competition some forecasts included up to 262. This is far less storage than the possible thousands of draws from a sample distribution but is still much larger than the three or five pieces of information required to report a lognormal or truncated lognormal distribution. Besides the potentially large amount of information required per forecast, creating the right binning scheme may be a challenge. Because there must be a finite number of bins, forecast distributions often have finite support. And where the range of possible outcomes to a problem is not well known, the right binning scheme may be hard to produce. This may depend on the details of the event to be forecast, but in the case of the COVID-19 outbreak, choosing the right set of bins posed a few problems. #### 2.2.4 Interval forecasts When deciding how forecasts should be represented in the COVID-19 Forecast Hub, the time pressure of generating forecasts and the large range for possible outcomes both contributed to the COVID-19 Forecast Hub decision to forego trying to create the right binning scheme and use quantile forecasts to forecast the COVID-19 pandemic [10]. The COVID-19 Forecast Hub requires predictions to be submitted as 11 or three nominal intervals -depending on the specific target and unit to be forecast- and a median. Likewise the CDC flu forecasting project has used this same format for the past two seasons [30]. A quantile forecast is constructed as in (9). Here for \(N\) given quantile levels \(\alpha_{1},...,\alpha_{N}\); \(q_{1},...,q_{N}\) are the values such that we have (9). When the quantiles are reported as prediction intervals we have (10). \[P(Y\leq q_{1})=\alpha_{1},P(Y\leq q_{2})=\alpha_{2},...,P(Y\leq q_{N})=\alpha_ {N} \tag{9}\] \[P(Y\leq q_{1})=\alpha_{1},P(Y\leq q_{2})=\alpha_{2},...,P(Y\leq q_{N-1})=1- \alpha_{2},P(Y\leq q_{N})=1-\alpha_{1} \tag{10}\] \begin{table} \begin{tabular}{|c|c|} \hline bin & prob \\ \hline... &... \\ [1.4, 1.6) & 0.04414 \\ [1.6, 1.8) & 0.05896 \\ [1.8, 2.0) & 0.07032 \\ [2.0, 2.2) & 0.07172 \\ [2.2, 2.4) & 0.07955 \\... &... \\ \hline \end{tabular} \end{table} Table 2: This is a storage example of a discretized lognormal with \(\mu=1\) and \(\sigma=0.4\) truncated over \([0,8]\). The CRPS and LogS may not be used to score a quantile forecast, and MA may not be used for constructing an ensemble from multiple quantile forecasts. 
Methods for scoring quantile forecasts and constructing ensemble forecasts are limited, and when given only in quantiles the shape of a distribution is not known. In fact nothing is known about the tails or the uncertainty beyond the most extreme reported quantile values. In the COVID-19 Forecast Hub forecasts, nothing is reported about the range below the \(1^{st}\) quantile or above the \(99^{th}\). Yet the quantile representation has its advantages. Quantile forecasts allow for forecasters to submit fairly detailed forecasts without restricting the range of possible values. Since quantiles are easily calculated from any regular distribution type -using the quantile function for parametric functions or calculating sample quantiles- we consider quantile forecasts to be highly flexible in terms of what methods forecasters may employ in modeling. To score a quantile forecast, neither the LogS nor the CRPS may be used, but another proper scoring rule the IS may be used. For an observed outcome \(x^{*}\) and a prediction interval \((l,r)\) where \(l\) and \(r\) are the \(\alpha/2\) and \((1-\alpha/2)\) quantiles that bound the central \((1-\alpha)\) prediction interval, the IS is defined as in (11). This is a sum -weighted by \(\alpha\)- of the width of the interval and the distance between \(x^{*}\) and the interval (if \(x^{*}\) is not captured in the interval) [1]. The IS requires only a single central \((1-\alpha)\times 100\) prediction interval. \[IS_{\alpha}(l,r;x^{*})=(r-l)+\frac{2}{\alpha}(l-x^{*})1\{x^{*}<l\}+\frac{2}{ \alpha}(x^{*}-r)1\{x^{*}>r\} \tag{11}\] When a quantile forecast is made up of multiple intervals each with different \(\alpha\) levels, the weighted interval score (WIS) may be used. Bracher et. al use the WIS to score COVID-19 quantile forecasts [10]. There are multiple versions of the WIS, some of which are described in Bracher et. al, but the version used by the COVID-19 Forecast Hub for a forecast of \(K\) intervals is defined in (12). Here \(median\) refers to the predictive median and \(w_{k}=\alpha_{k}/2\) is the weight on the \(k^{th}\) interval. With that selection of weights, it may be shown that the WIS approximates the CRPS [10, see S1 Text therein]. \[WIS_{0,K}(F_{m},x^{*})=\frac{1}{K+1/2}\left(w_{0}|x^{*}-median|+\sum_{k=1}^{ K}\{w_{k}IS_{\alpha_{k}}(F_{m},x^{*})\}\right) \tag{12}\] Bogner, Liechti, and Zappa compared scoring forecasts of quantiles with the Quantile Score (QS) similar to the interval score and scoring distribution functions fit to those quantiles using the CRPS [31]. The CRPS corresponds to the integral of the QS over all possible thresholds rather than just specific quantiles, so it more effectively reveals deficiencies in parts of the distribution and especially in the tails past the end points of quantiles used in QS or IS. Thus there may be something lost in terms of scoring when the WIS is used since it also is constructed from the IS. Like the CRPS, not only does the WIS provide an easily interpretable proper score for interval forecasts, but it may also be useful when building an ensemble forecast. The ensemble forecast constructed by the COVID-19 Forecast Hub is made as an equally-weighted average of forecasts from the component models. More specifically, each quantile value of the ensemble is the average of values from all models corresponding to the same quantile [16]. 
For \(M\) models each with \(K\) quantiles, the \(k^{th}\) ensemble quantile \(q_{k}^{E}\) is calculated as in (13) where \(w_{m}\) is the weight assigned to each forecast and \(\sum w_{m}=1\). In the COVID-19 Forecast Hub model, \(w_{m}=w=1/M\). Where the overall mean or a weighted mean may be used for averaging, the median may also be used. Brooks et. al compare performance of the COVID-19 ensemble using equally-weighted means, weighted means, and median value constructions [27]. In their report, they show that weighted means and median constructions tend to outperform an equally-weighted mean construction. To come up with optimal weights, they select values \(w_{m}\) from (13) which minimize the WIS of the ensemble forecast. \[q_{k}^{E}=\sum_{m=1}^{M}w_{m}q_{k}^{m} \tag{13}\] As in sample distributions and bin distributions, data storage for interval forecasts will depend on the desired clarity of resolution. For the COVID-19 forecasts submitted to the COVID-19 Forecast Hub, 23 quantile values are requested for quantiles (0.01, 0.025, 0.05, 0.10,..., 0.95, 0.975, 0.99). This includes a median along with 11 predictive intervals [10]. Forecasters are thus required to submit 46 values in each short-term forecast (some of the longer term forecasts only include seven quantiles). In terms of storage, this is an improvement over requirements for the CDC flu competition. Table 3 shows how a submission of 23 quantiles from a Lognormal(1,0.4) truncated on \([0,8]\) might look. Figure 2 illustrates how the densities and CDFs compare between parametric distributions, sample distributions, bin distributions, and quantiles. Figure 2: This figure compares the densities and CDFs of forecast representations discussed in the left and right columns respectively. Each is generated from a Lognorma(1,0.4) distribution truncated on \([0,8]\). Blue shows the density and CDF functions. Purple shows a histogram and ECDF of 1,000 samples. Red shows bin probabilities and the CDF function for a bin distribution. Green shows quantiles with corresponding values. ### Mixture distributions A mixture distribution forecast representation is an attractive alternative to the four representations already discussed. A mixture distribution forecast would allow for a large range of distribution shapes, a high resolution, storage comparable to that of bin and quantile forecasts, and ensemble construction using MA. A mixture distribution may be constructed in the same way as the ensemble described in section 2.2.1 (1) where for \(C\) distributions with pdfs \(p_{c}(x)\) and \(w_{c}>0\) and \(\sum w_{c}=1\) we have (14). \[p^{M}(x)=\sum_{c=1}^{C}w_{c}p_{c}(x) \tag{14}\] Like a parametric distribution, a mixture distribution may be evaluated using existing software like the distr package in R[32]. And scoring may be done using the LogS, CRPS, and IS. A mixture distribution, like its parametric distribution components, has an infinite resolution. A mixture distribution may be more flexible than a single component parametric distribution in terms of distribution shape. According to McLachlan and Peel, a mixture of normal densities with common variance may be used to approximate arbitrarily well any continuous distribution [33] (see also [34]). Thus, for an unconventional probability distribution -such as an MCMC posterior sample- it may be reasonable to approximate the distribution by fitting those samples to a mixture of normal distributions. 
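As a minimal sketch of this idea (for illustration only: simulated draws stand in for posterior samples, and the weight, means, and common standard deviation are fit by direct likelihood maximization with optim() rather than by the EM algorithm), a two-component normal approximation can be obtained as follows:

```
set.seed(2)
draws <- c(rnorm(600, 2, 0.5), rnorm(400, 5, 0.5))   # stand-in for MCMC draws

# negative log-likelihood of a two-component normal mixture with common sd
negloglik <- function(par) {
  w  <- plogis(par[1])              # first-component weight, kept in (0, 1)
  sd <- exp(par[4])                 # common standard deviation, kept positive
  -sum(log(w * dnorm(draws, par[2], sd) + (1 - w) * dnorm(draws, par[3], sd)))
}

fit <- optim(c(0, 1, 6, 0), negloglik, method = "BFGS")
c(w1 = plogis(fit$par[1]), mu1 = fit$par[2],
  mu2 = fit$par[3], sd = exp(fit$par[4]))
```

The fitted components and weights could then be reported in the mixture submission format described in Section 3 rather than as raw draws.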
Depending on the number of components a forecaster includes in a mixture forecast, the amount of storage per forecast might be as little as for a parametric forecast or as much as is permitted in the specific collaborative forecast project. An ensemble model may be constructed by using (1) only replacing \(p_{m}\) with \(p_{m}^{M}\) from (14). Solving for weights may also be done by maximizing the likelihood of the forecast or minimizing the CRPS. However with the added complexity of component models being mixture distributions the computation is likely to be more expensive. An example where this is true is when minimizing the CRPS when the exact mixture distribution does not produce a closed form CRPS [2]. In large projects like the COVID-19 Forecast Hub, if an equal weight is not assigned to each component, it may be determined that models not reaching a certain standard of predictive performance are assigned an ensemble weight of 0. This would simplify an ensemble model to include only the best performing forecasts. Table 4 shows how a mixture distribution forecast compares with the other formats discussed in terms of methods for scoring, information and resolution provided, methods for ensemble building, and computer storage requirement. To summarize, a continuous mixture distribution has the infinite resolution of a parametric distribution with the flexibility of a bin distribution, a sample distribution, and a set of quantiles. The common proper scoring rules LogS and CRPS may be used to score a mixture forecast. The storage requirement is comparable to that of a bin distribution or a set of quantiles. And MA may be used for building an ensemble. In Section 3 we show how a mixture distribution may be constructed, scored, and used to construct an ensemble using software available in R. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Type & \multicolumn{3}{c}{Scoring} & \multicolumn{3}{c}{Ensemble} & Resolution & Storage & Shape/Flexibility \\ \cline{2-10} & LogS & CRPS & IS & MA & QA & Sample & & & \\ \hline Bins & x & x & x & x & x & x & \# bins & 100s & limited by binning scheme \\ Quantiles & & & x & & x & & \# quantiles & 10s & unknown shape, no tail info \\ Parametric & x & x & x & x & x & x & \(\infty\) & 10 & well known distributions \\ Sample & & x & x & & x & x & \(n\) draws & 1000 & flexible shape \\ Mixture & x & x & x & x & x & x & \(\infty\) & 10s & flexible shape \\ \hline \hline \end{tabular} \end{table} Table 4: This table compares scoring, information, ensemble building, and storage requirements for the different forecast representations discussed. To summarize a continuous mixture distribution has the infinite resolution of a parametric distribution with the flexibility of a bin distribution, a sample distribution, and a set of quantiles. The common proper scoring rules LogS and CRPS may be used to score a mixture forecast. The storage requirement is comparable to that of a bin distribution or a set of quantiles. And MA may be used for building an ensemble. \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline quantile & 0.01 & 0.025 & 0.05 &... & 0.95 & 0.975 & 0.99 \\ \hline value & 1.07137 & 1.2404 & 1.40689 &... 
& 5.18328 & 5.82391 & 6.58783 \\ \hline \end{tabular} \end{table} Table 3: This shows six of the quantiles and values from a lognormal distribution with \(\mu=1\) and \(\sigma=0.4\) truncated on \([0,8]\). ## 3 Mixture distributions in a collaborative forecast project The CDC flu competition and the COVID-19 Forecast Hub as well as other collaborative projects have their own established systems for receiving, scoring, and constructing ensemble forecasts. A transition from using bin or quantile forecasts to using mixture distribution forecasts would require a few adjustments to those systems. In this section we outline how some of these adjustments may be implemented. We also present tools which may be used to build forecasts from submissions, score those forecasts, and construct ensembles from them. ### Submission format For a collaborative forecast project to run smoothly, forecast submissions from all forecasters should follow the same format. For both the CDC flu competition and the COVID-19 Forecast Hub, teams provide a .csv spreadsheet which contains the distributional information for one or multiple forecasts. Tables 5 and 6 show what variables are included in those submissions and a couple rows to illustrate possible values. The column variables include location, target, type, unit, bin or quantile, and value. Here location defines the specific county, state, or country of the forecast. The target variable defines what is forecast with levels: season onset, deaths, hospitalizations, etc. The type variable defines the type for the value variable with levels of point, bin, or quantile. The unit variable defines the time frame of the forecast with levels of one week, two weeks, four weeks, etc. The variables bin and quantile give a specific bin or a specific quantile. The value variable is a number that either gives the probability associated with a bin or the value associated with a quantile. A single submission may include many forecasts aimed at forecasting different combinations of location, target, and unit. A set of rows which share the same specific combination of location, target, and unit constitute a single forecast. One forecast for the CDC flu competition may require up to 131 rows whereas in the COVID-19 Forecast Hub one forecast may require up to 23 rows. Table 7 illustrates adjustments made to the submission formats from Tables 5 and 6 which make a usable submission format for mixture distribution forecasts. In such a format, each row represents one component distribution used in a mixture distribution. The variables bin or quantile and value are removed and replaced with family, param1, param2, and weight where family is the distribution family of the component, param1 and param2 are the parameters for the component distribution, and weight is the weight \(w_{i}\) for the \(i^{th}\) component. For reasons of storage and computation, a forecast project may have a limit to the number of components allowed per forecast. For reference, a mixture distribution forecast following the format in Table 7 with 17 components would require \(17\times 8=136\) pieces of information submitted per forecast. A submission to the COVID-19 Forecast Hub forecast with 23 quantiles according to the format in Table 6 requires \(23\times 6=138\) cells. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline location & target & type & unit & bin & value \\ \hline us national & season onset & bin & week & 0.0 &. \\ us national & season onset & bin & week & 0.1 &. \\... &... &... &... &... &... \\ \hline \end{tabular} \end{table} Table 5: This table shows a few rows of a submission file for a bin forecast like those in the CDC flu competition. A set of rows which share the same combination of location, target, and unit make up a single forecast. One submission may include many forecasts specified by differing combinations of those three columns. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline location & target & type & unit & quantile & value \\ \hline us national & season onset & quantile & week & 0.01 &. \\ us national & season onset & quantile & week & 0.025 &. \\... &... &... &... &... &... \\ \hline \end{tabular} \end{table} Table 6: This shows a few rows of a submission file for a quantile forecast like those in the COVID-19 Forecast Hub. A set of rows which share the same combination of location, target, and unit make up a single forecast. One submission may include many forecasts specified by differing combinations of those three columns. Thus if the COVID-19 Forecast Hub were to change the forecast representation from quantile forecasts to mixture forecasts but continue allowing the same amount of data per forecast, a mixture distribution with 17 components could be used in one forecast. That many components could allow for a large range of distribution shapes and flexible forecasts. In the remainder of this section, explanations of how to work with mixture distributions submitted according to Table 7 are given. Also given is R code which demonstrates constructing a mixture distribution from a forecast submission, scoring the forecast, and building an ensemble from two separate submissions. ### Mixture construction and scoring tools A single .csv submission file of the format in Table 7 may contain multiple forecasts for different combinations of location, target, and unit. Selecting only rows which share a specific combination of location, target, and unit will produce a table representing a single forecast. That table may look like Tables 8 and 9. If the table is saved as a standard data.frame in R, then tools based on the distr package [32] may be used for evaluating a mixture distribution with the component distributions in the table. The distr package contains a function UnivarMixingDistribution() which takes as arguments a list of distributions and a vector of weights for each distribution, and an object of class AbscontDistribution is returned. An AbscontDistribution class is a mother class which defines a random number generator, pdf, CDF, and quantile function for continuous distributions from common families contained in the distr package and for mixture distributions with component distributions from those families. We wrote a function MakeDist() (see APPENDIX) which takes a data.frame with variables family, param1, param2, param3, and weight and where each row represents a component distribution in a mixture distribution. The function MakeDist() calls on the UnivarMixingDistribution() function and returns a mixture distribution object of class AbscontDistribution. If a forecast such as in Table 8 is taken as an argument in MakeDist(), the resulting mixture distribution may then be evaluated with functions for the pdf, CDF, quantile function, and random samples from the mixture distribution may be drawn.
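For orientation, the following is a minimal sketch of the kind of call MakeDist() wraps (for illustration only, not the appendix code; Lnorm() and Norm() are the distr constructors for lognormal and normal components):

```
library(distr)

# two-component mixture built directly with distr
mix <- UnivarMixingDistribution(Lnorm(meanlog = 2, sdlog = 1),
                                Norm(mean = 2.1, sd = 1),
                                mixCoeff = c(0.3, 0.7))

d(mix)(3)    # pdf of the mixture evaluated at 3
p(mix)(3)    # CDF evaluated at 3
r(mix)(5)    # five random draws
```

MakeDist() presumably assembles one such component object per row of the submitted data.frame and passes them, together with the weight column, to UnivarMixingDistribution() in this way.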
The distribution may then be scored using the LogS or CRPS. Here we include code to illustrate the process of constructing and scoring two separate forecasts. We suppose that Table 8 represents a submitted forecast from one forecaster and Table 9 represents a forecast of the same event from a second forecaster. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline location & target & type & unit & family & param1 & param2 & weight \\ \hline us national & season onset & dist & week & norm & \(a_{n}\) & \(b_{n}\) & \(w_{1}\) \\ us national & season onset & dist & week & lnorm & \(a_{l}\) & \(b_{l}\) & \(w_{2}\) \\... &... &... &... &... &... &... &... \\ \hline \end{tabular} \end{table} Table 7: This is an example of a submission file for a disease outbreak forecast using a mixture distribution representation. Each row represents a component distribution of a mixture distribution. The variables location, target, and unit specify what is being forecast. The variable type specifies that the row represents a parametric distribution. The variables family, param1, and param2 specify the exact component distribution. And the variable weight specifies the weight \(w_{i}\) that the \(i^{th}\) component distribution is assigned in the mixture distribution. Here two components are shown with distributions Normal\((a_{n},b_{n})\), Lognormal\((a_{l},b_{l})\) and weights \(w_{1}\) and \(w_{2}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline family & param1 & param2 & param3 & weight \\ \hline Lnorm & 2 & 1 & NA & 0.3 \\ Norm & 2.1 & 1 & NA & 0.7 \\ \hline \end{tabular} \end{table} Table 8: This is an illustrative example of a mixture distribution forecast where the distribution is described in a data frame. The first component is a Lognormal\((2,1)\) with a weight of 0.3 in the mixture and the second component is a Normal\((2.1,1)\) with a weight of 0.7. The distribution family abbreviations are capitalized here because that is how they will be requested in the MakeDist() function. Refer to Table 13 in the APPENDIX for more details. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline family & param1 & param2 & param3 & weight \\ \hline Norm & 1.5 & 1 & NA & 0.4 \\ Norm & 4 & 2 & NA & 0.6 \\ \hline \end{tabular} \end{table} Table 9: This is a second illustrative example of a mixture distribution forecast where the distribution is described in a data frame. The first component is a Normal\((1.5,1)\) with a weight of 0.4 in the mixture and the second component is a Normal\((4,2)\) with a weight of 0.6. The distribution family abbreviations are capitalized here because that is how they will be requested in the MakeDist() function. Refer to Table 13 in the APPENDIX for more details. Figure 3 shows plots of the pdfs for both mixture forecasts. Note the additional param3 variable in Tables 8 and 9. This variable is included in the table because of the functionality of the MakeDist() function which allows for component distributions of up to three parameters. The code here shows these two forecasts as data.frames and how the MakeDist() function is used to create the distributions in R. Once the distributions are created as AbscontDistribution objects, then functions for evaluating a pdf and a CDF for each are created.
```
preddf1
##   family param1 param2 param3 weights
## 1  Lnorm    2.0      1     NA     0.3
## 2   Norm    2.1      1     NA     0.7
preddf2
##   family param1 param2 param3 weights
## 1   Norm    1.5      1     NA     0.4
## 2   Norm    4.0      2     NA     0.6

# make mixture distributions from prediction submissions
mdist1 <- MakeDist(preddf1)
mdist2 <- MakeDist(preddf2)

# make pdfs for mixture predictions
dmdist1 <- function(x) {distr::d(mdist1)(x)}
dmdist2 <- function(x) {distr::d(mdist2)(x)}

# make cdfs for mixture predictions
pmdist1 <- function(x) {distr::p(mdist1)(x)}
pmdist2 <- function(x) {distr::p(mdist2)(x)}
```
Figure 3: These plots show the density functions of the two example mixture distribution forecasts along with the component density functions scaled by the corresponding weights. The LogS or the CRPS may then be calculated for each forecast using the pdf and CDF functions respectively. Here we will assume that the true value which both forecasts attempted to predict was 3. The CRPS() function here is one that we wrote and is included in the APPENDIX. It is seen in the code below that under the LogS, forecast 1 from Table 8 outperforms forecast 2 from Table 9 with scores of 1.547 and 1.849 respectively. However, under the CRPS, forecast 2 outperforms forecast 1 with scores of 0.531 and 0.635 respectively. We continue to use these same forecasts in Section 3.3 in constructing an ensemble forecast.
```
# realized observation
xstar <- 3

# LogS for predictions at the realized observation
-log(dmdist1(xstar))
## [1] 1.547238
-log(dmdist2(xstar))
## [1] 1.848796

# CRPS for predictions at the realized observation
CRPS(pmdist1, y = xstar)
## [1] 0.6348212
CRPS(pmdist2, y = xstar)
## [1] 0.5306083
```
### Ensemble construction To construct an ensemble distribution from multiple mixture distributions, the UnivarMixingDistribution() function may be used. The function takes two or more AbscontDistribution distribution objects, including mixture distribution objects, and a vector of weights corresponding to each object. A new AbscontDistribution object is returned as an ensemble of mixture distributions as in (1). Since they are AbscontDistribution objects, mdist1 and mdist2 created in the code in Section 3.2 may be input as arguments into the function UnivarMixingDistribution(), but weights for each object also need to be determined. At the onset of a collaborative forecast before there are true event observations which the forecasts may be scored on, it may make sense to assign an equal weight to each component distribution in an ensemble. As a project progresses, however, assigning weights based on past performance may be desired. As mentioned in section 2.2.1, weights may be selected by maximizing the likelihood of (1) or by minimizing the CRPS. Another method of selecting weights is to use the posterior model probability. If we have \(T\) models \(M_{1},...,M_{T}\) the posterior model probability of \(M_{t}\) is defined as in (15) where \(p(\cdot|M_{t}):=p_{t}(\cdot)\) is the pdf of the model distribution and \(p(M_{t})\) is the prior probability assigned to the model. A common approach is to assume the prior probabilities for each model are equal or \(p(M_{t})=1/T\) for all \(t\) in which case (15) is reduced to (16). In this case the posterior model probability for the \(t^{th}\) model is equal to the exponential of its negative LogS or \(p(M_{t}|x)=e^{-\text{LogS}(p_{t},x)}\), so the performance of a forecast based on the LogS is directly related to its posterior model probability and may be used as an ensemble weight.
For an observed event \(x^{*}\), ensemble weights \((w_{t})\) from (1) may be defined as \(w_{t}:=p(M_{t}|x^{*})\). \[p(M_{t}|x)=\frac{p(x|M_{t})p(M_{t})}{p(x)}=\frac{p(x|M_{t})p(M_{t})}{\sum_{k=1}^{T}p(x|M_{k})p(M_{k})} \tag{15}\] \[p(M_{t}|x)=\frac{p(x|M_{t})}{\sum_{k=1}^{T}p(x|M_{k})} \tag{16}\] Using the illustrative example from Section 3.2, the following code shows how to use the posterior model probability to select weights, construct an ensemble distribution, and score the ensemble forecast. Here again we take the true event value to be 3. The ensemble distribution along with component distributions is shown in Figure 4.
```
# posterior model probability for calculating weights
w1 <- pmdist1(xstar) / (pmdist1(xstar) + pmdist2(xstar))
w2 <- 1 - w1
w1
## [1] 0.5286434
w2
## [1] 0.4713566

# build ensemble with calculated weights
ensdist <- distr::UnivarMixingDistribution(mdist1, mdist2, mixCoeff = c(w1, w2))

# pdf and cdf for ensemble
densdist <- function(x) {(distr::d(ensdist)(x))}
pensdist <- function(x) {(distr::p(ensdist)(x))}

# LogS for predictions at the realized observation
-log(densdist(xstar))
## [1] 1.678156

# CRPS for predictions at the realized observation
CRPS(pensdist, y = xstar)
## [1] 0.5486368
```
## 4 Retrospective analysis For large collaborative forecast projects having already established the representation formats for forecasting, it may be difficult for teams to adjust to using mixture distributions. There may be several reasons for this, including that not all forecast modeling methods will produce forecasts which may conveniently be represented by a mixture distribution. In this section we attempt to assess whether or not bin forecasts from the CDC flu competition or quantile forecasts from the COVID-19 Forecast Hub may be reasonably approximated by a mixture distribution with normal components as the number of components in the distribution increases. Forecasters in both the CDC flu competition and the COVID-19 Forecast Hub do not include with their forecast submissions information about modeling methods or distributional assumptions. Thus the only information we have for fitting distributions is bin forecasts and quantile forecasts. We are unaware of formal statistical methods for fitting parametric distributions or mixture distributions to bin distributions. Methods of fitting a distribution to quantiles include Bayesian Quantile Matching [35], step interpolation with exponential tails [36], and the Method of Simulated Quantiles [37]. These studies, however, lack claims that the methods for fitting are statistically formal. Nirwan and Bertshinger state that minimizing the mean square error between quantile values and a CDF function has been the most common way to fit a distribution to a set of quantiles. This is the method we will use in Section 4.2. Because of the lack of statistically formal methods for fitting a parametric distribution to a bin distribution or a set of quantiles, it should be noted that any conclusions made in this section may not be stated in terms of statistical certainty. ### CDC flu competition The CDC Retrospective Forecasts project on zoltardata.com [38] contains 869,638 probabilistic influenza-like illness forecasts for all combinations of 11 regions in the United States and seven targets from 27 different modeling teams. These include forecasts made during all flu seasons between October 2010 and December 2018. All forecasts are represented by bin distributions.
Figure 4: Ensemble forecast made from forecast 1 and forecast 2 from Section 3.2. The red line is the density component of mixture forecast 1 with weight 0.529. The green line is the density component of mixture forecast 2 with weight 0.471. The blue line is the density of the ensemble forecast. To assess whether the bin probabilities may be more closely approximated by mixture distributions with an increasing number of components, 5 mixture distributions with one to five normal components were fit to each of a selected set of bin forecasts. In fitting a distribution to a forecast, we want to minimize (17). Equation (17) is a variation of the Kullback-Leibler divergence (KLD). Here \(\mathbf{p}\) represents a bin distribution where \(p_{i}>0\) is the reported probability for the \(i^{th}\) bin \(B_{i}:=[b_{i-1},b_{i})\), and \(K\) is the number of bins. \(M_{\theta}^{C}\) is a random variable of a mixture distribution with \(C\) components and parameter vector \(\theta\). The fitted parameter vector \(\hat{\theta}\) is the solution to (18). \[D(\mathbf{p}\|M_{\theta})=\sum_{i=1}^{K}p_{i}\log\left(\frac{p_{i}}{P(M_{\theta}^{C}\in B_{i})}\right) \tag{17}\] \[\hat{\theta}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{K}p_{i}\log\left(\frac{p_{i}}{P(M_{\theta}^{C}\in B_{i})}\right) \tag{18}\] Pulling forecasts from zoltardata.com and fitting mixture distributions to them was computationally expensive, so we limited the analysis to 1,141 forecasts. To select these forecasts, we sampled from the 869,638 forecasts as follows. For each submission -a single submission may contain multiple forecasts for various units and targets forecasted- there is a recorded date corresponding to the week of the forecast. There are 246 total dates. The number of submissions for each week was counted and 10 weeks were randomly selected with probabilities based on the number of submissions by week. All submissions from the selected 10 weeks were pulled from zoltardata.com, but the only forecasts kept for fitting were US national level forecasts for 1, 2, 3, and 4 weeks ahead and season peak percentage, and forecasts with more than 4 bins. Twenty-six of twenty-seven teams were represented by the selected forecasts. With forecasts selected, we then fit five mixture distributions to each. For each distribution to be fit there are \(C\in\{1,2,3,4,5\}\) normal components. In the mixture distribution there are \(C\) mean values, a common standard deviation shared by each component, and \(C\) component weights \(\omega_{1},...,\omega_{C}\). Thus the parameter vector \(\theta=(\mu_{1},...,\mu_{C},\sigma,\omega_{1},...,\omega_{C})\). To ensure in the optimization that \(\omega_{1},...,\omega_{C}>0\) and \(\sum_{i=1}^{C}\omega_{i}=1\) we optimize the parameters \(\nu_{1}=0,\nu_{2},...,\nu_{C}\) and set \(\omega_{i}=\frac{e^{\nu_{i}}}{\sum_{j=1}^{C}e^{\nu_{j}}}\). Since \(\nu_{1}=0\), only the \(C-1\) parameters \(\nu_{2},...,\nu_{C}\) are optimized. To maintain order in the set of mixture components, we add the constraint \(\mu_{1}<\mu_{2}<...<\mu_{C}\). This is enforced by taking \(\mu_{1}=\mu_{1},\mu_{2}=\mu_{1}+e^{\alpha_{1}},...,\mu_{C}=\mu_{C-1}+e^{\alpha_{C-1}}\) so that the optimized parameters for means are \(\mu_{1},\alpha_{1},...,\alpha_{C-1}\). The necessary condition that \(\sigma>0\) is enforced by setting \(\sigma=e^{\eta}\) and optimizing over \(\eta\). Thus the \(2C\) parameter vector to be optimized is \(\gamma=(\mu_{1},\alpha_{1},...,\alpha_{C-1},\eta,\nu_{2},...,\nu_{C})\).
The optimization was done iteratively by repeating the following steps.

1. Initialize \(\gamma^{(0)}=(\mu_{1}^{(0)},\alpha_{1}^{(0)},...,\alpha_{C-1}^{(0)},\eta^{(0)},\nu_{2}^{(0)},...,\nu_{C}^{(0)})\).

2. At step \(m\geq 1\), set the following \(2C\) parameter vectors:
\[\gamma_{1}^{(m)}=(\tilde{\mu}_{1}^{(m+1)},\alpha_{1}^{(m)},...,\alpha_{C-1}^{(m)},\eta^{(m)},\nu_{2}^{(m)},...,\nu_{C}^{(m)})\]
\[\gamma_{2}^{(m)}=(\mu_{1}^{(m)},\tilde{\alpha}_{1}^{(m+1)},...,\alpha_{C-1}^{(m)},\eta^{(m)},\nu_{2}^{(m)},...,\nu_{C}^{(m)})\]
\[\vdots\]
\[\gamma_{2C}^{(m)}=(\mu_{1}^{(m)},\alpha_{1}^{(m)},...,\alpha_{C-1}^{(m)},\eta^{(m)},\nu_{2}^{(m)},...,\tilde{\nu}_{C}^{(m+1)})\]
where \(\gamma_{i}^{(m)}\) is the parameter vector minimizing (18) over the \(i^{th}\) element while holding all other elements constant.

3. Set \(\gamma^{(m+1)}=\underset{\gamma_{i}^{(m)},\,i\in\{1,...,2C\}}{\mathrm{argmin}}\sum_{j=1}^{K}p_{j}\text{log}\left(\frac{p_{j}}{P(M_{\gamma_{i}^{(m)}}\in B_{j})}\right)\).

4. Return to step 2.

This process was run until \(|D(\mathbf{p}\|M_{\gamma^{(m+1)}})-D(\mathbf{p}\|M_{\gamma^{(m)}})|/D(\mathbf{p}\|M_{\gamma^{(m)}})<0.001\), where \(D(\mathbf{p}\|M_{\theta})\) is as defined in (17), or until \(m=500\). The optimization was done using the optim function in R, and the optimization algorithm used was either "BFGS" or "L-BFGS-B".

Of the 1,141 selected forecasts, 1,103 were fit by the 5 mixture distributions. The remaining forecasts were ignored in further analysis because the fitting algorithms failed to converge. Figure 5 shows boxplots of KLD for all fit mixture distributions for 1 to 5 components. In general, the KLD between the actual forecast distribution and the fit distribution tends to decrease as the number of components in the fit mixture distribution increases. Figure 6 shows examples of density functions of one to five components fit to a forecast. The outer plots show fit mixture density functions plotted over the bin probabilities, where the probabilities are multiplied by 10 to give the same scale as the densities. The inner plots show the KLD of the forecast and the fit distribution by number of components in the mixture distribution.

To further compare the fits to the forecasts, we compared the actual forecast performance of the bin forecasts to the fit mixture distributions. For each week and target represented in the sample of selected forecasts, truth data was obtained. For each bin forecast, the bin in which the true value fell, \(B_{t}\), was determined and the probability value in that bin was noted as \(p_{t}\). Then for each mixture distribution fit to that forecast, the probability \(p_{t}^{M^{C}}=P(M^{C}\in B_{t})\) within the true bin was calculated. All forecasts were then classed by the specific target and week for which they were forecasting. There were 50 total combinations of target and week. Figure 7 shows 5 scatterplots where the probabilities \(p_{t}\) for 1 week ahead forecasts on 2016-10-17 are plotted against the probabilities \(p_{t}^{M^{C}}\). Each plot represents a different number of component distributions in the mixture distribution fit. The linear correlation coefficient between the probabilities is also given, and the correlation increases as the number of components increases. Table 10 shows the same correlation trend for 5 different target/week combinations and for all forecasts not broken out by target/week combinations.
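To make the fitting procedure described in steps 1-4 concrete, the following R sketch (our own illustration, not code from the original analysis; the bin forecast below is synthetic) evaluates the KLD objective of equation (17) under the parameterization \(\gamma\) described above and minimizes it with R's optim. For brevity it updates all elements of \(\gamma\) jointly; the coordinate-wise scheme in steps 1-4 would instead call optim repeatedly on one element of \(\gamma\) at a time.

```r
# Sketch: fit a C-component normal mixture to a bin forecast by minimizing (17).
# Assumed inputs (synthetic): bin edges b_0 < ... < b_K and bin probabilities p.
b <- seq(0, 10, by = 1)                          # edges of K = 10 bins [b_{i-1}, b_i)
p <- dnorm((b[-1] + b[-length(b)]) / 2, 4, 1.5)
p <- p / sum(p)                                  # stand-in bin forecast
C <- 3                                           # number of mixture components

# gamma = (mu_1, alpha_1..alpha_{C-1}, eta, nu_2..nu_C)  ->  theta
gamma_to_theta <- function(gamma) {
  mu    <- cumsum(c(gamma[1], exp(gamma[2:C])))  # ordered means
  sigma <- exp(gamma[C + 1])                     # common standard deviation > 0
  nu    <- c(0, gamma[(C + 2):(2 * C)])
  omega <- exp(nu) / sum(exp(nu))                # positive weights summing to 1
  list(mu = mu, sigma = sigma, omega = omega)
}

# probability mass the mixture assigns to the bin [lo, hi)
mix_bin_prob <- function(theta, lo, hi) {
  sum(theta$omega * (pnorm(hi, theta$mu, theta$sigma) -
                     pnorm(lo, theta$mu, theta$sigma)))
}

# KLD objective of equation (17)
kld <- function(gamma) {
  theta <- gamma_to_theta(gamma)
  q <- vapply(seq_along(p), function(i) mix_bin_prob(theta, b[i], b[i + 1]),
              numeric(1))
  sum(p * log(p / pmax(q, 1e-12)))
}

gamma0 <- c(2, rep(0, C - 1), 0, rep(0, C - 1))  # initial parameter vector
fit <- optim(gamma0, kld, method = "BFGS")
gamma_to_theta(fit$par)                          # fitted means, sd, and weights
```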
### COVID-19 Forecast Hub

Fitting a mixture distribution to a set of quantiles was done in the same manner as for bin probabilities, only by choosing the parameter \(\hat{\theta}\) which minimizes the sum of squared differences (SS) between the reported quantiles and the CDF of a mixture distribution, as in equation (19). In equation (19), \(q_{i}\) is the \(i^{th}\) quantile value (out of \(N\) quantiles) from a forecast, \(\alpha_{i}\) is its probability level, and \(F(q_{i}|\theta)\) is the fit CDF evaluated at \(q_{i}\) with parameter \(\theta\). The fitted parameter \(\hat{\theta}\) is the solution to (20).

\[\text{SS}=\sum_{i=1}^{N}(\alpha_{i}-F(q_{i}|\theta))^{2} \tag{19}\]

\[\hat{\theta}=\arg\min_{\theta}\sum_{i=1}^{N}(\alpha_{i}-F(q_{i}|\theta))^{2} \tag{20}\]

We randomly selected 9 weeks from which to pull forecasts from zoltardata.com by searching through all teams which submitted any forecast. Forecasts submitted between a Monday and the following Tuesday were considered to have come from the same week [40]. The 9 weeks were selected randomly with probability relative to the total number of forecasts submitted that week. From those weeks, only US national forecasts for incident and cumulative deaths from one to four weeks ahead were selected for fitting. In total, 2,676 forecasts were selected. However, as in the influenza analysis, computational issues made it difficult to fit five different mixture distributions to all quantile forecasts, and truth data was missing for two targets during one week in September 2022. Thus the remainder of the analysis included 2,319 forecasts.

Figure 5: Each boxplot is made of the KLD from equation (17) calculated for each sampled flu forecast. As the number of component normals fit to the forecasts increases, the median values and spread decrease.

\begin{table}
\begin{tabular}{c|c c c c c c} Components & Overall & (a) & (b) & (c) & (d) & (e) \\ \hline 1 & 0.5811 & 0.7608 & 0.789 & 0.4454 & 0.748 & 0.4752 \\ 2 & 0.738 & 0.9428 & 0.8885 & 0.6349 & 0.8025 & 0.8264 \\ 3 & 0.8341 & 0.9567 & 0.9626 & 0.7610 & 0.8271 & 0.8503 \\ 4 & 0.8445 & 0.978 & 0.9715 & 0.9428 & 0.7874 & 0.9114 \\ 5 & 0.8920 & 0.9844 & 0.9903 & 0.9691 & 0.7188 & 0.9454 \\ \hline \end{tabular}
\end{table}
Table 10: (a) 10/28/13 Season peak, (b) 12/2/13 4 wk, (c) 10/17/16 season peak, (d) 4/20/15 4 wk, (e) 4/20/15 1 wk. The values are the correlations between the actual probability score from the submitted forecasts and the probability according to the fit mixture distributions.

Figure 6: Three selected flu forecasts with the pdfs of mixture distributions of one to five components plotted on top. The inner plot shows the KLD value as the number of components increases. The bin probabilities are multiplied by 10 to ensure they are on the same scale as the pdfs in the plot.

Table 11 shows the maximum SS values over all fits for fits from one to five component mixture distributions. Figure 9 contains plots showing fits for selected individual forecasts, with fit CDF functions plotted over quantiles in the left plots and SS values plotted by the number of components. To compare forecast performance between actual quantile forecasts and the fit distributions, truth for each week/target combination was obtained and the WIS from (12) was calculated. Figure 10 shows plots of the WIS scores for 3 week ahead cumulative death forecasts for December 29, 2020 plotted against WIS scores for mixture distributions fit to the forecasts. There is a separate scatterplot for fits of one to five components.
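As a rough illustration of (19) and (20) (our own sketch, not code used in the paper; the probability levels and quantile values below are made up), a two-component normal mixture CDF can be fit to a set of reported quantiles with optim:

```r
# Stand-in quantile forecast: probability levels alpha and quantile values q
alpha <- c(0.025, 0.1, 0.25, 0.5, 0.75, 0.9, 0.975)
q     <- c(120, 180, 260, 400, 640, 900, 1100)

C <- 2
# theta = (mu_1, mu_2, log sigma, unconstrained weight parameter)
mix_cdf <- function(x, theta) {
  mu    <- theta[1:C]
  sigma <- exp(theta[C + 1])
  v     <- exp(theta[C + 2])
  w     <- c(1, v) / (1 + v)                 # positive weights summing to 1
  colSums(w * sapply(x, function(z) pnorm(z, mu, sigma)))
}

ss <- function(theta) sum((alpha - mix_cdf(q, theta))^2)     # equation (19)

fit <- optim(c(300, 800, log(150), 0), ss, method = "BFGS")  # equation (20)
fit$value                # minimized SS
mix_cdf(q, fit$par)      # fitted CDF evaluated at the reported quantiles
```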
Table 12 shows calculated correlations between actual WIS scores and fit WIS scores for 5 different week/target combinations. The results from these studies suggest that forecasts in bin distribution and quantile formats may indeed be increasingly well approximated by a mixture of normal distributions as the number of components increases.

### Sample Distribution Forecast

As some forecasting projects may accept sample distributions for forecasts, we created this example to show that a sample may be closely approximated by a mixture distribution as the number of components increases. To show this, we first created a sample distribution. With the sample in hand we fit a normal distribution by calculating the maximum likelihood estimates for the mean and standard deviation. We then fit mixture distributions of two to five components to the sample, giving fits of one to five normal components in total.

\begin{table}
\begin{tabular}{c|c c c c c} Components & 1 & 2 & 3 & 4 & 5 \\ \hline max SS & 0.53 & 0.36 & 0.16 & 0.11 & 0.06 \\ \end{tabular}
\end{table}
Table 11: Maximum SS value over all quantile forecasts to which mixture distributions were fit for one to five components.

Figure 7: Scatterplots with \(p_{t}\) on the x-axis and \(p_{t}^{M^{C}}\) on the y-axis for all selected 1 week ahead forecasts for the date 10/17/16. The correlation between the probabilities increases as the number of components in the fit mixture increases, suggesting that the forecasts may be more closely approximated as the number of components increases.

We obtained a sample distribution by first selecting a bin forecast from the flu forecasting competition. We randomly drew 700 samples where each draw corresponded to a bin \(B_{i}\) with probability according to the forecast \(p_{i}\). For each draw corresponding to each bin \(B_{i}\), a value was randomly selected according to the uniform distribution \(Unif(b_{i-1},b_{i})\). The expectation maximization (EM) algorithm was used to find maximum likelihood values to fit a mixture distribution to the sample. We used the function normalmixEM in the mixtools package in R for this. For each of the five fits, we calculated the Kolmogorov-Smirnov (KS) test statistic, that is, the maximum distance between the ECDF of the sample and the CDF of the fit mixture distribution. Figures 11 and 12 show the fit CDFs plotted with the ECDF and the pdfs plotted on the histogram, respectively. These figures show that the sample is more closely approximated as the number of components in a mixture distribution increases.

Figure 9: Actual quantiles for three different COVID-19 forecasts with CDFs of fit mixture distributions from one to five components. The inner plots show the SS value as the number of components is increased.

Figure 11: The outer plot shows the fit CDFs in color plotted with the ECDF in black of the sample. The inner plot shows the value of the KS statistic of the fit CDFs and the ECDF as the number of components increases.

Figure 10: Scatter plots for actual WISs and WISs from fit mixture distributions for 3 week ahead COVID-19 forecasts from 12/29/2020.

In terms of model flexibility, storage, and ensemble construction, a mixture distribution is comparable to bin and quantile forecasts but also provides a forecast with an infinite nominal resolution.
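Returning to the sample-fitting procedure described above, the following R sketch (our own illustration with a synthetic sample; it assumes the mixtools package is installed) fits a three-component normal mixture by EM and computes the KS distance between the sample ECDF and the fitted CDF:

```r
library(mixtools)

set.seed(42)
x <- c(rnorm(400, 2, 0.7), rnorm(300, 6, 1.2))   # stand-in sample of size 700

fit <- normalmixEM(x, k = 3)                     # EM fit of a 3-component mixture

# CDF of the fitted mixture at a single point
fit_cdf <- function(v) sum(fit$lambda * pnorm(v, fit$mu, fit$sigma))

# KS statistic: maximum distance between the sample ECDF and the fitted CDF,
# checked at each sample point and just below each ECDF jump
Fn   <- ecdf(x)
xs   <- sort(x)
Fhat <- vapply(xs, fit_cdf, numeric(1))
ks   <- max(abs(Fn(xs) - Fhat), abs(Fn(xs) - 1 / length(xs) - Fhat))
ks
```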
Based on a retrospective analysis, we argue that forecasts of quantile or bin distribution representations may be more closely approximated by a mixture distribution as the number of components of the distribution is increased. This may make the transition from past and current formats to a mixture distribution format straightforward. We thus advocate for the use of mixture distributions in future forecasting projects like those done in the CDC flu competition or in the COVID-19 Forecast Hub.

For a number of reasons, some forecasters may prefer not to adopt mixture distributions as a format in collaborative forecasting. A collaborative forecast center, along with forecasters, using a different representation format may simply not want to break from tradition. There may be some concern that a mixture distribution does not represent certain models well. And the implementation of new scoring and ensemble construction methods may also be a barrier. Development of tools beyond what was used in Section 3.2 would assist in making a transition to using mixture distributions more straightforward.

One aspect of ensemble construction which received little attention in this paper is the selection of weights for components of an ensemble where each of the components is a mixture distribution. Computing requirements could be a concern in such a problem, and further research on this may provide ideas on the best methods for weight selection. Another area of recommended research is the use of joint mixture distributions for forecasting. We have only considered here probabilistic forecasting of one event at a time, for example, the number of new infections in one week at one specific location. This forecast is presented as a marginal distribution for that specific target, time, and location. A joint distribution for forecasting multiple targets, times, or locations may sometimes be desirable and may require further consideration on how joint mixture distributions could be used as a format in collaboration.
2306.15261
A Survey on Out-of-Distribution Evaluation of Neural NLP Models
Adversarial robustness, domain generalization and dataset biases are three active lines of research contributing to out-of-distribution (OOD) evaluation on neural NLP models. However, a comprehensive, integrated discussion of the three research lines is still lacking in the literature. In this survey, we 1) compare the three lines of research under a unifying definition; 2) summarize the data-generating processes and evaluation protocols for each line of research; and 3) emphasize the challenges and opportunities for future work.
Xinzhe Li, Ming Liu, Shang Gao, Wray Buntine
2023-06-27T07:44:25Z
http://arxiv.org/abs/2306.15261v1
# A Survey on Out-of-Distribution Evaluation of Neural NLP Models

###### Abstract

Adversarial robustness, domain generalization and dataset biases are three active lines of research contributing to out-of-distribution (OOD) evaluation on neural NLP models. However, a comprehensive, integrated discussion of the three research lines is still lacking in the literature. In this survey, we 1) compare the three lines of research under a unifying definition; 2) summarize the data-generating processes and evaluation protocols for each line of research; and 3) emphasize the challenges and opportunities for future work.

## 1 Introduction

Neural NLP models have been reported for their super-human performance on many language understanding and generation tasks, such as sentiment analysis and machine reading comprehension (MRC). Recent studies show that these models lack human-level understanding of language since they are vulnerable beyond in-distribution (ID) test data and fail to generalize on out-of-distribution (OOD) data, such as perturbed examples under adversarial attacks [1], text from different domains [1] and examples against dataset biases [1], which lie in the three mainstream lines of research: adversarial robustness, domain generalization and dataset biases.

Despite recent progress in each of the three research lines, there has not been a survey that comprehensively reviews and investigates the connections between these lines. We have the following motivations for such an investigation: 1) explaining some phenomena across the three lines of research. For example, methods designed to improve model generalization on one OOD type can defend against other types of OOD data [21] but sometimes degrade model performance on other OOD types [1, 2]; 2) encouraging future researchers to perform a comprehensive OOD evaluation while proposing their new methods; 3) suggesting that all the OOD types be addressed in future research.

In this survey, we first unify the three research lines as the study of distribution shift, which gives us a starting point to find connections between them in terms of shifted features (§2). Furthermore, we summarize their differences in OOD data generation and evaluation protocols (§3). In particular, we categorize different methods to generate OOD data, including data with natural domain shift (NDS) for domain generalization, debiased data for evaluating dataset biases and adversarial examples for adversarial robustness. We then categorize evaluation protocols for the OOD data into two classes: data-based and method-based evaluation. Finally, we identify opportunities based on the discussion of connections and differences between the three OOD types, including developing a comprehensive benchmark for OOD evaluation, advising caution about breaking the assumptions of covariate shift, improving general OOD performance and using the detection methods across different OOD types. Also, we demonstrate the gap in adversarial robustness for future work (§4).

## 2 Definition

This section first introduces a comprehensive definition of distribution shift across the three research lines and then shows how the definition aids in elucidating the interconnections regarding the shifted features.
### Distribution Shift

Deep Neural Network (DNN) \(f(\theta)\) can achieve remarkable or even superhuman performance on NLP benchmarks, such as GLUE, with in-distribution (ID) test data \(\mathcal{D}_{\text{test}}\sim\mathbb{P}_{0}\), when \(f(\theta)\) is trained on data sampled from the data-generating distribution (an unknown distribution under a data-generating process) \(\mathcal{D}_{\text{train}}\sim\mathbb{P}_{0}\). However, the well-trained model often fails to generalize to various unforeseen examples. Formally, the unforeseen data (or OOD data) \(\tilde{\mathcal{D}}\) can be characterized by a shifted distribution \(\tilde{\mathbb{P}}\). The distribution shift includes covariate shifts (input distribution shifts) \(\mathbb{P}_{0}(\mathcal{X})\neq\tilde{\mathbb{P}}(\mathcal{X})\) and label distribution shifts \(\mathbb{P}_{0}(\mathcal{Y})\neq\tilde{\mathbb{P}}(\mathcal{Y})\), where \(\mathcal{X}\) is the input space, and \(\mathcal{Y}\) is the ground-truth output space. We unify the three lines of research as a study of covariate shifts. For brevity, the input notation \(\mathcal{X}\) is sometimes omitted when referring to it, as exemplified by the use of \(\mathbb{P}_{0}\) to represent \(\mathbb{P}_{0}(\mathcal{X})\).

**Domain generalization and dataset biases.** Real-life documents in different domains, e.g., news and fiction text [1], are typically characterized by systematically distinct properties and originate in different data-generating distributions. Domain generalization investigates such natural domain shifts, and the relevant literature often uses domain shift and distribution shift interchangeably. Some literature [16, 17] also treats data with particular biases and data without such biases as coming from different domains. Therefore, the studies of domain generalization and dataset biases can be intuitively defined in the framework of distribution shifts.

**Adversarial robustness: from robustness to distribution shift.** Initially, adversarial methods evaluated model robustness against small perturbations in the worst-case scenarios, especially for continuous data like images [15]. Hence, prior studies typically separate adversarial robustness from the study of domain generalization and distribution shift. We can unify adversarial robustness as a study of distribution/domain shift. Conceptually, we can define the shifted distribution in the adversarial setting as a distribution around the original distribution \(\mathbb{P}_{0}\). We perform a perturbation on text \(\mathbf{x}\sim\mathbb{P}_{0}(\mathcal{X})\) to simulate OOD data from the shifted distribution. In other words, adversarial perturbations encode subtle domain shifts, as discussed in §2.3, and the shifted domains are characterized by the adversarial methods summarized in §3.3.

### Shifted Features

To concretize the shifted features, we first define task-relevant features \(s_{\text{rel}}\) and task-irrelevant features \(s_{\text{ir}}\), which satisfy \(\mathbb{P}(s_{\text{rel}}|y)\neq\mathbb{P}(s_{\text{rel}})\) and \(\mathbb{P}(s_{\text{ir}}|y)=\mathbb{P}(s_{\text{ir}})\), respectively. \(\mathbb{P}\) is an arbitrary distribution.

**Background features and semantic features.** When \(\mathbb{P}\) is the ground-truth distribution \(\mathbb{P}_{\text{true}}\), Arora et al., [1] name \(s_{\text{rel}}\) as _semantic features_ since most realistic NLP tasks address the semantics of text, and \(s_{\text{ir}}\) as _background features_, which can be the syntax, writing style or task-irrelevant text.
**Biased features are task-irrelevant features under \(\mathbb{P}_{\text{true}}\) despite being learned as task-relevant features from \(\mathbb{P}_{0}\). \(\mathbb{P}_{0}(\mathcal{X})\) can be factorized into the conditional distributions of generalized features \(g(\mathcal{X})\) and biased features \(b(\mathcal{X})\).**

\[\mathbb{P}_{0}(\mathcal{X})=\sum_{y}\mathbb{P}_{0}(g(\mathcal{X})|y)\mathbb{P}_{0}(b(\mathcal{X})|y)\mathbb{P}_{0}(y), \tag{1}\]

assuming conditional independence between \(b(\mathcal{X})\) and \(g(\mathcal{X})\) given \(y\). By definition, \(g(\mathcal{X})\) should always be task-relevant features in the true distribution \(\mathbb{P}_{\text{true}}\), i.e., \(g(\mathcal{X})\in s_{\text{rel}}\), while biased features \(b(\mathcal{X})\) are only predictive for the ID dataset \(\mathcal{D}\sim\mathbb{P}_{0}\). We can express \(b(\mathcal{X})\) as \(\{s|s\in\mathbb{P}_{0}\text{ and }s\notin\tilde{\mathbb{P}}\}\).

### Shifted Features in Three OOD Types

This section shows how each OOD type can potentially shift the three kinds of features.

**Shifted features on adversarial examples.** Shifted features of adversarial examples depend on the attack methods. White-box and grey-box methods compute gradients or output logits, respectively, while black-box methods can only query final predictions. With the guidance of gradients, white-box methods tend to shift the biased features since the input gradients can help identify the biased features for perturbations [11, 13]. Perturbation types largely affect whether it is a semantic shift or a background shift (see §3.3). For example, character editing may lead to unseen words (e.g., a misspelled variant of "wonderful"), which belongs to semantic shift. In contrast, black-box methods explicitly change background features at the sentence level. For example, Qi et al., [1] paraphrase with different writing styles.

**Shifted features on debiased data.** Debiased data, by definition, is generated by removing biased features. However, the generation process often encodes either background features \(s_{\text{ir}}\) or semantic features \(s_{\text{rel}}\). For example, debiased data generated by syntactic templates [14] encode background shift, while MRC data requiring numerical reasoning to avoid dataset biases [18] change semantic features.

**Shifted features on NDS data.** We classify natural domain shift into two categories, namely genres and sub-populations. Both background and semantic shifts can occur in both categories, such as different vocabularies associated with sub-populations or genres [1]. For example, for sentiment analysis, Tweets tend to express sentiments with emotions compared to the genre of movie reviews, and sub-populations can differ in the way to express emotions [18]. Furthermore, DNNs can easily learn background features for each sub-population or genre as biased features. Writing styles can also serve as another background feature that changes across genres [1], e.g., text from fiction books to Wikipedia content.

## 3 OOD Performance Evaluation

This section compares the concrete methodologies of OOD data generation and evaluation protocols for each line of research. Their differences are summarized in Table 1.

### NDS Data Generation

NDS data can be generated from different genres or sub-populations. The former can be acquired from various data sources, whereas the latter involves partitioning data into sub-populations based on attributes of entities or individuals.
Diverse sets of data genres may display a fusion of unique textual styles, syntactic structures, and lexicons. **Genres.** Different genres refer to text written by different authors or annotators or for different audiences, or come from different data sources. For example, academic papers are written in formal language while online reviews on Yelp contain non-standard orthography. Since a training dataset typically comes from a specific data source and contains only one genre, even well-trained models generalize poorly to different genres. Although some datasets are created to include multiple genres of text, models still perform worse beyond the coverage of these genres. For example, MultiNLI for NLI consists of text from ten distinct genres. However, even the large pretrained language models cannot generalize well to a different genre [1], such as SNLI from image captions or WNLI from fiction books. Take text classification as another example. Classifiers cannot perform well on text from disjoint annotators, e.g., SST data from experts v.s. lengthy IMDB reviews from laymen [11, 12]. [20] find that MRC models trained on Wikipedia hardly generalize on data from New York Times articles, Reddit posts, and Amazon product reviews. Sub-populations.Multiple studies identify NDS data by dividing data into sub-populations according to different attributes of objects or individuals obtained from metadata. Arora et al., [20] select reviews from different businesses, e.g., restaurant reviews v.s. movie reviews. For the same business, Hendrycks et al., [20] split them according to different types of products (e.g., women's clothing, shoes) or restaurants (e.g., American v.s. Chinese restaurants). Furthermore, model performance can vary on data with different demographics, which is closely connected to another line of research about fairness. For example, Borkan et al., [20] find the worst test accuracy on non-toxic comments from the black population. ### Debiased Data Generation In this section, we first divide dataset biases into two categories and introduce a text summarization dataset bias that does not fall under either one. We then classify data-generating methods into two categories according to the knowledge of dataset biases. Table 2 and 3 summarize dataset biases and debiased datasets for various NLP tasks, respectively. \begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Biases} & \multicolumn{2}{c}{OOD Data Generation} & \multicolumn{2}{c}{Evaluation} \\ & Generated by & Required Knowledge & Types & Metrics \\ \hline Adversarial robustness & Auto & Model & Method-based & ASR; query \# \\ Adversarial transferability & Auto/human & Base model & Data-based & ID metrics \\ Domain generalization & Existing & Domain & Data-based & ID metrics \\ Dataset biases & Auto/human & Biases/NA & Data-based & ID metrics \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of OOD data generation and evaluation. ASR: attack success rate; query \(\#\): number of queries; ID metrics: the same aggregate metrics for ID test data, e.g., accuracy for classifiers. 
\begin{table} \begin{tabular}{l l l} \hline \hline Methods & Tasks & Debiased Datasets \\ \hline \multirow{4}{*}{Biased inspired} & TC & c-IMDB [20] \\ \cline{2-3} & NLI & HANS [14], Stress Text [15] \\ \cline{2-3} & MRC & Para-SQUAD [15] Adv-SQUAD [16] \\ \cline{2-3} & Paraphrase & PAWS [17] \\ \cline{2-3} & Fact Checking & FEVER-Symmetric [14] \\ \hline \multirow{4}{*}{Systematic} & NLI & SNLI-AFLITE [18] \\ \cline{2-3} & Reasoning & Winogrande [21] \\ \cline{2-3} & CGI & Swag [19] \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset biases for various NLP tasks. TC: Text Classification; NLI: Natural Language Inference; MRC: Machine Reading Comprehension; Paraphrase: Paraphrase Identification. \begin{table} \begin{tabular}{l l} \hline \hline Methods & Tasks & Debiased Datasets \\ \hline \multirow{4}{*}{Biased inspired} & TC & c-IMDB [20] \\ \cline{2-3} & NLI & HANS [14], Stress Text [15] \\ \cline{2-3} & MRC & Para-SQUAD [15] Adv-SQUAD [16] \\ \cline{2-3} & Paraphrase & PAWS [17] \\ \cline{2-3} & Fact Checking & FEVER-Symmetric [14] \\ \hline \multirow{4}{*}{Systematic} & NLI & SNLI-AFLITE [18] \\ \cline{2-3} & Reasoning & Winogrande [21] \\ \cline{2-3} & CGI & Swag [19] \\ \hline \hline \end{tabular} \end{table} Table 3: Debiased datasets for various NLP tasks. TC: Text Classification; NLI: Natural Language Inference; MRC: Machine Reading Comprehension; Paraphrase Identification; Reasoning: Commonsense Reasoning; CGI: Commonsense Grounded Inference. Dataset Biases Lexical correlations.Words frequently appearing in the training examples of a particular class can be predictive biases for that class. For example, when NLI annotators create a majority of contradictory hypotheses by negating premises, models learn the spurious correlation between negation words ('no', 'never') and labels (_hypothesis-only reliance_[11]). The success of class-wise adversarial attacks comes from exploring these predictive words to some extent [23]. Similarly, claim-only models can perform well on ID test data without the context of evidence for fact-checking due to predictive words [10]. MRC models can find the answer in a given paragraph by matching the question type, e.g., "November 2014" for a "when" question (question word matching [13] or entity type matching [20]), and hence they are easily distracted to the wrong answers on OOD data [13]. Emelin et al., [2] observe that machine translation systems have a tendency to disambiguate words based on the sense that occurs most frequently in the training data. Lexical overlap.Lexical overlap is another predictive indicator for sentence-pair classification. For example, the high overlap between the premise and hypothesis leads to the "entailment" prediction from NLI models (premise-hypothesis overlap [12]). The classifiers for paraphrase identification also tend to predict highly overlapping sentences as paraphrases [14]. Besides, MRC models can locate the sentence with maximum overlap words in the paragraph (context matching [20]) and then use question word matching to extract the correct answer. Positional biases for text summarization.News or journal articles tend to summarize content in the lead paragraphs (positional bias or layout bias). Specifically, Kedzie et al., [2] show that 88.6% reference summaries from the training examples of NYT come from the first half of documents (69% for DUC, 71.7% for CNN/DM). 
#### Methodologies

There are two types of methods to generate debiased data: 1) breaking the spurious correlations; 2) filtering the examples containing superficial patterns automatically without recognizing any specific dataset biases.

**Bias-informed approaches.** Many studies generate debiased data by reversing the correlations between dataset biases and labels for text classification. For natural language inference (NLI), McCoy et al., [2] specify linguistic phenomena (constituent and subsequence heuristics) behind lexical overlap. They devise heuristics-based syntactic templates to create sentence pairs that have high lexical overlap but contradict each other. Kaushik et al., [2] construct counterfactual examples by breaking spurious correlations in sentiment analysis datasets (i.e., IMDB) and NLI datasets. They annotate the text into the targeted label by largely keeping the original text. In this way, the dataset biases of the original label are kept. To break the lexical overlap between paraphrase pairs, Zhang et al., [2] create non-paraphrase sentences via word scrambling. This method changes the meanings of paraphrases but keeps the overlapping words. For MRC, Jia and Liang [2] add a sentence containing words overlapping with the question into paragraphs and find that models tend to select answers from the sentence. They also combine the bias-inspired approach with a grey-box attack to generate adversarial examples. Gardner et al., [2] rely entirely on experts who have the knowledge of dataset biases to generate debiased data.

**Systematic approaches.** Debiased data can also be generated by bias mitigation techniques without knowing dataset biases. For example, Bras et al., [2] use a simple classifier to identify biased examples and generate debiased datasets for NLI, commonsense reasoning and grounded commonsense inference, respectively. Although some works try to train robust models with such filtered data, filtering always leads to significant drops on ID data and even OOD data [15]. Therefore, it can be more practical to use them for OOD evaluation.

### Adversarial Example Generation

Given a well-trained model, adversarial examples are generated by modifying the given text (or reference data) to make the model output wrong predictions. Typically, the generation process satisfies semantics-preserving and label-preserving assumptions. Mostly, semantics preservation can guarantee the same label for a perturbed text. Thus, a robust model should generate invariant outputs \(\operatorname*{arg\,max}_{y*}f(\mathbf{x})=\operatorname*{arg\,max}_{y*}f(\mathbf{\tilde{x}})=y\) under semantics-preserving perturbations, where \(y\) is the ground-truth label and \(\mathbf{\tilde{x}}\) is the perturbed text. Adversarial attacks have been widely studied to automatically generate adversarial examples. We categorize the attack methods according to their perturbation types, perturbation space and adversary's knowledge. Table 4 exemplifies typical methods for each category.

**Perturbation Types** There are four common perturbation types that exhibit different granularities, namely, character editing, word substitution, paraphrasing and universal adversarial perturbation (UAP). Character editing mimics real-life accidental typos or spelling variants in social media via character swapping (e.g., "place" \(\Rightarrow\) "palce"), deletion (e.g., "artist" \(\Rightarrow\) "arist"), insertion (e.g., "computer" \(\Rightarrow\) "computter") and substitution (e.g., "computer" \(\Rightarrow\) "computor") [15].
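As a concrete illustration of these four edit operations (a small sketch of ours, not taken from any of the surveyed attack implementations), the following R helpers apply a single edit at a chosen character position:

```r
# Minimal character-editing helpers (illustrative only)
swap_chars <- function(w, i) {            # swap characters i and i+1
  s <- strsplit(w, "")[[1]]
  s[c(i, i + 1)] <- s[c(i + 1, i)]
  paste(s, collapse = "")
}
delete_char <- function(w, i) {           # delete character i
  paste0(substr(w, 1, i - 1), substr(w, i + 1, nchar(w)))
}
insert_char <- function(w, i, ch) {       # insert ch after character i
  paste0(substr(w, 1, i), ch, substr(w, i + 1, nchar(w)))
}
substitute_char <- function(w, i, ch) {   # replace character i with ch
  paste0(substr(w, 1, i - 1), ch, substr(w, i + 1, nchar(w)))
}

swap_chars("place", 2)                    ## [1] "palce"
delete_char("artist", 3)                  ## [1] "arist"
insert_char("computer", 6, "t")           ## [1] "computter"
substitute_char("computer", 7, "o")       ## [1] "computor"
```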
Humans are robust to spelling errors that maintain certain morphological or phonological characteristics of the word, e.g., "computor" for "computer". Hence, a few edits on a word would not affect the human perception of the word but may lead to completely opposite predictions by the models. In contrast, word substitutions require semantics-preserving constraints on substitute words or rely on a vocabulary of synonyms for substitutions [14]. Paraphrasing is seldom considered in adversarial attacks due to the difficulty of generating high-precision paraphrases. The last type of adversaries follows the work on generating UAPs for images, which perturb any image of a particular class [17]. Behjati et al., [15] generate UAPs for NLP models in the form of n-grams (i.e., non-sensical phrases), which lead to misclassification when appended to text of a class. Wallace et al. (2019, 2020) even extend the idea of UAPs to MRC, language modeling and machine translation. These studies assume that inserting non-sensical phrases does not change the semantics of text and ground-truth labels. Song et al. (2021) generate natural phrases (e.g., "natural energy efficiency") as UAPs via an Adversarially Regularized Auto Encoder (ARAE). However, this approach can only maintain the coherence of the UAPs rather than of the whole perturbed text.

#### 3.2.2 Perturbation Space

A text consists of a sequence of tokens (e.g., words or subwords), denoted as \(\mathbf{x}=w_{1},w_{2},\ldots,w_{T}\), where \(T\) is the number of tokens in the text, and \(\mathcal{V}=\{w_{1},w_{2},\ldots,w_{K}\}\) is the vocabulary (\(K\) is the size of the vocabulary). Note that the vocabulary can differ from the ones of the victim models, especially under black-box attacks. All types of perturbations, except for paraphrasing, can be performed in a discrete space. When perturbing via word substitutions or UAPs, the objective is to identify significant tokens in \(\mathbf{x}\) and substitute them with tokens from the vocabulary \(\mathcal{V}\) in order to generate incorrect model predictions. This is a combinatorial optimization problem, where the size of the search space is \(K^{T}\). The search space is intractable and is typically addressed using heuristic-based approaches (Alzantot _et al._, 2018; Zang _et al._, 2020) or approximation methods (Ebrahimi _et al._, 2018; Wallace _et al._, 2019). Character editing also requires identifying significant tokens, but it involves a subsequent manipulation of characters within the identified tokens (Gao _et al._, 2018). Assuming the average token length is \(l\), the search space becomes \(l^{T}\), leading to a combinatorial explosion of modifications.

There are two types of methods to perturb text in the continuous space: 1) perturbing tokens (characters/subwords/words) in the embedding space. Gong et al. (2018) add continuous perturbations directly to the continuous representation \(e_{w}\) of token \(w\) via element-wise addition between the perturbation \(\eta\) and \(e_{w}\), similar to pixel-wise addition for images (Goodfellow _et al._, 2015). To transform \(\eta+e_{w}\) back to text, they search for the token \(\tilde{w}\in\mathcal{V}\) whose embedding \(e_{\tilde{w}}\) is nearest to \(\eta+e_{w}\) in the embedding space. However, this probably generates a semantically variant token, because the closest and most legible token can be far away from the original one in projection space. Note that they still require searching for important tokens. 2) reparameterization for paraphrasing or UAPs. Zhao et al.
(2018) reparameterize the text \(\mathbf{x}\) into \(\mathbf{z}\) in a continuous space and then perturb \(\mathbf{z}\) into \(\tilde{\mathbf{z}}\). They also train a text generator (e.g., LSTM) to decode \(\tilde{\mathbf{z}}\) back to text. Song et al. (2020) apply the reparameterization trick for UAP generation. Moreover, generation models are commonly used to create paraphrases such as machine translation models for back-translation (Ribeiro _et al._, 2018), a syntactically controlled paraphrase network (Iyyer _et al._, 2018) or a text style transfer model (Qi _et al._, 2021). The latter two specifically change task-irrelevant features. #### 3.2.3 Adversary's Knowledge We can categorize the attack methods into three types (white-box, grey-box and black-box) according to the three levels of model knowledge: model parameters, output logits (i.e., estimated probability distribution) and final predictions. The white-box methods require model parameters to propagate gradients back to the input (e.g., the gradient of an adversarial loss with respect to the input word embedding \(\nabla_{e_{w}}\mathcal{L}\)). They need \(\nabla_{e_{w}}\mathcal{L}\) to find important tokens and generate perturbations, e.g., the noise in the embedding space or substitute tokes. Liang et al. (2018) adapt the gradient magnitude to find important words for perturbations and search for substitute words for each targeted label since the magnitude in each dimension of the word embedding indicates the sensitiveness of the prediction to the change. Gong et al. (2018) directly use gradients to perform perturbations on the embedding space. Ebrahimi et al. (2018) and Wallace et al. (2019, 2020) use the gradients and word embeddings to approximate the loss for word substitutions. Specifically, they approximate the loss change of substituting a word \(w\) with another word \(s\) in the vocabulary by the inner product of the word embedding \(e_{s}\) and \(\nabla_{e_{w}}\mathcal{L}\), where \(\mathcal{L}\) is the adversarial loss. The selected word \(s\) is expected to minimize the adversarial loss on the perturbed text \(\tilde{\mathbf{x}}\). \[\operatorname*{arg\,min}_{s} \mathcal{L}(\tilde{\mathbf{x}},y) \tag{2}\] \[\approx e_{s}^{\mathrm{T}}\nabla_{e_{w}}\mathcal{L}(\mathbf{x},y)\] \begin{table} \begin{tabular}{l l l} \hline \hline Knowledge & Perturbation Space & Perturbation Types \\ \hline \multirow{2}{*}{White-box} & Continuous & Word substitution (Gong _et al._, 2018), UAP (Song _et al._, 2021), Character editing (Liu _et al._, 2022) \\ \cline{2-3} & Discrete & Word substitution (Liang _et al._, 2018; Ebrahimi _et al._, 2018), UAP (Wallace _et al._, 2019; Behjati _et al._, 2019) \\ \hline Grey-box & Discrete & Word substitution (Alzantot _et al._, 2018; Zang _et al._, 2020; Jin _et al._, 2020; Li _et al._, 2020), Character editing (Gao _et al._, 2018) \\ \hline \multirow{2}{*}{Black-box} & Continuous & Paraphrasing (Zhao _et al._, 2018) \\ \cline{2-3} & (Only generation model) & Paraphrasing (Iyyer _et al._, 2018; Ribeiro _et al._, 2018; Qi _et al._, 2021) \\ \hline \hline \end{tabular} \end{table} Table 4: Classification of Attack Methods Based on Adversaries’ Knowledge, Perturbation Space, and Perturbation Types. UAP refers to Universal Adversarial Perturbation (UAP). Several studies employ generation models to produce paraphrases without adding any noise for perturbations. 
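The first-order approximation in equation (2) amounts to a single matrix-vector product over the vocabulary. The following R sketch (ours; the embedding table and gradient are random stand-ins rather than values from a real model) scores every candidate substitute token in this way:

```r
# Stand-in embedding table E (K tokens x d dimensions) and gradient of the
# adversarial loss with respect to the embedding of the attacked word w.
set.seed(1)
K <- 5000; d <- 64
E <- matrix(rnorm(K * d), nrow = K)    # hypothetical word embeddings
grad <- rnorm(d)                       # hypothetical gradient dL/de_w

scores <- as.vector(E %*% grad)        # e_s^T grad for every candidate token s
best_substitute <- which.min(scores)   # token approximately minimizing the loss
head(order(scores), 5)                 # indices of the top-5 candidate substitutions
```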
Under the grey-box setting, an adversary only has access to the probability distribution (or logits) over the output variable \(Pr(y|\mathbf{x})=f(\mathbf{x})\), where \(Pr(y_{i}|\mathbf{x})\) is the probability of the outcome \(y_{i}\) given the text \(\mathbf{x}\). Grey-box methods employ the probabilistic information to measure how important each token \(w_{i}\in\mathbf{x}\) is to the prediction. For \(w_{i}\), its importance score \(\mathbf{I}_{i}\) is commonly calculated as the difference of the probabilities of the correct class \(Pr(y_{\text{true}}|\mathbf{x})-Pr(y_{\text{true}}|\mathbf{x}_{\setminus w_{i}})\)[11], where \(\mathbf{x}_{\setminus w_{i}}\) can either be \(\{w_{1},\dots,w_{i-1},w_{i+1},\dots,w_{T}\}\) (delete \(w_{i}\)) [11, 20] or \(\{w_{1},\dots,\text{UNK},\dots,w_{T}\}\) (replace \(w_{i}\) with the unknown token _UNK_) [6, 10]. Jin et al., [2020] also add the change in the probability of the newly predicted class \(\tilde{y}\), \(f_{\tilde{y}}(\mathbf{x}_{\setminus w_{i}})-f_{\tilde{y}}(\mathbf{x})\), if \(\operatorname*{arg\,max}_{y}f(\mathbf{x}_{\setminus w_{i}})=\tilde{y}\neq\operatorname*{arg\,max}_{y}f(\mathbf{x})\).

**Improving general OOD performance.** Our first argument is that models need to learn general linguistic or semantic knowledge to perform well on all three types of OOD evaluation. This accounts for the observation that unsupervised pretraining and data augmentation strategies for learning general linguistic or semantic knowledge can enhance generalization on different domains and adversarial robustness [11, 12]. Notably, such augmentation techniques [20] may solely modify the background features of training samples without any adversarial process. Our second argument is that even though dataset filtering for bias mitigation may improve model performance on biased datasets, it may lead to inferior performance on NDS data and decreased adversarial robustness, as biased data still contains knowledge of semantic and background features that can enhance general OOD performance. Supporting evidence for this argument is provided by Bras et al., [20] and Gokhale et al., [20] across various NLP tasks.

**Utilizing detection methods across different research lines.** The connections between OOD types regarding the shifted features also motivate future researchers to apply OOD detection methods across the three research lines. For example, since density estimation methods can effectively detect NDS data with background shift [1], they may defend against black-box attacks, which always generate adversarial examples by shifting background features.
### Challenges and Opportunities for Adversarial Robustness

**Beyond semantics preservation.** Adversarial examples can be generated beyond the assumption of semantics preservation. Chen et al., [20] define obstinate adversarial examples, which satisfy two conditions: 1) their ground-truth labels after the perturbation are changed, but 2) the victim model maintains the original predictions. Similarly, Kaushik et al., [20] minimally perturb test examples to change the labels of test examples. There are many reasons to explore obstinate adversarial examples. Gardner et al., [20] argue that such examples characterize the correct decision boundary for the task. Chen et al., [20] find that models with adversarial training on these examples reveal vulnerability to obstinate adversarial examples. Also, breaking the assumption of semantics preservation allows more adversarial behaviours for tasks beyond single-sentence classification. For example, Song et al., [20] generate nonsensical or natural sentences, leading to invariant model outputs for sentence-pair classification. Wallace et al., [20] change malicious content into nonsensical text which still makes machine translation models translate it into bad language.

**Developing realistically harmful adversarial behaviours.** There are different model behaviours designed for malicious intents during adversarial attacks. For classification tasks, we can either specify the expected prediction (targeted attack) or accept any prediction different from the correct label (non-targeted attack). The harm of an attack depends on the specific task or labels. For example, it is harmful to generate adversarial examples to evade the detection of misinformation or toxic content. In contrast, there are many possibilities of adversarial behaviours for generation tasks due to various combinations of input and output text. Cheng et al., [20] define an attack where all the words in the output text are different from the original output sequence while the input is similar. This kind of attack can lower the standard evaluation metrics based on n-grams, like BLEU scores. Wallace et al., [20] develop UAPs to make MT models hardly generate any translation or output random words. Targeted keyword attacks [1] can make models generate targeted words. Future researchers can define more realistically harmful behaviours, such as the generation of malicious nonsense or racial materials.

**Adversarial robustness may not be a good proxy for realistic scenarios.** Adversarial perturbations may make text deviate from its real-world distribution and generate rarely occurring examples. Particularly, pure gradient-based methods only pursue worst-case perturbations without considering the naturalness of adversarial examples. Hence, to alleviate this problem, adversarial candidates are commonly validated by some metrics such as language model perplexity [1] and part-of-speech matching [1]. However, attack processes become computationally expensive by including these separate modules and rejecting most of the perturbed examples. Another type of approach is to use a limited search space for adversarial perturbations, e.g., using synonyms for word substitutions at the cost of worst-case performance [15]. It is worth exploring attack methods that can inherently ensure the naturalness of perturbed text. There are some scenarios, as summarized below, where adversarial robustness against rare and unnatural text is undoubtedly critical.
1) When models' behaviours can cause devastating outcomes, e.g., astronautics or legal services. 2) When models are deployed in ubiquitous, unforeseeable scenarios. For example, Facebook translation once made the mistake of translating the simple phrase "good morning" in Arabic into "attack it" in English. Such a mysterious translation convinced the police that the user posting this message might launch a vehicle attack, resulting in the arrest of the user. 3) When attackers have sufficient malicious intent to explore rare cases, such as fact verification and security tasks where attackers would like to evade the model's detection to get unqualified access, e.g., propagating anti-social content or fake news on social media. ## 5 Conclusion In this paper, we characterized and summarized the three mainstream lines of research on dataset biases, domain generalization and adversarial robustness. We encouraged future researchers to think comprehensively about OOD evaluation and improve the OOD generalization of NLP models on all the three types of OOD data. We also highlighted the gap between adversarial robustness and realistic OOD evaluation.
2305.03635
On MSR Subspace Families of Lines
A minimum storage regenerating (MSR) subspace family of $\mathbb{F}_q^{2m}$ is a set $\mathcal{S}$ of $m$-spaces in $\mathbb{F}_q^{2m}$ such that for any $m$-space $S$ in $\mathcal{S}$ there exists an element in $\mathrm{PGL}(2m, q)$ which maps $S$ to a complement and fixes $\mathcal{S} \setminus \{ S \}$ pointwise. We show that an MSR subspace family of $2$-spaces in $\mathbb{F}_q^4$ has at most size $6$ with equality if and only if it is a particular subset of a Segre variety. This implies that an $(n, n-2, 4)$-MSR code has $n \leq 9$.
Ferdinand Ihringer
2023-05-05T15:47:21Z
http://arxiv.org/abs/2305.03635v2
# On MSR Subspace Families of Lines

###### Abstract

A minimum storage regenerating (MSR) subspace family of \(\mathbb{F}_{q}^{2m}\) is a set \(\mathcal{S}\) of \(m\)-spaces in \(\mathbb{F}_{q}^{2m}\) such that for any \(m\)-space \(S\) in \(\mathcal{S}\) there exists an element in \(\mathrm{PGL}(2m,q)\) which maps \(S\) to a complement of \(S\) and fixes \(\mathcal{S}\setminus\{S\}\) pointwise. We show that an MSR subspace family of \(2\)-spaces in \(\mathbb{F}_{q}^{4}\) has at most size \(6\) with equality if and only if it is a particular subset of a Segre variety. This implies that an \((n,n-2,4)\)-MSR code has \(n\leq 9\).

## 1 Introduction

Distributed storage systems (DSS) require codes which are particularly suited for dealing with the unavailability of some storage nodes. This leads to the investigation of _minimum storage regenerating (MSR) codes_. It has been shown that MSR codes are closely linked to MSR subspace families, cf. [1, 2, 4], which we will define below. Here we obtain a precise upper bound on the size of \((n,n-2,4)\)-MSR codes, to our knowledge the first open case, using techniques from finite geometry.

Let \(\mathrm{GL}(\ell,q)\) denote the general linear group over the field with \(q\) elements, that is the set of invertible matrices over \(\mathbb{F}_{q}\). We denote the multiplicative group of \(\mathbb{F}_{q}\) by \(\mathbb{F}_{q}^{*}\). Let \(\mathrm{PGL}(\ell,q)=\mathrm{GL}(\ell,q)/\mathbb{F}_{q}^{*}\) denote the corresponding projective general linear group. Below we use exponential notation for group actions.

Consider \(\mathbb{F}_{q}^{rm}\). We want to find a family \(\mathcal{F}\) of \(m\)-spaces \(S_{i}\) such that there exist \(g_{i,j}\in\mathrm{PGL}(rm,q)\), where \(j\in\{1,\ldots,r-1\}\), such that for all distinct \(i,i^{\prime}\), we have that \(S_{i}+S_{i}^{g_{i,1}}+\ldots+S_{i}^{g_{i,r-1}}=\mathbb{F}_{q}^{rm}\) and that \(S_{i^{\prime}}^{g_{i,j}}=S_{i^{\prime}}\). We call a family \(\{S_{1},\ldots,S_{k}\}\) for which such \(g_{i,j}\) exist a _minimum storage regenerating (MSR) subspace family_ or \((rm,r)\)-MSR subspace family, cf. [1]. Note that in [1] only \(g_{i,j}\in\mathrm{GL}(rm,q)\) is required, but this makes no difference as we only consider actions on subspaces. As we limit ourselves to the case that \(r=2\), we write \(g_{i}\) instead of \(g_{i,1}\), and \(S_{i}+S_{i}^{g_{i}}=\mathbb{F}_{q}^{2m}\) is equivalent to \(S_{i}\cap S_{i}^{g_{i}}\) being trivial. In this document matrices act by left multiplication. Throughout the document, let \(e_{i}\) denote the \(i\)-th vector of the canonical basis. We use projective notation and call \(1\)-spaces _points_, \(2\)-spaces _lines_, and \(3\)-spaces _planes_. Our main result is the following:

**Theorem 1.1**.: _A \((4,2)\)-MSR subspace family has at most size \(6\). In case of equality, the family is isomorphic to_ \[\langle e_{1},e_{3}\rangle, \langle e_{2},e_{4}\rangle, \langle e_{1}{+}e_{2},e_{3}{+}e_{4}\rangle,\] \[\langle e_{1},e_{2}\rangle, \langle e_{3},e_{4}\rangle, \langle e_{1}{+}e_{3},e_{2}{+}e_{4}\rangle.\]

By [1, Proposition 2], this implies

**Corollary 1.2**.: _An \((n,n-2,4)\)-MSR code has \(n\leq 9\)._

## 2 Constructions

A group \(G\) acts _regularly_ on a set \(S\) if for any \(x,y\in S\) there exists a unique \(g\in G\) with \(x^{g}=y\). A _(projective) frame_ or _projective basis_ is a tuple of \(\ell+1\) points of \(\mathbb{F}_{q}^{\ell}\) of which all \(\ell\)-element subsets span \(\mathbb{F}_{q}^{\ell}\).
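For concreteness, here is a small illustration of the last definition (ours, not part of the original text): the tuple \((\langle e_{1}\rangle,\langle e_{2}\rangle,\langle e_{1}{+}e_{2}\rangle)\) is a frame of \(\mathbb{F}_{q}^{2}\), and \((\langle e_{1}\rangle,\langle e_{2}\rangle,\langle e_{3}\rangle,\langle e_{4}\rangle,\langle e_{1}{+}e_{2}{+}e_{3}{+}e_{4}\rangle)\) is a frame of \(\mathbb{F}_{q}^{4}\): removing any single point from the tuple leaves \(\ell\) points whose spanning vectors form a basis, e.g., \(e_{4}=(e_{1}{+}e_{2}{+}e_{3}{+}e_{4})-e_{1}-e_{2}-e_{3}\).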
We use repeatedly the well-known fact that \(\operatorname{PGL}(\ell,q)\) acts regularly on frames, that is for two frames \(\mathcal{B}=(B_{1},\dots,B_{\ell+1})\), \(\mathcal{B}^{\prime}=(B_{1}^{\prime},\dots,B_{\ell+1}^{\prime})\), there exists a unique \(g\in\operatorname{PGL}(\ell,q)\) such that \(B_{i}^{g}=B_{i}^{\prime}\) for all \(i\in\{1,\dots,\ell+1\}\), cf. [3, p. 19]. Let us give a construction which is essentially identical to the construction given in Appendix A in [1]. We denote the identity matrix by \(I\). **Example 2.1**.: _Let \(q\geq 3\) and let \(\alpha\) be an element of \(\mathbb{F}_{q}\setminus\{0,1\}\). The following are MSR codes._ 1. _For_ \((2,1)\)_-MSR subspace family, take_ \[S_{1} =\langle e_{1}\rangle, S_{2} =\langle e_{2}\rangle, S_{3} =\langle e_{1}+e_{2}\rangle,\] \[g_{1} =\begin{pmatrix}1&0\\ \alpha&1-\alpha\end{pmatrix}, g_{2} =\begin{pmatrix}1-\alpha^{-1}&\alpha^{-1}\\ 0&1\end{pmatrix}, g_{3} =\begin{pmatrix}1&0\\ 0&\alpha\end{pmatrix}.\] _In particular,_ \(S_{i}^{g_{i}}=\langle e_{1}+\alpha e_{2}\rangle\)_._ 2. _For a_ \((4,2)\)_-MSR subspace family, take_ \[S_{1} =\langle e_{1},e_{3}\rangle, S_{2} =\langle e_{2},e_{4}\rangle,\] \[S_{3} =\langle e_{1}+e_{2},e_{3}+e_{4}\rangle, S_{4} =\langle e_{1},e_{2}\rangle,\] \[S_{5} =\langle e_{3},e_{4}\rangle, S_{6} =\langle e_{1}+e_{3},e_{2}+e_{4}\rangle,\] _and_ \[g_{1} =\begin{pmatrix}1&0&0&0\\ \alpha&1-\alpha&0&0\\ 0&0&1&0\\ 0&0&\alpha&1-\alpha\end{pmatrix}, g_{2} =\begin{pmatrix}1-\alpha^{-1}&\alpha^{-1}&0&0\\ 0&1&0&0\\ 0&0&1-\alpha^{-1}&\alpha^{-1}\\ 0&0&0&1\end{pmatrix},\] \[g_{3} =\begin{pmatrix}1&0&0&0\\ 0&\alpha&0&0\\ 0&0&1&0\\ 0&0&0&\alpha\end{pmatrix}, g_{4} =\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ \alpha&0&1-\alpha&0\\ 0&\alpha&0&1-\alpha\end{pmatrix},\] \[g_{5} =\begin{pmatrix}1-\alpha^{-1}&0&\alpha^{-1}&0\\ 0&1-\alpha^{-1}&0&\alpha^{-1}\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}, g_{6} =\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&\alpha&0\\ 0&0&0&\alpha\end{pmatrix}.\] _In particular,_ \(S_{i}^{g_{i}}=\langle e_{1}+\alpha e_{2},e_{3}+\alpha e_{4}\rangle\) _for_ \(i\in\{1,2,3\}\)_, and_ \(S_{i}^{g_{i}}=\langle e_{1}+\alpha e_{3},e_{2}+\alpha e_{4}\rangle\) _for_ \(i\in\{4,5,6\}\)_._ 3. _Let_ \(\mathcal{F}=\{S_{i}\}\) _be an_ \((2m,m)\)_-MSR subspace family of_ \(\mathbb{F}_{q}^{2m}\) _of size_ \(k\)_. Then we obtain an MSR subspace family_ \(\mathcal{F}^{\prime}=\{S_{i}^{\prime}\}\) _of size_ \(k+3\) _in_ \(\mathbb{F}_{q}^{4m}\) _as _follows: Put_ \[S^{\prime}_{k+1} =\langle e_{1},\ldots,e_{2m}\rangle,\] \[S^{\prime}_{k+2} =\langle e_{2m+1},\ldots,e_{4m}\rangle,\] \[S^{\prime}_{k+3} =\langle e_{1}+e_{2m+1},\ldots,e_{2m}+e_{4m}\rangle,\] \[g^{\prime}_{k+1} =\begin{pmatrix}I&0\\ \alpha I&(1-\alpha)I\end{pmatrix},\] \[g^{\prime}_{k+2} =\begin{pmatrix}(1-\alpha^{-1})I&\alpha^{-1}I\\ 0&I\end{pmatrix},\] \[g^{\prime}_{k+3} =\begin{pmatrix}I&0\\ 0&\alpha I\end{pmatrix}.\] _As_ \(\operatorname{PGL}(4m,q)\) _acts regularly on frames, there exists a unique element_ \(h\in\operatorname{PGL}(4m,q)\) _which maps_ \(e_{i}\) _to_ \(e_{i+2m}\) _for all_ \(i\in\{1,\ldots,2m\}\)_. Then put_ \[S^{\prime}_{i} =S_{i}+S^{h}_{i}, g^{\prime}_{i} =\begin{pmatrix}g_{i}&0\\ 0&g_{i}\end{pmatrix}.\] Note that Example 2 can be obtained from Example 1 using the recursion in Example 3. The recursion is a tensor product construction, hence this clarifies the aforementioned connection to the construction in Appendix A of [1]. 
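As a quick sanity check of item 1 (a verification of ours, not spelled out in the original example), the action of \(g_{1}\) on the three subspaces can be computed directly:

\[g_{1}e_{1}=e_{1}+\alpha e_{2},\qquad g_{1}e_{2}=(1-\alpha)e_{2},\qquad g_{1}(e_{1}+e_{2})=e_{1}+e_{2},\]

so \(S_{1}^{g_{1}}=\langle e_{1}+\alpha e_{2}\rangle\) intersects \(S_{1}=\langle e_{1}\rangle\) trivially (as \(\alpha\neq 0\)), while \(S_{2}\) and \(S_{3}\) are fixed (as \(\alpha\neq 1\)). Analogously, \(g_{2}\) fixes \(\langle e_{1}\rangle\) and \(\langle e_{1}+e_{2}\rangle\) and maps \(\langle e_{2}\rangle\) to \(\langle e_{1}+\alpha e_{2}\rangle\), and \(g_{3}\) fixes \(\langle e_{1}\rangle\) and \(\langle e_{2}\rangle\) and maps \(\langle e_{1}+e_{2}\rangle\) to \(\langle e_{1}+\alpha e_{2}\rangle\), as stated.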
## 3 Upper Bounds The idea for our proof of Theorem 1.1 is to generalize the following proof for MSR subspace families in \(\mathbb{F}_{q}^{2}\) to \(\mathbb{F}_{q}^{4}\). Call an MSR subspace family _maximal_ if it is not contained in a larger MSR subspace family. For a subspace \(S\) of \(\mathbb{F}_{q}^{\ell}\), let \(\operatorname{Stab}(S)\) denote the stabilizer of \(S\) in \(\operatorname{PGL}(\ell,q)\). For us, usually \(S\) will be an \(\ell/2\)-space. Furthermore, for a family \(\mathcal{S}=\{S_{1},\ldots,\)\(S_{k}\}\) of subspaces of \(\mathbb{F}_{q}^{m}\), let \(\operatorname{Stab}(\mathcal{S})=\bigcap_{i=1}^{k}\operatorname{Stab}(S_{i})\) denote the intersection of their stabilizers in \(\operatorname{PGL}(\ell,q)\). We denote the neutral element of a group by \(id\). **Lemma 3.1**.: _An MSR subspace family \(\mathcal{S}\) with \(\operatorname{Stab}(\mathcal{S})=\{id\}\) is maximal._ Proof.: Suppose that there exists an \(m\)-space \(S\) such that \(\mathcal{S}\cup\{S\}\) is an MSR subspace family. Then there exists \(g\in\operatorname{Stab}(\mathcal{S})\) such that \(S^{g}\cap S\) is trivial. This is a contradiction. As \(\operatorname{Stab}(\{P_{1},P_{2},P_{3}\})=\{id\}\) for three pairwise distinct points of \(\mathbb{F}_{q}^{2}\), we find **Corollary 3.2**.: _An MSR subspace family of \(\mathbb{F}_{q}^{2}\) has at most size \(3\)._ **Lemma 3.3**.: _For an MSR subspace family \(\mathcal{S}\) of \(\mathbb{F}_{q}^{2m}\), put \(U=\operatorname{Stab}(\mathcal{S})\). Let \(\ell\geq m+1\). Suppose that one of the following (equivalent) conditions is satisfied:_ 1. _There exists an_ \(\ell\)_-space_ \(H\) _of_ \(\mathbb{F}_{q}^{2m}\) _which is fixed pointwise by_ \(U\)_._ 2. _There exists a frame_ \(\mathcal{B}\) _of size_ \(\ell+1\) _which is fixed pointwise by_ \(U\)_._ _Then \(\mathcal{S}\) is maximal._ Proof.: Both conditions are equivalent as \(\operatorname{Stab}(H)\) acts regularly on frames of \(H\). Suppose that \(\mathcal{S}\) is not maximal and we find an \(m\)-space \(S\) such that \(\mathcal{S}\cup\{S\}\) is an MSR subspace family. Then there exists a \(g\in\operatorname{Stab}(\mathcal{S})\) such that \(S\cap S^{g}\) is trivial. This contradicts that \((S\cap H)^{g}=S\cap H\) contains a point. ### The Segre Variety in Dimension Four We will use some basic facts about Segre varieties over finite fields. We will also give an explicit coordinatized example for \(\mathbb{F}_{q}^{4}\), our main interest in this document. If we have three pairwise disjoint 2-spaces \(S_{1},S_{2},S_{3}\) in \(\mathbb{F}_{q}^{4}\), then there exist precisely \(q+1\) lines \(\mathcal{R}^{opp}:=\mathcal{R}^{opp}(S_{1},S_{2},S_{3}):=\{L_{1},\ldots,L_{q+1}\}\) which meet \(S_{1},S_{2},S_{3}\) each in a point. Furthermore, there exist precisely \(q+1\) subspaces \(\mathcal{R}:=\mathcal{R}(S_{1},S_{2},S_{3}):=\{S_{1},\ldots,S_{q+1}\}\) which meet each element of \(\mathcal{R}^{opp}\) in a point. We call \(\mathcal{R}\) a _regulus_, \(\mathcal{R}^{opp}\) its _opposite regulus_, and \(\mathcal{R}\cup\mathcal{R}^{opp}\) is the line set of a _hyperbolic quadric_. Let \(\mathcal{P}(\mathcal{R})=\mathcal{P}(\mathcal{R}^{opp})\) denote the point set of the lines in \(\mathcal{R}\). 
The setwise stabilizer of \(\mathcal{R}\) (in \(\mathrm{PGL}(4,q)\)) is isomorphic to \(\mathrm{PGL}(2,q)\times\mathrm{PGL}(2,q)\) and acts on \(\mathcal{R}\cup\mathcal{R}^{opp}\) as expected, that is one \(\mathrm{PGL}(2,q)\) acts on \(\mathcal{R}\) as on points of \(\mathbb{F}_{q}^{2}\), and one \(\mathrm{PGL}(2,q)\) acts on \(\mathcal{R}^{opp}\) as on points of \(\mathbb{F}_{q}^{2}\). This leads to the following well-known result on \(\mathrm{Stab}(\{S_{1},S_{2},S_{3}\})\). **Lemma 3.4**.: _Let \(S_{1},S_{2},S_{3}\) be three pairwise disjoint \(2\)-spaces of \(\mathbb{F}_{q}^{4}\). Then \(\mathrm{Stab}(\{S_{1},S_{2},S_{3}\})\) fixes each element of \(\mathcal{R}=\mathcal{R}(S_{1},S_{2},S_{3})\) and is isomorphic to \(\mathrm{PGL}(2,q)\). _ **Example 3.5**.: _Without loss of generality (as \(\mathrm{PGL}(4,q)\) acts transitively on triples of pairwise disjoint lines) \(\mathcal{R}\) consists of the lines_ \[\{\langle e_{1}+\alpha e_{2},e_{3}+\alpha e_{4}:\alpha\in\mathbb{F}_{q}\rangle \cup\{\langle e_{2},e_{4}\rangle\},\] _and \(\mathcal{R}^{opp}\) consists of the lines_ \[\{\langle e_{1}+\alpha e_{3},e_{2}+\alpha e_{4}:\alpha\in\mathbb{F}_{q}\rangle \cup\{\langle e_{3},e_{4}\rangle\}.\] _Note that this is the canonical tensor product \(\mathbb{F}_{q}^{2}\otimes\mathbb{F}_{q}^{2}\), that is points of the first \(\mathbb{F}_{q}^{2}\) correspond to lines in \(\mathcal{R}\) and the points of the second \(\mathbb{F}_{q}^{2}\) correspond to lines in \(\mathcal{R}^{opp}\). The setwise stabilizer of \(\mathcal{R}\) in \(\mathrm{PGL}(4,q)\) is given by_ \[\left\{\begin{pmatrix}a\alpha&b\alpha&a\beta&b\beta\\ c\alpha&d\alpha&c\beta&d\beta\\ a\gamma&b\gamma&a\delta&b\delta\\ c\gamma&d\gamma&c\delta&d\delta\end{pmatrix}:\begin{pmatrix}a&b\\ c&d\end{pmatrix},\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix}\in\mathrm{GL}(2,q)\right\}/\mathbb{F}_{q}^{*},,\] _that is \(\mathrm{PGL}(2,q)\times\mathrm{PGL}(2,q)\). Hence,_ \[\mathrm{Stab}(\{S_{1},S_{2},S_{3}\}) =\left\{\begin{pmatrix}a&0&b&0\\ 0&a&0&b\\ c&0&d&0\\ 0&c&0&d\end{pmatrix}:\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{GL}(2,q)\right\}/\mathbb{F}_{q}^{*}\] \[=\left\{\begin{pmatrix}aI&bI\\ cI&dI\end{pmatrix}:\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{GL}(2,q)\right\}/\mathbb{F}_{q}^{*},\] _and, clearly, \(\mathrm{Stab}(\{S_{1},S_{2},S_{3}\})\) is isomorphic to \(\mathrm{PGL}(2,q)\)._ Note that \(\mathcal{P}(\mathcal{R})\) corresponds to the points \(\langle x\rangle\) with \(Q(x)=0\) on a nondegenerate quadratic form \(Q\) of rank 2, for instance the points \(\mathcal{P}(\mathcal{R})\) in Example 3.5 are precisely the points which vanish on \(Q(x)=x_{1}x_{4}+x_{2}x_{3}\). We find the following types of lines with respect to \(\mathcal{R}\): * A line of \(\mathcal{R}\). * A line of \(\mathcal{R}^{opp}\). * A line, say \(\langle e_{1},e_{2}\rangle\), which meet \(\mathcal{P}(\mathcal{R})\) in two points (secants), say \(Q(x)=x_{1}x_{2}\). In this example the points of \(\mathcal{P}(\mathcal{R})\) are \(\langle x_{1}\rangle\) and \(\langle x_{2}\rangle\). * A line, say \(\langle e_{1},e_{2}\rangle\), which meet \(\mathcal{P}(\mathcal{R})\) in one point (tangents), say \(Q(x)=x_{1}^{2}\). In this example the point of \(\mathcal{P}(\mathcal{R})\) is \(\langle x_{2}\rangle\). * A line, say \(\langle e_{1},e_{2}\rangle\), which meet \(\mathcal{P}(\mathcal{R})\) in no point (passants), say \(Q(x)=x_{1}^{2}+\alpha x_{1}x_{2}+\beta x_{2}^{2}\) such that \(1+\alpha x_{2}+\beta x_{2}^{2}\) is irreducible over \(\mathbb{F}_{q}\). 
And the following types of planes: * A plane, say \(\langle e_{1},e_{2},e_{3}\rangle\), intersects \(\mathcal{P}(\mathcal{R})\) in conic (conic plane), that is there exists a nondegenerate quadratic form \(Q\), say \(Q(x)=x_{1}^{2}+x_{2}x_{3}\), such that \(\mathcal{P}(\mathcal{R})\) is the set of points \(\langle x\rangle\) of the plane with \(Q(x)=0\). * A plane, say \(\langle e_{1},e_{2},e_{3}\rangle\), intersects \(\mathcal{P}(\mathcal{R})\) in two lines (degenerate plane), that is there exists a degenerate quadratic form \(Q\), say \(Q(x)=x_{2}x_{3}\). In the example the two lines are \(\langle e_{1},e_{2}\rangle\) and \(\langle e_{1},e_{3}\rangle\). Note that for a passant \(L\), all \(q+1\) planes through \(L\) are conic planes, while for a line \(L\) of \(\mathcal{R}\cup\mathcal{R}^{opp}\) all planes through \(L\) are degenerate planes. The group \(\operatorname{Stab}(\mathcal{R})\) has precisely five orbits on lines and two orbits on planes as given above. All \(q+1\) planes through a passant are conic planes, all \(q+1\) planes through a secant are degenerate planes. ### The Case \(m=2\) From here on we assume that \(m=2\) and that \(\mathcal{S}=\{S_{1},\ldots,S_{k}\}\) is an MSR subspace family. Note that Lemma 3.4 implies that \(|\mathcal{R}\cap\mathcal{S}|\leq 3\). **Lemma 3.6**.: _Suppose that \(k\geq 5\). If \(S_{1},S_{2},S_{3}\) are pairwise disjoint and \(S_{4}\) meets \(\mathcal{R}=\mathcal{R}(S_{1},S_{2},S_{3})\) in at least a point, then \(|\mathcal{S}|\leq 6\) with equality if and only if \(S_{4},S_{5},S_{6}\in\mathcal{R}^{opp}\)._ Proof.: First we assume that \(S_{5}\) and \(S_{6}\) (if it exists) also meet \(\mathcal{P}(\mathcal{R})\) in a point. Then \(S_{i}\) meets \(\mathcal{P}(\mathcal{R})\) in a point on a line \(L_{i}\in\mathcal{R}^{opp}\) for \(i\in\{4,5,6\}\). Note that \(L_{i}\) is not necessarily unique as \(S_{i}\) can meet \(\mathcal{R}\) in two points if it is a secant. Also note that in this case, that is \(S_{i}\) meets \(\mathcal{P}(\mathcal{R})\) in points \(Q_{i}\) and \(Q_{i}^{\prime}\) on lines \(L_{i}\) and \(L_{i}^{\prime}\), then there exists no \(g\in\operatorname{Stab}(\mathcal{R})\) such that \(Q_{i}^{g}=Q_{i}^{\prime}\) (as \(Q_{i}\) and \(Q_{i}^{\prime}\) do not lie on the same line of \(\mathcal{R}\) and \(\operatorname{Stab}(\mathcal{R})\) fixes each line of \(\mathcal{R}\) pointwise). Hence, \(\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{i}\})\) contains \(\operatorname{Stab}(\mathcal{R}\cup\{L_{i}\})\). By Lemma 3.4, \(\operatorname{Stab}(\mathcal{R}\cup\{L_{i}\})\) fixes the points of \(L_{i}\) for \(i\in\{4,5\}\). More explicitly, without loss of generality \(L_{4}=\langle e_{1},e_{2}\rangle\) and \(L_{5}=\langle e_{3},e_{4}\rangle\) (as \(\operatorname{Stab}(\mathcal{R})\) acts transitively on triples of pairwise distinct elements of \(\mathcal{R}^{opp}\)). Then \[\operatorname{Stab}(\mathcal{R}\cup\{L_{4}\}) =\left\{\begin{pmatrix}aI&bI\\ 0&dI\end{pmatrix}:\begin{pmatrix}a&b\\ 0&d\end{pmatrix}\in\operatorname{GL}(2,q)\right\}/\mathbb{F}_{q}^{*},\] \[\operatorname{Stab}(\mathcal{R}\cup\{L_{5}\}) =\left\{\begin{pmatrix}aI&0\\ cI&dI\end{pmatrix}:\begin{pmatrix}a&0\\ c&d\end{pmatrix}\in\operatorname{GL}(2,q)\right\}/\mathbb{F}_{q}^{*}.\] In particular, this implies that \(L_{4}\neq L_{5}\) as otherwise \((S_{5}\cap L_{4})^{g_{5}}=S_{5}\cap L_{4}\). Suppose that \(S_{4}\neq L_{4}\). Put \(H=\langle L_{4},S_{4}\rangle\). 
Then \(U:=\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{4},S_{5}\})\) fixes each point of \(L_{4}\), the line \(S_{4}\), and \(H\cap L_{5}\). Hence, \(U\) fixes a frame \(P_{1}:=L_{4}\cap S_{4}\), \(P_{2}\subseteq L_{4}\setminus\{P_{1}\}\), \(P_{3}:=L_{5}\cap H\), \(P_{4}:=\langle P_{2}^{\prime},P_{3}\rangle\cap S_{4}\), where \(P_{2}^{\prime}\subseteq L_{4}\setminus\{P_{1},P_{2}\}\). Hence, \(U\) fixes \(H\) pointwise. By Lemma 3.3, \(|\mathcal{S}|\leq 5\). As we only assumed that \(S_{4}\neq L_{4}\), that is \(S_{4}\notin\mathcal{R}^{opp}\), we have \(|\mathcal{S}|\leq 5\) as long as \(S_{i}\neq L_{i}\) for some \(i\in\{4,5,6\}\). Otherwise, \(S_{4},S_{5},S_{6}\in\mathcal{R}^{opp}\) and, by Lemma 3.4, \(\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{4},S_{5},S_{6}\})=\{id\}\). By Lemma 3.1, we are done. More explicitly, if \(L_{4}\) and \(L_{5}\) are chosen as before, then \(U\) is contained in \[\operatorname{Stab}(\mathcal{R}\cup\{L_{4},L_{5}\})=\left\{\begin{pmatrix}aI&0\\ 0&dI\end{pmatrix}:\begin{pmatrix}a&0\\ 0&d\end{pmatrix}\in\operatorname{GL}(2,q)\right\}/\mathbb{F}_{q}^{*}.\] Hence, the claim in the preceding paragraph is also easily verified with an explicit calculation. Lastly, we assume that \(S_{5}\) is disjoint from \(\mathcal{P}(\mathcal{R})\). As before, \(S_{4}\) meets \(\mathcal{P}(\mathcal{R})\) in a point on a line \(L_{4}\in\mathcal{R}^{opp}\) which \(\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{4}\})\) fixes pointwise. Pick a plane \(H\) through \(S_{5}\) which does not contain \(L_{4}\cap S_{i}\) for any \(i\in\{1,2,3\}\). As there are \(q+1\) planes through \(S_{5}\) and \(q\geq 3\), this is possible. Then \(\mathcal{P}(\mathcal{R})\) intersects \(H\) in a conic \(\mathcal{C}\). As the point \(L_{4}\cap H\) is in \(\mathcal{C}\), \(U:=\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{4},S_{5}\})\) fixes \(H\) and therefore the frame \(S_{1}\cap H\), \(S_{2}\cap H\), \(S_{3}\cap H\), \(L_{4}\cap H\) of \(H\). By Lemma 3.3, \(|\mathcal{S}|\leq 5\). In Example 2.1.2, the lines \(S_{1},S_{2},S_{3}\) lie in the regulus \(\mathcal{R}=\{\langle e_{1}+\beta e_{2},e_{3}+\beta e_{4}\rangle:\beta\in \mathbb{F}_{q}\}\cup\{\langle e_{2},e_{4}\rangle\}\), and the lines \(S_{4},S_{5},S_{6}\) lie in the opposite regulus \(\mathcal{R}^{opp}=\{\langle e_{1}+\beta e_{3},e_{2}+\beta e_{4}\rangle:\beta\in \mathbb{F}_{q}\}\cup\{\langle e_{3},e_{4}\rangle\}\). Note that under the natural action of \(\operatorname{PGL}(2,q)\times\operatorname{PGL}(2,q)\) on \(\mathcal{R}\) all choices for a set of six lines with three in \(\mathcal{R}\) and three in \(\mathcal{R}^{opp}\) are isomorphic. **Lemma 3.7**.: _If all elements of \(\mathcal{S}\) are pairwise disjoint, then \(|\mathcal{S}|\leq 5\)._ Proof.: Put \(\mathcal{R}=\mathcal{R}(S_{1},S_{2},S_{3})\). By Lemma 3.6, we can assume that \(S_{4},S_{5},S_{6}\) are each disjoint from \(\mathcal{R}\). We claim that \(U:=\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{4}\})\) acts regularly on the \(q+1\) planes through \(S_{4}\): Indeed, take distinct planes \(H,H^{\prime}\) through \(S_{4}\). Recall that \(H\) and \(H^{\prime}\) are both conic planes. Put \(P_{1}=S_{1}\cap H\), \(P_{2}=S_{2}\cap H\), \(P_{3}=\langle P_{1},S_{3}\cap H\rangle\), \(P_{4}=\langle P_{2},S_{3}\cap H\rangle\), and \(Q_{1}=S_{1}\cap H^{\prime}\), \(Q_{2}=S_{2}\cap H^{\prime}\), \(Q_{3}=\langle Q_{1},S_{3}\cap H^{\prime}\rangle\), \(Q_{4}=\langle Q_{2},S_{3}\cap H^{\prime}\rangle\). 
Then we find precisely one map \(g\) in \(U\) such that \(P_{i}^{g}=Q_{i}\) for all \(i\in\{1,2,3,4\}\) as \(\operatorname{PGL}(3,q)\) acts regularly on frames of \(\mathbb{F}_{q}^{3}\). Analogously, \(\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{5}\})\) has the same property. If now for \(U^{\prime}:=\operatorname{Stab}(\{S_{1},S_{2},S_{3},S_{4},S_{5}\})\) we have \(U=U^{\prime}\), then \(S_{5}^{g}=S_{5}\) for all \(g\in U\) and \(\{S_{1},S_{2},S_{3},S_{4},S_{5}\}\) cannot be an MSR subspace family. Hence, any \(g\in U^{\prime}\) fixes some plane \(H\) through \(S_{4}\). Hence, \((H\cap S_{i})^{g}=H\cap S_{i}\) for all \(i\in\{1,\dots,4\}\). Hence, \(U^{\prime}\) fixes \(H\) pointwise. By Lemma 3.3, \(|\mathcal{S}|\leq 5\). **Lemma 3.8**.: _If \(\mathcal{S}\) does not contain a triple of pairwise disjoint subspaces, then \(|\mathcal{S}|\leq 4\)._ Proof.: Observation: no three elements of \(\mathcal{S}\), say \(S_{1},S_{2},S_{3}\), can meet in the same point \(P\), as \(U:=\operatorname{Stab}(\{S_{1},S_{2}\})\) fixes \(P\), so also \((S_{3}\cap P)^{g}=P\) for all \(g\in U\). Suppose that we find a quadrangle \(S_{1},S_{2},S_{3},S_{4}\) in \(\mathcal{S}\), that is without loss of generality \(S_{1}\) and \(S_{3}\), respectively, \(S_{2}\) and \(S_{4}\) are pairwise disjoint and \(S_{i}\cap S_{i+1}\) are points (with \(S_{4+1}\) read as \(S_{1}\)). If \(|\mathcal{S}|\geq 5\), then, as there are no three pairwise disjoint subspaces in \(\mathcal{S}\), either \(S_{5}=\langle S_{1}\cap S_{2},S_{3}\cap S_{4}\rangle\) or \(S_{5}=\langle S_{1}\cap S_{4},S_{3}\cap S_{2}\rangle\). But this contradicts the observation. If we do not find a quadrangle, and \(|\mathcal{S}|\geq 4\), then we find without loss of generality that \(S_{1},S_{2},S_{3}\) are coplanar, namely in \(\langle S_{1},S_{2}\rangle\). But then \(S_{3}^{g}\subseteq\langle S_{1},S_{2}\rangle\) for all \(g\in\operatorname{Stab}(\{S_{1},S_{2}\})\), so \(S_{3}\cap S_{3}^{g}\) is nontrivial. Hence, \(|\mathcal{S}|\leq 3\). Lemma 3.6, Lemma 3.7, and Lemma 3.8 together show Theorem 1.1. ## 4 Future Work We tried to generalize the techniques here, for instance to \((8,2)\)-MSR subspace families or \((6,3)\)-MSR subspace families. In both cases the number of necessary case distinctions increases significantly, but the connection to Segre varieties still exists. For instance, if we have three disjoint \(m\)-spaces in a \((2m,2)\)-MSR subspace family, then a generalization of Lemma 3.4 still applies. Therefore it might be possible to generalize our techniques using the structure of Segre varieties in a more refined way. AcknowledgmentsThe author thanks Sascha Kurz and Itzhak Tamo for their helpful comments on drafts of this document and telling him about the problem. The author is supported by a postdoctoral fellowship of the Research Foundation - Flanders (FWO).
2304.14060
Unsteady granular chute flows at high inertial numbers
We study the time-dependent flow behavior of gravity-driven free surface granular flows using the discrete element method and continuum modeling. Discrete element method (DEM) simulations of slightly polydisperse disks flowing over a periodic chute with a bumpy base are performed. A simple numerical solution based on a continuum approach with the inertial number based $\mu-I$ rheology has been proposed to predict the flow dynamics. The results of the continuum model are compared with the DEM simulation results for a wide range of chute inclinations. Solutions for the constitutive model described by the popular JFP model as well as the recently proposed modified rheological model using a non-monotonic variation of $\mu-I$ are obtained. Our results demonstrate that the popular JFP model reliably predicts the flow at low to moderate inclination angles (i.e. for $I \lesssim 0.5$). However, it fails to predict the flow properties at high inclinations. The modified rheological model, on the other hand, is very well able to predict the time-averaged flow properties for all the inclination angles considered in this study. Accounting for the presence of the slip velocity, layer dilation, and stress anisotropy are found to be crucial for accurate predictions of transient flows at high inertial numbers (i.e. for $I > 1$).
Satyabrata Patro, Sumit Kumar, Anubhav Majumdar, Anurag Tripathi
2023-04-27T09:43:43Z
http://arxiv.org/abs/2304.14060v1
# Unsteady granular chute flows at high inertial numbers ###### Abstract We study the time-dependent flow behavior of gravity-driven free surface granular flows using the discrete element method and continuum modeling. Discrete element method (DEM) simulations of slightly polydisperse disks flowing over a periodic chute with a bumpy base are performed. A simple numerical solution based on a continuum approach with the inertial number based \(\mu-I\) rheology has been proposed to predict the flow dynamics. The results of the continuum model are compared with the DEM simulation results for a wide range of chute inclinations. Solutions for the constitutive model described by the popular JFP model as well as the recently proposed modified rheological model using a non-monotonic variation of \(\mu-I\) are obtained. Our results demonstrate that the popular JFP model reliably predicts the flow at low to moderate inclination angles (i.e. for \(I\lesssim 0.5\)). However, it fails to predict the flow properties at high inclinations. The modified rheological model, on the other hand, is very well able to predict the time-averaged flow properties for all the inclination angles considered in this study. Accounting for the presence of the slip velocity, layer dilation, and stress anisotropy are found to be crucial for accurate predictions of transient flows at high inertial numbers (i.e. for \(I\gtrsim 1\)). ## I Introduction The rheology of granular materials has been an active research topic for the last few decades due to its wide occurrence in geophysical as well as industrial situations. A number of experimental [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] as well as simulation studies using discrete element method (DEM) [2; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31] have been utilized to explore the rheology of granular materials. A detailed review of granular flow rheology in different configurations can be found in [32; 33; 34]. These studies have shown that the granular flow between the two limiting cases of quasistatic, slow flows, and rapid, dilute flows is controlled by a non-dimensional inertial number \(I\) which depends on the local shear rate and pressure in addition to the particle size and density. This intermediate dense flow regime has been studied in a variety of configurations. The inertial number-based rheological description for this dense flow regime has been confirmed in chute flows [2; 17; 22; 30; 35], plane shear flows [2; 18; 25; 27], annular shear cell [13], granular collapse [8; 9; 10; 36], planar silos [28; 29], heap flows [3] and rotating cylinders [16; 37]. Both experiments as well as simulations confirm that the ratio of the shear stress to the pressure depends on the inertial number in this regime. The most popular model for the inertial number-based rheology is the JFP model [35]. According to this model, the effective friction coefficient \(\mu\) varies in a nonlinear fashion with \(I\). Starting from a minimum effective friction coefficient \(\mu_{s}\) at \(I\sim 0\), \(\mu\) increases with \(I\). The inertial number-based JFP model [35] has been able to capture the flow behavior of granular materials in experiments [2; 3; 38] as well as simulations [17; 18; 39] in the dense flow regime for inertial number \(I\leq 0.6\). In absence of simulation and experimental data at large \(I\) values, the model assumes that the effective friction coefficient \(\mu\) at large inertial numbers becomes constant. 
In addition, the solids fraction also decreases with the inertial number. Accounting for the solids fraction variation in the continuum simulations requires the incorporation of the compressibility effects. In order to use the commonly employed approach for incompressible fluid flows, most studies ignore the density variation in the continuum simulation of granular flows. The incompressible JFP model has been used to predict the steady-state flow properties in chute flows [4; 35; 36; 40; 41; 42; 43; 44], vertical chute [44], steady plane shear [43] as well as flow through an annular shear cell [45; 40]. The incompressible \(\mu-I\) model or its variants have also been used to predict the transient flow of granular materials flowing over a chute [41; 43; 46; 47; 48; 49; 50], material discharging from granular silos [40; 48; 51; 52], flow during column collapse [36; 43; 48; 49; 53], in addition to rotating drum [46], heap flows [44]. The time-dependent response of granular media in bounded heap flow has been studied using DEM simulations and experiments [54]. Efforts have also been made to capture the experimentally observed transient behavior of granular slides in air as well as water [55]. The depth-averaged model approach has also been utilized in the past to predict the flow behavior of granular materials over a chute [56; 57; 58; 59]. Granular flows also exhibit some non-local effects during slow flows such as shear banding, weak dependence of stress on the magnitude of shear rate, and dilation effects [60]. Some non-local continuum models have also been proposed to understand the flow behavior of granular materials [61; 62; 45]. Recently Debnath _et al._[63] have solved and compared different compressible \(\mu-I\)[64; 65] as well as non-local rheological models [45; 60] with the DEM simulations of flow in a vertical chute. These models are found to have limited success in predicting the flow properties. A review of the non-local modeling for granular flows is presented by [66]. Concerns about studies dealing with time-dependent continuum simulations have been raised in the past. Barker _et al._[47] have shown that the incompressible JFP model is well-posed only for a narrow range of inertial numbers; for very low and relatively high values of inertial numbers, it is found to be ill-posed. A wider range of well-posed regions can be obtained by partial regularization of the \(\mu-I\) model by deriving a new functional form of the \(\mu-I\) model so that the granular material no longer has yield stress [41]. The compressible rheology has been implemented to understand the behavior of subaqueous granular collapse [53]. Few efforts have also been made to introduce bulk compressibility effects [64; 65; 67] to regularize the \(\mu-I\) model. Recently Barker _et al._[46] have implemented the regularized well-posed \(\mu-I\) rheology [41] intercoupled with segregation in a continuum framework to capture the qualitative behavior of flow evolution and segregation of multi-component mixtures in an inclined plane and rotating tumbler. All of the aforementioned studies using inertial number rheology employ the assumption that the effective friction coefficient becomes constant at high inertial numbers. Mandal and Khakhar [25; 27], however, in their simulation study of the flow of dumbbells in plane shear flows showed that the saturating behavior of the effective friction coefficient \(\mu\) at higher inertial number \(I\) used in such studies is not correct. 
Instead, \(\mu\) is found to decrease with \(I\) after achieving a maximum. However, the authors were unable to observe similar behavior in the case of chute flows. A recent study by Patro _et al._[30] has shown that a non-monotonic variation of the effective friction coefficient \(\mu\) with the inertial number \(I\) is observed in the case of chute flows as well. Figure 1(a) shows the variation of the effective friction coefficient with the inertial number (solid line) as suggested by the modified rheology of Patro _et al._[30]. The broken line shows the variation according to the JFP model. Figure 1(b) shows the variation of the solids fraction \(\phi\) with the inertial number \(I\) for the two models. Figure 1(c) shows the variation of the ratio of the first normal stress difference to the pressure, \(N_{1}/P\), with the inertial number \(I\). Note that the JFP model does not account for the presence of a normal stress difference and hence \(N_{1}=0\) at all inertial numbers. Non-zero normal stress differences in granular flows, however, have been observed by other researchers [68; 69; 70; 71; 72]. By accounting for the normal stress difference law in the modified rheological model, Patro _et al._[30] also confirmed that two different flow states at the same inclination angle of the chute are not possible despite the non-monotonic variation of \(\mu\) with \(I\). The authors also showed that the modified rheological description coupled with the momentum balance equations is able to predict various flow properties of interest at steady state for chute flow of disks at different inclination angles. These predictions have been found to be in good agreement with DEM simulation results for periodic chute flows even at high inertial numbers.

Figure 1: Variation of the (a) effective friction coefficient \(\mu\) with the inertial number \(I\), (b) solids fraction \(\phi\) with \(I\), and (c) normal stress difference to pressure ratio \(N_{1}/P\) with \(I\) using the model parameters from Patro _et al._[30]. Solid lines represent the variation according to the modified rheology. The dashed line represents the fitted line using the JFP model.

However, predictions of time-dependent properties for high inertial number granular flows have not been compared with DEM simulations to the best of our knowledge. In this study, we focus on unsteady granular flows down an inclined surface in a periodic chute flow configuration spanning a large range of inclination angles to cover a wide range of inertial numbers. Parez _et al._[50] have obtained analytical expressions by solving the momentum balance equation coupled with a linear \(\mu-I\) relation for such a system. Extending this analytical approach to the non-linear \(\mu-I\) rheological model is much more complex and mathematically challenging. Hence, we solve the resulting equation numerically to predict the time-dependent flow properties. The predictions obtained from the numerical solution are compared with the DEM simulation results for the first time for inertial numbers close to or greater than unity. Our results show that using the JFP model for high inertial number flows leads to significant overprediction of the average velocity along with underprediction of the solids fraction. The organization of the paper is as follows. The DEM simulation methodology is briefly mentioned in Section II. The theoretical formulation as well as the numerical technique used to solve the resulting equation is described in Section III. 
Results obtained from the numerical solution of the momentum balance equation are compared with the DEM simulations in Section IV. Discussion about some important observations is reported in Section V. Summary and future work are presented in Section VI.

## II Simulation Methodology

The discrete element method (DEM) technique is used to simulate slightly poly-disperse (\(\pm 5\%\) polydispersity), inelastic, frictional disks flowing over a rough and bumpy inclined surface. A schematic of the simulation setup is shown in Fig. 2. The length of the simulation box is \(40d\) where \(d\) is the mean diameter of the discs. To mimic an infinite length of the chute, a periodic boundary condition is imposed in the flowing (\(x\)) direction. The base of the simulation domain consists of static bumpy particles of size \(2d\) to reduce the slip at the base. The contact force between the discs is modeled using the linear spring and dashpot model as in the L2 model of Silbert _et al._[17]. The coefficient of static friction \(\mu\) is chosen as \(0.5\) and two different values of the normal restitution coefficient (\(e_{n}=0.5\) & \(0.1\)) are considered. The initial arrangement of the particles is done on a square lattice with a finite spacing between the surface of particles and they are given small random initial velocities. In order to simulate the flow of particles from rest, the particles are allowed to settle under the influence of gravity until the average kinetic energy of the particles in the layer becomes less than \(10^{-6}mgd\). The height of the settled layer of \(N=2000\) particles following this protocol is \(h\sim 50d\). At time \(t=0\), the direction of the gravity is changed to the desired inclination angle \(\theta\). The flow is allowed to evolve until the average kinetic energy of the system of particles becomes constant, indicating that the system has achieved a steady state.

Figure 2: Snapshot of simulation of a granular layer flowing under the influence of gravity at an inclination \(\theta=32^{\circ}\) from the horizontal at any instant. Black-filled circles represent the static particles that form the bumpy chute base and grey-filled circles represent the flowing discs.

To compute the flow properties, the simulation domain is divided into a number of strips in the \(y\) direction. The thickness of each strip is equal to the mean particle diameter \(d\) so that the reported properties are averaged over the strip area \(A=L_{x}\times d\). The properties reported at any instant \(t\) are averaged over 60 snapshots having an interval of 0.1 time unit between successive snapshots and thus represent the average property over the last 6 time units. The partial contribution of the particle's area in each strip is accounted for while calculating various flow profiles such as velocity, solids fraction, stresses, etc. More details are available in Patro _et al._[30].

## III Theory

Consider a fully developed granular flow over a surface inclined at an angle \(\theta\) under the influence of gravity. 
Assuming a unidirectional flow in the \(x\) direction, the momentum balance equation in \(x\) and \(y\) directions simplifies to \[\rho_{b}\frac{\partial v_{x}}{\partial t}=-\frac{\partial\tau_{yx}}{\partial y }+\rho_{b}g\sin\theta, \tag{1}\] \[0=-\frac{\partial\sigma_{yy}}{\partial y}-\rho_{b}g{\rm cos}\theta, \tag{2}\] where \(v_{x}\) is the velocity along the flow direction, \(\theta\) is the inclination angle, \(\tau_{yx}\) is the shear stress, \(g\) is the gravitational acceleration, and \(\rho_{b}=\phi(y)\rho_{p}\) is the bulk density of the medium with \(\rho_{p}\) being the density of the particle and \(\phi(y)\) is the local solids fraction at any \(y\). Integrating Equation (2) with \(y\), we get \[\sigma_{yy}=\rho_{p}g{\rm cos}\,\theta\int_{y}^{h}\phi(y)dy. \tag{3}\] Assuming the variation of \(\phi(y)\) along \(y\) at any instant to be small, we approximate the integral in Eq. (3) as \(\int_{y}^{h}\phi(y)dy=\phi_{avg}(h-y)\), so that the expression for \(\sigma_{yy}\) simplifies to \[\sigma_{yy}=\rho_{p}g{\rm cos}\,\theta\phi_{avg}(h-y). \tag{4}\] According to the \(\mu-I\) rheology, the flow behavior depends upon the non-dimensional inertial number \(I\) defined as \[I=\frac{|\dot{\gamma}|d}{\sqrt{P/\rho_{p}}}. \tag{5}\] For the case of unidirectional chute flow, the second invariant of the strain rate tensor \(|\dot{\gamma}|\) equals the shear rate \(dv_{x}/dy\), i.e., \(|\dot{\gamma}|=dv_{x}/dy\). The effective friction coefficient \(\mu(I)\) is defined as the ratio of the second invariant of the stress tensor \(|\tau_{yx}|\) to the pressure \(P\), i.e., \[\mu(I)=\frac{|\tau_{yx}|}{P}. \tag{6}\] The JFP model uses the following form to relate the effective friction coefficient \(\mu(I)\) with \(I\) \[\mu(I)=\mu^{{}^{\prime}}_{s}+\frac{\mu^{{}^{\prime}}_{m}-\mu^{{}^{\prime}}_{s }}{1+I^{{}^{\prime}}_{0}/I}, \tag{7}\] with \(\mu^{{}^{\prime}}_{s}\), \(\mu^{{}^{\prime}}_{m}\) and \(I^{{}^{\prime}}_{0}\) being the model parameters. Our recent study [30] shows that a more appropriate variation of \(\mu(I)\) is given by \[\mu(I)=\mu_{s}+\frac{c_{1}-c_{2}I}{1+I_{0}/I}. \tag{8}\] The proposed rheology is complemented with an empirical relation to describe the variation of solid fraction (\(\phi\)) with inertial number \(I\) as \[\phi=\phi_{max}-aI^{\alpha} \tag{9}\] where \(\phi_{max}\), \(a\) and \(\alpha\) are the model parameters [30]. In addition, we find that the role of normal stress difference becomes important at high inclinations. This presence of the normal stress difference is accounted by a normal stress difference law in the rheology by proposing the ratio of the first normal difference \(N_{1}=\sigma_{xx}-\sigma_{yy}\) to the pressure \(P=(\sigma_{xx}+\sigma_{yy})/2\) as a function of inertial number \(I\), i.e., \[\frac{N_{1}}{P}=f(I). \tag{10}\] Using Eqs. (4), (6) and (10) we get the expression for pressure as \[P=\frac{2\phi_{avg}\rho_{p}g\cos\theta(h-y)}{2-f(I)}. \tag{11}\] As expected, in absence of normal stress difference, the expression for pressure reduces to a hydrostatic head. Recent study by Patro _et al._[30] shows that \(f(I)\) remains constant up to \(I\leq I^{*}\) and varies quadratically with inertial number \(I\) for \(I>I^{*}(=0.1)\), i.e., \[f(I)=\left\{\begin{array}{ll}k&\mbox{for }I\leq 0.1,\\ AI^{2}+BI+C&\mbox{for }I>0.1.\end{array}\right.\] (12a) Using the expression of pressure \[P\] from Eq. ( 11 ) in Eq. 
( 5 ), we get, \[I=\frac{|\dot{\gamma}|d\sqrt{2-f(I)}}{\sqrt{2g\cos\theta(h-y)}\phi_{avg}}.\] (13) Equation ( 13 ) can be rearranged to get the expression for \[I\] as follows: \[I=\left\{\begin{array}{ll}\frac{-B_{0}+\sqrt{B_{0}^{2}-4A_{0}C_{0}}}{2A_{0} }&\mbox{for }I>0.1\\ \frac{|\dot{\gamma}|d\sqrt{2-k}}{\sqrt{2g\cos\theta(h-y)}\phi_{avg}}&\mbox{for }I \leq 0.1\end{array}\right.\] (14a) with \[A_{0}=2g\cos\theta(h-y)\phi_{avg}+A\dot{\gamma}{}^{2}d^{2}\], \[B_{0}=B\dot{|\dot{\gamma}|}^{2}d^{2}\] and \[C_{0}=(C-2)\dot{|\dot{\gamma}|}^{2}d^{2}\]. The calculated inertial number is used in the empirical form relating the effective friction coefficient \[\mu(I)\] with the inertial number \[I\]. Eqs. ( 7 ) and ( 8 ) show two such empirical forms that can be utilized to obtain the time-dependent properties of the flow. Using Eqs. ( 6 ) and ( 11 ), we get \[\tau_{yx}=\mu(I)\left(\frac{2\phi_{avg}\rho_{p}g\cos\theta(h-y)}{2-f(I)}\right).\] (15) Differentiating Eq. ( 15 ) with respect to \[y\] and substituting \[\frac{\partial\tau_{yx}}{\partial y}\] in Eq. ( 1 ), we get \[\phi(I)\frac{\partial v_{x}}{\partial t}=\phi(I)g\sin\theta-\frac{\partial}{ \partial y}\left[\mu(I)\phi_{avg}\left(\frac{2\rho_{p}g\cos\theta(h-y)}{2-f(I )}\right)\right]. \tag{16}\] In writing Eq. ( 16 ), we account for the variation of \(\phi\) with \(y\) in the \(x\) momentum balance equation. However, in the calculation of \(\sigma_{yy}\), the variation of \(\phi\) with \(y\) is ignored and \(\phi(y)\) is replaced by the average value \(\phi_{avg}\) (Eq. (4)). We also solve Eq. ( 16 ) by accounting for the variation of \(\phi\) along \(y\) in the calculation of \(\sigma_{yy}\) using Eq. (3 ). We find that the results obtained are not altered significantly due to this more refined calculation of the \(\sigma_{yy}\). Hence we use Eq. (4) for the calculation of \(\sigma_{yy}\) in all the results presented in this work. In order to solve Equation ( 16 ), we use the PDEPE solver in MATLAB along with the following initial and boundary conditions: \[IC:v_{x}(y,0) =0, \tag{17a}\] \[BC1:\tau_{yx}(h,t) =0,\] (17b) \[BC2:v_{x}(0,t) =v_{slip}(t). \tag{17c}\] The initial condition (Eq. (17a)) represents that the velocity across the layer at \(t=0\) is zero and mimics the flow starting from rest. The first boundary condition (Eq. (17b)) corresponds to zero shear stress at the free surface \(y=h\). The second boundary condition (Eq. (17c)) corresponds to a known slip velocity at the base. While a no-slip boundary condition seems to be appropriate for low inclinations, this boundary condition is needed to account for the sufficient slip observed at high inclinations [30]. The general form of the partial differential equation used by \(PDEPE\) solver in MATLAB is given as follows: \[c\left(y,t,u,\frac{\partial u}{\partial y}\right)\frac{\partial u}{\partial t }=y^{-m}\frac{\partial}{\partial y}\left(y^{m}F\left(y,t,u,\frac{\partial u}{ \partial y}\right)\right)+s\left(y,t,u,\frac{\partial u}{\partial y}\right). \tag{18}\] Equation (16) can be written in the above general form using with \(u=v_{x}\) and \(c=\phi(I)\), \(m=0\), \(s=\phi(I)g\sin\theta\) and \(F=\mu(I)\phi(I)\left(\frac{2\rho_{p}g\cos\theta(h-y)}{2-f(I)}\right)\). The PDEPE solver represents the derivatives numerically by discretizing the domain into finite spatial and temporal grids. The number of grids along the \(y\) direction is chosen to be \(N=50\) and the time step \(\Delta t\) is chosen to be \(0.1\). 
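For readers who prefer to prototype the solution outside MATLAB, a minimal method-of-lines sketch of Eq. (16) is given below. It is only an illustration under simplifying assumptions: the JFP form of \(\mu(I)\) with \(f(I)=0\) (no normal stress difference), a constant solids fraction and layer height, a no-slip base, and placeholder parameter values rather than the fitted constants reported later. The shear-stress term is written so that the basal friction opposes gravity, with \(\tau=\mu(I)P\) vanishing at the free surface as required by the boundary condition (17b).

```python
import numpy as np

# Explicit method-of-lines sketch of Eq. (16) in the dimensionless units of the
# paper (lengths in d, g = 1, rho_p = 1).  All parameter values are illustrative
# placeholders, not the fitted values reported in the tables.
g, d, rho_p = 1.0, 1.0, 1.0
theta = np.radians(24.0)                 # chute inclination
h, phi = 50.0, 0.80                      # layer thickness and solids fraction (held fixed here)
mu_s, mu_m, I0 = 0.38, 0.64, 0.30        # placeholder JFP parameters (Eq. 7)
c1, c2 = 0.30, 0.05                      # placeholder constants for the modified law (Eq. 8)

def mu_jfp(I):
    return mu_s + (mu_m - mu_s) / (1.0 + I0 / np.maximum(I, 1e-12))

def mu_modified(I):
    # Non-monotonic form of Eq. (8); swap into shear_stress() to try it out.
    return mu_s + (c1 - c2 * I) / (1.0 + I0 / np.maximum(I, 1e-12))

N = 50
y = np.linspace(0.0, h, N)
dy = y[1] - y[0]
v = np.zeros(N)                          # initial condition (17a): layer starts from rest

def shear_stress(v):
    gamma = np.gradient(v, dy)                                         # shear rate dv_x/dy
    P = np.maximum(phi * rho_p * g * np.cos(theta) * (h - y), 1e-12)   # Eq. (4)
    I = np.abs(gamma) * d / np.sqrt(P / rho_p)                         # Eq. (5)
    return mu_jfp(I) * P                                               # |tau_yx| = mu(I) P, Eq. (6)

dt, t_end = 0.01, 500.0
for _ in range(int(t_end / dt)):
    tau = shear_stress(v)
    # Friction opposes gravity: tau is largest at the base and vanishes at the
    # free surface (BC 17b), so its gradient along y is negative.
    dvdt = g * np.sin(theta) + np.gradient(tau, dy) / (phi * rho_p)
    v += dt * dvdt
    v[0] = 0.0                           # no-slip at the bumpy base (v_slip = 0 assumed)

print(f"free-surface velocity at t = {t_end:g}: {v[-1]:.2f}")
```

Replacing `mu_jfp` by `mu_modified` inside `shear_stress()` switches this sketch to the non-monotonic law of Eq. (8).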
Equation (9), which is referred to as the dilatancy law, dictates that the solids fraction \(\phi\) of the medium decreases with an increase in the inertial number. With increasing velocity (and hence inertial number), the flowing granular layer dilates and leads to an increase in the layer thickness \(h\) with time \(t\). Our DEM simulation results show that this increase in the layer height becomes very significant at high inertial numbers. This significant dilation of the granular layer indicates that the compressibility effects become important at high inertial numbers and need to be accounted for by solving the continuity equation as well. Given our assumption of the unidirectional flow, the \(y\)-component of the velocity is ignored in our theoretical formulation. Due to this reason, accounting for the variation of \(\phi\) with \(t\) is not possible using the equation of continuity. We circumvent this problem by using the integral mass balance equation. Since our simulation method utilizes a periodic simulation box with a fixed mass of particles, we account for the height of the layer at any instant by equating the mass at time \(t\) to the mass in the beginning of the simulation. The height of the layer is \(h_{t=0}=h_{min}\) at \(t=0\) as it starts from the rest with the maximum solids fraction \(\phi_{t=0}=\phi_{max}\) across the entire layer. The mass per unit width of the flowing layer of the disks at any time instant \(t\) is equal to \(\int_{0}^{h(t)}\rho_{b}(t)dy\). Using \(\rho_{b}(t)=\phi(t)\rho_{p}\) and equating the mass at any instant to the initial mass of the layer, we get \(\int_{0}^{h(t)}\phi(t)dy=h_{min}\phi_{max}\). Since \(\int_{0}^{h(t)}\phi(t)dy=h(t)\phi_{avg}(t)\), the height at any instant can be obtained using \(h(t)=\frac{h_{min}\phi_{max}}{\phi_{avg}(t)}\) where \(\phi_{avg}(t)\) the average solids fraction across the layer at any instant \(t\). The detailed steps for computing the time-dependent properties of the flowing layer are given in Algorithm 1. ## IV Results In this section, we present results for the time-dependent flow of disks over an inclined surface using DEM simulations and compare them with the theoretical predictions. The results are reported in dimensionless form for two different restitution coefficients \(e_{n}=0.5\) and \(e_{n}=0.1\). The theoretical predictions are obtained by solving the momentum balance equations along with the inertial number-based JFP model popularly used for describing the behavior of dense granular flows. Due to the limitations of the JFP model to capture the behavior at high inertial numbers, the recently proposed MK model along with a normal stress difference law has been used to predict the flow properties for dense as well as dilute granular flows. We first compare the DEM simulation results with the analytical predictions of Parez _et al._[50]. Since the analytical expressions derived by the authors assume a linear \(\mu-I\) relation, we compare their theory only for \(I\leq 0.35\). Next, we compare the DEM results with the predictions obtained using two different rheological models ### Analytical and numerical predictions for linear \(\mu-I\) model In this section, we compare the predictions of inertial number, velocity profile, and solids fraction using the analytical solution given by Parez _et al._[50]. The authors predicted the flow properties for unsteady flows flowing down an inclined plane using an analytical approach. 
They considered a linear form of the \(\mu-I\) and their results were in very excellent agreement with their DEM simulation results. Due to the weak dependence of solids fraction on the flow velocity reported in previous works [2; 17; 18; 32; 73], the authors assumed a constant solids fraction in their analytical approach to derive the expression for time-dependent flow properties. The authors obtained a series solution for the shear rate and the velocity profile. The solution can be very well approximated by a single-term solution and ignoring other terms of the series since they are much smaller compared to the first term. This analytical solution for the time-dependent velocity profile was derived using a linear relation between effective friction coefficient \(\mu(I)\) with the inertial number \(I\), i.e, \(\mu(I)=\tan\theta_{r}+bI\). The values of the model parameters \(\tan\theta_{r}\) and \(b\) are obtained by fitting a line to the \(\mu-I\) data obtained from DEM simulations as shown in Fig. 3(a). In order to ensure linear \(\mu-I\) relation, only data up to \(\theta\leq 28^{\circ}\) are considered in Fig. 3(a). The value of the slope and intercept of the fitted line are obtained to be \(b=0.76\) and \(\tan\theta_{r}=0.29\). The solid lines in Figs. 3(b)-3(d) show the flow properties such as the average velocity, the bulk inertial number and the bulk solids fraction predicted using the single-term analytical solution. We use the following expressions reported by Parez _et al._[50] for velocity \[v(y,t)=\frac{2}{3}\frac{\sqrt{\phi_{s}g\cos\theta}(\tan\theta-\tan\theta_{r}) }{bd}(h_{s}^{3/2}-y^{3/2})(1-e^{-t/T_{1}})+h.o.t. \tag{19}\] where \(T_{1}=0.5\sqrt{\phi_{s}}\frac{h_{s}^{3/2}}{bd\sqrt{g\cos\theta}}\). The inertial number is calculated using Eq. (13) where the shear rate is obtained by numerically differentiating the velocity predicted from the theory of Parez _et al._[50] and the pressure is assumed to be equal to hydrostatic pressure, following the assumptions of the theory. Note that Parez _et al._[50] assume that the height of the flowing layer \(h\) as well as solids fraction \(\phi\) remains constant and does not change with time. Due to this reason the time variation of the average solids fraction is not captured by their theory. We use the steady-state height \(h_{s}\) and solids fraction \(\phi_{s}\) in Eq. (19) to calculate the predictions shown in Figure 3. DEM simulation results are shown using symbols. The profiles predicted using their analytical solution are in reasonable agreement with the DEM data up to \(I=0.3\). At earlier times, the difference between theory and simulations is observable due to the difference in the \(\mu-I\) data and the linear fit used. Fitting a line only for data lying in the range of \(I<0.1\) improves the analytical predictions for \(\theta=20^{\circ}\). Similarly, a better match can be obtained for \(\theta=24^{\circ}\) by using a linear fit only up to \(I\sim 0.2\). At a higher inclination angle of \(\theta=28^{\circ}\), the analytical predictions for \(v_{avg}\) differ from simulation data not only for early times but also for later times. Increasing the inclination angle further leads to more deviation from the simulation data since the non-linear behavior of the data is not captured well using a linear fit for higher values of the inertial number observed at these high inclinations. 
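The single-term solution of Eq. (19) is straightforward to evaluate directly. The sketch below uses the fitted values \(b=0.76\) and \(\tan\theta_{r}=0.29\) quoted above; the remaining inputs in the example call are illustrative.

```python
import numpy as np

def parez_velocity(y, t, theta_deg, h_s, phi_s, d=1.0, g=1.0,
                   b=0.76, tan_theta_r=0.29):
    """Single-term solution of Eq. (19) for the velocity profile v(y, t).

    h_s and phi_s are the steady-state layer height and solids fraction; the
    higher-order terms of the series are neglected, as in the text.
    """
    theta = np.radians(theta_deg)
    T1 = 0.5 * np.sqrt(phi_s) * h_s**1.5 / (b * d * np.sqrt(g * np.cos(theta)))
    pref = (2.0 / 3.0) * np.sqrt(phi_s * g * np.cos(theta)) \
        * (np.tan(theta) - tan_theta_r) / (b * d)
    return pref * (h_s**1.5 - np.asarray(y, dtype=float)**1.5) * (1.0 - np.exp(-t / T1))

# Illustrative use in the dimensionless simulation units (lengths in grain
# diameters, g = 1): a 50d-thick layer inclined at 24 degrees.
y = np.linspace(0.0, 50.0, 11)
print(parez_velocity(y, t=100.0, theta_deg=24.0, h_s=50.0, phi_s=0.8))
```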
In absence of any theoretical results for non-linear \(\mu-I\) relation, we resort to the numerical solution of the momentum balance equations in the next sections. Before reporting the numerical results of momentum balance equations for the non-linear \(\mu-I\) rheology, we compare our numerical predictions for the linear \(\mu-I\) case with the analytical solutions of Parez _et al._[50]. Figure 4: a) Comparison of the numerical solution (shown using symbols), analytical one-term solution (dashed line), and analytical three-term solution (solid lines) at different times for \(\theta=24^{\circ}\). b) Comparison of the numerical solution (shown using symbols) and analytical one-term solution (solid lines) for different angles. Figure 3: (a) Variation of effective friction coefficient \(\mu\) with inertial number \(I\) for \(e_{n}=0.5\). Black circles represent the DEM data up to inclination \(\theta=28^{\circ}\). The red solid line represents the fitted line of the form \(\mu(I)=aI+b\). Variation of the (b) average velocity of the layer \(v_{avg}\), (c) average inertial number in the bulk \(I_{bulk}\), and (d) average solids fraction in the bulk \(\phi_{bulk}\) with time \(t\) for \(e_{n}=0.5\) at different inclinations. Symbols represent the DEM simulation data and the solid lines represent the analytical predictions by Parez _et al._[50]. We first benchmark our numerical solutions using the rheological parameters reported in Parez _et al._[50] and find a near-perfect match with their analytical solutions (not shown here). Next, we use the fitted parameter for linear \(\mu-I\) relation from Fig. 3(a) to obtain numerical solutions corresponding to the analytical solutions shown in Fig. 3(b). Figure 4(a) shows the results for velocity profile at different time instants for \(\theta=24^{\circ}\). Figure 4(b) shows the average velocity \(v_{avg}\) variation with time for three different inclinations. The dashed line corresponds to the single-term solution while the solid line accounts for the first three terms of the infinite series solution. The slight difference between the single-term solution and the three-term solution is due to the fact that the neglected terms of the series solution remain comparable to the first term of the series at smaller times and become negligible only at large times. In the next sections, we use this numerical method with non-linear \(\mu-I\) relation to obtain predictions for the flow properties with time. The excellent match between the numerical solutions (symbols) and analytical predictions (solid lines) confirms the accuracy of the numerical solutions. ### Predictions from JFP Model Figs. 5(a)-5(f) shows the variation of the velocity \(v_{x}\), shear rate \(\dot{\gamma}\), solids fraction \(\phi\), inertial number \(I\), shear stress \(\tau_{yx}\) and viscosity \(\eta\) with distance \(y\) from the base at different times for \(\theta=24^{\circ}\) and restitution coefficient \(e_{n}=0.5\). Black circles, red squares, and blue lower triangles represent the average DEM flow properties at \(t=100\), \(t=200\), and Figure 5: Variation of the (a) velocity \(v_{x}\), (b) shear rate \(\dot{\gamma}\), (c) inertial number \(I\), (d) solids fraction \(\phi\), (e) shear stress \(\tau_{yx}\) and (f) viscosity \(\eta\) with distance \(y\) from the base at different times for \(\theta=24^{\circ}\). Symbols represent the DEM simulations data and the solid lines represent the JFP model predictions. \(t=500\) time units. 
The solid lines represent the predictions at different times obtained by numerically solving the Eq. 16 using \(PDEPE\) solver and using the JFP model (Eq. 7) for rheological description. The JFP model parameters obtained by fitting Eq. 7 to the simulation data are shown in Table 1. Figure 5(a) shows the variation of velocity \(v_{x}\) with distance \(y\) from the base at three different time instants of \(t=100\), \(t=200\), and \(t=500\) time units. The velocity profile shows a Bagnold dependence with negligible slip at the base and increases to a maximum value at the free surface. The velocity at the free surface keeps increasing with time and the slope of the velocity profile near the base also changes. The shear rate near the base is maximum and decreases with an increase in height from the base as can be seen in Fig. 5(b). The shear rate also increases with time at any given distance from the base. Figure 5(c) shows the inertial number \(I\) along the height of the flowing layer at different time instants. As expected, the inertial number also increases with time and shows minor variations in the layer for early times. At later times, it becomes nearly constant in most of the bulk layer with small oscillations. Figure 5(d) shows the variation of solids fraction \(\phi\) along the flowing layer at different time instants. The solids fraction \(\phi\) shows large fluctuation across the flowing layer due to averaging over only a few snapshots. In addition, the usage of nearly monodisperse particles in the 2D simulation may also lead to particle layering that can cause strong variations in solids fraction \(\phi\) in different bins. With increasing time, a small decrease in solids fraction from \(\phi\simeq 0.8\) to \(\phi\simeq 0.75\) is observed. Figure 5(e) shows a linear variation of the shear stress \(\tau_{yx}\) with distance \(y\) from the base due to the nearly constant bulk density of the layer. Figure 5(f) shows the viscosity \(\eta\) with distance \(y\) from the base. The viscosity varies non-linearly from zero at the free surface to a maximum value at the base. With the increase in time, the viscosity decreases. The theoretical predictions obtained using the JFP rheological model are indeed able to capture the flow properties at different times for \(\theta=24^{\circ}\). Figs. 6(a)-6(d) show the variation of the average velocity of the flowing layer \(v_{avg}\), inertial number in the bulk \(I_{bulk}\), average solids fraction in the bulk \(\phi_{bulk}\) and the slip velocity at the base \(v_{slip}\) with time \(t\) for inclinations varying from low to moderate inclination angles. The average velocity \(v_{avg}\) is calculated as \(v_{avg}=\frac{1}{h}\int_{0}^{h}v_{x}dy\). As expected, the average velocity increases with time and eventually becomes constant at a steady state. The steady-state value of the average velocity increases with an increase in inclination. Figs. 6(b) and 6(c) show the variation of average inertial number and solids fraction in the bulk with time for different inclinations \(\theta\). The average values in the bulk Figure 6: Variation of the (a) average velocity \(v_{avg}\), (b) average inertial number \(I_{bulk}\), (c) average solids fraction in the bulk \(\phi_{bulk}\) and (d) slip velocity at the base \(v_{slip}\) with time \(t\) (\(e_{n}=0.5\)) for different inclinations \(\theta\). Symbols represent the DEM simulations data whereas the solid lines in (a-c) represent the JFP model predictions. 
are calculated by considering data in the bulk region (\(0.2h\leq y\leq 0.8h\)) of the layer and discarding the data near the free surface and the base where \(h\) is the free surface height. The inertial number also increases with time and eventually reaches a steady state value and increases with the inclination angle. The bulk solids fraction decreases with time and inclination angle and eventually attains a steady state at large times. Figure 6(d) shows the variation of slip velocity \(v_{slip}\) with time \(t\) for low to moderate inclinations. The slip velocity increases with an increase in inclination over time and eventually attains a steady state with small fluctuations around the steady mean slip velocity. The slip velocity in the case of these inclination angles is very small and neglecting the slip velocity in the calculation of average velocity barely affects the results. The effect, however, becomes important at higher inclinations and hence almost all the results reporting instantaneous/average velocity account for the slip velocity at the base in this study. Only the results shown in Fig. 3(b) assume the slip velocity at the base to be zero to remain consistent with the theoretical predictions of Parez _et al._[50]. The solid lines in Fig. 6 represent the theoretical predictions obtained from the \(PDEPE\) solver using the JFP model. Figs. 6(a)-6(c) show that the JFP model predicts the flow behavior very well for \(\theta\leq 24^{\circ}\). However, the JFP model predictions appear to be deviating from the DEM data for \(\theta=28^{\circ}\). This can be attributed to the presence of normal stress difference which remains negligible for low inclinations but starts to become important as the angle increases [22, 30]. Next, we consider even higher inclination angles and compare the predictions of the continuum model using JFP model with DEM simulation results. Figs. 7(a)-7(d) show the variation of average velocity \(v_{avg}\), bulk inertial number \(I_{bulk}\), bulk solids fraction \(\phi_{bulk}\) for inclinations varying from \(\theta=30^{\circ}\) to \(\theta=36^{\circ}\). As before, the DEM simulations show that the average velocity and bulk inertial number increase with time and inclination angle and eventually attain a constant value after a sufficiently long time indicating the existence of a steady state at all four inclination angles. The predictions using the JFP model (solid lines) capture the flow properties for moderate angles very well but fail to capture the flow behavior for high inclinations. Specifically, the theoretical predictions using the JFP model, show continuously increasing average velocity and bulk inertial number for all \(\theta\geq 32^{\circ}\) and do not achieve a steady state during the time period of interest. The difference between the predicted values and DEM results keeps increasing with the inclination angle and the theoretical predictions for the higher angles (shown in Fig. 7) differ from the DEM results by a factor of two. Similar differences are observed in the predictions of the average inertial number \(I_{bulk}\) and average solids fraction \(\phi_{bulk}\) using the JFP model at higher inclinations. The predicted values of the bulk inertial number are found to be much higher and the bulk solids fractions are found to be substantially smaller than those observed in DEM simulations. Figure 7(d) shows that the slip velocity in case of such high inclination angles is very significant. 
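The averaging used in Figs. 6 and 7 follows directly from these definitions; a minimal sketch, assuming profiles sampled on a uniform grid across the layer, is:

```python
import numpy as np

def depth_average(y, vx, h):
    """v_avg = (1/h) * integral of v_x(y) over 0 <= y <= h."""
    return np.trapz(vx, y) / h

def bulk_average(y, q, h):
    """Average of a profile q(y) over the bulk region 0.2h <= y <= 0.8h,
    discarding the data near the base and the free surface."""
    mask = (y >= 0.2 * h) & (y <= 0.8 * h)
    return q[mask].mean()

# Illustrative use with a placeholder velocity profile on a layer of height h = 50
h = 50.0
y = np.linspace(0.0, h, 51)
vx = 1.0 - (1.0 - y / h) ** 1.5          # placeholder Bagnold-like profile
print(depth_average(y, vx, h), bulk_average(y, vx, h))
```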
Neglecting such a large slip velocity in the theoretical predictions affects the predicted values of the average velocity significantly. Figure 8 shows the variation of the slip velocity \(v_{slip}\) with time \(t\) for two different angles \(\theta=32^{\circ}\) and \(\theta=36^{\circ}\). Although a small deviation from the fitted curve is evident at early times, the variation of the slip velocity with time is captured reasonably well using an exponential fit, shown using red solid lines in Fig. 8. A large scatter in the instantaneous slip velocity values is evident around the mean exponential variation. This scatter appears to be within \(5\%-10\%\) of the mean value obtained from the exponential fit. In the results shown in this study (except Fig. 3), the theoretical predictions utilize the fitted exponential time variation of the slip velocity while calculating the average velocity and instantaneous velocity profiles. The results for the shear rate, inertial number, packing fraction, shear stress, pressure, and the apparent viscosity are not affected by the presence of the slip velocity. The results shown in this section confirm that the substantial difference observed between the DEM simulations and the theoretical predictions arises due to the inability of the JFP model to capture the rheology appropriately at high inclinations. In the next section, we show that the modified rheological description is able to capture the time-dependent properties in very good agreement with the DEM simulation results.

Figure 7: Variation of the (a) average velocity of the layer \(v_{avg}\), (b) average inertial number in the bulk \(I_{bulk}\), (c) average solids fraction in the bulk \(\phi_{bulk}\) and (d) slip velocity at the base \(v_{slip}\) with time \(t\) (\(e_{n}=0.5\)) for different inclinations. Symbols represent the DEM simulation data and the solid lines represent the JFP model predictions.

### Predictions using modified rheological description

In this section, we report the predictions of the continuum model utilizing the new rheological description proposed by Patro _et al._[30], which complements the MK model with a normal stress difference law. Figs. 9(a)-9(f) report the results for \(\theta=28^{\circ}\) at four different time instants. Symbols represent the variation of velocity \(v_{x}\), shear rate \(dv_{x}/dy\), inertial number \(I\), solids fraction \(\phi\), shear stress \(\tau_{yx}\) and viscosity \(\eta\) with distance \(y\) from the base. Black circles represent the flow properties at \(t=100\). Red squares represent the flow properties at \(t=200\). Similarly, blue upper triangles and purple lower triangles represent the flow properties at \(t=400\) and \(t=2000\), respectively. The solid lines represent the theoretical predictions obtained from the continuum model using the modified rheology of Patro _et al._[30]. The rheological model parameters for the modified \(\mu-I\) relation, dilatancy law and normal stress difference law are shown in Table 2, Table 3 and Table 4, respectively. The predictions using the modified rheology are able to capture the flow properties very well at different times for \(\theta=28^{\circ}\). For comparison, we also show the predictions obtained using the JFP model using dashed lines. The predictions for the JFP model differ marginally from the modified rheology predictions and DEM data for early times (\(t\leq 1000\)). At larger times, however, the JFP model predictions differ substantially from the DEM data. 
\begin{table} \begin{tabular}{c c c c c} & \multicolumn{3}{c}{\(I>I^{*}=0.1\)} & \(I\leq I^{*}=0.1\) \\ \hline \(e\) & \(A\) & \(B\) & \(C\) & \(K\) \\ \hline 0.1 & 0.16 & 0.39 & -0.1 & -0.06 \\ 0.5 & 0.10 & 0.43 & -0.1 & -0.06 \\ \end{tabular} \end{table} Table 4: Model parameters for the normal stress difference law

Figure 9: Variation of the (a) velocity \(v_{x}\), (b) shear rate \(\dot{\gamma}\), (c) inertial number \(I\), (d) solids fraction \(\phi\), (e) shear stress \(\tau_{yx}\) and (f) viscosity \(\eta\) with distance \(y\) from the base at different times for \(\theta=28^{\circ}\). Symbols represent the DEM simulation data (\(e_{n}=0.5\)), solid lines are the predictions of the modified rheology, and dashed lines represent the JFP model predictions.

Figs. 10(a)-10(c) show the variation of average velocity \(v_{avg}\), bulk inertial number \(I_{bulk}\), and bulk solids fraction \(\phi_{bulk}\) with time \(t\) for three different inclinations. In all the cases, the predictions from the modified rheology (shown as solid lines) agree with the DEM simulation data better than the JFP model predictions shown using the dashed lines. While predicting the average velocity in Fig. 10(a), we account for the slip velocity at the base as mentioned before. However, assuming the slip velocity to be zero leads to very small differences in the flow predictions at these angles.

Figure 10: Variation of the (a) average velocity \(v_{avg}\), (b) average inertial number \(I_{bulk}\), and (c) average solids fraction \(\phi_{bulk}\) with time \(t\) for different inclinations \(\theta\). Symbols represent the DEM simulation data (\(e_{n}=0.5\)), solid lines are the predictions of the modified rheology and dashed lines represent the JFP model predictions.

Figure 11: Variation of the (a) velocity \(v_{x}\), (b) inertial number \(I\), (c) solids fraction \(\phi\) with distance \(y\) from the base at different times for \(\theta=36^{\circ}\). Variation of the (d) average velocity \(v_{avg}\), (e) average inertial number \(I_{bulk}\) and (f) average solids fraction \(\phi_{bulk}\) with time \(t\) for even inclinations \(32^{\circ}\leq\theta\leq 40^{\circ}\). Symbols represent the DEM simulation data (\(e_{n}=0.5\)), solid lines are the predictions of the modified rheology and dashed lines represent the JFP model predictions.

Figs. 11(a)-11(c) show the variation of velocity \(v_{x}\), inertial number \(I\), and solids fraction \(\phi\) with the distance from the base \(y\) using the modified rheology for \(e_{n}=0.5\) at \(\theta=36^{\circ}\). The flow profiles observed in DEM simulations are indeed captured very well using the modified rheology (shown using solid lines). The JFP model, on the other hand, has serious limitations in predicting the flow properties (shown using dashed lines) accurately at such high inclinations. Specifically, the JFP model predictions lead to significant over-predictions of the velocity and inertial number and under-prediction of the solids fraction at large times. Figs. 11(d)-11(f) show the variation of average velocity \(v_{avg}\), average inertial number \(I_{bulk}\) and average solids fraction \(\phi_{bulk}\) with time \(t\) for steep inclinations in the range \(32^{\circ}\leq\theta\leq 40^{\circ}\). The solid lines represent the predictions of the modified rheology [30]. For the higher values of inclination angles, the average inertial number exceeds unity. 
However, the modified rheological model is able to predict the transient flow properties in reasonable agreement with the DEM simulations. Figs. 12(a)-12(c) report the results for highly dissipative particles with \(e_{n}=0.1\) spanning a large range of inclinations. Symbols represent the DEM data and the solid lines represent the modified rheological model predictions by [30]. Figs. 12(a)-12(c) report the average flow properties for moderate to high inclinations (i.e. \(\theta=30^{\circ},34^{\circ},38^{\circ}\) and \(42^{\circ}\)). The results for inclinations \(\theta\leq 30^{\circ}\) are shown in the inset. The steady state average velocity varies from \(v_{avg}^{ss}\)\(\sim 10\) at \(\theta=20^{\circ}\) to \(v_{avg}^{ss}\sim 580\) at \(\theta=42^{\circ}\). The inertial number varies from around \(I_{avg}\sim 0.1\) at lowest inclination to \(I_{avg}\sim 1.5\) for highest inclination. The modified rheology is able to predict the entire range of velocity profiles, inertial number and solids fraction observed across these different tilt angles accurately. As mentioned before, accurate prediction of velocity at high inclination requires reliable knowledge of the slip velocity at the base. These slip velocities for the three higher angles \(\theta=34^{\circ}\), \(\theta=38^{\circ}\) and \(\theta=42^{\circ}\) are reported in Figs. 12(d),12(e) and 12(f) respectively. Despite small deviations from the mean trend, an exponential variation describes the slip velocity dependence on time very well. All the data seem to be within 5% of the fitted mean slip velocity. In the predictions of properties shown in Figs. 12(a)-12(c), the fitted mean slip velocity with time is accounted for. This accounting of the mean slip velocity is crucial for accurate predictions of the average and instantaneous velocities. Other properties, however, do not get Figure 12: Variation of (a) the average velocity \(v_{avg}\), (b) average inertial number \(I_{bulk}\), (c) average solids fraction \(\phi_{bulk}\) with time \(t\) at different inclinations \(\theta\). Symbols represent the DEM data for \(e_{n}=0.1\) whereas the solid lines represent the theoretical predictions obtained from the modified rheology. Inset shows the results for low to moderate inclinations. The results for higher inclination angles are reported in the main figure. Slip velocity at the base \(v_{slip}\) as a function of time for inclination (d) \(\theta=34^{\circ}\), (e) \(\theta=38^{\circ}\) and (f) \(\theta=42^{\circ}\). Red line shows the exponential fit to the simulation data (shown using black circles). Brown band shows the variation of 5% from the mean velocity and is able to capture the slip velocity data at all time. altered by ignoring the slip velocity at the base. ### Importance of granular dilatancy The modified \(\mu-I\) rheological model along with a normal stress difference law is able to predict the flow profiles at large inclination angles that are in good agreement with the DEM simulation results. However, this accurate prediction requires the continuum approach to keep track of the free surface. In this work, we have utilized a simplified method to account for the variation of the flowing layer thickness \(h(t)\) with time \(t\). Using the mass balance and equating the total mass at any instant to be equal to the mass at \(t=0\), the flowing layer thickness \(h\) at every instant is updated by \(h(t)=h_{min}\phi_{max}/\phi_{avg}(t)\). 
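For clarity, the mass-balance step behind this height update can be written out explicitly (per unit basal area, with constant particle density \(\rho_{p}\), where \(\phi_{max}\) and \(h_{min}\) are the solids fraction and thickness of the initial packed configuration):

\[\rho_{p}\,\phi_{avg}(t)\,h(t)=\rho_{p}\,\phi_{max}\,h_{min}\quad\Longrightarrow\quad h(t)=\frac{\phi_{max}}{\phi_{avg}(t)}\,h_{min}\,.\]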
Most of the studies dealing with granular rheology assume the flow to be incompressible and hence do not account for such dilation effects. In order to check whether this increase of the layer height with time is critical for obtaining accurate flow properties, we also perform continuum simulations assuming that the flowing layer thickness remains constant for the entire range of inclination angles. Since the continuum predictions for the velocity obtained by solving Eq. 16 do not include any slip velocity (the slip velocity is added to the predictions only for comparison with the DEM data), we compare the predictions for \(v_{avg}-v_{slip}\) with time \(t\) in Fig. 13. Figure 13(a) shows the variation of \(v_{avg}-v_{slip}\) for \(\theta=20^{\circ}\), \(24^{\circ}\) and \(28^{\circ}\). The average velocity predicted by assuming a constant flowing layer thickness is close to the predictions for variable flowing layer thickness at low inclinations. As the angle of inclination increases, the deviation between the constant flowing layer thickness case and the variable flowing layer thickness case starts to increase, as evident in Fig. 13(b). These results confirm that the effect of the layer dilation becomes crucial and needs to be accounted for at high inclinations.

Figure 13: Variation of the average velocity \(v_{avg}\) with time \(t\) for \(e_{n}=0.5\) at (a) \(\theta=20^{\circ}\), \(24^{\circ}\) and \(28^{\circ}\) (b) \(\theta=32^{\circ}\) and \(36^{\circ}\). The lines are the theoretical predictions from the modified rheological model. Solid lines are predictions considering variable flowing layer thickness, while dashed lines are predictions considering a constant flowing layer thickness.

This can be seen more clearly in Figs. 14(a)-14(d), where the constant layer thickness predictions start deviating from the variable layer thickness predictions at \(t=500\) and the deviation becomes much larger at later times. Note that the velocity (shown in Fig. 14(a)) across the layer for the two cases at \(t=500\) differs only near the free surface. However, the inertial number and the solids fraction at \(t=500\) for the two cases differ significantly all across the layer. This noticeable difference between the two cases can be understood in the following manner. With the increase in the velocity, the shear rate, and hence the inertial number \(I\), increases with time. This increase in \(I\) causes the solids fraction \(\phi\) (and hence the bulk density \(\rho_{b}=\phi\rho_{p}\)) to decrease. In the constant height case, the reduction in the bulk density leads to a small reduction in the pressure (dashed lines in Fig. 14(d)) with time. Since \(I=\dot{\gamma}d/(\sqrt{P/\rho_{p}})\), this small reduction in pressure contributes to the increase of the inertial number only marginally, and most of the contribution to the inertial number increase is due to the change in the shear rate. However, when the dilation of the layer is allowed, the height of the layer increases and hence the pressure at all locations in the layer increases with time (solid lines in Fig. 14(d)). This increased pressure leads to a reduction in the inertial number, while the shear rate increase leads to an increase in the inertial number. Figure 14(b) shows that the net effect of these two competing influences of shear rate and pressure effectively leads to an increase of \(I\). However, this increment in \(I\) is smaller compared to the constant height case due to the increase in pressure.
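The two competing influences can be illustrated with a short sketch that evaluates \(I=\dot{\gamma}d/\sqrt{P/\rho_{p}}\) at a fixed depth, assuming a lithostatic pressure profile \(P=\rho_{p}\phi g\cos\theta\,(h-y)\); the numerical values below are placeholders chosen only to mimic the constant-height and dilated-layer bookkeeping, not values from the simulations.

```python
import numpy as np

# I = gdot*d/sqrt(P/rho_p) with an assumed lithostatic pressure profile.
d, rho_p, g, theta, y = 1.0, 1.0, 1.0, np.deg2rad(36.0), 5.0

def inertial_number(gdot, phi, h):
    P = rho_p * phi * g * np.cos(theta) * (h - y)   # pressure at height y above the base
    return gdot * d / np.sqrt(P / rho_p)

# Constant-height bookkeeping: phi has dropped but h is kept fixed.
print(inertial_number(gdot=0.5, phi=0.50, h=40.0))
# Dilated layer: same phi drop, but h grows by mass balance (h*phi = const),
# so the pressure at the same location is larger and I comes out smaller.
print(inertial_number(gdot=0.5, phi=0.50, h=40.0 * 0.55 / 0.50))
```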
With time, the inertial number increases and the change in pressure become even more prominent and the discrepancy between the variable and constant layer height case keeps increasing. ### Role of normal stress difference One of the key features of the rheology proposed by Patro _et al._[30] is the significant role of the normal stress difference due to the anisotropy in the diagonal components of the stress tensor. This stress anisotropy, along with the non-monotonic-\(\mu-I\) relation makes the rheological description complete. The JFP model, on the other hand, ignores the stress anisotropy and assumes the normal stress difference to be zero. In order to explore the importance of normal stress differences in predicting the flow properties accurately, we predicted the average velocity by ignoring the normal stress differences and using only the non-monotonic-\(\mu-I\) variation. As shown in Fig. 13, the slip velocity has not been included in these predictions. These predictions are found to be nearly identical to the predictions accounting for the presence of the normal stress difference for \(\theta\leq 24^{\circ}\). This is due to the fact that the normal stress difference to pressure ratio is very small at low inclinations (i.e. at low inertial numbers). However, at higher inclinations, the difference between the solid and dashed curves of the same color becomes significant (Fig. 15) confirming that the Figure 14: Variation of the (a) velocity \(v_{x}\), (b) inertial number \(I\), (c) solids fraction and (d) Pressure \(P\) with distance \(y\) from the base at different times for \(e_{n}=0.5\) at \(\theta=36^{\circ}\). The solid lines are the theoretical predictions from the rheological model considering variable flowing layer thickness. The dashed lines are the theoretical predictions from the rheological model considering constant flowing layer thickness. presence of normal stress difference cannot be ignored at higher inclinations i.e., at \(\theta=30^{\circ}\) and \(\theta=32^{\circ}\). These results indicate that accounting for the non-monotonic variation of \(\mu-I\) alone is not enough to capture the flow properties accurately for flows at high inertial numbers (observed at high inclinations). ### Oscillations in the steady flow Figure 16 shows the snapshots of the flowing layer (\(e_{n}=0.5\)) for an inclination of \(\theta=42^{\circ}\) at four different time instants. Starting from a packed arrangement of particles at \(t=0\), the layer height increases as the flow evolves with time and achieves more than twice of the initial layer height. The reduction in the solids fraction accompanied with this layer height increase is evident from the sparse arrangement of particles in the bed. Note that the region near the base depicts lower solids fraction compared to the bulk of the layer. In addition, the arrangement of particles in Figure 16: Snapshots of system’s configuration at different times for \(e_{n}=0.5\) at \(\theta=42^{\circ}\). Figure 15: Variation of the average velocity \(v_{avg}\) with time \(t\) for \(e_{n}=0.5\) at \(\theta=20^{\circ}\), \(24^{\circ}\), \(28^{\circ}\), \(30^{\circ}\) and \(32^{\circ}\). Solid lines are predictions considering normal stress difference, while dashed lines are predictions without considering the effect of normal stress difference. the layer shows regions of varying density along the flow as well as normal direction. These density variations at high angles are observed along with noticeable fluctuations in the layer height. 
These oscillations in the bulk layer height \(h_{bulk}\) are shown in Fig. 17(a) over a time period of 200 time units starting from a time instant \(t_{0}\). Oscillations in the center of mass \(y_{com}\) are shown in Fig. 17(b). These height measurements are done after the flow has achieved steady kinetic energy (hence \(t_{0}\) varies with the inclination angle \(\theta\)). The oscillations in the bulk layer height \(h_{bulk}\) as well as center of mass \(y_{com}\) keep increasing with the inclination angle. While the difference between maximum and minimum bulk height is around a couple of particle diameters for inclinations \(\theta\leq 30^{\circ}\), this variation becomes as large as \(\Delta h_{bulk}\approx 16d\) for \(\theta=42^{\circ}\) case (Fig. 17(a)). As shown in Fig. 17(b), the amplitude of the oscillations observed in the center of mass position is approximately half of that observed in the bulk layer height. These large amplitude oscillations in the layer height at high inclinations might be linked to the pressure (or expansion) waves that cause density variations in the system. Such variations are observable in the video link given in the supplementary information (see SI 1). These effects indicate that the role of density and height variations become important and cannot be ignored for granular flows at inertial numbers comparable to unity. Such oscillations at high chute inclinations have been mentioned by Brodu _et al._[24] as well. Due to these oscillations, the calculation of rheological parameters in [30] from simulation data at high inclinations requires averaging over a large number of snapshots so that the influence of these fluctuations is averaged out. Further, the data near the free surface and chute base are ignored during the rheological parameter estimation as Mandal and Khakhar [25] have shown that the \(\mu-I\) data near the free surface and base follow slightly different behavior as compared to the data in the bulk region. In classical fluids, as the flow becomes faster, the competition between the inertia and gravity leads to instability and the free surface shows long wave modulations. In the case of turbulent flows, this instability is known as roll wave instability while in the case of viscous fluids, it is referred to as Kapitza instability. Such instabilities have been experimentally observed in the case of granular media before [74]. Since our simulations use a periodic box, such long wave undulations are not observable. However, the oscillations in layer height (as shown in Fig. 17(a)) might be indicative of such instability in the flow. A pronounced rippling behavior in the case of gravity-driven free surface flow of Newtonian fluids occurs in the range of Reynolds numbers \(20\leq Re\leq 1500\)[75]. Beyond this Reynolds number, the flow of Newtonian fluids becomes chaotic and is considered to be turbulent. Since the granular flow behavior is controlled by inertial number, we plot the variation of the Reynolds number of the layer with the inertial number in Fig. 18(a). Calculation of the Reynolds number for a fluid requires knowledge of its density and viscosity. While the bulk density of the flowing granular fluid remains constant across most of the layer, the viscosity keeps changing across the layer (see Fig. 9(f)). 
In absence of a proper definition of Reynolds' number for granular flows, we use the average density \(\rho_{avg}\) and average viscosity \(\eta_{avg}\) of the layer by averaging these properties in the bulk region, i.e., the region away from the base and the free surface. The average velocity \(v_{avg}\) is calculated by averaging across the entire flowing layer. Using these average values, Reynolds number for the layer is calculated as \(Re=\frac{\rho_{avg}v_{avg}h}{\eta_{avg}}\). The calculation of Reynolds number using \(h=h_{max}\) is shown using black circles Figure 17: Steady state oscillating behavior of a) bulk flowing layer thickness \(h_{bulk}\) and b) center of mass \(y_{com}\) at different inclinations for restitution coefficient \(e_{n}=0.5\). whereas the calculation based on \(h=h_{bulk}\) is shown using red upper triangles in Fig. 18(a). The difference between the two seems to become significant only at large inertial numbers. Figure 18(b) shows the variation of scaled slip velocity \(v_{slip}/(gh_{min})^{0.5}\) with the inertial number, where \(h_{min}\) is the minimum layer thickness in the packed configuration at the beginning of the simulation. The scaled slip velocity becomes significantly different from zero for \(I>0.5\). The difference in the maximum and minimum value of the bulk flowing layer thickness (\(\Delta h_{bulk}/d\)), shown on the right ordinate) remains nearly constant for \(I<0.5\) and shows a noticeable increase for higher inertial numbers. A close look at the inset of Fig. 18(a), which shows the main graph on a log-linear scale, indicates that \(I\approx 0.5\) corresponds to the \(Re\approx 1500\). As mentioned before, this limit of Reynolds number corresponds to the onset of turbulence in case of Newtonian fluids. Whether the noticeable changes in the slip velocity and the bulk layer height oscillations observed in the flow correspond to onset of turbulence in the flowing granular layer is an open question that remains to be explored. For the sake of completeness, it is worth mentioning that the limit \(Re=20\) for the onset of rippling flow corresponds to \(I\approx 0.07\). Figure 19: Fast Fourier transform analysis of the center of mass data for all inclination angles. Figure 18: a) Variation of the Reynolds number with the inertial number for \(e_{n}=0.5\). The inset figure in (a) shows the variation of Reynolds number (\(Re\)) with the inertial number (\(I\)) in the logarithmic scale. Black circles correspond to \(h_{max}\) whereas the red upper triangles correspond to \(h_{bulk}\) for the layer thickness in Reynolds number calculation. b) Variation of the slip velocity \(v_{slip}/\sqrt{gh_{min}}\) and \(\Delta h_{bulk}/d\) (on the right ordinate) with the inertial number at steady state. In order to investigate the time periodic behavior of the layer at high inertial numbers, we performed a Fast Fourier Transform (FFT) analysis of the time series data of the center of mass position at steady state. Figure 19 shows the amplitude spectrum of the center of mass for different inclinations. The amplitude spectrum for inclinations \(\theta=18^{\circ}\) and \(\theta=22^{\circ}\) shows nearly uniform distribution for all frequencies. At higher inclinations, a dominant frequency with a clear peak starts to appear. The occurrence of a dominant frequency in the amplitude spectrum confirms that the variation of layer height occurs with a characteristic time period at these large inclinations. 
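A minimal version of this FFT analysis (assuming a uniformly sampled centre-of-mass signal at steady state; the series below is a synthetic stand-in for the simulation data) reads:

```python
import numpy as np

# Synthetic stand-in for the steady-state centre-of-mass time series y_com(t).
dt = 0.5
t = np.arange(0.0, 2000.0, dt)
y_com = 20.0 + 1.5 * np.sin(2.0 * np.pi * 0.04 * t) + 0.1 * np.random.randn(t.size)

sig = y_com - y_com.mean()                     # remove the mean before the FFT
amp = np.abs(np.fft.rfft(sig)) / sig.size      # one-sided amplitude spectrum
freq = np.fft.rfftfreq(sig.size, d=dt)

f_dom = freq[np.argmax(amp[1:]) + 1]           # dominant peak, skipping f = 0
print(f"dominant frequency: f = {f_dom:.3f} (inverse time units)")
```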
The peak frequency keeps moving to lower values as the inclination angle increases, indicating that the time period of oscillations increases with the inclination angle. Figure 19 also shows that the amplitude of oscillations at \(\theta=42^{\circ}\) is nearly two orders of magnitude higher than at \(\theta=26^{\circ}\). Hence the oscillations observed in the layer at high inclinations are of larger amplitude and occur over larger time periods compared to those at moderate inclinations. FFT analysis of the bulk layer height and the kinetic energy data is shown for three different inclinations in Fig. 20 using red and blue lines respectively. The black lines in the figure correspond to the center of mass position. The maximum amplitude of the spectrum is observed at the same frequency for all three properties, indicating that the oscillations at high inclinations affect these properties of the flow in a nearly identical fashion. This is more clearly observed in Fig. 21(a), which shows that the dominant frequency for all three properties is identical at any given inclination. The figure also shows that the oscillation frequency decreases with an increase in the inclination angle. Figure 21(b) shows that the amplitude of oscillations increases with the inclination angle and that the amplitude of the oscillations in the bulk layer height is approximately twice that in the center of mass. The kinetic energy oscillation amplitudes (shown in blue on the right ordinate) differ by nearly two orders of magnitude. Note that the data for angles less than \(24^{\circ}\) cannot be included due to the lack of any dominant frequency in the amplitude spectrum. Fig. 21(a) shows that the dominant frequency for \(\theta=30^{\circ}\) is found to be \(f\approx 0.09\). The corresponding values for \(\theta=36^{\circ}\) and \(\theta=42^{\circ}\) are found to be \(f\approx 0.04\) and \(f\approx 0.02\) respectively. In order to get an estimate of the frequencies that might be observed in experiments, the dimensionless dominant frequencies reported above need to be converted to dimensional values. For 1 mm particles flowing under the influence of Earth's gravity, these frequencies turn out to be approximately 9 Hz for \(\theta=30^{\circ}\), 4 Hz for \(\theta=36^{\circ}\) and 2 Hz for \(\theta=42^{\circ}\).

Figure 20: Fast Fourier transform analysis of the center of mass (shown in black line), bulk flowing layer thickness (shown in red line) and kinetic energy (shown in blue line) for inclination angle a) \(\theta=30^{\circ}\), b) \(\theta=36^{\circ}\) and c) \(\theta=42^{\circ}\).

Figure 21: a) Variation of the dominant frequency \(f\) corresponding to different inclination angles \(\theta\). b) Variation of the amplitude \(A\) corresponding to the dominant frequency \(f\) (shown in (a)) for different inclination angles. See text for details.

## VI Summary and Future Work

We perform two-dimensional DEM simulations of granular materials flowing down a bumpy inclined plane over a wide range of inclination angles and calculate various flow properties of interest as the flow evolves with time and achieves a steady state. We also obtain numerical solutions of the momentum balance equations (Eq. (1)) supplemented by different constitutive rheological models (Eqs. (7) & (8)) using the _PDEPE_ solver in MATLAB. This solver uses an implicit finite difference method to solve the resulting partial differential equation (Eq. (16)).
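A rough Python analogue of such a solution is sketched below. It is a method-of-lines sketch only: it integrates a simplified momentum balance \(\rho_{b}\,\partial v_{x}/\partial t=\partial\tau_{yx}/\partial y+\rho_{b}g\sin\theta\) with a lithostatic pressure, an incompressible layer and a generic JFP-type \(\mu(I)\) law with placeholder parameters; it is not intended to reproduce Eq. (16) or the fitted model constants used in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified, fully developed chute-flow momentum balance (illustrative only).
d, rho_p, phi, g, theta = 1.0, 1.0, 0.6, 1.0, np.deg2rad(24.0)
mu_s, mu_2, I0 = 0.38, 0.64, 0.28                 # placeholder JFP-type parameters
h, n = 40.0 * d, 101
y = np.linspace(0.0, h, n)
rho_b = rho_p * phi
P = np.maximum(rho_b * g * np.cos(theta) * (h - y), 1e-12)   # lithostatic pressure

def mu(I):
    return mu_s + (mu_2 - mu_s) / (I0 / I + 1.0)

def rhs(t, v):
    dvdy = np.gradient(v, y)
    I = np.maximum(np.abs(dvdy) * d / np.sqrt(P / rho_p), 1e-12)
    tau = mu(I) * P * np.sign(dvdy)
    dvdt = np.gradient(tau, y) / rho_b + g * np.sin(theta)
    dvdt[0] = 0.0                                  # no-slip condition at the base
    return dvdt

sol = solve_ivp(rhs, (0.0, 200.0), np.zeros(n), method="BDF", rtol=1e-6)
print("depth-averaged velocity at t = 200:", sol.y[:, -1].mean())
```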
The numerical simulation assumes a fully developed flow so that it can be compared with the DEM simulations of flow over a chute having periodic boundary conditions. We first benchmark our numerical solution method against the analytical predictions of Parez _et al._[50] for a linear \(\mu-I\) relation. After confirming the accuracy of our numerical solutions, we utilize the non-linear \(\mu-I\) relation of the JFP model [35]. The predicted flow properties obtained from the numerical solutions are compared with the DEM simulations and are found to be in good agreement for inclinations \(\theta<28^{\circ}\). Differences in the predictions of the average velocity \(v_{avg}\) become noticeable for \(\theta=28^{\circ}\) (\(I\sim 0.3\)), and this difference is attributed to the neglect of the normal stress difference in the JFP model. The JFP model fails to capture the flow behavior for \(\theta>28^{\circ}\) as the flow reaches higher inertial numbers. The deviation between predictions and DEM data keeps increasing with the inclination angle \(\theta\), and no steady flow is predicted for \(\theta\geq 36^{\circ}\) using the JFP model, which is inconsistent with the DEM results. Next, we predicted the flow properties using the modified \(\mu-I\) rheology along with the normal stress difference law recently proposed by Patro _et al._[30]. The method also accounts for the dilation of the layer with an increase in the inertial number as the flow evolves with time. The predicted flow properties obtained using the modified rheological model are in very good agreement with the DEM simulation results for a much larger range of inclinations (up to \(\theta=40^{\circ}\)). In the numerical predictions of the velocity profile, the slip velocity needs to be explicitly accounted for and cannot be neglected at large inclinations. To summarize, we find that accurate prediction of time-dependent flow properties at high inertial numbers requires us to account for the following effects: (1) the non-monotonic \(\mu-I\) relation proposed by Patro _et al._[30]; (2) the presence of the normal stress difference; (3) the presence of the slip velocity at the chute base; and (4) the dilation of the flowing layer to account for changes in the solids fraction. Ignoring any of these effects leads to significant errors in the predictions at high inclinations. Continuum simulations of granular flows using different rheological models have been actively pursued over the last few years. However, all such studies have remained limited to inertial numbers less than unity. This work shows that accounting for these four important effects is crucial for a successful and reliable continuum simulation of high-speed granular flows. It is worth highlighting that the numerical solution of the momentum balance equations does not capture the periodic oscillations in the flow properties at steady state, which become prominent at high inclinations. The bulk layer height, the center of mass position of the layer as well as the kinetic energy show oscillations around the steady state value with a characteristic frequency beyond a certain inclination angle. As the inclination angle increases, both the amplitude and the time period of the oscillations increase. Despite the periodic variations observed at high inclinations, the time-averaged properties are predicted reasonably well using the modified rheology proposed by [30]. It has been shown by Barker _et al._[47] that the \(\mu-I\) rheology given by the JFP model is ill-posed at high and low inertial numbers.
In other words, for a given set of rheological parameters, there exist a minimum and a maximum value of the inertial number beyond which, the results obtained from solving the rheological and momentum balance equations will be grid size dependent. However, the variation of the effective friction coefficient with the inertial number assumed in such studies is not consistent with the simulation results. Hence a detailed study about the well-posedness of the rheological model of Patro _et al._[30] by accounting for the presence of the normal stress difference and non-monotonic variation of \(\mu\) with \(I\) needs to be pursued. Further, the compressibility effects also need to be accounted for. In order to check the grid size effect on our predictions, we obtained theoretical predictions using three different grid sizes for the modified rheology case and the results were found to differ from each other by less than 1%. Further, the results obtained from the modified rheology are found to be in excellent agreement with the DEM simulations. Hence these results give us confidence that the proposed rheology is able to predict the time-dependent behavior of granular flows at high inertial numbers. Most studies using continuum modeling for granular materials treat them as incompressible fluids. Such a treatment for granular flows in silos, hoppers, heaps, etc. may not lead to significant errors since such flows are typically restricted to \(I\leq 0.5\). However, free surface flows in avalanches and long chutes at high inclinations may show flows at higher inertial numbers. Our results, although reported for 2D, show that ignoring the dilatancy effects in such cases for \(I>0.6\) may lead to inaccurate predictions. The presence of normal stress difference reported in the work suggests that the rheology of granular materials in 3D at high inertial numbers will require accounting for both the first and second normal stress differences. A detailed study investigating the study of 3D granular chute flows at high inertial numbers for the transient as well as steady state will be reported in the future. While the solids fraction across the layer remains nearly constant in the case of chute flow, the presence of side walls leads to the variation of the inertial number and hence the solids fraction across the layer in the case of heap flows. Since the modified rheological model of Patro _et al._[30] seems to be applicable for both dilute as well as dense granular flow regimes, it can also be utilized to explore the liquid-gas transition that occurs in heap flows confined between two sidewalls at high flow rates [3; 76]. ## Acknowledgements AT gratefully acknowledges the financial support provided by the Indian Institute of Technology Kanpur via the initiation grant IITK/CHE/20130338. The authors also acknowledge discussions with Prof. V. Shankar (IIT Kanpur) and Dr. Partha Goswami (IIT Bombay) regarding the characterization of oscillations in the flow. ## Data availability The data that support the findings of this study are available from the corresponding author upon request.
2307.02352
Optimal damping of vibrating systems: dependence on initial conditions
Common criteria used for measuring performance of vibrating systems have one thing in common: they do not depend on initial conditions of the system. In some cases it is assumed that the system has zero initial conditions, or some kind of averaging is used to get rid of initial conditions. The aim of this paper is to initiate rigorous study of the dependence of vibrating systems on initial conditions in the setting of optimal damping problems. We show that, based on the type of initial conditions, especially on the ratio of potential and kinetic energy of the initial conditions, the vibrating system will have quite different behavior and correspondingly the optimal damping coefficients will be quite different. More precisely, for single degree of freedom systems and the initial conditions with mostly potential energy, the optimal damping coefficient will be in the under-damped regime, while in the case of the predominant kinetic energy the optimal damping coefficient will be in the over-damped regime. In fact, in the case of pure kinetic initial energy, the optimal damping coefficient is $+\infty$! Qualitatively, we found the same behavior in multi degree of freedom systems with mass proportional damping. We also introduce a new method for determining the optimal damping of vibrating systems, which takes into account the peculiarities of initial conditions and the fact that, although in theory these systems asymptotically approach equilibrium and never reach it exactly, in nature and in experiments they effectively reach equilibrium in some finite time.
K. Lelas, I. Nakić
2023-07-05T15:11:42Z
http://arxiv.org/abs/2307.02352v2
# Optimal damping of vibrating systems: dependence on initial conditions ###### Abstract Common criteria used for measuring performance of vibrating systems have one thing in common: they do not depend on initial conditions of the system. In some cases it is assumed that the system has zero initial conditions, or some kind of averaging is used to get rid of initial conditions. The aim of this paper is to initiate rigorous study of the dependence of vibrating systems on initial conditions in the setting of optimal damping problems. We show that, based on the type of initial conditions, especially on the ratio of potential and kinetic energy of the initial conditions, the vibrating system will have quite different behavior and correspondingly the optimal damping coefficients will be quite different. More precisely, for single degree of freedom systems and the initial conditions with mostly potential energy, the optimal damping coefficient will be in the under-damped regime, while in the case of the predominant kinetic energy the optimal damping coefficient will be in the over-damped regime. In fact, in the case of pure kinetic initial energy, the optimal damping coefficient is \(+\infty\)! Qualitatively, we found the same behavior in multi degree of freedom systems with mass proportional damping. We also introduce a new method for determining the optimal damping of vibrating systems, which takes into account the peculiarities of initial conditions and the fact that, although in theory these systems asymptotically approach equilibrium and never reach it exactly, in nature and in experiments they effectively reach equilibrium in some finite time. keywords: viscous damping, optimal damping, multi-degree of freedom, initial conditions + Footnote †: journal: Journal of Sound and Vibration ## 1 Introduction If we have an multi-degree of freedom (MDOF) linear vibrating system, i.e. a system of coupled damped oscillators, how to determine damping coefficients that ensure optimal evanescence of free vibrations? In the literature one finds several different criteria, typically based on frequency domain analysis of the system, although there are also approaches based on time domain analysis [1]. The tools used for designing the criteria include modal analysis [2], transfer functions [3], \(H_{2}\) and \(H_{\infty}\) norms coming from systems theory [4; 5] and spectral techniques [6]. A general overview of the optimization tools for structures analysis can be found in e.g. [7]. Another optimization criterion used is to take as optimal the damping coefficients that minimize the (zero to infinity) time integral of the energy of the system, averaged over all possible initial conditions corresponding to the same initial energy [8]. This criterion was investigated widely, mostly by mathematicians in the last two decades, more details can be found, e.g., in references [8; 9; 10; 11; 12]. However, what is common to all these criteria is that they implicitly or explicitly ignore the dependence of the dynamics of the system on the initial conditions. Sometimes this is suitable, e.g. for systems with continuous excitation, but for some, like e.g. earthquake models, it make sense to study the free vibrations of the system with non-zero initial conditions. 
Specifically, the dependence of the energy integral on the initial conditions has not been investigated, as far as we are aware, and therefore it is not clear how much information about the behavior of vibrating systems is lost by taking the average over all initial conditions or by assuming zero initial conditions and it is not clear how well the optimal damping obtained in this way works for a specific choice of initial conditions, e.g. for an experiment with initial conditions such that the initial energy consists only of potential energy, etc. We have chosen to study the particular criterion of minimizing time integral of the energy as in this case it is straightforward to modify it to take into account the initial conditions: instead of averaging over all possible initial conditions, we study the dependence of the time integral of the energy of the system on initial conditions. Specifically, for criteria based in frequency domain analysis, which are designed for forced vibrations, it is not clear how to take into account the non-zero initial conditions. We will explore this dependence by considering free vibrations of single degree of freedom (SDOF), two-degree of freedom (2-DOF) and MDOF vibrating systems with mass proportional damping (MPD). In particular, for a SDOF, averaging over all initial conditions gives the critical damping as optimal [8, 10], and we show, by considering the minimization of the energy integral without averaging over initial conditions, that damping coefficients approximately 30% less than critical to infinite are obtained as optimal, depending on the initial conditions. We systematize all our results with respect to the relationship between initial potential and initial kinetic energy, e.g., for initial conditions with initial potential energy grater than initial kinetic energy the optimal damping coefficient is in the under-damped regime, while for initial conditions with initial kinetic energy grater than initial potential energy we find the optimal damping deep in the over-damped regime. We also consider the minimization of the energy integral averaged over a subset of initial conditions and obtain a significant dependence of the optimal damping coefficient on the selected subset. Qualitatively, we find the same behavior in 2-DOF and MDOF systems as well. Furthermore, we show that the minimization of the energy integral for certain types of initial conditions does not give a satisfactory optimal damping coefficient. Specifically, for SDOF systems, the obtained optimal damping coefficient does not distinguish between two initial states with the same magnitude of initial displacement and initial velocity, but which differ in the relative sign of initial displacement and initial velocity. These initial conditions differ significantly in the rate of energy dissipation as a function of the damping coefficient, i.e. it is not realistic for one damping coefficient to be optimal for both of these initial conditions. The same is true for each individual mode of MDOF systems with respect to the signs of initial displacements and velocities, expressed via modal coordinates. Another disadvantage of this criterion is that, for initial conditions with purely kinetic initial energy, it gives an infinite optimal damping coefficient, which is not practical for experiments. 
Also, the energy integral is calculated over the entire time, due to the fact that these systems asymptotically approach equilibrium and never reach it exactly, but in nature and experiments they effectively reach equilibrium in some finite time. We introduce a new method for determining the optimal damping of MDOF systems, which practically solves the aforementioned problems and gives optimal damping coefficients that take into account the peculiarities of each initial condition and the fact that these systems effectively reach equilibrium in some finite time. We take that the system has effectively reached equilibrium when its energy drops to some small fraction of the initial energy, e.g., to the energy resolution of the measuring device with which we observe the system. Our method is based on the determination of the damping coefficient for which the energy of the system drops to that desired energy level the fastest. In this paper, we focus on MPD, but, as we comment at the end of the paper, our conclusions and the method we present applies to other types of vibrating systems as well. In this paper we focus on mass proportional damping so that we could analytically perform a modal analysis and present ideas in the simplest possible way. Everything we have done can be done in a similar fashion analytically for the case of Rayleigh damping [13] as well as for tuned mass damper [14, 15]. Also, it is possible to carry out this kind of analysis numerically for systems with damping that does not allow analytical treatment. This will be the subject of our further research. The rest of the paper is organized as follows: Section 2 is devoted to SDOF systems, in particular minimization of the energy integral and optimal damping is studied for the chosen set of initial conditions. In Section 3 we analyze 2-DOF systems with MPD. MDOF systems with MPD are the subject of Section 4. In Section 5 we propose a new optimization criterion and analyze its properties. Section 6 summarizes important findings of the paper. ## 2 SDOF systems Free vibrations of a SDOF linear vibrating system can be described by the equation \[\ddot{x}(t)+2\gamma\dot{x}(t)+\omega_{0}^{2}x(t)=0,\,x(0)=x_{0},\,\dot{x}(0)=v_{0}, \tag{1}\] where \(x(t)\) denotes the displacement from the equilibrium position (set to \(x=0\)) as a function of time, the dots denote time derivatives, \(\gamma>0\) is the damping coefficient (sometimes called damping ratio), \(\omega_{0}\) stands for the undamped oscillator angular frequency (sometimes called the natural frequency of the oscillator) and \((x_{0},v_{0})\) encode the initial conditions [16; 17]. The physical units of the displacement \(x(t)\) depend on the system being considered. For example, for a mass on a spring (or a pendulum) in viscous fluid, when it is usually called _elongation_, it is measured in \([m]\), while for an RLC circuit it could either be voltage, or current, or charge. In contrast, the units of \(\gamma\) and \(\omega_{0}\) are \([s^{-1}]\) for all systems described with the SDOF model. The form of the solution to the equation (1) depends on the relationship between \(\gamma\) and \(\omega_{0}\), producing three possible regimes [16; 17]: under-damped (\(\gamma<\omega_{0}\)), critically damped (\(\gamma=\omega_{0}\)) and over-damped (\(\gamma>\omega_{0}\)) regime. 
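As a quick numerical illustration of the three regimes of Eq. (1) (a sketch only; the value \(\omega_{0}=1\) and the initial conditions are arbitrary choices made here):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Free vibration of Eq. (1): x'' + 2*gamma*x' + w0^2 * x = 0.
w0, x0, v0 = 1.0, 1.0, 0.0

def sdof(t, y, gamma):
    x, v = y
    return [v, -2.0 * gamma * v - w0**2 * x]

for gamma in (0.2 * w0, 1.0 * w0, 3.0 * w0):   # under-, critically and over-damped
    sol = solve_ivp(sdof, (0.0, 20.0), [x0, v0], args=(gamma,), rtol=1e-8)
    print(f"gamma/w0 = {gamma / w0:.1f}: x(t=20) = {sol.y[0, -1]: .2e}")
```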
Here we would like to point out that, although it is natural to classify the solution of SDOF into three regimes depending on the value of \(\gamma\), we can actually take one form of the solution as a unique solution valid for all values of \(\gamma>0\), \(\gamma\neq\omega_{0}\), \[x(t)=e^{-\gamma t}\left(x_{0}\cos(\omega t)+\frac{v_{0}+\gamma x_{0}}{\omega} \sin(\omega t)\right)\,, \tag{2}\] where \(\omega=\sqrt{\omega_{0}^{2}-\gamma^{2}}\) is the (complex valued) damped angular frequency. In order to describe the critically damped regime, one can take the limit \(\gamma\to\omega_{0}\) of the solution (2) to obtain the general solution of the critically damped regime \[x_{c}(t)=e^{-\omega_{0}t}\left(x_{0}+(v_{0}+\omega_{0}x_{0})t\right)\,. \tag{3}\] Therefore, in order to calculate the energy and the time integral of the energy, we do not need to perform separate calculations for all three regimes, but a single calculation using the displacement given by (2) and the velocity given by \[\dot{x}(t)=e^{-\gamma t}\left(v_{0}\cos(\omega t)-\frac{\gamma v_{0}+\omega_{ 0}^{2}x_{0}}{\omega}\sin(\omega t)\right)\,. \tag{4}\] For simplicity, in this section we will refer to the quantity \[E(t)=\dot{x}(t)^{2}+\omega_{0}^{2}x(t)^{2} \tag{5}\] as the _energy_ of the system, and to the quantities \(E_{K}(t)=\dot{x}(t)^{2}\) and \(E_{P}(t)=\omega_{0}^{2}x(t)^{2}\) as the _kinetic energy_ and _potential energy_ of the system respectively. The connection of the quantity (5) to the usual expressions for the energy is straightforward, e.g., for a mass \(m\) on a spring in viscous fluid \[\mathcal{E}(t)=\frac{m}{2}E(t)\,, \tag{6}\] and similarly for other systems described with the SDOF model. Using (2) and (4) in (5), we obtain \[E(t)=e^{-2\gamma t}\left(E_{0}\cos^{2}(\omega t)+\gamma\left(\omega_{0}^{2}x_ {0}^{2}-v_{0}^{2}\right)\frac{\sin(2\omega t)}{\omega}+\left(E_{0}(\omega_{0} ^{2}+\gamma^{2})+4\omega_{0}^{2}\gamma x_{0}v_{0}\right)\frac{\sin^{2}(\omega t )}{\omega^{2}}\right) \tag{7}\] for the energy of the system, where \(E_{0}=v_{0}^{2}+\omega_{0}^{2}x_{0}^{2}\) is the initial energy given to the system at \(t=0\). Accordingly, \(E_{0K}=v_{0}^{2}\) is the initial kinetic energy and \(E_{0P}=\omega_{0}^{2}x_{0}^{2}\) is the initial potential energy. Expression (7) is valid for both under-damped and over-damped regimes, and to obtain the energy of the critically damped regime we take the \(\gamma\to\omega_{0}\) limit of the energy (7), and obtain \[E_{c}(t)=e^{-2\omega_{0}t}\left(E_{0}+2\omega_{0}\left(\omega_{0}^{2}x_{0}^{2} -v_{0}^{2}\right)t+2\omega_{0}^{2}\left(E_{0}+2\omega_{0}x_{0}v_{0}\right)t^{2 }\right)\,. \tag{8}\] ### Minimization of the energy integral and optimal damping in dependence of initial conditions We consider the SDOF system with initially energy \(E_{0}\). All possible initial conditions that give this energy can be expressed in polar coordinates with constant radius \(r=\sqrt{E_{0}}\) and angle \(\theta=\arctan\left(\frac{v_{0}}{\omega_{0}x_{0}}\right)\), i.e. we have \[\omega_{0}x_{0} =r\cos\theta\] \[v_{0} =r\sin\theta\,. \tag{9}\] In Fig. 1 we sketch the circle given by (9), i.e. given by all possible initial conditions with the same energy \(E_{0}\). For clarity of the exposition, here we comment on a few characteristic points of the circle presented in Fig. 1: * Initial conditions \(\omega_{0}x_{0}=\pm\sqrt{E_{0}}\) and \(v_{0}=0\), i.e. 
with purely potential initial energy (and zero initial kinetic energy), correspond to two points on the circle with \(\theta=\{0,\pi\}\). * Initial conditions \(\omega_{0}x_{0}=\pm\sqrt{E_{0}/2}\) and \(v_{0}=\pm\sqrt{E_{0}/2}\), i.e. with initial potential energy equal to initial kinetic energy, correspond to four points on the circle with \(\theta=\{\pi/4,3\pi/4,5\pi/4,7\pi/4\}\). * Initial conditions \(\omega_{0}x_{0}=0\) and \(v_{0}=\pm\sqrt{E_{0}}\), i.e. with purely kinetic initial energy (and zero initial potential energy), correspond to two points on the circle with \(\theta=\{\pi/2,3\pi/2\}\). Using (9) in (7) and (8), we obtain the energy of the under-damped and over-damped regime \[E(t,\theta)=E_{0}e^{-2\gamma t}\left(\cos^{2}(\omega t)+\gamma\cos 2\theta \frac{\sin(2\omega t)}{\omega}+\left(\omega_{0}^{2}+\gamma^{2}+2\omega_{0} \gamma\sin 2\theta\right)\frac{\sin^{2}(\omega t)}{\omega^{2}}\right)\,, \tag{10}\] and the energy of the critically damped regime \[E_{c}(t,\theta)=E_{0}e^{-2\omega_{0}t}\left(1+2\omega_{0}(\cos 2\theta)t+2 \omega_{0}^{2}\left(1+\sin 2\theta\right)t^{2}\right)\,, \tag{11}\] as functions of \(\theta\), instead of \(x_{0}\) and \(v_{0}\). Now we integrate energy (10) over all time, i.e. \[I(\gamma,\theta)=\int_{0}^{\infty}E(t)dt\,, \tag{12}\] Figure 1: Sketch of all possible initial conditions with the same initial energy \(E_{0}\) in the \((\omega_{0}x_{0},v_{0})\) coordinate system. Square of the coordinates corresponds to initial potential energy \(E_{0P}=\omega_{0}^{2}x_{0}^{2}\) and initial kinetic energy \(E_{0K}=v_{0}^{2}\) respectively. This representation gives us a useful visualization, e.g.: all initial conditions with \(E_{0P}>E_{0K}\) are represented by two arcs, i.e. points with \(\theta\in(-\pi/4,\pi/4)\cup(3\pi/4,5\pi/4)\) (blue dotted arcs); initial conditions with \(E_{0K}=E_{0}\) and \(E_{0P}=0\) are represented by two points on a circle with \(\theta=\{\pi/2,3\pi/2\}\) (two red filled circles); etc. and obtain \[I(\gamma,\theta)=\frac{E_{0}}{2\omega_{0}}\left(\frac{\omega_{0}^{2}+\gamma^{2}}{ \gamma\omega_{0}}+\frac{\gamma}{\omega_{0}}\cos 2\theta+\sin 2\theta\right)\,. \tag{13}\] Integral (13) is valid for all three regimes, i.e. for any \(\gamma>0\). We note here that the energy (see (7) and (8)) is invariant to a simultaneous change of the signs of the initial conditions, i.e. to the change \((x_{0},v_{0})\rightarrow(-x_{0},-v_{0})\) (but not to \(x_{0}\rightarrow-x_{0}\) or \(v_{0}\rightarrow-v_{0}\) separately). This change of signs corresponds to the change in angle \(\theta\rightarrow\theta+\pi\), therefore, functions (10), (11) and (13) are all periodic in \(\theta\) with period \(\pi\). In Fig. 2 we show the integral (13) for \(\gamma\in[0.1\omega_{0},3\omega_{0}]\) for three different initial conditions, i.e. for \(\theta=\{0,\pi/4,\pi/2\}\). We can see that \(I(\gamma,\theta=0)\) (red solid curve), with purely potential initial energy and zero initial kinetic energy, attains minimum for \(\gamma=0.707\omega_{0}\) (rounded to three decimal places), i.e. well in the under-damped regime. For the initial condition with equal potential and kinetic energy, \(I(\gamma,\theta=\pi/4)\) (black dotted curve) attains minimum for \(\gamma=\omega_{0}\), i.e. at the critical damping condition. 
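The minima read off from Fig. 2 can be checked directly by evaluating (13) on a grid of damping coefficients; a small sketch (with \(\omega_{0}=E_{0}=1\)) is given below.

```python
import numpy as np

w0, E0 = 1.0, 1.0

def I_energy(gamma, theta):
    """Time integral of the energy, Eq. (13)."""
    return E0 / (2.0 * w0) * ((w0**2 + gamma**2) / (gamma * w0)
                              + gamma / w0 * np.cos(2.0 * theta) + np.sin(2.0 * theta))

gammas = np.linspace(0.05, 5.0, 20000) * w0
for theta in (0.0, np.pi / 4.0):
    g_opt = gammas[np.argmin(I_energy(gammas, theta))]
    print(f"theta = {theta:.3f}: gamma_opt ~ {g_opt / w0:.3f} * w0")
# ~0.707*w0 for theta = 0 and ~1.000*w0 for theta = pi/4, as in Fig. 2 and Eq. (16)
```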
Interestingly, for the initial condition with purely kinetic energy and zero potential energy, \(I(\gamma,\theta=\pi/2)\) (blue dashed curve) has no minimum in the displayed range of damping coefficients, therefore here we explicitly show this function \[I(\gamma,\theta=\pi/2)=\frac{E_{0}}{2\gamma}\,, \tag{14}\] and it is clear that (14) has no minimum. This is easy to understand from a physical point of view, i.e. if all the initial energy is kinetic, the higher the damping coefficient, the faster the energy dissipation will be. If we consider the optimal damping as the one for which the integral (13) is minimal, we can easily determine the optimal damping coefficient \(\gamma_{\rm opt}(\theta)\) from the condition \[\left.\frac{\partial I(\gamma,\theta)}{\partial\gamma}\right|_{\gamma_{\rm opt }}=0\,, \tag{15}\] and we obtain \[\gamma_{\rm opt}(\theta)=\sqrt{\frac{1}{2\cos^{2}\theta}}\omega_{0}\,. \tag{16}\] In Fig. 3 we show the optimal damping coefficient (16) for \(\theta\in[0,2\pi]\) (function (16) has a period \(\pi\), but here we choose this interval for completeness), and here we comment on the shown results with respect to the relationship between initial potential energy (\(E_{0P}=\omega_{0}^{2}x_{0}^{2}\)) and initial kinetic energy (\(E_{0K}=v_{0}^{2}\)) for any given initial condition, i.e. for any \(\theta\): * Initial conditions with \(E_{0P}>E_{0K}\) correspond to the set \(\theta\in(-\pi/4,\pi/4)\cup(3\pi/4,5\pi/4)\). For these initial conditions, optimal damping coefficients (16) are in the under-damped regime, i.e. \(\gamma_{\rm opt}\in\left[\sqrt{2}\omega_{0}/2,\omega_{0}\right)\), with the minimum value \(\gamma_{\rm opt}=\sqrt{2}\omega_{0}/2=0.707\omega_{0}\) (rounded to three decimal places) obtained for \(\theta=\{0,\pi\}\), i.e. for two initial conditions with \(E_{0}=E_{0P}\) and \(E_{0K}=0\). Figure 2: Integral (13) for three initial conditions \(\theta=\{0,\pi/4,\pi/2\}\). * Initial conditions with \(E_{0P}=E_{0K}\) correspond to four points \(\theta=\{\pi/4,3\pi/4,5\pi/4,7\pi/4\}\) with optimal damping coefficient (16) equal to critical damping, i.e. \(\gamma_{\rm opt}=\omega_{0}\). * Initial conditions with \(E_{0K}>E_{0P}\) correspond to the set \(\theta\in(\pi/4,3\pi/4)\cup(5\pi/4,7\pi/4)\). For these initial conditions, optimal damping coefficients (16) are in the over-damped regime, i.e. \(\gamma_{\rm opt}\in(\omega_{0},\infty)\), where \(\gamma_{\rm opt}\) diverges for \(\theta=\{\pi/2,3\pi/2\}\), i.e. for two initial conditions with \(E_{0K}=E_{0}\) and \(E_{0P}=0\). Before closing this subsection, we would like to point out two more ways in which we can write relation (16) that will prove useful when dealing with MDOF systems. The ratio of the initial potential energy to the initial total energy is \[\beta=\frac{E_{0P}}{E_{0}}=\cos^{2}\theta\,, \tag{17}\] where we used first of the relations (9) and \(E_{0P}=\omega_{0}^{2}x_{0}^{2}\). Using (17), optimal damping coefficient (16) can be written as a function of the fraction of potential energy in the initial total energy, i.e. \[\gamma_{\rm opt}(\beta)=\sqrt{\frac{1}{2\beta}}\omega_{0}\,. \tag{18}\] Thus, from (18) one can simply see that \(\gamma_{\rm opt}\) is in the under-damped regime for \(\beta\in(1/2,1]\), in the critically damped regime for \(\beta=1/2\) and in the over-damped regime for \(\beta\in[0,1/2)\). Using \(\beta=\omega_{0}^{2}x_{0}^{2}/E_{0}\) in (18) we can express the optimal damping coefficient in yet another way, as a function of the initial displacement \(x_{0}\), i.e. 
\[\gamma_{\rm opt}(x_{0})=\sqrt{\frac{E_{0}}{2x_{0}^{2}}}=\sqrt{\frac{v_{0}^{2} +\omega_{0}^{2}x_{0}^{2}}{2x_{0}^{2}}}\,, \tag{19}\] where \(x_{0}\in[-\sqrt{E_{0}}/\omega_{0},\sqrt{E_{0}}/\omega_{0}]\) and for \(v_{0}\) the condition \(v_{0}^{2}=E_{0}-\omega_{0}^{2}x_{0}^{2}\) holds. One of the benefits of relation (19) is that it can be seen most directly that the optimal damping coefficient does not distinguish initial conditions \((\pm x_{0},\pm v_{0})\) and \((\pm x_{0},\mp v_{0})\), which is a shortcoming of this optimization criterion, because the energy as a function of time is not the same for those two types of initial conditions (see (7) and (8)) and the energy decay may differ significantly depending on which of those initial conditions is in question. We will deal with these and other issues of energy integral minimization as an optimal damping criterion in the subsection 4.2. Figure 3: Optimal damping coefficient (16) (solid red curve) as a function of all possible initial conditions, i.e. for \(\theta\in[0,2\pi]\). Below the dashed horizontal line, optimal damping coefficients are in the under-damped regime, above the line in the over-damped regime, and in the critically damped regime at the crossing points of the line and the solid red curve. Minimization of the energy integral averaged over a set of initial conditions and optimal damping in dependence of the chosen set Now we calculate the average of the integral (12) over a set of initial conditions with \(\theta\in[\phi_{1},\phi_{2}]\), i.e. \[\overline{I}(\gamma,\phi_{1},\phi_{2})=\frac{1}{\phi_{2}-\phi_{1}}\int_{\phi_{1 }}^{\phi_{2}}I(\gamma,\theta)d\theta\,, \tag{20}\] and we obtain \[\overline{I}(\gamma,\phi_{1},\phi_{2})=\frac{E_{0}}{2\omega_{0}}\left(\frac{ \omega_{0}^{2}+\gamma^{2}}{\gamma\omega_{0}}+\frac{\gamma}{2\omega_{0}(\phi_{ 2}-\phi_{1})}(\sin 2\phi_{2}-\sin 2\phi_{1})+\frac{1}{2(\phi_{2}-\phi_{1})}( \cos 2\phi_{1}-\cos 2\phi_{2})\right)\,. \tag{21}\] In Fig. 4 we show averaged integral (21) for three different sets of initial conditions. For the set of initial conditions with \(\phi_{1}=-\pi/4\) and \(\phi_{2}=\pi/4\), i.e. with \(E_{0P}\geq E_{0K}\) (where the equality holds only at the end points of the set), minimum of the averaged integral (solid red curve) is at \(\gamma=0.781\omega_{0}\) (rounded to three decimal places). For the set of initial conditions with \(\phi_{1}=\pi/4\) and \(\phi_{2}=3\pi/4\), i.e. with \(E_{0K}\geq E_{0P}\) (where the equality holds only at the end points of the set), minimum of the averaged integral (dashed blue curve) is at \(\gamma=1.658\omega_{0}\) (rounded to three decimal places). For the set of mixed initial conditions with \(\phi_{1}=-\pi/4\) and \(\phi_{2}=3\pi/4\), i.e. with \(E_{0P}>E_{0K}\) and \(E_{0K}>E_{0P}\) points equally present in the set, minimum of the averaged integral (dotted black curve) is at the critical damping condition \(\gamma=\omega_{0}\). If we consider the optimal damping as the one for which the averaged integral (21) is minimal, we can easily determine the optimal damping coefficient \(\overline{\gamma}_{\rm opt}(\phi_{1},\phi_{2})\) form the condition \[\left.\frac{\partial\overline{I}(\gamma,\phi_{1},\phi_{2})}{\partial\gamma} \right|_{\overline{\gamma}_{\rm opt}}=0\,, \tag{22}\] and we obtain \[\overline{\gamma}_{\rm opt}(\phi_{1},\phi_{2})=\sqrt{\frac{2(\phi_{2}-\phi_{1 })}{2(\phi_{2}-\phi_{1})+\sin 2\phi_{2}-\sin 2\phi_{1}}}\omega_{0}\,. 
\tag{23}\] We note here that averaged integral (21) and optimal damping coefficient (23) are not periodic functions in variables \(\phi_{1}\) and \(\phi_{2}\), if we keep one variable fixed and change the other. But they are periodic, with period \(\pi\), if we change both variables simultaneously. In Fig. 5 we show the optimal damping coefficient (23) as a function of \(\phi_{2}\) with fixed \(\phi_{1}=0\), and the results shown can be summarized as follows: * For \(\phi_{1}=0\) and \(\phi_{2}\in[0,\pi/2)\cup(\pi,3\pi/2)\), the optimal damping coefficient (23) is in the under-damped regime. In this case, integral (20) is averaged over sets that have more points corresponding to initial conditions with \(E_{0P}>E_{0K}\), in comparison to the points corresponding to initial conditions with \(E_{0K}>E_{0P}\). Figure 4: Averaged integral (21) for three sets of initial conditions. * For \(\phi_{1}=0\) and \(\phi_{2}=\{\pi/2,\pi,3\pi/2,2\pi\}\), the optimal damping coefficient (23) is equal to critical damping. In this case, integral (20) is averaged over sets that have equal amount of points corresponding to initial conditions with \(E_{0P}>E_{0K}\) and initial conditions with \(E_{0K}>E_{0P}\). * For \(\phi_{1}=0\) and \(\phi_{2}\in(\pi/2,\pi)\cup(3\pi/2,2\pi)\), the optimal damping coefficient (23) is in the over-damped regime. In this case, integral (20) is averaged over sets that have more points corresponding to initial conditions with \(E_{0K}>E_{0P}\), in comparison to the points corresponding to initial conditions with \(E_{0P}>E_{0K}\). ## 3 2-DOF systems with MPD Here we consider 2-DOF system shown schematically in Fig. 6. The corresponding equations of motion are \[m_{1}\ddot{x}_{1}(t) =-c_{1}\dot{x}_{1}(t)-k_{1}x_{1}(t)-k_{2}\left(x_{1}(t)-x_{2}(t) \right)\,, \tag{24}\] \[m_{2}\ddot{x}_{2}(t) =-c_{2}\dot{x}_{2}(t)-k_{3}x_{2}(t)+k_{2}\left(x_{1}(t)-x_{2}(t) \right)\,.\] We will consider MPD [18], i.e. masses \(\{m_{1},m_{2}\}\), spring constants \(\{k_{1},k_{2},k_{3}\}\), and dampers \(\{c_{1},c_{2}\}\) can in general be mutually different but the condition \(c_{1}/m_{1}=c_{2}/m_{2}\) holds. In this case we can use modal analysis [2; 16] and the system of equations (24) can be written via modal coordinates [16] as \[\ddot{q}_{1}(t)+2\gamma\dot{q}_{1}(t)+\omega_{01}^{2}q_{1}(t)=0 \tag{25}\] \[\ddot{q}_{2}(t)+2\gamma\dot{q}_{2}(t)+\omega_{02}^{2}q_{2}(t)=0\,,\] where \(q_{i}(t)\) and \(\omega_{0i}\), with \(i=\{1,2\}\), denote the modal coordinates and undamped modal frequencies of the two modes, while \(\gamma=c_{i}/2m_{i}\) is the damping coefficient. In the analysis that we will carry out in this Figure 5: Optimal damping coefficient (23) (solid red curve) as a function of \(\phi_{2}\in[0,2\pi]\) for fixed \(\phi_{1}=0\). Below the dashed horizontal line, optimal damping coefficients are in the under-damped regime, above the line in the over-damped regime, and in the critically damped regime at the crossing points of the line and the solid red curve. Figure 6: Schematic figure of a 2-DOF system. subsection, we will not need the explicit connection of modal coordinates \(q_{i}(t)\) and mass coordinates, i.e. displacements \(x_{i}(t)\), and we will deal with this in the next subsection when considering a specific example with given masses, springs and dampers. 
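For instance, the two undamped modal frequencies entering (25) follow from the generalized eigenvalue problem of the mass and stiffness matrices of (24); a short sketch (the numerical values of the masses, springs and damper are placeholders, not the specific example treated later in the paper) is:

```python
import numpy as np
from scipy.linalg import eigh

# Placeholder 2-DOF parameters satisfying the MPD condition c1/m1 = c2/m2.
m1, m2 = 1.0, 2.0
k1, k2, k3 = 1.0, 0.5, 2.0
c1 = 0.3
c2 = c1 * m2 / m1

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2, k2 + k3]])

evals = eigh(K, M, eigvals_only=True)   # K q = w0^2 M q
w01, w02 = np.sqrt(evals)
gamma = c1 / (2.0 * m1)                 # the single damping coefficient of (25)
print(f"w01 = {w01:.3f}, w02 = {w02:.3f}, gamma = {gamma:.3f}")
```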
Similarly as in section 2 (see (2)), the general solution for the _i_-th mode can be written as \[q_{i}(t)=e^{-\gamma t}\left(q_{0i}\cos(\omega_{i}t)+\frac{\dot{q}_{0i}+\gamma q _{0i}}{\omega_{i}}\sin(\omega_{i}t)\right)\,, \tag{26}\] where \(\omega_{i}=\sqrt{\omega_{0i}^{2}-\gamma^{2}}\) is the damped modal frequency, and \(q_{i}(0)\equiv q_{0i}\) and \(\dot{q}_{i}(0)\equiv\dot{q}_{0i}\) are the initial conditions of the \(i\)-th mode. Thus, the reasoning and the results presented in section 2, with some adjustments, can by applied for the analysis of the 2-DOF system we are considering here. The energy of the system is \[E(t)=\sum_{i=1}^{2}\frac{m_{i}\dot{x}_{i}(t)^{2}}{2}+\frac{k_{1}x_{1}(t)^{2}}{ 2}+\frac{k_{3}x_{2}(t)^{2}}{2}+\frac{k_{2}(x_{1}(t)-x_{2}(t))^{2}}{2}\,, \tag{27}\] and we take that the modal coordinates are normalised so that (27) can be written as \[E(t)=\sum_{i=1}^{2}E_{i}(t)=\sum_{i=1}^{2}\left(\dot{q}_{i}(t)^{2}+\omega_{0i} ^{2}q_{i}(t)^{2}\right) \tag{28}\] where \(E_{i}(t)\) in (28) denotes the energy of the \(i\)-th mode. Total energy at \(t=0\), i.e. the initial energy, is given by \[E_{0}=\sum_{i=1}^{2}E_{0i}=\sum_{i=1}^{2}\left(E_{0Ki}+E_{0Pi}\right)=\sum_{i= 1}^{2}\left(\dot{q}_{0i}^{2}+\omega_{0i}^{2}q_{0i}^{2}\right)\,, \tag{29}\] where \(E_{0i}\) denotes the initial energy of the \(i\)-th mode, \(E_{0Ki}=\dot{q}_{0i}^{2}\) and \(E_{0Pi}=\omega_{0i}^{2}q_{0i}^{2}\) denote initial kinetic and initial potential energy of the \(i\)-th mode. All possible initial conditions with the same initial energy (29) can be expressed similarly as in the SDOF case (see (9) and Fig. 1) but with two pairs of polar coordinates, one pair for each mode. For the \(i\)-th mode we have radius \(r_{i}=\sqrt{E_{0i}}\) and angle \(\theta_{i}=\arctan\left(\frac{\dot{q}_{0i}}{\omega_{0i}\cdot q_{0i}}\right)\), i.e. we can write \[\begin{array}{c}\omega_{0i}q_{0i}=r_{i}\cos\theta_{i}\\ \dot{q}_{0i}=r_{i}\sin\theta_{i}\,.\end{array} \tag{30}\] Thus, each initial condition with energy \(E_{0}=E_{01}+E_{02}\) can be represented by points on two circles with radii \(r_{1}=\sqrt{E_{01}}\) and \(r_{2}=\sqrt{E_{02}}\), for which condition \(r_{1}^{2}+r_{2}^{2}=E_{0}\) holds, and with angles \(\theta_{1}\) and \(\theta_{2}\) that tell us how initial potential and initial kinetic energy are distributed within the modes. Using relation (10) for SDOF systems, we can write the energy of the \(i\)-th mode in polar coordinates (30) as \[E_{i}(t)=E_{0i}e^{-2\gamma t}\left(\cos^{2}(\omega_{i}t)+\gamma\cos 2\theta_{i }\frac{\sin(2\omega_{i}t)}{\omega_{i}}+\left(\omega_{0i}^{2}+\gamma^{2}+2 \omega_{0i}\gamma\sin 2\theta_{i}\right)\frac{\sin^{2}(\omega_{i}t)}{\omega_{i}^{2}}\right) \tag{31}\] for the under-damped (\(\gamma<\omega_{0i}\)) and over-damped (\(\gamma>\omega_{0i}\)) regime, and the energy of the \(i\)-th mode in the critically damped regime is obtained analogously using the relation (11). Consequently, the integral of the energy (28) over the entire time, for some arbitrary initial condition, is simply calculated using relation (13). For each individual mode, we obtain \[I(\gamma,\{E_{0i}\},\{\theta_{i}\})=\sum_{i=1}^{2}\int_{0}^{\infty}E_{i}(t)dt= \sum_{i=1}^{2}\frac{E_{0i}}{2\omega_{0i}}\left(\frac{\omega_{0i}^{2}+\gamma^{2} }{\gamma\omega_{0i}}+\frac{\gamma}{\omega_{0i}}\cos 2\theta_{i}+\sin 2\theta_{i} \right)\,. 
\tag{32}\] Furthermore, the initial energy of the \(i\)-th mode can be written as \(E_{0i}=a_{i}^{2}E_{0}\), where the coefficient \(a_{i}^{2}\in[0,1]\) denotes the fraction of the initial energy of the \(i\)-th mode in the total initial energy. The coefficients of the two modes satisfy \(a_{1}^{2}+a_{2}^{2}=1\) and can therefore be parameterized as \[\begin{split}& a_{1}=\cos\psi\\ & a_{2}=\sin\psi,\end{split} \tag{33}\] where \(\psi\in[0,\pi/2]\). Taking (33) into account, we can write (32) as \[I(\gamma,\psi,\theta_{1},\theta_{2})=E_{0}\sum_{i=1}^{2}\frac{a_{i}^{2}}{2\omega_{0i}}\left(\frac{\omega_{0i}^{2}+\gamma^{2}}{\gamma\omega_{0i}}+\frac{\gamma}{\omega_{0i}}\cos 2\theta_{i}+\sin 2\theta_{i}\right)\,. \tag{34}\] If we consider the optimal damping coefficient as the one for which the integral (34) is minimal, we can easily determine the optimal damping coefficient from the condition \[\frac{\partial I(\gamma,\psi,\theta_{1},\theta_{2})}{\partial\gamma}\bigg{|}_{\gamma_{\rm opt}}=0\,, \tag{35}\] and we obtain \[\gamma_{\rm opt}(\psi,\theta_{1},\theta_{2})=\sqrt{\frac{\omega_{01}^{2}\omega_{02}^{2}}{2\omega_{02}^{2}\cos^{2}\psi\cos^{2}\theta_{1}+2\omega_{01}^{2}\sin^{2}\psi\cos^{2}\theta_{2}}}\,. \tag{36}\] It is easy to see that, for any fixed \(\psi\), the function (36) has the smallest magnitude for \(\cos^{2}\theta_{1}=\cos^{2}\theta_{2}=1\), which corresponds to the initial conditions with initial energy comprised only of potential energy distributed within the two modes, i.e. \(E_{0}=E_{0P1}+E_{0P2}\). In that case we can write the denominator of (36) as \[f(\psi)=\sqrt{2\omega_{02}^{2}\cos^{2}\psi+2\omega_{01}^{2}\sin^{2}\psi}=\sqrt{2(\omega_{02}^{2}-\omega_{01}^{2})\cos^{2}\psi+2\omega_{01}^{2}}\,, \tag{37}\] where we used \(\sin^{2}\psi=1-\cos^{2}\psi\). Since \(\omega_{01}<\omega_{02}\), the function (37) has its maximum for \(\psi=0\). Thus, the minimum value of the optimal damping coefficient (36) is \(\sqrt{2}\omega_{01}/2\), and it is obtained for \(\psi=0\) and \(\theta_{1}=\{0,\pi\}\), which corresponds to the initial conditions with initial energy comprised only of potential energy in the first mode, i.e. \(E_{0}=E_{0P1}\). On the other hand, for any fixed \(\psi\), the function (36) has singularities for \(\cos^{2}\theta_{1}=\cos^{2}\theta_{2}=0\), which corresponds to the initial conditions with initial energy comprised only of kinetic energy. Thus, the range of the optimal damping coefficient (36) is \[\gamma_{\rm opt}\in\left[\sqrt{2}\omega_{01}/2,+\infty\right)\,. \tag{38}\] If we now calculate the average of the integral (34) over the set of all initial conditions, we obtain \[\overline{I}(\gamma)=\frac{1}{2\pi^{3}}\int_{0}^{\pi/2}d\psi\int_{0}^{2\pi}d\theta_{1}\int_{0}^{2\pi}d\theta_{2}\,I(\gamma,\psi,\theta_{1},\theta_{2})=\frac{E_{0}}{4}\sum_{i=1}^{2}\left(\frac{\omega_{0i}^{2}+\gamma^{2}}{\gamma\omega_{0i}^{2}}\right)\,, \tag{39}\] and from the condition \[\frac{\partial\overline{I}(\gamma)}{\partial\gamma}\bigg{|}_{\overline{\gamma}_{\rm opt}}=0\,, \tag{40}\] we find that the optimal damping coefficient with respect to the averaged integral (39) is given by \[\overline{\gamma}_{\rm opt}=\sqrt{\frac{2\omega_{01}^{2}\omega_{02}^{2}}{\omega_{01}^{2}+\omega_{02}^{2}}}\,.
\tag{41}\] In order to more easily analyze the behavior of the damping coefficient (36) with regard to the distribution of the initial potential energy within the modes and its relationship with the damping coefficient (41), similarly as in subsection 2.1 (see (17) and (18)), we define the ratio of the initial potential energy of the \(i\)-th mode and the total initial energy, i.e. \[\beta_{i}=\frac{E_{0Pi}}{E_{0}}\,. \tag{42}\] Since the initial potential energy satisfies \(E_{0P}=E_{0P1}+E_{0P2}\leq E_{0}\), we have \(\beta_{i}\in[0,1]\) and the condition \(0\leq(\beta_{1}+\beta_{2})\leq 1\) holds. Taking \(E_{0Pi}=E_{0i}\cos^{2}\theta_{i}\) (see (30)) and \(E_{0i}=a_{i}^{2}E_{0}\) with (33) into account, we have \[\begin{split}\beta_{1}&=\cos^{2}\psi\cos^{2} \theta_{1}\\ \beta_{2}&=\sin^{2}\psi\cos^{2}\theta_{2}.\end{split} \tag{43}\] Using (43), relation (36) can be written as \[\gamma_{\rm opt}(\beta_{1},\beta_{2})=\sqrt{\frac{\omega_{01}^{2}\omega_{02}^ {2}}{2\omega_{02}^{2}\beta_{1}+2\omega_{01}^{2}\beta_{2}}}\,. \tag{44}\] For clarity, we will repeat briefly, the minimum value of (44) is \(\sqrt{2}\omega_{01}/2\), obtained for \(\beta_{1}=1\) and \(\beta_{2}=0\) (or in terms of the angles in (36), for \(\psi=0\) and \(\theta_{1}=\{0,\pi\}\)), while \(\gamma_{\rm opt}\to+\infty\) for \(\beta_{1}=\beta_{2}=0\) (or in terms of the angles in (36), for any \(\psi\) with \(\theta_{1}=\{\pi/2,3\pi/2\}\) and \(\theta_{2}=\{\pi/2,3\pi/2\}\)). The benefit of relation (44) is that we expressed (36) through two variables instead of three, i.e. this way we lost information about the signs of the initial conditions and about distribution of initial kinetic energy within the modes, but the optimal damping coefficient (36) does not depend on those signs anyway, due to the squares of trigonometric functions in variables \(\theta_{1}\) and \(\theta_{2}\), and, for a fixed distribution of initial potential energy within the modes, the optimal damping coefficient (36) is constant for different distributions of initial kinetic energy within the modes. By looking at relations (44) and (41), it is immediately clear that \(\gamma_{\rm opt}(\beta_{1},\beta_{2})=\overline{\gamma}_{\rm opt}\) for \[\omega_{02}^{2}\beta_{1}+\omega_{01}^{2}\beta_{2}=\frac{\omega_{01}^{2}+ \omega_{02}^{2}}{4}, \tag{45}\] while \(\gamma_{\rm opt}(\beta_{1},\beta_{2})<\overline{\gamma}_{\rm opt}\) if the left hand side of relation (45) is greater than the right hand side, and \(\gamma_{\rm opt}(\beta_{1},\beta_{2})>\overline{\gamma}_{\rm opt}\) if the left hand side of relation (45) is smaller than the right hand side. Again, similarly as in subsection 2.1 (see (19)), using \(\beta_{i}=\omega_{0i}^{2}q_{0i}^{2}/E_{0}\) we can express the optimal damping coefficient (44) as a function of the initial modal coordinates as well, i.e. \[\gamma_{\rm opt}(q_{01},q_{02})=\sqrt{\frac{E_{0}}{2q_{01}^{2}+2q_{02}^{2}}}\,, \tag{46}\] where \(q_{0i}\in[-\sqrt{E_{0}}/\omega_{0i},\sqrt{E_{0}}/\omega_{0i}]\) and the condition \(0\leq(\omega_{01}^{2}q_{01}^{2}+\omega_{02}^{2}q_{02}^{2})\leq E_{0}\) holds. We can express condition (45) in terms of initial modal coordinates, i.e. 
\(\gamma_{\rm opt}(\{q_{0i}\})=\overline{\gamma}_{\rm opt}\) for \[\frac{q_{01}^{2}+q_{02}^{2}}{E_{0}}=\frac{\omega_{01}^{2}+\omega_{02}^{2}}{4 \omega_{01}^{2}\omega_{02}^{2}}, \tag{47}\] while \(\gamma_{\rm opt}(\{q_{0i}\})<\overline{\gamma}_{\rm opt}\) if the left hand side of relation (47) is greater than the right hand side, and \(\gamma_{\rm opt}(\{q_{0i}\})>\overline{\gamma}_{\rm opt}\) if the left hand side of relation (47) is smaller than the right hand side. We note here that we did not use explicit values of the undamped modal frequencies \(\omega_{01}\) and \(\omega_{02}\) in the analysis so far, and relations presented so far are valid for any 2-DOF system with MPD. In the next subsection, we provide a more detailed quantitative analysis using an example with specific values of modal frequencies. ### Quantitative example Here we consider the 2-DOF system as the one shown schematically in Fig. 6, but with \(m_{1}=m_{2}=m\), \(k_{1}=k_{2}=k_{3}=k\) and \(c_{1}=c_{2}=c\). The corresponding equations of motion are \[\begin{split} m\ddot{x}_{1}(t)&=-c\dot{x}_{1}(t)-kx_ {1}(t)-k\left(x_{1}(t)-x_{2}(t)\right)\,,\\ m\ddot{x}_{2}(t)&=-c\dot{x}_{2}(t)-kx_{2}(t)+k \left(x_{1}(t)-x_{2}(t)\right)\,.\end{split} \tag{48}\] For completeness, we will investigate here the behavior of the optimal damping coefficient given by the minimization of the energy integral for different initial conditions, and its relationship with the optimal damping coefficient given by the minimization of the averaged energy integral, in all three coordinate systems that we introduced in the previous subsection and additionally in the coordinate system defined by the initial displacements of the masses. System of equations (48) can be easily recast to the form (25) with the modal coordinates \[\begin{split} q_{1}(t)&=\sqrt{\frac{m}{4}}\left(x_ {1}(t)+x_{2}(t)\right)\\ q_{2}(t)&=\sqrt{\frac{m}{4}}\left(x_{1}(t)-x_{2}( t)\right)\,,\end{split} \tag{49}\] and with the natural (undamped) frequencies of the modes \(\omega_{01}=\omega_{0}\) and \(\omega_{02}=\sqrt{3}\omega_{0}\), where \(\omega_{0}=\sqrt{k/m}\). Normalisation factors \(\sqrt{m/4}\) in (49) ensure that our expression (28) for the energy of the system corresponds to energy expressed over the displacements and velocities of the masses, i.e. \[E(t)=\sum_{i=1}^{2}\left(\dot{q}_{i}(t)^{2}+\omega_{0i}^{2}q_{i}(t)^{2}\right) =\sum_{i=1}^{2}\left(\frac{m\dot{x}_{i}(t)^{2}}{2}+\frac{kx_{i}(t)^{2}}{2} \right)+\frac{k(x_{1}(t)-x_{2}(t))^{2}}{2}\,. \tag{50}\] Using the specific values of undamped modal frequencies of this system, relations (36), (41) and (44) become \[\gamma_{\rm opt}(\psi,\theta_{1},\theta_{2})=\sqrt{\frac{3}{6\cos^{2}\psi\cos ^{2}\theta_{1}+2\sin^{2}\psi\cos^{2}\theta_{2}}}\omega_{0}\,, \tag{51}\] \[\overline{\gamma}_{\rm opt}=\frac{\sqrt{6}}{2}\omega_{0}\,, \tag{52}\] \[\gamma_{\rm opt}(\beta_{1},\beta_{2})=\sqrt{\frac{3}{6\beta_{1}+2\beta_{2}}} \omega_{0}\,. \tag{53}\] Since \(\omega_{01}=\omega_{0}\), the range of (53) is \(\gamma_{\rm opt}\in[\sqrt{2}\omega_{0}/2,+\infty)\) (see (38)). As examples of the behavior of the damping coefficient (51) as a function of the angles \(\{\psi,\theta_{1},\theta_{2}\}\) and its relationship with the damping coefficient (52), in Fig. 7 we show \(\gamma_{\rm opt}(\psi,\theta_{1},\theta_{2})/\overline{\gamma}_{\rm opt}\) for \(\psi=\{\pi/3,\pi/6\}\) and \(\theta_{i}\in[0,\pi]\). In Fig. 8 we show ratio of the damping coefficient (53) and the damping coefficient (52), i.e. 
\(\gamma_{\rm opt}(\beta_{1},\beta_{2})/\overline{\gamma}_{\rm opt}\). If the initial energy is comprised only of potential energy, in terms of initial modal coordinates we have \(E_{0}=\omega_{0}^{2}q_{01}^{2}+3\omega_{0}^{2}q_{02}^{2}\), thus, the initial modal coordinates satisfy \[\begin{split} q_{01}\sqrt{\frac{\omega_{0}^{2}}{E_{0}}}\in[-1,1 ]\,\,,\\ q_{02}\sqrt{\frac{\omega_{0}^{2}}{E_{0}}}\in\left[-\sqrt{3}/3, \sqrt{3}/3\right]\,,\\ 0\leq\frac{\omega_{0}^{2}}{E_{0}}\left(q_{01}^{2}+3q_{02}^{2} \right)\leq 1\,.\end{split} \tag{54}\] Furthermore, we can write the optimal damping coefficient (46) as \[\gamma_{\rm opt}(q_{01},q_{02})=\sqrt{\frac{E_{0}}{2\omega_{0}^{2}(q_{01}^{2}+q_{ 02}^{2})}}\omega_{0}\,, \tag{55}\] and the condition (47) as \[\frac{\omega_{0}^{2}}{E_{0}}\left(q_{01}^{2}+q_{02}^{2}\right)=\frac{1}{3}. \tag{56}\] In Fig. 9(a) we show the ratio of (55) and (52), i.e. \(\gamma_{\rm opt}(q_{01},q_{02})/\overline{\gamma}_{\rm opt}\). The domain of this function consists of points inside and on the ellipse, i.e. it is given by (54). Similarly as before, singularity at \((q_{01},q_{02})=(0,0)\) is indicated by the infinity symbol, and the points for which \(\gamma_{\rm opt}(q_{01},q_{02})/\overline{\gamma}_{\rm opt}>3\) are removed. For points inside the circle we have \(\gamma_{\rm opt}(q_{01},q_{02})/\overline{\gamma}_{\rm opt}>1\), and for points outside the circle we have \(\gamma_{\rm opt}(q_{01},q_{02})/\overline{\gamma}_{\rm opt}<1\). Minimum values of this ratio are \(\sqrt{3}/3\approx 0.58\), obtained for the points \((q_{01},q_{02})=\{(-\sqrt{E_{0}}/\omega_{0},0),(\sqrt{E_{0}}/\omega_{0},0)\}\). Using (49) we can write the optimal damping coefficient (55) in terms of initial displacements \(x_{i}(0)\equiv x_{0i}\) as \[\gamma_{\rm opt}(x_{01},x_{02})=\sqrt{\frac{E_{0}}{m(x_{01}^{2}+x_{02}^{2})}} =\sqrt{\frac{E_{0}}{m\omega_{0}^{2}(x_{01}^{2}+x_{02}^{2})}}\omega_{0}\,. \tag{57}\] If the initial energy is comprised only of potential energy, in terms of initial displacements we have \(E_{0}=m\omega_{0}^{2}(x_{01}^{2}+x_{02}^{2}-x_{01}x_{02})\), thus, the initial displacements of the masses satisfy \[x_{0i}\sqrt{\frac{m\omega_{0}^{2}}{E_{0}}}\in[-1,1]\, \tag{58}\] \[0\leq\frac{m\omega_{0}^{2}}{E_{0}}\left(x_{01}^{2}+x_{02}^{2}-x _{01}x_{02}\right)\leq 1\,,\] and the condition (56) is now \[\frac{m\omega_{0}^{2}}{E_{0}}\left(x_{01}^{2}+x_{02}^{2}\right)=\frac{2}{3}. \tag{59}\] In Fig. 9(b) we show the ratio of (57) and (52), i.e. \(\gamma_{\rm opt}(x_{01},x_{02})/\overline{\gamma}_{\rm opt}\). The domain of this function consists of points given by (58). Similarly as before, singularity at \((x_{01},x_{02})=(0,0)\) is indicated by the infinity symbol, and the points for which \(\gamma_{\rm opt}(x_{01},x_{02})/\overline{\gamma}_{\rm opt}>3\) are removed. For points inside the circle \(\gamma_{\rm opt}(x_{01},x_{02})/\overline{\gamma}_{\rm opt}>1\), and for points outside the circle \(\gamma_{\rm opt}(x_{01},x_{02})/\overline{\gamma}_{\rm opt}<1\). Minimum values of this ratio are \(\sqrt{3}/3\approx 0.58\), obtained for the points \((x_{01},x_{02})=\left(\pm\sqrt{\frac{E_{0}}{m\omega_{0}^{2}}},\pm\sqrt{\frac{ E_{0}}{m\omega_{0}^{2}}}\right)\). Figure 9: (a) Ratio \(\gamma_{\rm opt}(q_{01},q_{02})/\overline{\gamma}_{\rm opt}\) of the optimal damping coefficients (55) and (52). (b) Ratio \(\gamma_{\rm opt}(x_{01},x_{02})/\overline{\gamma}_{\rm opt}\) of the optimal damping coefficients (57) and (52). 
Singularities, at points \((0,0)\) on the both figures, are denoted by infinity symbols, and the points near singularities, for which \(\gamma_{\rm opt}/\overline{\gamma}_{\rm opt}>3\), are removed. Black circles on both figures indicate the points for which \(\gamma_{\rm opt}/\overline{\gamma}_{\rm opt}=1\). ## 4 MDOF systems with MPD Here we consider the MDOF system with \(N\) degrees of freedom shown schematically in Fig. 10. As in the section 3, we will consider MPD, i.e. masses \(\{m_{1},m_{2},...,m_{N}\}\), spring constants \(\{k_{1},k_{2},...,k_{N+1}\}\), and dampers \(\{c_{1},c_{2},...,c_{N}\}\) can in general be mutually different but the condition \(c_{i}/m_{i}=2\gamma\) holds for any \(i=\{1,...,N\}\), where \(\gamma\) is the damping coefficient. Therefore, the reasoning we presented in section 3 can be applied here, with the main difference that now the system has \(N\) modes instead of two. Again, we can write each initial condition over polar coordinates, as in the 2-DOF case (see (30)), only now we have \(N\) pairs of polar coordinates instead of two. The energy of each mode is given by (31), and consequently, the integral of the total energy over the entire time, for some arbitrary initial condition, is simply calculated similarly as in (32), i.e. \[I(\gamma,\{E_{0i}\},\{\theta_{i}\})=\sum_{i=1}^{N}\int_{0}^{\infty}E_{i}(t)dt =\sum_{i=1}^{N}\frac{E_{0i}}{2\omega_{0i}}\left(\frac{\omega_{0i}^{2}+\gamma^ {2}}{\gamma\omega_{0i}}+\frac{\gamma}{\omega_{0i}}\cos 2\theta_{i}+\sin 2 \theta_{i}\right)\,, \tag{60}\] where, again, \(E_{i}(t)\) is the energy of the \(i\)-th mode, \(E_{0i}\) is the initial energy of the \(i\)-th mode. Thus, each initial condition with energy \(E_{0}=\sum_{i=1}^{N}E_{0i}\) is represented by points on \(N\) circles with radii \(r_{i}=\sqrt{E_{0i}}\), for which condition \(\sum_{i=1}^{N}r_{i}^{2}=E_{0}\) holds, and with angles \(\theta_{i}\) that tell us how initial potential and initial kinetic energy are distributed within the modes. Similarly as before, initial energy of the \(i\)-th mode can be written as \(E_{0i}=a_{i}^{2}E_{0}\), where coefficient \(a_{i}^{2}\in[0,1]\) denotes the fraction of the initial energy of the \(i\)-th mode in the total initial energy \(E_{0}\), and the condition \[\sum_{i=1}^{N}a_{i}^{2}=1 \tag{61}\] holds. Relation (61) defines a sphere embedded in \(N\)-dimensional space and we can express the coefficients \(a_{i}\) over \(N\)-dimensional spherical coordinates (\(N-1\) independent coordinates, i.e. angles, since the radius is equal to one), but for the sake of simplicity we will not do that here and we will stick to writing the expressions as a functions of the coefficients \(a_{i}\). Thus, we can write (60) as \[I(\gamma,\{a_{i}\},\{\theta_{i}\})=\sum_{i=1}^{N}\int_{0}^{\infty}E_{i}(t)dt= E_{0}\sum_{i=1}^{N}\frac{a_{i}^{2}}{2\omega_{0i}}\left(\frac{\omega_{0i}^{2}+ \gamma^{2}}{\gamma\omega_{0i}}+\frac{\gamma}{\omega_{0i}}\cos 2\theta_{i}+ \sin 2\theta_{i}\right)\,. \tag{62}\] We differentiate relation (62) by \(\gamma\) and equate it to zero and get \[\gamma_{\rm opt}(\{a_{i}\},\{\theta_{i}\})=\left(\sum_{i=1}^{N}\frac{2a_{i}^{ 2}\cos^{2}\theta_{i}}{\omega_{0i}^{2}}\right)^{-1/2} \tag{63}\] as the optimal damping coefficient for which integral (62) is minimal. For any fixed set of coefficients \(\{a_{i}\}\), the smallest magnitude of the function (63) is obtained for \(\cos^{2}\theta_{i}=1\)\(\forall i\), which corresponds to the Figure 10: Schematic figure of a MDOF system with \(N\) degrees of freedom. 
initial conditions with initial energy comprised only of potential energy distributed within the modes, i.e \(E_{0}=\sum_{i=1}^{N}E_{0\,Pi}\). In that case the denominator of (63) is \[f(\{a_{i}\})=\left(\sum_{i=1}^{N}\frac{2a_{i}^{2}}{\omega_{0i}^{2}}\right)^{1/2} \tag{64}\] and using \(a_{1}^{2}=1-\sum_{i=2}^{N}a_{i}^{2}\) (see (61)) we can write (64) as \[f(\{a_{i}\})=\left(\frac{2}{\omega_{01}^{2}}+\sum_{i=2}^{N}2a_{i}^{2}\left( \frac{1}{\omega_{0i}^{2}}-\frac{1}{\omega_{01}^{2}}\right)\right)^{1/2}\,. \tag{65}\] Since \(\omega_{01}<\omega_{0i}\) for any \(i\geq 2\), each term in the sum of relation (65) is negative, and we can conclude that the function (65) has maximum for the set \(\{a_{i}\}=\{1,0,...,0\}\). Thus, the minimum value of the optimal damping coefficient (63) is \(\sqrt{2}\omega_{01}/2\), and it is obtained for \(a_{1}=1\) and \(\theta_{1}=\{0,\pi\}\), which corresponds to the initial conditions with initial energy comprised only of potential energy in the first mode, i.e. \(E_{0}=E_{0P1}\). On the other hand, for any fixed set \(\{a_{i}\}\), the function (63) has singularities for \(\cos^{2}\theta_{i}=0\ \forall i\). Thus, the range of the optimal damping coefficient (63) is \[\gamma_{\rm opt}\in\left[\sqrt{2}\omega_{01}/2,+\infty\right)\,. \tag{66}\] In A we have calculated the average of the integral (62) over a set of all initial conditions and obtained \[\overline{I}(\gamma)=\frac{E_{0}}{2N}\sum_{i=1}^{N}\left(\frac{\omega_{0i}^{2 }+\gamma^{2}}{\gamma\omega_{0i}^{2}}\right)\,. \tag{67}\] We differentiate relation (67) by \(\gamma\) and equate it to zero and obtain \[\overline{\gamma}_{\rm opt}=N^{1/2}\left(\sum_{i=1}^{N}\frac{1}{\omega_{0i}^ {2}}\right)^{-1/2} \tag{68}\] as the optimal damping coefficient with respect to the averaged integral (67). Since the ratio of the initial potential energy of the \(i\)-th mode and the total initial energy is \[\beta_{i}=\frac{E_{0Pi}}{E_{0}}=a_{i}^{2}\cos^{2}\theta_{i}\,, \tag{69}\] where \(\beta_{i}\in[0,1]\) and the condition \(0\leq\sum_{i=1}^{N}\beta_{i}\leq 1\) holds, we can write (63) as a function of the distribution of the initial potential energy over the modes, i.e. \[\gamma_{\rm opt}(\{\beta_{i}\})=\left(\sum_{i=1}^{N}\frac{2\beta_{i}}{\omega_ {0i}^{2}}\right)^{-1/2}\,. \tag{70}\] The minimum value of (70) is \(\sqrt{2}\omega_{01}/2\), obtained for \(\beta_{1}=1\) and \(\beta_{i}=0\) for \(i\geq 2\), while \(\gamma_{\rm opt}\rightarrow+\infty\) for \(\beta_{i}=0\ \forall i\). Using \(\beta_{i}=\omega_{0i}^{2}q_{0i}^{2}/E_{0}\), we can write (63) as a function of initial modal coordinates as well, i.e. \[\gamma_{\rm opt}(\{q_{0i}\})=\sqrt{\frac{E_{0}}{2\sum_{i=1}^{N}q_{0i}^{2}}}\,, \tag{71}\] where \(q_{0i}\in[-\sqrt{E_{0}}/\omega_{0i},\sqrt{E_{0}}/\omega_{0i}]\) and the condition \(0\leq\sum_{i=1}^{N}\omega_{0i}^{2}q_{0i}^{2}\leq E_{0}\) holds. ### Quantitative example Here we consider the MDOF system as the one shown schematically in Fig. 10 but with \(m_{i}=m\), \(c_{i}=c\) for \(i=\{1,...,N\}\), and with \(k_{i}=k\) for \(i=\{1,...,N+1\}\). Such a system without damping, i.e with \(c_{i}=0\)\(\forall i\), is a standard part of the undergraduate physics/mechanics courses [17]. Therefore, for the MDOF system with \(N\) degrees of freedom we are considering here, the undamped modal frequencies are [16; 17] \[\omega_{0i}=2\omega_{0}\sin\left(\frac{i\pi}{2(N+1)}\right)\,,\,\text{with}\,\, i=\{1,...,N\}\,, \tag{72}\] and where \(\omega_{0}=\sqrt{k/m}\). In Fig. 
11(a) we show undamped modal frequencies \(\omega_{01}\), \(\omega_{0N}\) and damping coefficient \(\overline{\gamma}_{\text{opt}}\), i.e. (68), calculated with (72), as functions of \(N\). We clearly see that the coefficient \(\overline{\gamma}_{\text{opt}}\) is in the over-damped regime from the perspective of the first mode, and in the under-damped regime from the perspective of highest mode, for any \(N>1\), and in the case \(N=1\) all three values match. In Fig. 11(b) we show ratios \(\overline{\gamma}_{\text{opt}}/\omega_{01}\) and \(\omega_{0N}/\overline{\gamma}_{\text{opt}}\) and we see that both ratios increase with increasing \(N\). We show in B that the following limits hold \[\lim_{N\rightarrow+\infty}\overline{\gamma}_{\text{opt}}(N)=0, \tag{73}\] \[\lim_{N\rightarrow+\infty}\frac{\overline{\gamma}_{\text{opt}}(N)}{\omega_{01 }(N)}=+\infty, \tag{74}\] \[\lim_{N\rightarrow+\infty}\frac{\omega_{0N}(N)}{\overline{\gamma}_{\text{opt }}(N)}=+\infty. \tag{75}\] We note here that these limit values do not correspond to the transition from a discrete to a continuous system, but simply tell us the behavior of these quantities with respect to the increase in the number of masses, i.e. with respect to the increase in the size of the discrete system. From everything that has been said so far, it is clear that the damping coefficient \(\overline{\gamma}_{\text{opt}}\), obtained by minimizing the energy integral averaged over all initial conditions that correspond to the same initial energy, cannot be considered generally as optimal and that, by itself, it says nothing about optimal damping of the system whose dynamics started with some specific initial condition. Damping coefficient (63), which is given by the minimization of the energy integral for a specific initial condition, is of course a better choice for optimal damping of an MDOF system, than the damping coefficient \(\overline{\gamma}_{\text{opt}}\), if we want to consider how the system dissipates energy the fastest for a particular initial condition, but, as we argue in the section 4.2, this damping coefficient also has some obvious deficiencies. Figure 11: (a) Undamped modal frequencies \(\omega_{01}\) (blue circles), \(\omega_{0N}\) (red x’s) and the damping coefficient \(\overline{\gamma}_{\text{opt}}\) (black squares) as functions of the number of the masses \(N\). (b) Ratios \(\overline{\gamma}_{\text{opt}}/\omega_{01}\) (blue line) and \(\omega_{0N}/\overline{\gamma}_{\text{opt}}\) (red line), shown as solid lines due to the high density of the shown points. ### Issues with the minimum of the energy integral as a criterion for optimal damping We can ask, for example, whether in an experiment, with known initial conditions, in which an MDOF system is excited to oscillate, a damping coefficient (63) would be the best choice if we want that the system settles down in equilibrium as soon as possible? Here, in three points, we explain why we think the answer to that question is negative: * From relation (62), we see that, due to the term \(\sin 2\theta_{i}\), the energy integral is sensitive to changes \(\theta_{i}\rightarrow-\theta_{i}\) and \(\theta_{i}\rightarrow\pi-\theta_{i}\), which correspond to changes of initial conditions \((q_{0i},\dot{q}_{0i})\rightarrow(q_{0i},-\dot{q}_{0i})\) and \((q_{0i},\dot{q}_{0i})\rightarrow(-q_{0i},\dot{q}_{0i})\). 
When we differentiate (62) to determine the \(\gamma\) for which the energy integral has a minimum, the term \(\sin 2\theta_{i}\) cancels, and as a result the coefficient (63) is not sensitive to this change in initial conditions. Such changes in the initial conditions lead to significantly different situations. For example, if \(q_{0i}>0\) and \(\dot{q}_{0i}>0\), the \(i\)-th mode in the critical and over-damped regime (i.e. for \(\gamma\geq\omega_{0i}\)) will never reach the equilibrium position, while for \(q_{0i}>0\) and \(\dot{q}_{0i}<0\), with the initial kinetic energy of the \(i\)-th mode greater than its initial potential energy, it can pass through the equilibrium position once, depending on the magnitude of the damping coefficient, and there is a smallest damping coefficient in the over-damped regime for which no crossing occurs and for which the solution converges to equilibrium faster than for any other damping coefficient [19]. Therefore, a damping coefficient considered optimal would have to be sensitive to this change in initial conditions. * Damping coefficient (63) has singularities for \(\cos\theta_{i}=0\ \forall i\), i.e. for initial conditions for which all initial energy is kinetic. For such initial conditions, the higher the damping coefficient, the larger and faster the dissipation; in other words, the higher the damping coefficient, the faster the energy integral decreases. Therefore, coefficient (63) diverges for this type of initial condition. This would mean that, for these initial conditions, it is optimal to take the damping coefficient as high as possible, but in principle this corresponds to a situation in which all modes are highly over-damped, i.e. all masses reach their maximum displacements in a very short time and afterwards begin to return to the equilibrium position almost infinitely slowly. Figuratively speaking, it is as if we immersed the system in concrete. This issue has recently been addressed in the context of free vibrations of SDOF [19] and was already noticed in [20]. Therefore, simply taking the highest possible damping coefficient, as suggested by relation (63) for this type of initial conditions, is not a good option. * The damping coefficient (63) is determined on the basis of the energy integral over the entire time, and therefore it does not take into account that in nature and in experiments these systems effectively return to the equilibrium state in some finite time. Because of the above points, in the next section we provide a new approach to determine the optimal damping of MDOF systems. ## 5 Optimal damping of an MDOF system: a new perspective From a theoretical perspective, systems with viscous damping asymptotically approach the equilibrium state and never reach it exactly. In nature and in experiments, these systems reach an equilibrium state which is not an exact zero-energy state, but rather a state in which the energy of the system has decreased to the level of the energy imparted to the system by the surrounding noise, or to the energy resolution of the measuring apparatus. Following this line of thought, we will define a system to be in equilibrium for times \(t>\tau\) such that \[\frac{E(\tau)}{E_{0}}=10^{-\delta}\,, \tag{76}\] where \(E(\tau)\) is the energy of the system at \(t=\tau\), \(E_{0}\) is the initial energy, and \(\delta>0\) is a dimensionless parameter that defines what fraction of the initial energy is left in the system.
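Criterion (76) is easy to evaluate numerically. As a minimal sketch (the parameter values, time horizon and grids below are arbitrary illustrative choices), one can integrate a single damped mode, record the first time its energy falls to \(10^{-\delta}E_{0}\), and scan over \(\gamma\); since the energy is non-increasing in time, this first crossing is well defined.

```python
import numpy as np
from scipy.integrate import solve_ivp

def settle_time(gamma, w0=1.0, q0=1.0, qdot0=0.0, delta=4, t_max=40.0):
    """First time at which E(t)/E(0) reaches 10**(-delta) for a single mode
    q'' + 2*gamma*q' + w0**2 * q = 0, with E = q'**2 + w0**2 * q**2 as in (28)."""
    E0 = qdot0**2 + (w0 * q0)**2
    target = 10.0**(-delta) * E0

    def rhs(t, y):
        q, qd = y
        return [qd, -2.0 * gamma * qd - w0**2 * q]

    def crossing(t, y):                      # event: energy hits the threshold
        return y[1]**2 + (w0 * y[0])**2 - target
    crossing.terminal, crossing.direction = True, -1

    sol = solve_ivp(rhs, (0.0, t_max), [q0, qdot0], events=crossing,
                    max_step=0.01, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

# Scan gamma for the all-potential-energy initial condition (theta = 0, delta = 4).
gammas = np.linspace(0.5, 1.5, 201)
taus = np.array([settle_time(g) for g in gammas])
print(gammas[taus.argmin()], taus.min())   # should be close to Table 1 (~0.84*w0, ~5.15/w0)
```

The same scan, applied mode by mode or to the total energy of the system, is what underlies the contour-line analysis used in the remainder of this section.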
This line of thought has recently been used to determine the optimal damping of SDOF systems [19], and here we extend it to MDOF systems. Therefore, in what follows, we will consider as optimal the damping coefficient for which the systems energy drops to some energy level of interest, e.g. to the energy resolution of the experiment, the fastest and we will denote it with \(\tilde{\gamma}\). ### Optimal damping of the \(i\)-th mode of a MDOF system with MPD Here we will consider the behavior of the energy of the \(i\)-th mode of the MDOF system with MPD and determine the optimal damping coefficient \(\tilde{\gamma}_{i}\) of the \(i\)-th mode with respect to criterion (76). For any MDOF system with \(N\geq 1\) degrees of freedom with MPD, each mode behaves as a SDOF system studied in section 2, with the damping coefficient \(\gamma\) and the undamped (natural) frequency \(\omega_{0i}\). Thus (see relation (31)), the ratio of the energy of the \(i\)-th mode, \(E_{i}(\gamma,t)\), and initial energy of the \(i\)-th mode, \(E_{0i}\), is given by \[\frac{E_{i}(\gamma,t)}{E_{0i}}=e^{-2\gamma t}\left(\cos^{2}(\omega_{i}t)+\gamma \cos 2\theta_{i}\frac{\sin(2\omega_{i}t)}{\omega_{i}}+\left(\omega_{0i}^{2}+ \gamma^{2}+2\omega_{0i}\gamma\sin 2\theta_{i}\right)\frac{\sin^{2}(\omega_{i}t)}{ \omega_{i}^{2}}\right) \tag{77}\] for the under-damped (\(\gamma<\omega_{oi}\)) and over-damped (\(\gamma>\omega_{0i}\)) regime. We will repeat here briefly for clarity, \(\omega_{i}=\sqrt{\omega_{0i}^{2}-\gamma^{2}}\) is the damped angular frequency and \(\theta_{i}\) is the polar angle which determines the initial conditions \(q_{0i}\) and \(\dot{q}_{0i}\) of the \(i\)-th mode and the distribution of the initial energy within the mode, i.e. initial potential and initial kinetic energy of the \(i\)-th mode are \(E_{0Pi}=E_{0i}\cos^{2}\theta_{i}\) and \(E_{0Ki}=E_{0i}\sin^{2}\theta_{i}\) respectively. Energy to initial energy ratio for the \(i\)-th mode in the critically damped regime (\(\gamma=\omega_{0i}\)) is simply obtained by taking \(\gamma\rightarrow\omega_{0i}\) limit of the relation (77), and we obtain \[\frac{E_{i}(\gamma=\omega_{0i},t)}{E_{0i}}=e^{-2\omega_{0i}t}\left(1+2\omega_ {0i}(\cos 2\theta_{i})t+2\omega_{0i}^{2}\left(1+\sin 2\theta_{i}\right)t^{2} \right)\,. \tag{78}\] In relations (77) and (78), we explicitly show that the energy depends on the damping coefficient and time, because in what follows we will plot these quantities as functions of these two variables for fixed initial conditions, i.e. fixed \(\theta_{i}\). We will investigate the behavior for several types of initial conditions, which of course will not cover all possible types of initial conditions, but will give us a sufficiently clear picture of the determination and behavior of the optimal damping with respect to the initial conditions and the equilibrium state defined with condition (76). #### 5.1.1 Initial energy of the \(i\)-th mode comprised only of potential energy In Fig. 12 we show the base 10 logarithm of the ratio (77), i.e. \(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=0\), which corresponds to the initial energy of the \(i\)-th mode comprised only of potential energy. Four black contour lines denote points with \(E_{i}(\gamma,t)/E_{0i}=\{10^{-3},10^{-4},10^{-5},10^{-6}\}\) respectively, as indicated by the numbers placed to the left of each contour line. Each contour line has a unique point closest to the \(\gamma\) axis, i.e. 
corresponding to the damping coefficient \(\tilde{\gamma}_{i}\) for which that energy level is reached the fastest. As an example, we draw arrow in Fig. 12 that points to the coordinates \((\gamma,t)=(0.840\omega_{0i},5.15\omega_{0i}^{-1})\), i.e. to the tip of the contour line with points corresponding to \(E_{i}(\gamma,t)=10^{-4}E_{0i}\). Thus, for the initial condition \(\theta_{i}=0\), \(\tilde{\gamma}_{i}=0.840\omega_{0i}\) is the optimal damping coefficient for the \(i\)-th mode to reach this energy level the fastest, and it does so at the instant \(\tau_{i}=5.15\omega_{0i}^{-1}\). In Table 1 we show results for other energy levels corresponding to contour lines shown in Fig. 12. Here, and in the rest of the paper, we have rounded the results for the damping coefficient to three decimal places, and for the time to two decimal places. Consider now, for example, a thought experiment in which we excite a MDOF system so that it vibrates only in the first mode and that all initial energy was potential, i.e. \(E_{01}=E_{0}\) and \(\theta_{1}=0\). Furthermore, \begin{table} \begin{tabular}{|c|c|c|} \hline \(E_{i}(\gamma,t)/E_{0i}\) & \(\tilde{\gamma}_{i}\left[\omega_{0i}\right]\) & \(\tau_{i}[\omega_{0i}^{-1}]\) \\ \hline \(10^{-3}\) & 0.769 & 4.18 \\ \hline \(10^{-4}\) & 0.840 & 5.15 \\ \hline \(10^{-5}\) & 0.885 & 6.16 \\ \hline \(10^{-6}\) & 0.915 & 7.20 \\ \hline \end{tabular} \end{table} Table 1: Optimal damping coefficient \(\tilde{\gamma}_{i}\) for which the energy of the \(i\)-th mode drops to the level \(10^{-\delta}E_{0i}\) the fastest, with the initial condition \(\theta_{i}=0\). suppose that the system has effectively returned to equilibrium when its energy drops below \(10^{-6}E_{0}\), due to the resolution of the measuring apparatus. It is clear form the Table 1 that \(\tilde{\gamma}_{1}=0.915\omega_{01}\) would be optimal in such a scenario. In the same scenario, optimal damping coefficient given by the minimization of the energy integral, i.e. (63), would be \(\gamma_{\rm opt}=\sqrt{2}\omega_{01}/2=0.707\omega_{01}\), thus, a very bad choice in the sense that this damping coefficient would not be optimal even in an experiment with a significantly poorer energy resolution (see Table 1). This simple example illustrates that, from a practical point of view, one has to take into account both the initial conditions and the resolution of the measuring apparatus in order to determine the optimal damping coefficient. #### 5.1.2 Initial energy of the \(i\)-th mode comprised only of kinetic energy In Fig. 13(a) and (b) we show the base 10 logarithm of the ratio (77), i.e. \(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=\pi/2\), which corresponds to the initial energy of the \(i\)-th mode comprised only of kinetic energy. In Fig. 13(b) we show results for larger data span than in Fig. 13(a), and only contour line for points corresponding to \(E_{i}(\gamma,t)=10^{-3}E_{0i}\). The left arrow in Fig. 13(b) indicates the same coordinates as the arrow in Fig. 13(a), and the right arrow in Fig. 13(b) points to the coordinates \((\gamma,t)=(13.316\omega_{0i},4.66\omega_{0i}^{-1})\) with \(E_{i}(\gamma,t)=10^{-3}E_{0i}\). 
Thus, for \(\gamma>13.316\omega_{0i}\) the system comes sooner to the energy level \(10^{-3}E_{0i}\) than for \(\gamma=0.722\omega_{0i}\), but these highly over-damped damping coefficients would correspond to restricting the system to infinitesimal displacements from equilibrium, after which the system returns to the equilibrium state practically infinitely slowly [19]. Thus, for this initial condition we take the damping coefficient in the under-damped regime, i.e. \(\tilde{\gamma}_{i}=0.722\omega_{0i}\), as optimal for reaching the level \(E_{i}(\gamma,t)=10^{-3}E_{0i}\) the fastest. For all energy levels the behaviour is qualitatively the same, and the results are given in Table 2. Consider now, for example, a thought experiment in which we excite a MDOF system so that it vibrates only in the first mode and that all initial energy was kinetic, i.e. \(E_{01}=E_{0}\) and \(\theta_{1}=\pi/2\). Furthermore, suppose that the system has effectively returned to equilibrium when its energy drops below \(10^{-6}E_{0}\), due to the resolution of the measuring apparatus. It is clear form the Table 2 that \(\tilde{\gamma}_{1}=0.892\omega_{01}\) would be optimal in such a scenario. In the same scenario, optimal damping coefficient given by the minimization of the energy integral, i.e. (63), would be \(\gamma_{\rm opt}=+\infty\). Figure 12: The base 10 logarithm of the ratio (77), i.e. \(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=0\). For this initial condition, initial energy of the \(i\)-th mode is comprised only of potential energy. Four black contour lines denote points with \(E_{i}(\gamma,t)/E_{0i}=\{10^{-3},10^{-4},10^{-5},10^{-6}\}\) respectively, as indicated by the numbers placed to the left of each contour line. As an example of determining the optimal damping for which the system reaches the desired energy level the fastest, i.e. with respect to the condition (76), we draw the arrow that points to the coordinates \((\gamma,t)=(0.840\omega_{0i},5.15\omega_{0i}^{-1})\) for which the \(i\)-th mode reaches the level \(E_{i}(\gamma,t)/E_{0i}=10^{-4}\) the fastest. Thus, \(\tilde{\gamma}_{i}=0.840\omega_{0i}\) is the optimal damping coefficient to reach this energy level the fastest. Optimal values for other energy levels, denoted with contour lines, are given in Table 1. Here we note that if in such an experiment we can set the damping coefficient to be in the over-damped regime in the first part of the motion, i.e. when the system is moving from the equilibrium position to the maximum displacement, and in the under-damped regime in the second part of the motion, i.e. when the system moves from the position of maximum displacement back towards the equilibrium position, then the fastest way to achieve equilibrium would be to take the largest experimentally available over-damped coefficient in the first part of the motion, and the under-damped coefficient optimised like in 5.1.1 in the second part of the motion, with the fact that we have to carry out the optimization with respect to the energy left in the system at the moment when the system reached the maximum displacement and with respect to the energy resolution of the experiment. #### 5.1.3 Initial energy of the \(i\)-th mode comprised of potential and kinetic energy In Fig. 14(a) we show the base 10 logarithm of the ratio (77), i.e. 
\(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=\pi/3\), which corresponds to the initial energy of the \(i\)-th mode comprised of kinetic energy \(E_{0Ki}=3E_{0i}/4\) and potential energy \(E_{0Pi}=E_{0i}/4\), with both initial normal coordinate and velocity positive, i.e. with \(q_{0i}>0\) and \(\dot{q}_{0i}>0\). The results for optimal damping are obtained by the same method as in 5.1.1 and are given in Table 3 for data shown in Fig. 14(a), and in Table 4 for data shown in Fig. 14(b). We see that the energy dissipation strongly depends on the relative sign between \(q_{0i}\) and \(\dot{q}_{0i}\). It was recently shown, \begin{table} \begin{tabular}{|c|c|c|} \hline \(E_{i}(\gamma,t)/E_{0i}\) & \(\tilde{\gamma}_{i}\left[\omega_{0i}\right]\) & \(\tau_{i}[\omega_{0i}^{-1}]\) \\ \hline \(10^{-3}\) & 0.722 & 4.66 \\ \hline \(10^{-4}\) & 0.794 & 5.50 \\ \hline \(10^{-5}\) & 0.852 & 6.42 \\ \hline \(10^{-6}\) & 0.892 & 7.40 \\ \hline \end{tabular} \end{table} Table 2: Optimal damping coefficient \(\tilde{\gamma}_{i}\) for which the energy of the \(i\)-th mode drops to the level \(10^{-\delta}E_{0i}\) the fastest, with the initial condition \(\theta_{i}=\pi/2\). Figure 13: The base 10 logarithm of the ratio (77), i.e. \(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=\pi/2\). For this initial condition, initial energy of the \(i\)-th mode is comprised only of kinetic energy. (a) Four black contour lines denote points with \(E_{i}(\gamma,t)/E_{0i}=\{10^{-3},10^{-4},10^{-5},10^{-6}\}\) respectively, and the arrow points to the coordinates \((\gamma,t)=(0.722\omega_{0i},4.66\omega_{0i}^{-1})\), with \(E_{i}(\gamma,t)/E_{0i}=10^{-3}\), for which this level of energy is reached in shortest time for the shown data span. (b) Contour line for points with \(E_{i}(\gamma,t)=10^{-3}E_{0i}\) is shown for larger data span, left arrow points to the coordinates \((\gamma,t)=(0.722\omega_{0i},4.66\omega_{0i}^{-1})\), and the right arrow to the coordinates \((\gamma,t)=(13.316\omega_{0i},4.66\omega_{0i}^{-1})\), both with \(E_{i}(\gamma,t)/E_{0i}=10^{-3}\). Thus, for \(\gamma>13.316\omega_{0i}\) energy level \(10^{-3}E_{0i}\) is reached faster than for \(\gamma=0.7223\omega_{0i}\). See text for details. for free vibrations of SDOF, that for an initial condition with initial kinetic energy greater than initial potential energy and opposite signs between \(x_{0}\) and \(v_{0}\), an optimal damping coefficient can be found in the over-damped regime [19], thus, the same is true when we consider any mode of a MDOF system with MPD. Consider now, for example, a thought experiment in which we excite a MDOF system so that it vibrates only in the first mode and that 75% of initial energy was kinetic and 25% of initial energy was potential, and with \(q_{01}>0\) and \(\dot{q}_{01}>0\), i.e. \(E_{01}=E_{0}\) and \(\theta_{1}=\pi/3\). Furthermore, suppose that the system has effectively returned to equilibrium when its energy drops below \(10^{-6}E_{0}\), due to the resolution of the measuring apparatus. It is clear form the Table 3 that \(\tilde{\gamma}_{1}=0.908\omega_{01}\) would be optimal in such a scenario. In the same scenario, but with \(q_{01}>0\) and \(\dot{q}_{01}<0\), i.e. for \(\theta_{1}=-\pi/3\), we see from Table 4 that \(\tilde{\gamma}_{1}=1.145\omega_{01}\) would be optimal. Optimal damping coefficient given by the minimization of the energy integral, i.e. 
(63), is insensitive to the change of the sign of \(\dot{q}_{01}\), and it would be \(\gamma_{\rm opt}=\sqrt{2}\omega_{01}=1.414\omega_{01}\) in both cases. We note here that for initial conditions of the \(i\)-th mode with initial kinetic energy much greater than initial potential energy, i.e. \(E_{0Ki}>>E_{0Pi}\), and with opposite signs of initial displacement and velocity, i.e. \(\mathrm{sgn}(q_{0i})\neq\mathrm{sgn}(\dot{q}_{0i})\), the optimal damping coefficient is going to be deep in the over-damped regime and the dissipation of the initial energy will happen in a very short time. If, for any reason, this is not desirable in some particular application, one can always find a damping coefficient in the under-damped regime, with that same initial condition, which can serve as an alternative. As an example of such a situation, in Fig. 15 we show the base 10 logarithm of the ratio (77), i.e. \(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=-9\pi/20\), which corresponds to the initial energy of the \(i\)-th mode comprised of kinetic energy \(E_{0Ki}\approx 0.98E_{0i}\) and potential energy \(E_{0Pi}\approx 0.02E_{0i}\), with \(q_{0i}>0\) and \(\dot{q}_{0i}<0\). In Fig. 15 we see that the \(i\)-th mode will reach the energy level \(10^{-6}E_{0i}\) the fastest for \(\gamma=3.222\omega_{0i}\), and in case such a damping coefficient is, e.g., difficult to achieve experimentally, another choice for the optimal damping coefficient can be \(\gamma=0.883\omega_{0i}\). ### Optimal damping of a MDOF system with MPD If all modes of a MDOF system with \(N\) degrees of freedom are excited, the ratio of the energy of the system, \(E(\gamma,t)\), and the initial energy of the system, \(E_{0}\), is given by \[\frac{E(\gamma,t)}{E_{0}}=\sum_{i=1}^{N}\frac{E_{0i}}{E_{0}}e^{-2\gamma t}\left(\cos^{2}(\omega_{i}t)+\gamma\cos 2\theta_{i}\frac{\sin(2\omega_{i}t)}{\omega_{i}}+\left(\omega_{0i}^{2}+\gamma^{2}+2\omega_{0i}\gamma\sin 2\theta_{i}\right)\frac{\sin^{2}(\omega_{i}t)}{\omega_{i}^{2}}\right)\,, \tag{79}\] where the set of all initial energies of the modes, i.e. \(\{E_{0i}\}\), and the set of all polar angles, i.e. \(\{\theta_{i}\}\), determine the initial condition of the whole system. Since for MPD the damping of the system as a whole is determined by only one damping coefficient \(\gamma\), we can calculate the base 10 logarithm of the ratio (79), using common units for \(\gamma\), \(t\) and \(\omega_{0i}\) across all modes, and from these data determine the optimal damping coefficient \(\tilde{\gamma}\) for which the system comes to equilibrium, in the sense of condition (76), the fastest, in the same way as in subsubsections 5.1.1-5.1.3 where we showed how to determine the optimal damping of individual modes. One practical choice for the units is \(\omega_{01}\) for \(\gamma\) and for \(\omega_{0i}\)\(\forall i\), and \(\omega_{01}^{-1}\) for \(t\). This choice gives the clearest view of the relationship between the first mode and the optimal damping coefficient that we want to determine, in the sense that we can easily see whether the first mode is under-damped, over-damped or critically damped in relation to it, which is important since the first mode is often the dominant mode.
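Extending the single-mode sketch above, the ratio (79) itself can be evaluated on a \((\gamma,t)\) grid and the tip of the desired contour read off numerically. The snippet below is again only a sketch (the grids, time horizon and complex-root handling are arbitrary implementation choices, and the \(\gamma\) grid deliberately avoids \(\gamma=\omega_{0i}\), where (79) must be replaced by its critically damped limit); it is set up for the initial condition of subsubsection 5.2.1 below.

```python
import numpy as np

def mode_ratio(gamma, t, w0i, theta_i):
    """E_i(gamma,t)/E_0i from (77); the complex square root covers both the
    under-damped and the over-damped regime (the result is real either way)."""
    w = np.sqrt(complex(w0i**2 - gamma**2))
    val = (np.cos(w * t)**2
           + gamma * np.cos(2 * theta_i) * np.sin(2 * w * t) / w
           + (w0i**2 + gamma**2 + 2 * w0i * gamma * np.sin(2 * theta_i))
             * np.sin(w * t)**2 / w**2)
    return (np.exp(-2 * gamma * t) * val).real

def system_ratio(gamma, t, weights, w0s, thetas):
    """E(gamma,t)/E_0 from (79): energy-fraction-weighted sum over the modes."""
    return sum(wt * mode_ratio(gamma, t, w0, th)
               for wt, w0, th in zip(weights, w0s, thetas))

# 2-DOF case of 5.2.1: E_01 = E_02 = E_0/2, theta_1 = theta_2 = 0,
# w01 = w0 and w02 = sqrt(3)*w0; gamma in w0 units, time in 1/w0 units.
weights, w0s, thetas = (0.5, 0.5), (1.0, np.sqrt(3.0)), (0.0, 0.0)
delta = 4
ts = np.linspace(1e-3, 20.0, 4000)
gammas = np.linspace(0.505, 1.495, 100)      # grid avoids gamma == w0i exactly

best_gamma, best_tau = None, np.inf
for g in gammas:
    ratio = np.array([system_ratio(g, t, weights, w0s, thetas) for t in ts])
    hit = np.nonzero(ratio <= 10.0**(-delta))[0]
    if hit.size and ts[hit[0]] < best_tau:
        best_gamma, best_tau = g, ts[hit[0]]
print(best_gamma, best_tau)   # should land near the values read off Fig. 16 (~0.86, ~5.4)
```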
If we apply this to the 2-DOF system studied in 3.1, we obtain \[\frac{E(\gamma,t)}{E_{0}}=\sum_{i=1}^{2}\frac{E_{0i}}{E_{0}}e^{-2\gamma t}\left(\cos^{2}(\omega_{i}t)+\gamma\cos 2\theta_{i}\frac{\sin(2\omega_{i}t)}{\omega_{i}}+\left(\omega_{0i}^{2}+\gamma^{2}+2\omega_{0i}\gamma\sin 2\theta_{i}\right)\frac{\sin^{2}(\omega_{i}t)}{\omega_{i}^{2}}\right)\,, \tag{80}\] where \(\omega_{01}=\omega_{0}\), \(\omega_{02}=\sqrt{3}\omega_{0}\), \(\omega_{1}=\sqrt{\omega_{0}^{2}-\gamma^{2}}\), \(\omega_{2}=\sqrt{3\omega_{0}^{2}-\gamma^{2}}\), and we take the damping coefficient to be in \(\omega_{0}\) units, while the time is in \(\omega_{0}^{-1}\) units. We are now in a position to determine the optimal damping of this 2-DOF system for different initial conditions. Again, we will not investigate all possible types of initial conditions, but two qualitatively different ones: one with initial energy comprised only of potential energy, and the other with initial energy comprised only of kinetic energy. These two examples will give us a picture of the procedure for determining the optimal damping coefficient \(\tilde{\gamma}\) for this 2-DOF system. The same procedure for determining the optimal damping can in principle be carried out for any MDOF system with MPD, with any initial condition. Figure 15: The base 10 logarithm of the ratio (77), i.e. \(\log\left(E_{i}(\gamma,t)/E_{0i}\right)\), for initial condition \(\theta_{i}=-9\pi/20\). The black contour line denotes the points with \(E_{i}(\gamma,t)=10^{-6}E_{0i}\). The left arrow points to the coordinates \((\gamma,t)=(0.883\omega_{0i},7.30\omega_{0i}^{-1})\) for which the level \(10^{-6}E_{0i}\) is reached the fastest in the under-damped regime, and the right arrow points to the coordinates \((\gamma,t)=(3.222\omega_{0i},0.87\omega_{0i}^{-1})\) for which the same level is reached the fastest in the over-damped regime. #### 5.2.1 Optimal damping of the 2-DOF system with initial energy comprised only of potential energy Here we choose an initial condition with \(E_{01}=E_{02}=E_{0}/2\) and \(\theta_{1}=\theta_{2}=0\), i.e. with the initial potential energy distributed equally between the two modes and zero initial kinetic energy. In Fig. 16 we show the base 10 logarithm of the ratio (80), i.e. \(\log\left(E(\gamma,t)/E_{0}\right)\), for the chosen initial condition. In Table 5 we show results for other energy levels corresponding to contour lines shown in Fig. 16. For this initial condition, the optimal damping coefficient given by the minimization of the energy integral, i.e. (63), is \(\gamma_{\rm opt}=\sqrt{3/4}\omega_{0}=0.866\omega_{0}\). Figure 16: The base 10 logarithm of the ratio (80), i.e. \(\log\left(E(\gamma,t)/E_{0}\right)\), for initial condition \(E_{01}=E_{02}=E_{0}/2\) and \(\theta_{1}=\theta_{2}=0\). For this initial condition, the initial energy of the 2-DOF system is comprised only of potential energy distributed equally between the modes. Four black contour lines denote points with \(E(\gamma,t)/E_{0}=\{10^{-3},10^{-4},10^{-5},10^{-6}\}\) respectively, as indicated by the numbers placed to the left of each contour line. As an example of determining the optimal damping for which the system reaches the desired energy level the fastest, i.e. with respect to the condition (76), we draw the arrow that points to the coordinates \((\gamma,t)=(0.859\omega_{0},5.37\omega_{0}^{-1})\) for which the energy of the system reaches the level \(E(\gamma,t)/E_{0}=10^{-4}\) the fastest.
Thus, \(\tilde{\gamma}=0.859\omega_{0}\) is the optimal damping coefficient to reach this energy level the fastest. #### 5.2.2 Optimal damping of the 2-DOF system with initial energy comprised only of kinetic energy Here we choose an initial condition with \(E_{01}=E_{02}=E_{0}/2\) and \(\theta_{1}=\theta_{2}=\pi/2\), i.e. with the initial kinetic energy distributed equally between the two modes and zero initial potential energy. In Fig. 17(a) and (b) we show the base 10 logarithm of the ratio (80), i.e. \(\log\left(E(\gamma,t)/E_{0}\right)\), for the chosen initial condition. In Table 6 we show results for other energy levels corresponding to contour lines shown in Fig. 17(a). For this initial condition, the optimal damping coefficient given by the minimization of the energy integral, i.e. (63), is \(\gamma_{\rm opt}=+\infty\). \begin{table} \begin{tabular}{|c|c|c|} \hline \(E(\gamma,t)/E_{0}\) & \(\tilde{\gamma}\left[\omega_{0}\right]\) & \(\tau[\omega_{0}^{-1}]\) \\ \hline \(10^{-3}\) & 0.783 & 4.60 \\ \hline \(10^{-4}\) & 0.838 & 5.72 \\ \hline \(10^{-5}\) & 0.861 & 6.47 \\ \hline \(10^{-6}\) & 0.909 & 7.78 \\ \hline \end{tabular} \end{table} Table 6: Optimal damping coefficient \(\tilde{\gamma}\) for which the energy of the system drops to the level \(10^{-\delta}E_{0}\) the fastest, with the initial condition \(E_{01}=E_{02}=E_{0}/2\) and \(\theta_{1}=\theta_{2}=\pi/2\). ## 6 Conclusion The main message of this paper is that the dissipation of the initial energy in vibrating systems strongly depends on the initial conditions with which the dynamics of the system started, and ideally it would be optimal to always adjust the damping to the initial conditions. We show that having some information about the type of initial conditions can also help in devising the optimal damping. Although the paper is devoted to the case of mass proportional damping, the same phenomenon also appears for other types of damping. The new method we propose for optimizing the damping is general, and we expect that it will also be applicable to the types of vibrating systems we did not study in this paper. ## 7 Acknowledgments We are grateful to Bojan Loncar for making the schematic figures of the 2-DOF and MDOF systems, i.e. Figs. 6 and 10, based on our sketches. This work was supported by the QuantiXLie Center of Excellence, a project co-financed by the Croatian Government and European Union through the European Regional Development Fund, the Competitiveness and Cohesion Operational Programme (Grant No. KK.01.1.1.01.0004). The authors have no conflicts to disclose.
2310.10399
Towards Fair and Calibrated Models
Recent literature has seen a significant focus on building machine learning models with specific properties such as fairness, i.e., being non-biased with respect to a given set of attributes, calibration i.e., model confidence being aligned with its predictive accuracy, and explainability, i.e., ability to be understandable to humans. While there has been work focusing on each of these aspects individually, researchers have shied away from simultaneously addressing more than one of these dimensions. In this work, we address the problem of building models which are both fair and calibrated. We work with a specific definition of fairness, which closely matches [Biswas et. al. 2019], and has the nice property that Bayes optimal classifier has the maximum possible fairness under our definition. We show that an existing negative result towards achieving a fair and calibrated model [Kleinberg et. al. 2017] does not hold for our definition of fairness. Further, we show that ensuring group-wise calibration with respect to the sensitive attributes automatically results in a fair model under our definition. Using this result, we provide a first cut approach for achieving fair and calibrated models, via a simple post-processing technique based on temperature scaling. We then propose modifications of existing calibration losses to perform group-wise calibration, as a way of achieving fair and calibrated models in a variety of settings. Finally, we perform extensive experimentation of these techniques on a diverse benchmark of datasets, and present insights on the pareto-optimality of the resulting solutions.
Anand Brahmbhatt, Vipul Rathore, Mausam, Parag Singla
2023-10-16T13:41:09Z
http://arxiv.org/abs/2310.10399v1
# Towards Fair and Calibrated Models ###### Abstract Recent literature has seen a significant focus on building machine learning models with specific properties such as fairness, i.e., being non-biased with respect to a given set of attributes, calibration i.e., model confidence being aligned with its predictive accuracy, and explainability, i.e., ability to be understandable to humans. While there has been work focusing on each of these aspects individually, researchers have shied away from simultaneously addressing more than one of these dimensions. In this work, we address the problem of building models which are both fair and calibrated. We work with a specific definition of fairness, which closely matches [Biswas et. al. 2019], and has the nice property that Bayes optimal classifier has the maximum possible fairness under our definition. We show that an existing negative result towards achieving a fair and calibrated model [Kleinberg et. al. 2017] does not hold for our definition of fairness. Further, we show that ensuring group-wise calibration with respect to the sensitive attributes automatically results in a fair model under our definition. Using this result, we provide a first cut approach for achieving fair and calibrated models, via a simple post-processing technique based on temperature scaling. We then propose modifications of existing calibration losses to perform group-wise calibration, as a way of achieving fair and calibrated models in a variety of settings. Finally, we perform extensive experimentation of these techniques on a diverse benchmark of datasets, and present insights on the pareto-optimality of the resulting solutions. + Footnote †: [1] The majority of the work was completed during association with IIT Delhi. ## 1 Introduction Neural models have been shown to provide impressive performance for a large class of applications, including those in computer vision, natural language processing (NLP), speech and reinforcement learning [6, 23]. It has been argued that for an end-to-end deployment in a real world setting, a machine learning model should have some desirable properties such as, interpretability, i.e., being understandable in their predictions, being fair, i.e., not having any bias with respect to the values of a given (protected) attribute value, and being calibrated, i.e., not making predictions which are over (under) confident. It has also been shown that by themselves, neural models, while being highly accurate, often lack these properties. As a result, several researchers have focused on building models which are interpretable [19], fair [16] and calibrated [7]. But to the best of our knowledge, there is very limited work on addressing more than one of these properties simultaneously while still being accurate. Motivated by this observation, our focus in this work is on designing neural models which are both fair and calibrated. Multiple definitions of fairness have been proposed in the literature [16]. For our current exposition, we work with a variation of the definition proposed by [2]. We specifically choose this definition since it allows for building models which are both accurate and fair, as long as the predicted (aggregate) probability of the target variable does not deviate (significantly) from that observed in the data, conditioned on the sensitive attribute. Satisfaction of this definition implies that the model is not amplifying the unfairness already present in the real-world data. 
For calibration, we use the standard definition from the literature [24]. As the main theoretical result of our paper, we show that a negative result shown by [10] regarding joint optimization of fairness and calibration does not hold in our setting, since our definition of fairness depends on the conditional data distribution, as opposed to others such as equalized odds [16], which strive for an "absolute" notion of fairness independent of what is observed in the data. We show that calibrating a model for each value of the sensitive attribute in fact implies our definition of fairness. Following this, we provide post-processing and train-time techniques to trade off between fairness, calibration and accuracy. As a post-processing technique, we propose a variant of temperature scaling [7] which we show can also achieve fairness under our definition. Calibration achieved by temperature scaling on the learned model does not hurt accuracy. Next, we start with the calibration losses proposed in the literature [18, 17, 13, 8, 11], and formulate their extensions so that the model can be calibrated given each value of the protected attribute. As a _hybrid_ approach, we first train a model using the fairness-calibration loss. We then apply our post-processing technique to improve calibration and fairness without disturbing the accuracy of this learned model. Finally, we present a detailed analysis of all these techniques on a diverse benchmark of datasets. We also present some insights into finding Pareto-optimal points and quantifying the tradeoff between fairness, accuracy and calibration. In Section 2, we provide an overview of prior work in fairness and calibration. Following that, in section 3 we discuss the definitions of calibration and unfairness used in this paper. We also provide justification for the use of these particular definitions and prove some goodness properties. Next, in section 4 we talk about some techniques to jointly optimise for performance, fairness and calibration. Finally, in section 5 we provide experimental results of these techniques on a wide range of real world datasets. ## 2 Related Work **Definitions of Unfairness**: Over the years, many definitions for measuring unfairness in ML have been proposed. [16] summarises the definitions of unfairness found in the literature. Furthermore, [10] proves that more than two of these definitions cannot be simultaneously satisfied except in some fixed cases. None of these works specifically addresses _amplification_ of the bias already present in the data. [2] defines two properties that a good definition must satisfy: they require the fairness concept to be agnostic to prior probability shifts within groups and require it to hold true for a perfect classifier. They define the _Proportional Equality_ definition, which addresses amplification of the bias. [26] also studies Bias Amplification and provides a way to measure it in the multi-class domain. **Unfairness**: [9] discusses the reasons for unfairness in ML models. It mentions underfitting as one of the reasons and explains that underfitted models make more predictions based on the prior distribution of the data. It lists other reasons for unfairness and goes on to describe a regularization approach to mitigate it. [5] further reinforces that underfitting causes unfairness in ML models and provides empirical results supporting this.
[22] talks about extreme Simplicity Bias in Neural Networks, where the neural networks make predictions on simple features even if they are less predictive. [12] talks about feature selection techniques to introduce less unfairness into neural models. **Calibration methods for Neural Networks**: [7] summarises different methods being used for calibration of neural networks. It talks about method of temperature scaling which is done at test time as being the state-of-the-art as it does not change the labels and hence affect the accuracy. [24] extends this definition to multiclass setting and provides ways to calibrate multiclass models. Calibration at training time is a little tricky as binning schemes introduce non-differentiability and hence adding a loss term is difficult. [18, 17, 13, 8, 11] design surrogate loss terms which can be optimized at train time to obtain better calibrated models. **Unfairness using Calibration**: [10] provides class-wise calibration as a definition of fairness. [20] proves that calibration is not compatible with equalized odds and goes on to relax equalized odds to make it compatible with calibration.[14] provides a kernel-based method to define calibration in a particular region locally. It also provides a post processing technique which they show improves group-wise calibration of the model. **Other related works**: [4] provides a neural framework to de-correlate every feature from the sensitive feature so that any model trained on the de-correlated data satisfies demographic parity. [3] argues that unfairness introduced by inadequate samples sizes or unmeasured predictive variables should be addressed through data collection. [1] introduces a Shapley value paradigm to attribute the model's unfairness to individual input features. [25] talks about how unlabeled data can be effectively used to obtain better fairness-accuracy trade-off. ## 3 Theoretical Framework ### Background and Definitions We address the problem of supervised multi-class classification with two sensitive groups in this paper. The feature vector \(\mathbf{X}\in\mathcal{X}\), the label \(Y\in\{1,\ldots,K\}\) and the sensitive group \(A\in\{0,1\}\) are random variables where \((\mathbf{X},Y,A)\) follows the joint distribution \(\mathcal{D}\). Our training and test datasets are sampled i.i.d from \(\mathcal{D}\). We train a model \(h:\mathcal{X}\rightarrow\mathbb{R}^{K}\) on the training dataset where \(\mathsf{softmax}(h(\mathbf{X}))=\hat{\mathbf{P}}\) where \(\hat{P}_{k}\) is a random variable representing the probability of predicting class \(k\). The prediction \(\hat{Y}\) given \(\mathbf{X}\) is a random variable distributed as \(\mathsf{Multinoulli}(\mathsf{softmax}(h(\mathbf{X})))\). **Calibration** : A model is said the be calibrated if \(\hat{\mathbf{P}}\) represents the true probability distribution over labels. It is defined as follows in [24]. **Definition 3.1** (Perfectly Calibrated Models): _Model \(h\) is perfectly calibrated on \((\mathbf{X},Y,A)\sim\mathcal{D}\) if \(\forall\,k\in\{1,\ldots,K\}\)_ \[\Pr[Y=k|\hat{\mathbf{P}}=\mathbf{p}]=p_{k}\qquad\forall\,\mathbf{p}\in\Delta^ {K} \tag{1}\] _Probability is taken over the joint distribution of \((Y,\hat{\mathbf{P}})\). \(\Delta^{K}\) represents the \(K\)-simplex._ This is a strong notion of calibration. It implies the following weaker notion of calibration [24]. Both definitions are equivalent for the binary classification problem. 
**Definition 3.2** (Weakly calibrated models): _Model \(h\) is weakly calibrated on \((\mathbf{X},Y,A)\sim\mathcal{D}\) if \(\forall\,p\in[0,1]\)_ \[\Pr[Y=\mathsf{argmax}_{k}\hat{P}_{k}|\max_{k}\hat{P}_{k}=p]=p \tag{2}\] _Probability is taken over the joint distribution of \((Y,\hat{\mathbf{P}})\)._ In order to measure miscalibration, the notion of expected calibration error is defined [7]. **Definition 3.3** (Expected Calibration Error (ECE)): _The expected calibration error of \(h\) on \((\mathbf{X},Y,A)\sim\mathcal{D}\) is_ \[\mathbb{E}_{\hat{\mathbf{P}}}\left[\left|\Pr[Y=\mathsf{argmax}_{k}\hat{P}_{k} |\max_{k}\hat{P}_{k}=p]-p\right|\right] \tag{3}\] Since we only have finite sample access, and \(\hat{P}\) is a continuous random variable, the expectation in (3) cannot be computed. Hence, we approximate it by partitioning the unit interval into \(M\) equi-width bins. Define \(B_{m}\) as the set of indices of samples whose confidence score falls in \(I_{m}=\left(\frac{m-1}{M},\frac{m}{M}\right]\). Given a dataset \(\{\mathbf{x}^{(i)},\mathbf{y}^{(i)},a^{(i)}\}_{i=1}^{n}\sim\mathcal{D}^{n}\) and a model \(h\) such that \(\mathsf{softmax}(h(\mathbf{x}^{(i)}))=\hat{\mathbf{p}}^{(i)}\), define \(\hat{\mathbf{y}}^{(i)}=\mathsf{argmax}_{k}\hat{p}_{k}^{(i)}\) and \(\hat{p}^{(i)}=\mathsf{max}_{k}\,\hat{p}^{(i)}\). We define the average accuracy of \(B_{m}\) as \[\mathsf{acc}(B_{m})=\tfrac{1}{|B_{m}|}\sum_{i\in B_{m}}\mathds{1}\{y^{(i)}= \hat{\mathbf{y}}^{(i)}\} \tag{4}\] We also define the average confidence of \(B_{m}\) as \[\mathsf{conf}(B_{m})=\frac{1}{|B_{m}|}\sum_{i\in B_{m}}\hat{p}^{(i)} \tag{5}\] Finally, the empirical estimator of ECE is defined as \[\mathsf{ECE}=\sum_{m=1}^{M}\frac{|B_{m}|}{n}\left|\mathsf{acc}(B_{m})-\mathsf{ conf}(B_{m})\right| \tag{6}\] **Fairness**: There are a lot of competing definitions of fairness [16]. In this paper we focus on the amplification of the unfairness already present in \(\mathcal{D}\) which is introduced by the model. The unfairness already present in \(\mathcal{D}\) can be measured by \(\Pr[Y=k|A=1]\) and \(\Pr[Y=k|A=0]\) for all \(k\in\{1,\ldots,K\}\). For example, for the classification task in Table 1, the distribution is unfairly favours males over females while predicting doctors. This is unfairness already present in the distribution. Model 1 in Table 1 predicts the same fraction of males and females as doctors as the distribution. Thus, we say it does not amplify unfairness present in the distribution. We define the following notion of fairness. **Definition 3.4** (Perfectly Fair Models): _Model \(h\) is perfectly fair on \((\mathbf{X},Y,A)\sim\mathcal{D}\) if \(\forall\,k\in\{1,\ldots,K\}\)_ \[\Pr[\hat{Y}=k|A=1]=\Pr[Y=k|A=1]\quad\text{and}\quad\Pr[\hat{Y}=k|A=0]=\Pr[Y=k|A =0] \tag{7}\] The following lemma follows since any model \(h\) such that \(\hat{Y}\mid X,A\stackrel{{ d}}{{=}}Y\mid X,A\) satisfies the definition 3.4. This gives us a very desirable property for the definition of fairness. **Lemma 3.5**: _A perfect classifier is a perfectly fair model._ To measure unfairness of a model, we use a variation of the _Proportional Equality_ definition proposed by [2]. In probabilistic terms, the definition is as follows. 
**Definition 3.6** (Proportional Equality): _The proportional equality of a model \(h\) on \((\mathbf{X},Y,A)\sim\mathcal{D}\) is_ \[\mathsf{PE}=\max_{k\in\{1,\ldots,K\}}\left\{\left|\frac{\Pr[Y=k|A=1]}{\Pr[Y=k |A=0]}-\frac{\Pr[\hat{Y}=k|A=1]}{\Pr[\hat{Y}=k|A=0]}\right|\right\} \tag{8}\] For empirical evaluation, we approximate the first term in (8) using the true labels and group labels in the training dataset. We approximate the second term using the model predictions and group labels on the test dataset. In the example in Table 1, Model 1 is perfectly fair and it's \(\mathsf{PE}=0\). On the other hand, Model 2 has a \(\mathsf{PE}\)-unfairness of \(\max\{(0.8/0.7)-(0.15/0.2),(0.85/0.8)-(0.2/0.3)\}=0.396\). Intuitively, \(\mathsf{PE}\)-unfairness penalises the amplification in the unfairness that the model introduces in proportion to the under-representation of the minority group. To approximate the second term, if we use the soft confidence score of the \(k^{th}\) class \((\hat{p}_{k})\) then we call it _stochastic proportional equality_ and if we use the indicator of the prediction \((\mathds{1}\{\mathsf{argmax}_{k}p_{k}=k\})\) then we call it _deterministic proportional equality_. **Group-wise calibration**: [10] introduces group-wise calibration as a way to define fairness. It is defined as follows **Definition 3.7** (Group-wise Calibration): _A model \(h\) is group-wise calibrated on \((\mathbf{X},Y,A)\sim\mathcal{D}\) if \(\forall\,a\in\{0,1\}\) and \(\forall\,k\in\{1,\ldots,K\}\),_ \[\Pr[Y=k|\hat{\mathbf{P}}=\mathbf{p},\,A=a]=p_{k}\qquad\forall\,\mathbf{p}\in \Delta^{K} \tag{9}\] _Probability is taken over joint distribution of \((A,Y,\hat{\mathbf{P}})\). \(\Delta^{K}\) represents the \(K\)-simplex._ ### Key Results In this section, we prove the key results of this paper. The following lemma follows trivially using the total probability theorem. **Lemma 3.8**: _A group-wise calibrated model is perfectly calibrated._ Kleinberg et al. [10] show that group-wise calibration and statistical parity cannot be simultaneously achieved unless base rates match (i.e. \(\Pr[A=0]=\Pr[A=1]\)). They also show that group-wise calibration and equalized-odds cannot be achieved simultaneously unless either the base rates match for the sensitive groups or the model predicts perfectly (i.e. it knows with certainty whether a particular example belongs to a particular class). We show that group-wise calibration and Proportional Equality can be simultaneously achieved. We in fact show that group-wise calibration is a stronger condition than \(\mathsf{PE}\)-fairness in the following lemma. **Lemma 3.9**: _A group-wise calibrated model is perfectly fair._ _Proof._ Let \(h\) be group-wise calibrated and \(a\in\{0,1\}\). By definition, \[\Pr[\hat{Y}=k|A=a]=\mathbb{E}_{\hat{\mathbf{P}}}\left[p_{k}|A=a\right] \tag{10}\] Since \(h\) is group-wise calibrated, taking conditional expectation with respect to \(\hat{\mathbf{P}}\) when \(A=a\) in (9), \[\mathbb{E}_{\hat{\mathbf{P}}}\left[\Pr[Y=k|\hat{\mathbf{P}}=\mathbf{p},\,A=a ]|A=a\right]=\mathbb{E}_{\hat{\mathbf{P}}}[p_{k}|A=a] \tag{11}\] The LHS in (11) is equal to \(\Pr[Y=k|A=a]\) and the RHS in (11) is equal to \(\Pr[\hat{Y}=k|A=a]\) by (10). This completes the proof. \(\square\) Lemma 3.8 and 3.9 prove that achieving group-wise calibration can lead to joint optimization of fairness and calibration. In the next section, we try to build upon this insight to propose methods to achieve group-wise calibration whilst jointly optimising for accuracy. 
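Before turning to those methods, the following is a minimal NumPy sketch of the two empirical estimators defined above, the binned ECE of (6) and the PE metric of (8); the function and variable names are illustrative and not taken from any released implementation.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Binned ECE estimator of Eq. (6); probs is an (n, K) array of softmax outputs, labels is (n,)."""
    conf = probs.max(axis=1)                  # \hat{p}^{(i)} = max_k p_k^{(i)}
    pred = probs.argmax(axis=1)               # \hat{y}^{(i)}
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)   # B_m collects samples with confidence in ((m-1)/M, m/M]
        if in_bin.any():
            acc = correct[in_bin].mean()      # acc(B_m), Eq. (4)
            avg_conf = conf[in_bin].mean()    # conf(B_m), Eq. (5)
            ece += (in_bin.sum() / n) * abs(acc - avg_conf)
    return ece

def proportional_equality(y_train, a_train, probs_test, a_test, stochastic=True):
    """PE metric of Eq. (8): data ratio from training labels, model ratio from test predictions."""
    n_classes = probs_test.shape[1]
    scores = probs_test if stochastic else np.eye(n_classes)[probs_test.argmax(axis=1)]
    pe = 0.0
    for k in range(n_classes):
        data_ratio = np.mean(y_train[a_train == 1] == k) / np.mean(y_train[a_train == 0] == k)
        model_ratio = scores[a_test == 1, k].mean() / scores[a_test == 0, k].mean()
        pe = max(pe, abs(data_ratio - model_ratio))
    return pe
```

Passing `stochastic=True` uses the soft scores \(\hat{p}_{k}\) (stochastic PE), while `stochastic=False` uses the one-hot prediction indicator (deterministic PE); in practice a small constant may be needed in the denominators when a class never occurs for one of the groups.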
## 4 Joint Optimization Techniques ### Post-processing techniques **Dual Temperature Scaling**: Temperature scaling is known to be a very effective method for calibrating neural networks [7]. A _single_ temperature parameter \(T>0\) is tuned on the validation dataset to improve the calibration of confidence scores. Let \(h(\mathbf{x})=\mathbf{z}\); the new confidence prediction is then given by \[\hat{q}_{k}=\mathsf{softmax}(\mathbf{z}/T)_{k}\qquad\forall\,k\in\{1,\dots,K\} \tag{12}\] This temperature \(T\) is optimized with respect to the cross entropy loss on the validation set. At \(T=1\), we recover the original predictions \(\hat{p}_{k}\). As \(T\to\infty\), \(\hat{q}_{k}\to 1/K\) and as \(T\to 0\), \(\hat{\mathbf{q}}\) becomes a point mass at \(\mathsf{argmax}_{k}z_{k}\). We modify this method by fine-tuning a separate temperature parameter for each sensitive group. At inference time, we choose the temperature parameter based on the sensitive group of the example on which inference is being drawn. One major advantage of this technique is that the model prediction \((\mathsf{argmax}_{k}q_{k})\) does not change. Hence, the accuracy of the model remains unchanged after temperature scaling. This makes it an ideal post-processing algorithm. ### Train-time techniques Train-time losses have been proposed in the literature to achieve better calibrated neural models. We reweigh these loss terms to give equal weight to the minimization of these losses for both sensitive groups. We classify these loss terms into two major categories. #### Linear loss functions These are loss functions of the form \(l:\{1,\ldots,K\}\times\mathbb{R}^{K}\rightarrow\mathbb{R}\). Thus, the loss of each example can be computed independently of other examples. The loss over a batch \(B\) is defined as \(\mathsf{L}(B):=\frac{1}{|B|}\sum_{i\in B}l(y^{(i)},h(\mathbf{x}^{(i)}))\). We partition our training batch \(B\) into sub-batches for each sensitive group (\(B_{a}\) for \(a\in\{0,1\}\)). We individually compute the loss on each of these sub-batches and then take a convex combination. Intuitively, we assign equal importance to the minimization of this loss for both sensitive groups, and hence it should lead to group-wise calibration. We take a convex combination instead of adding the loss terms as we observe that it gives better empirical results. We tune the parameter of the convex combination as a hyperparameter of the experiment. We define the group-wise loss as follows. \[\mathsf{L}_{\mathsf{g}}(B):=(1-\rho)\mathsf{L}(B_{0})+\rho\mathsf{L}(B_{1}) \quad\rho\in[0,1] \tag{13}\] Here \(\rho\) is the convex combination parameter. Taking \(\rho=1/2\) assigns equal weight to the loss on each group. Taking \(\rho=\Pr[A=1]\), the group-wise loss collapses into the loss insensitive to group labels. Hence, we perform a grid search between the two values. Label smoothing (\(\mathsf{LS}\)) [18], Focal loss (\(\mathsf{FL}\)) [17] and sample-dependent focal loss (\(\mathsf{FLSD}\)) [17] are loss functions which are used in lieu of the cross entropy loss to train better calibrated models. For these losses, we train directly using the group-wise loss. Difference between calibration and accuracy (\(\mathsf{DCA}\)) [13] and its multidimensional variant (\(\mathsf{MDCA}\)) [8] are loss functions used in addition to the cross entropy loss to improve calibration. For these losses, we train with the following loss. 
\[\mathsf{NLL}(B)+\lambda\mathsf{L}_{\mathsf{g}}(B) \tag{14}\] The hyperparameter \(\lambda\) determines the trade-off between optimizing for accuracy and optimizing for calibration. #### Pair-wise loss functions These are loss functions of the form \(l:(\{1,\ldots,K\}\times\mathbb{R}^{K})\times(\{1,\ldots,K\}\times\mathbb{R}^{K})\rightarrow\mathbb{R}\). The loss is defined for a pair of examples. Thus, given a batch \(B\) we define the loss as \(\mathsf{L}(B):=\frac{1}{|B|^{2}}\sum_{i,j\in B}l(y^{(i)},h(\mathbf{x}^{(i)}),y^{(j)},h(\mathbf{x}^{(j)}))\). We extend this definition naturally to define our loss. Given two batches \(B\) and \(B^{\prime}\), \[\mathsf{L}(B,B^{\prime}):=\frac{1}{|B||B^{\prime}|}\sum_{i\in B}\sum_{j\in B^{\prime}}l(y^{(i)},h(\mathbf{x}^{(i)}),y^{(j)},h(\mathbf{x}^{(j)})) \tag{15}\] We partition a batch \(B\) based on its sensitive group into \(B_{0}\) and \(B_{1}\). We define our group-wise loss term as follows. \[\mathsf{L}_{\mathsf{g}}(B):=(1-\rho)^{2}\mathsf{L}(B_{0},B_{0})+\rho^{2}\mathsf{L}(B_{1},B_{1})+2\rho(1-\rho)\mathsf{L}(B_{0},B_{1})\qquad\rho\in[0,1] \tag{16}\] The interpretation of \(\rho\) is the same as that for (13). This can be seen from the fact that \(\mathsf{L}(B,B^{\prime})=\mathsf{L}(B^{\prime},B)\). Maximum mean calibration error (\(\mathsf{MMCE}\)) and its weighted version (\(\mathsf{MMCE}\)-\(\mathsf{W}\)) [11] are such loss functions. They are trained along with the cross entropy term. Hence, we use the loss term in (14) to train models using these losses. ## 5 Empirical Evaluation ### Datasets We use a set of binary classification datasets to evaluate these techniques. The details of these datasets are listed below. 1. **Adult**: UCI dataset where each entry represents a person and the objective is to predict whether they earn \(>50\)K or \(\leq 50\)K annually. We use the sex of the person as the sensitive group. 2. **Arrhythmia**: UCI dataset where, given an example, the target is to predict the presence or absence of cardiac arrhythmia. The sensitive group is given by gender. 3. **Communities and Crime**: UCI dataset where each example represents a community and the task is to predict whether the community has a violent crime rate in the 70\({}^{th}\) percentile of all communities. The sensitive group is given by whether the community has a majority white population. 4. **Drug**: UCI dataset where the task is to classify whether a person is a drug consumer or not. The sensitive group is given by race. 5. **Compas**: Criminal recidivism dataset where the task is to predict recidivism of a person based on history. The sensitive group is given by race. 6. **German**: UCI dataset where the task is to classify good or bad credit for a person. The sensitive group is given by gender. 7. **Lawschool**: UCI dataset where the target is to predict whether a person passed the bar exam. The sensitive group is given by gender. Table 2 contains statistical details of all these datasets. All the features in these datasets are categorical. Thus, they are converted to multi-hot encoding vectors and suitably hashed. We follow the pre-processing steps of [15] or [21]. We divide each dataset into train, validation and test sets with a ratio of \(6:1:1\). ### Training details Using the train-time methods described above, a 2-layer perceptron is trained using a fixed learning rate of \(1e-4\). It has 128- and 64-node hidden layers followed by a 2-node output layer, all with ReLU activation, for binary classification. 
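To make this training setup concrete, below is a minimal PyTorch sketch of the 2-layer perceptron described above together with the group-wise reweighting of (13) and the combined objective of (14); the class and function names, the generic calibration-loss callable, and the example values are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """2-layer perceptron: 128- and 64-node hidden layers with ReLU, 2-node output for binary classification."""
    def __init__(self, in_dim, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)  # logits; confidences are obtained with a softmax

def groupwise_loss(base_loss, logits, y, a, rho):
    """Eq. (13): convex combination of the base loss computed on each sensitive sub-batch."""
    zero = logits.new_zeros(())
    loss0 = base_loss(logits[a == 0], y[a == 0]) if (a == 0).any() else zero
    loss1 = base_loss(logits[a == 1], y[a == 1]) if (a == 1).any() else zero
    return (1.0 - rho) * loss0 + rho * loss1

def training_objective(logits, y, a, calib_loss, rho, lam):
    """Eq. (14): cross-entropy plus a weighted group-wise calibration term."""
    return F.cross_entropy(logits, y) + lam * groupwise_loss(calib_loss, logits, y, a, rho)

# One illustrative optimization step (Adam with lr = 1e-4, as in the training details above):
# model = MLP(in_dim=97)                                   # e.g., the Adult feature dimension
# opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# logits = model(x)
# loss = training_objective(logits, y, a, calib_loss, rho=0.5, lam=1.0)
# opt.zero_grad(); loss.backward(); opt.step()
```

For losses such as LS, FL, and FLSD that replace cross-entropy altogether, the group-wise wrapper of (13) is trained directly instead of the combined objective of (14).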
The confidence scores for both classes are computed by taking a softmax over the output. We train this network for 500 epochs for every configuration using the Adam optimizer. We compute the ECE as well as the _deterministic_ and _stochastic_ PE metrics along with the accuracy on the test set at each epoch of training. We also perform _Dual Temperature Scaling_ at each epoch of the training. We perform these experiments across 5 random seeds and report the average metric values. All training is done sequentially on a single GPU. ### Empirical observations #### Dual Temperature Scaling As dual temperature scaling only involves tuning two parameters, we implement it at each epoch of training. We fine-tune these temperatures using the cross entropy loss on the validation dataset. We use a learning rate of \(1e-4\) and Adam optimizer. We set the maximum number of epochs for fine-tuning as 500 but we implement early stopping so that the fine-tuning stops once the ECE starts increasing on the validation set. Since dual temperature scaling does not change the prediction of the model, the accuracy and _deterministic_ PE-fairness do not change. We thus report ECE and _stochastic_ PE-fairness on the test dataset. Table 3 gives the percentage improvement that dual temperature scaling offers for these metrics when applied after training with cross-entropy loss. The percentage improvement of the best value after temperature scaling across epochs over the best value before temperature scaling across epochs is reported. It can be observed that dual temperate scaling offers improvement in ECE in all the datasets. It also improves the \begin{table} \begin{tabular}{l|c c} \hline \hline & Pr[\(Dce|Woman\)] & Pr[\(Dce|Mar\)] \\ \hline Data distribution & 0.2 & 0.7 \\ Model 1 & 0.2 & 0.7 \\ Model 2 & 0.15 & 0.8 \\ \hline \hline \end{tabular} \end{table} Table 1: An example to explain the definition of unfairness. The classification task is to predict is the person is a doctor or a nurse and the sensitive attribute is the gender of the person. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Dataset** & **Size** & **d** & Pr[A = 1] & Pr[Y=1|A=0] & Pr[Y=1|A=1] \\ \hline Adult & 2020 & 97 & 0.74 & 0.25 & 0.59 \\ Arrhythmia & 452 & 279 & 0.55 & 0.41 & 0.65 \\ Communities & 1994 & 122 & 0.71 & 0.36 & 0.84 \\ Compas & 5278 & 11 & 0.6 & 0.61 & 0.49 \\ Drug & 1885 & 10 & 0.91 & 0.83 & 0.79 \\ German & 1000 & 20 & 0.85 & 0.60 & 0.72 \\ Lawschool & 1823 & 17 & 0.54 & 0.51 & 0.55 \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset statistics _stochastic_ PE-fairness in 4 of the 7 datasets. The highest improvement in both these metrics can be observed on the Compas dataset. This is because the features of Compas dataset are less informative. The maximum accuracy achieved upon training on Compas dataset is 67% whereas all the other datasets achieve accuracy of above 80%. Thus, it is harder to train on and exhibits more headroom to improve fairness and calibration. On the other hand, fairness worsens on Adult, Arrythmia and Drug datasets. We believe this is because model is well-calibrated and fair when trained with cross-entropy loss. ### Train-time techniques We train our model using the losses described in Section 4.2. The details of the hyperparameters \(\{\rho,\lambda\}\) over which we search can be found in Appendix A.1. In this section, we first discuss how effectively these losses can optimize purely for calibration and fairness. 
Then we give some insights about pareto-optimality achieved between fairness and calibration for a loss given a particular accuracy. **Optimizing for fairness**: For each dataset and technique, we compute the average of minimum _stochastic_ PE-fairness obtained over all seeds. We use this average PE-fairness to determine which technique performs best on a particular dataset. Table 4 reports the best technique on each dataset. We also report the average (across seeds) percentage improvement in _stochastic_ PE-fairness and average percentage change in ECE and Accuracy using this technique. We can observe a trade-off in Table 4. _Stochastic_ PE-fairness can be improved substantially by sacrificing some amount of ECE and Accuracy. MMCE and MMCE-W perform best on most datasets. MDCA and DCA perform the best for German and Lawschool respectively. However they incur more loss of Accuracy and gain in ECE. A similar analysis using _deterministic_ PE-fairness is presented in Appendix A.2. **Optimizing for calibration**: We perform similar analysis as the previous section using ECE instead of _stochastic_ PE-fairness. Table 5 reports the average (over seeds) percentage change in _stochastic_ PE-fairness, ECE and Accuracy when we optimize only for ECE. Table 5 shows that optimizing solely for ECE can help increase the accuracy but definitely hurts the fairness of the model. DCA perform the best for 5 datasets. It also improves the accuracy for Adult, Communities and German datasets (at the point of best ECE). MMCE-W also offers significant improvement in Accuracy for Arrythmia dataset along with improving ECE. **Pareto-Optimality**: In this section, we try to give some insights of the fairness-calibration trade-off offered by different techniques. To perform this analysis, we first fix the slack in accuracy that we are willing to allow. Given this slack, for each loss we identify (_stochastic_ PE-fairness, ECE) pairs for which the loss in accuracy (from the best accuracy obtained over all train-time techniques) is less than the slack. For every dataset and train-time technique, we do this for all the hyperparameter settings and all seeds. Thus for every dataset and train-time technique, we get a collection of points \(\{(\text{PE}_{i},\text{ECE}_{i})\}_{i=1}^{N}\). Given such a collection of points, for each dataset and train-time technique we identify pareto-optimal points. Since lower values of _stochastic_ PE-fairness and ECE indicate fairer and more calibrated models respectively, we define a point \((\text{PE},\text{ECE})\) as pareto-optimal if \[\nexists\in\{1,\dots,N\}\text{ s.t. }\text{PE}_{i}\leq\text{PE and }\text{ECE}_{i}\leq\text{ECE} \tag{17}\] \begin{table} \begin{tabular}{l l r r r} \hline \hline **Dataset** & **Best technique** & **\%fairness** & **\%calib.** & **\%acc.** \\ \hline Adult & MMCE & 97.94 & -14.96 & -3.69 \\ Arrhythmia & MMCE-W & 86.10 & -29.55 & -9.36 \\ Communities & MMCE & 87.66 & -25.68 & -2.54 \\ Compas & MMCE & 92.53 & -23.61 & -2.67 \\ Drug & MMCE-W & 82.12 & -4.66 & -7.81 \\ German & MDCA & 96.95 & -40.66 & -10.93 \\ Lawschool & DCA & 97.48 & -107.48 & -46.35 \\ \hline \hline \end{tabular} \end{table} Table 4: Percentage change in _stochastic_ PE-fairness, ECE and Accuracy represented as %fair, %calib. and %acc. respectively when optimizing for fairness. 
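Reading (17) as the usual dominance condition, i.e., a point is pareto-optimal when no other point is at least as good in both metrics and strictly better in one, the selection can be sketched in a few lines of NumPy; the function name is illustrative.

```python
import numpy as np

def pareto_optimal(points):
    """Return the (PE, ECE) pairs that are not dominated by any other point,
    i.e. no other point has PE' <= PE and ECE' <= ECE with at least one strict inequality."""
    pts = np.asarray(points, dtype=float)
    front = []
    for pe, ece in pts:
        dominated = np.any(
            (pts[:, 0] <= pe) & (pts[:, 1] <= ece) &
            ((pts[:, 0] < pe) | (pts[:, 1] < ece))
        )
        if not dominated:
            front.append((pe, ece))
    return sorted(set(front))  # sorted by PE, convenient for plotting the trade-off curves
```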
\begin{table} \begin{tabular}{l r r} \hline \hline **Dataset** & **\%fair** & **\%calib.** \\ \hline Adult & -0.89 & 0.88 \\ Arrhythmia & -0.10 & 0.25 \\ Communities & 0.21 & 0.80 \\ Compas & 0.57 & 2.29 \\ Drug & -4.84 & 0.49 \\ German & 0.41 & 0.33 \\ Lawschool & 0.07 & 0.61 \\ \hline \hline \end{tabular} \end{table} Table 3: Percentage improvement in _stochastic_ PE-fairness and ECE represented as %fair and %calib. respectively. Figure 4: Pareto-optimal curve: Compas: Accuracy in (63.37%, 68.37%) Figure 1: Pareto-optimal curve: Adult: Accuracy in (83.14%, 88.14%) Figure 2: Pareto-optimal curve: Communities: Accuracy in (86.2%, 91.2%) Figure 3: Pareto-optimal curve: Arrhythmia: Accuracy in (86.2%, 91.2%) Figure 1: Pareto-optimal curve: Adult: Accuracy in (83.14%, 88.14%) \begin{table} \begin{tabular}{l l r r r} \hline \hline **Dataset** & **Best technique** & **\%fair** & **\%calib.** & **\%acc.** \\ \hline Adult & DCA & -591.51 & 16.67 & 2.37 \\ Arrhythmia & MMCE-W & -578.42 & 17.60 & 3.87 \\ Communities & DCA & -2953.28 & 15.91 & 0.11 \\ Compas & MMCE-W & -870.73 & 25.60 & -1.93 \\ Drug & DCA & -7121.60 & 18.90 & -1.08 \\ German & DCA & -341.72 & 12.25 & 0.08 \\ Lawschool & DCA & -59.53 & 21.25 & -0.94 \\ \hline \hline \end{tabular} \end{table} Table 5: Percentage change in _stochastic_ PE-fairness, ECE and Accuracy represented as %fair, %calib. and %acc. respectively when optimizing for calibration. We compute this set of pareto-optimal points for each dataset and each train-time technique and plot them by interpolation. Figures 1, 3, 2, 4, 5, 6, 7 show these plots for all the datasets for a 5% point slack in accuracy. Absence of a particular technique in a plot indicates that it does not ever achieve accuracy loss of less than 5% points. The curves report absolute number of \(\mathsf{ECE}\) and _stochastic_\(\mathsf{PE}\)-fairness. In each of these plots, curves closer to the origin symbolize better fairness-calibration trade-off. It can be seen that at least one train-time technique always has better trade-off as compared to cross-entropy training. MMCE and MMCE-W have the best trade-off in most of the datasets. We also observe that FL and FLSD have a worse trade-off as compared to cross-entropy training on most datasets. **Hybrid method**: Since dual temperature scaling is a post-processing technique, it can be applied in conjunction to any of the train-time techniques to further improve _stochastic_\(\mathsf{PE}\)-fairness and \(\mathsf{ECE}\). We present the same analysis as that in Section 5.3 for this hybrid method in Appendix A.3 ## 6 Conclusion Our work shows that joint optimization of fairness and calibration is achievable when fairness is defined for ensuring that models do not amplify existing biases in the data. We propose post processing techniques and train time techniques for this joint optimization. We give a detailed analysis on performance of these techniques on a diverse benchmark of datasets. We believe that our work addresses significant gap in study of fairness and calibration together. In future work, other novel techniques to achieve group-wise calibration can be developed to aid this joint optimization.
2304.04273
Multimodal Brain-Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines
Through this paper, we introduce a novel driver cognitive load assessment dataset, CL-Drive, which contains Electroencephalogram (EEG) signals along with other physiological signals such as Electrocardiography (ECG) and Electrodermal Activity (EDA) as well as eye tracking data. The data was collected from 21 subjects while driving in an immersive vehicle simulator, in various driving conditions, to induce different levels of cognitive load in the subjects. The tasks consisted of 9 complexity levels for 3 minutes each. Each driver reported their subjective cognitive load every 10 seconds throughout the experiment. The dataset contains the subjective cognitive load recorded as ground truth. In this paper, we also provide benchmark classification results for different machine learning and deep learning models for both binary and ternary label distributions. We followed 2 evaluation criteria namely 10-fold and leave-one-subject-out (LOSO). We have trained our models on both hand-crafted features as well as on raw data.
Prithila Angkan, Behnam Behinaein, Zunayed Mahmud, Anubhav Bhatti, Dirk Rodenburg, Paul Hungler, Ali Etemad
2023-04-09T16:35:31Z
http://arxiv.org/abs/2304.04273v2
Multimodal Brain-Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines ###### Abstract Through this paper, we introduce a novel driver cognitive load assessment dataset, CL-Drive, which contains Electroencephalogram (EEG) signals along with other physiological signals such as Electrocardiography (ECG) and Electrodermal Activity (EDA) as well as eye tracking data. The data was collected from 21 subjects while driving in an immersive vehicle simulator, in various driving conditions, to induce different levels of cognitive load in the subjects. The tasks consisted of 9 complexity levels for 3 minutes each. Each driver reported their subjective cognitive load every 10 seconds throughout the experiment. The dataset contains the subjective cognitive load recorded as ground truth. In this paper, we also provide benchmark classification results for different machine learning and deep learning models for both binary and ternary label distributions. We followed 2 evaluation criteria namely 10-fold and leave-one-subject-out (LOSO). We have trained our models on both hand-crafted features as well as on raw data. Driver, cognitive load, wearables, brain-computer interfaces, deep learning. ## 1 Introduction A large number of accidents and collisions occur on the roads every year. While many of these accidents are caused by distracted drivers, for instance due to distraction or drowsiness [1]. Distraction, meanwhile, can be caused by a number of personal or ambient factors, including high cognitive load due to engagement with secondary tasks [2, 3, 4]. Cognitive load refers to the quantity of information our working memory can process at a given time. In other words, it is the amount of cognitive resources required to accomplish a task. In general, two categories of cognitive load, intrinsic and extraneous, have been described in the literature [5]. While intrinsic cognitive load is defined as the inherent complexity of a given task, extraneous cognitive load refers to the cognitive resources demanded by environmental factors that are task-irrelevant [5]. The success or failure in performance toward a particular task, on the other hand, is influenced by the amount of cognitive load experienced by the person performing the task [6]. If the cognitive load increases beyond a certain point, the individual's performance will degrade, which in case of driving may increase the likelihood of road accidents. In order to reduce the number of road accidents caused by high cognitive load, recent intelligent technologies integrated into vehicles should possess the ability to measure cognitive load and alarm the user should dangerously high amounts of it be detected. Brain-computer interfaces (BCI) have recently gained traction in providing advanced means of communication between humans and machines. In particular, head-worn Electroencephalogram (EEG) devices allow for non-invasive yet accurate human-machine interactions. To this end, machine learning and deep learning techniques can be used to learn from datasets with various types of driver-related signals (including EEG). Additionally, these datasets require quantitative cognitive load scores to be measured and provided at frequent intervals, so that they could be used to train the machine learning models. While a number of relevant datasets have been collected and published in recent years [7, 8, 9, 10, 11], a number of problems persist. 
First, while a number of datasets for cognitive load do exist, they have often been captured in non-vehicle scenarios. In fact, to our knowledge, only [7] has studied cognitive load in the context of driving.Third, while most existing datasets on cognitive load are in fact'multimodal', the notion of BCI with auxiliary wearable signals has not been widely explored [7, 8, 9, 10, 11]. Lastly, in most existing works in the area, the focus has been solely on cognitive load or distraction caused by task-irrelevant activities, overlooking the fact that performing the main task itself (in our case, driving) can be a strong source of high cognitive load. In this paper, we introduce a novel driver cognitive load assessment dataset containing EEG signals along with other physiological signals such as Electrocardiography (ECG) and Electrodermal Activity (EDA) as well as eye tracking data. This dataset, which we name CL-Drive, is collected from 21 subjects while driving in an immersive vehicle simulator in diverse situations capable of inducing various levels of cognitive load in the subjects. Each subject performs driving tasks in 9 complexity levels for 3 minutes each and reports their subjective cognitive load every 10 seconds throughout the experiment as ground-truth cognitive load labels. In this paper, we also provide benchmark classification results for different machine learning and deep learning models. Both raw signals as well as popular features supported by the literature have been used as inputs. We follow two important evaluation criteria, namely 10-fold and leave-one-subject-out (LOSO). Our benchmarking demonstrates that cognitive load induced by driving can be measured with reasonable accuracy using EEG and auxiliary wearable signals. Our contributions in this paper are summarized as follows: * We collect a dataset, CL-Drive, that can allow researchers to evaluate driving-induced cognitive load, which can be useful for developing automated alarm systems for intelligent vehicles. * CL-Drive provides data from various modalities including EEG, ECG, EDA and Gaze, which is a rich source for training machine learning systems capable of performing cognitive load assessment. * CL-Drive contains dense and frequent subjective ratings which are spread only 10 seconds, allowing for more reliable and frequent automated cognitive load measurement by learned models. The rest of this paper is summarized as follows. In Section 2, we first provide a study of cognitive load followed by an overview of the publicly available cognitive load datasets that contain physiological signals. Section 3 explains the experimental setup, including sensor configurations, driving simulator details, cognitive load assessment, and data collection protocol. Next, we discuss the data preprocessing, feature extraction, normalization, and baseline classifiers in section 4. Lastly, in Section 5 we provide the results and discussions. ## 2 Related Work ### _Cognitive Load Measurement_ Prior research has shown that measuring cognitive load from physiological signals [12, 13] continues to be a challenging task [14, 15]. There are both subjective and objective measures that are commonly used to evaluate cognitive load levels that involve: (_i_) self-reporting, (_ii_) dual-task measures, and (_iii_) physiological measures [16]. The PAAS scale [17], shown in Table I is most commonly used for self-reported subjective cognitive load labels. 
The National Aeronautics and Space Association Task Load Index (NASA-TLX) [18] is also commonly used as a self-reporting tool. Dual-task measurement involves the individual performing two tasks at the same time. One way of designing this is to measure knowledge gain from one task and response time for the other task [19]. In [20], another way of implementing dual-task measurement was explored, which was by performing a continuous secondary task while learning the primary task. There are several physiological parameters that have also been used as cognitive load measures in the past. This includes variation in pupil diameter and blink rate [21, 22], heart rate variability [23], and electrocardiogram (ECG) [24] to name a few. ### _Cognitive Load in Driving_ In the area of driving, prior works have studied cognitive load mainly in the context of the driver being engaged by secondary tasks such as using mobile phones or performing some other in-vehicle activities [2, 25, 26, 3, 27]. In [2], the cognitive load of drivers was measured when the drivers were involved in verbal conversation and word games while driving. As a result, the cognitive load induced was due to the combination of both primary as well as secondary tasks. A remote eye tracker was used to measure the pupil size of all 32 participants which in turn was used to estimate the cognitive load of the participants. The ground truth was the performance measures which they calculated using lane position and degree of rotation of the steering wheel, and subsequently evaluated the relationship between the change in pupil diameter and driving performance. Driving performance while interacting with an iPod1 was evaluated in [25]. It was observed during a multi-session setup, that the cognitive load of the participants during the first sessions was higher, hence the driving performance (e.g., perception response time (PRT) while braking and overall control of the vehicle), was lower in comparison to the later sessions. The experiment was carried out on 19 participants using simulated vehicle. Next, in [26], the high cognitive load of drivers was evaluated using EEG signals in 3 different driving conditions, namely: no secondary task (baseline), low cognitive load task, and high cognitive load task. The low and high cognitive load tasks were based on N-back tasks used in [28, 29]. GSR, eye tracking, respiration rate (RR), and accelerator release time (ART) data were collected during the experiments. The data was collected from 37 participants in a vehicle simulator. The NASA-TLX was used to collect the participants subjective rating at the end of the experiment. In [3], the cognitive load of participants was evaluated using eye video data extracted from facial videos during driving. Three different N-back tasks were used as secondary tasks, which were also used to quantify the ground truth levels. Hidden Markov models and 3D-CNN were then used to evaluate the result. In another paper [27], the cognitive load of participants was evaluated while driving and performing a 1-back task. EEG data was collected from 36 participants. To evaluate the performance, case-based reasoning classifiers were used [30]. A few other prior works such as [2, 25, 26] have used more simple approaches based on predefined metrics (e.g., required time to break, degree of motion of the steering wheel, etc.) to measure cognitive load from physiological signals. 
Footnote 1: [https://www.apple.com/ca/](https://www.apple.com/ca/) Besides cognitive load, other factors such as driver emotions [31, 32, 33] and vigilance [34, 35, 36], have been \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt}} \hline **PAAS Subjective** & **Description** \\ **Cognitive Load Scores** & **Description** \\ \hline \hline 1 & Very, very low \\ 2 & Very low \\ 3 & Low \\ 4 & Rather low \\ 5 & Neither low nor high \\ 6 & High \\ 7 & Rather high \\ 8 & Very high \\ 9 & Very, very high \\ \hline \hline \end{tabular} \end{table} TABLE I: PAAS subjective cognitive load scores used in this study. widely studies in the literature. While these works may maintain some similarities to works on cognitive load, they are in fact different driver attributes which are outside the scope of this study. Moreover, the notions of affect and distraction have been more widely studied for drivers, as opposed to cognitive load which is a less explored area. ### _Bci_ BCI systems can communicate the neural activities in the brain directly with an external device [6, 37]. Research has shown that BCI can play a vital role in interpreting the cognitive load induced while driving [6, 37]. This is due to the fact that the fronto-partial brain regions along with sub-cortical regions can be engaged while experiencing varying amounts of cognitive load [38]. EEG, is a non-invasive method which measures the potential difference caused by the electrical activity in the brain [39, 40]. This property of EEG allows it to capture changes in brain activity while experiencing variations in cognitive load, which makes it a very good candidate for cognitive load evaluation. Multimodal approaches have proven effective at magnifying the accuracy of cognitive load assessment in the past [41]. Apart from EEG, research has shown that both the sympathetic nervous system (SNS) which controls the skin conductance response and automatic nervous system which controls the heart rate variability (HRV) are impacted by cognitive load [42, 43, 44]. Prior research has also shown significant correlation between changes in pupil size, blink rate, saccade, and fixation with cognitive load [45, 46]. ### _Public Cognitive Load Datasets_ Previous research has examined the induction of cognitive load in drivers with a subsequent evaluation of driving performance under varying cognitive load levels [47, 48]. Some other publications have studied cognitive load under a variety of different experimental setups [49, 24]. There is evidence that affect and cognitive load are interrelated and affect has a significant impact on cognitive load [50]. Though a wide range of studies have been done to study the impact of affect on EEG [51, 52], the available datasets for cognitive load are indeed quite limited. In this section, we provide an overview of the publicly available datasets for cognitive load with physiological signals. Table II presents a summary of these datasets. The Driver Workload dataset [7], provides multimodal data to evaluate driver workload using ECG, body temperature, and skin conductance response (SCR). The dataset was collected from 10 participants with the goal of evaluating cognitive load of drivers on different types of roads and in different driving environments. In addition to the physiological signals, two cameras were also used to record the driving route as well as the participant's facial videos. The video data were not made public for privacy purposes. 
The data was collected as the participants drove the car for 30 minutes. Participants subjective ratings were collected by watching videos of their own driving at the end of the activity. The MMOD-COG [8] dataset was recorded from 40 different subjects for cognitive load assessment during reading and arithmetic tasks. ECG and EDA were recorded from the subjects in addition to speech. The experiment was divided into reading and arithmatic segments where in the reading segment, two separate digits were shown for 5 seconds and repeated with different digits 20 times. The arithmetic segment was divided into high and low cognitive load levels and a total of 40 problems were asked to be solved by each participant. The CLAS dataset [9] was collected from 62 participants and was obtained by recording various cognitive load levels induced by tasks such as mathematics and logic problems as well as the Stroop test [53]. In addition to cognitive load, audio-visual stimuli were used to induce emotional variations in the participants. Physiological data was collected using ECG, Plethysmography (PPG), EDA, and accelerometer (ACC) data. The next dataset, CogLoad [10] was also a multimodal dataset with 23 participants who performed 6 different computer tasks during the collection process. The physiological data collected was heart rate, beat to beat interval, EDA, skin temperature (ST), and ACC. The task was divided into two segments where the first segment involved understanding the participant's degree of cognitive resources and their personality traits using two N-back tasks [10, 54]. Whereas, in the second segment, 6 different tasks that required varying levels of cognitive load were performed. In addition, in order to completely occupy the cognitive resources of the participants, a secondary task was also given to them to perform. Another multimodal dataset consisting of heart rate, EDA, ST, and ACC was collected from 23 participants while they played the Snake game on a smartphone [10]. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline **Dataset** & **Year** & **Sub.** & **Mental State** & **Modalities** & **Stimuli** \\ \hline \hline Driver Workload [7] & 2013 & 10 & Mental workload of the driver & ECG, BTemp, SCR & Watching driving videos, Driving in real environment \\ \hline MMOD-COG [8] & 2019 & 40 & Cognitive load & ECG, EDA, Speech & Arithmetic, Reading \\ \hline CLAS [9] & 2019 & 62 & Cognitive Load, negative emotion and mental stress & ECG, PPG and EDA & Math problems, Logic problems and Stroop test \\ \hline CogLoad [10] & 2020 & 23 & Cognitive load, transits & Personality EDA, ST, and ACC & Heart rate, beat to beat interval, (2-back and 3-back tasks, visual cue task etc.) \\ \hline Snake [10] & 2020 & 23 & Cognitive load & heart rate, beat to beat interval, EDA, ST, and ACC & Snake game on a smartphone \\ \hline Kalatzis et al. [11] & 2021 & 26 & Cognitive load & ECG, RR & MATB-II \\ \hline \end{tabular} \end{table} TABLE II: Existing datasets in the literature that study cognitive load using physiological signals. This dataset named Snake [10], was collected with varying cognitive load levels where the amount of cognitive load experienced by the participants was controlled by the changing speed of the game. The task consisted of 3 complexity levels, high, medium and low, which lasted 2 minutes each. After the task completion, the participants answered the NASA TLX [18] questionnaire along with two other 7-point Likert scale questions. 
Finally, Kalatzis et al. [11] presented a dataset that has been collected from 26 participants. In this dataset, the cognitive load of the participants was assessed using ECG and respiration rate (RR) data. High and low cognitive load data were collected as the participants used the MATB-II software [55] while the NASA TLX [18] questionnaire was used to collect ground truth values for the two cognitive load levels. ## 3 Experiment Setup and Data Collection In this section, we discuss the experimental protocol used in the study. This includes specifics on the setup for the sensors and driving simulator as well as details of the participants, diving scenarios, and cognitive load assessment. ### _Sensors_ During the experiments which will be described in Section 3.7, we use four different sensors to collect physiological signals from which to measure cognitive load. Following is a description of each sensor type, namely EEG, ECG, EDA, and Gaze, in detail. Figure 1 shows the sensors used in our study, while Figure 2 shows their detailed sensors placement. **EEG.** For collecting EEG signals the Muse S2 headband shown in Figure 0(a) is used. The device has 4 channels where 2 of them are frontal electrodes located at the forehead in locations AF7 and AF8 (according to the international 10-20 system [56, 57]) while the remaining 2 are temporal electrodes located behind the ears in locations TP9 and TP10. Figure 0(a) depicts the Muse EEG device while in Figure 1(a), we present the sensor locations of this EEG headset. As shown in the figure, the reference electrode is located at the middle of the forehead in location FPZ. The sampling rate of the EEG headband is \(256\ Hz\). Conductive gel is used to enhance the conductivity between the electrode and the skin. **ECG.** ECG signals are collected through the Shimmer3 sensors [58], which is shown in Figure 0(b). As depicted in Figure 1(b), this wearable device uses 5 standard pre-gelled adhesive electrodes from the chest and abdominal area. Among the 4 electrodes, the Right Arm (RA) and Left Arm (LA) are placed on the left and right sides of the manubrium, while Right Leg (RL) and Left Leg (LL) are placed right above the lower costal margin. The reference electrode denoted by Vx is placed slightly on the right of the sternm. The signals collected are LL-RA, LA-RA, and Vx-RA at a sampling frequency of 512 \(Hz\). The Shimmer is worn by the participants using a belt and a cradle. Footnote 3: [https://shimmersensing.com/product/shimmer3-ecg-unit-2/](https://shimmersensing.com/product/shimmer3-ecg-unit-2/) and-research/ **EDA.** Similar to ECG, the EDA signal is collected using a Shimmer wearable device [58] as shown in Figure 0(c). The data is collected using 2 electrodes placed on the left side of the abdomen which is shown in Figure 0(c). The sampling frequency of the EDA Shimmer device is 128 \(Hz\) **Gaze** Figure 0(d) shows a Tobii device4 used to collect eye tracking data. The device is comprised of a head unit and a recording unit. Inside the device there are 2 cameras per eye as well as a wide angle scene camera. From the eye tracking device we record the specific eye movement events such as saccade, fixation, and others. The sampling frequency is 50 \(Hz\). Figure 1(d) illustrates the placement of the eye tracking device along with the EEG headset. 
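Since the four modalities are sampled at different rates, a fixed-length analysis window contains a different number of raw samples per modality; the short snippet below, with illustrative names, records the rates stated above and the resulting samples per 10-second labelled segment.

```python
# Sampling rates (Hz) of the wearable devices described above.
SAMPLING_RATES = {"EEG": 256, "ECG": 512, "EDA": 128, "Gaze": 50}
WINDOW_SECONDS = 10  # cognitive load is self-reported every 10 seconds

def samples_per_window(modality: str) -> int:
    """Number of raw samples in one 10-second labelled segment of the given modality."""
    return SAMPLING_RATES[modality] * WINDOW_SECONDS

# EEG: 2560 samples per channel, ECG: 5120, EDA: 1280, Gaze: 500.
```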
Footnote 4: [https://www.tobiiipro.com/product-listing/tobii-pro-glasses-2/](https://www.tobiiipro.com/product-listing/tobii-pro-glasses-2/) Footnote 5: [https://viragesimulation.com/vs500m-car-simulator-training-and-research/](https://viragesimulation.com/vs500m-car-simulator-training-and-research/) ### _Experiment Test-bed_ In order to simulate driving and be able to control the parameters surrounding the driving experience, we use a driving simulator5 shown in Figure 3. The driving simulator includes elements similar to a real car, including steering wheel, dashboard, accelerator, and brake. These components combined with a motion system provides participants with a more realistic driving sensation. The motion system can emulate real-life motions up to 100 \(Hz\) in frequency. This includes vibrations from road texture, acceleration, braking, speeding, and turning along with other essential movements to provide users with engaging haptic feedback. Additionally, there are three 55 inch LCD screens which provide a 180 degree view from the front, plus two additional screens for the blind spots, together creating an immersive experience. Each front screen has a display resolution of 1920 \(\times\) 1080 pixels. Moreover, directional sound is incorporated using a surround sound system. The sound is intended to mimic typical sounds heard while driving including the sound of the engine, speeding and passing vehicles, and horns, among others. Fig. 1: Wearable EEG, ECG, EDA, and Gaze devices. A debrief station is designed to provide a complete video of the participant and simulation screens during the experiment along with performance graphs. There is a webcam mounted on the top of the middle frontal screen, which has a resolution of 720p. The camera records video of the participants while driving in the simulator. The performance information displayed in the debrief station includes driving performance data, including the time required for breaking and acceleration, possible crashes, and others. This data can also be used for performance analysis. ### _Driving Scenarios_ The simulator comes with a number of pre-built driving scenarios in which the vehicle type, environmental conditions, and other factors can vary. Each scenario consists of a number of tasks that need to be performed, e.g., keeping a speed of above a certain threshold. Moreover, each scenario has a designated complexity level. We choose 9 different scenarios, one from each complexity level. The duration of each scenario is set to 3 minutes. An orientation scenario was designed and performed by each participant at the beginning of the session to allow the participant to adapt to the simulator. Table III shows the description of each driving scenario along with its corresponding complexity level. ### _Simulation Adaptation Syndrome_ It has been shown in prior research that Simulation Adaptation Syndrome (SAS) can affect different participants [59, 60]. SAS can range from feeling minor discomfort to severe symptoms such as dry mouth, dizziness, vertigo, vomiting, nausea, and disorientation while taking part in simulations such as driving a vehicle [61]. The main cause of SAS is the discrepancy between the sensory inputs such as visual and vestibular system (which is responsible for our sense of balance [62, 63]). One of the challenges we faced during our study was avoiding and minimizing SAS. As recommended in the simulator instructions, we followed the following 5 steps to manage and reduce SAS as much as possible: 1. 
**Cool room:** the simulator room must be cool and well ventilated; 2. **Confident introduction:** must create a calm and relaxed environment; 3. **Cautious alert:** moving on slowly to allow the driver's time to adjust; 4. **Careful observation:** must actively look for signs and symptoms of SAS; 5. **Cease driving:** must pause immediately on observing the slightest sign of SAS. SAS can be managed by carefully monitoring the participant's level of discomfort using a Likert scale and asking the participants to do an intermittent self assessment. In our case, the participants were asked to self report on their SAS level every minute using a 9-point Likert scale as shown in Table IV, which was made based on the Motion Sickness Questionnaire (MSQ) [64], Simulator Sickness Questionnaire (SSQ) [65], and The Motion Sickness Assessment Questionnaire (MSAQ) [66]. Based on these pre-cautious and careful monitoring, SAS resulted in pausing and discontinuing only 2 participants. Fig. 4: The experiment flow. Fig. 3: The immersive vehicle simulator used in this study. Fig. 2: EEG, ECG, EDA, and Gaze electrode placements. ### _Participants_ Data was collected from 23 participants including 17 females and 6 males. Prior to the simulation, participants were provided with detailed information on the experimental process and the research team received written consent. The study was approved by Queen's University's General Research Ethics Board (GREB). Among the 23 participants, 2 data collection sessions were stopped due to high levels of SAS, while the data from another 3 sessions was incomplete due to device or connectivity issues. Specifically, for participant 13, we only have data for scenarios 3, 4, 5, 6, and 9, while for participant 16, we have data for scenarios 1, 5, 6 and 9. Finally, for participant 18, we have the data for scenarios 1 to 6. While the data from the two sessions that were incomplete due to SAS are not incorporated in the dataset as they were interrupted too early in the process, the data from the three incomplete sessions are incorporated. ### _Cognitive Load Self-Assessment_ Participant cognitive load self-assessment ratings were used as ground truth labels in this study. As shown in Table I, PAAS subjective cognitive load scores consist of 9 levels [67]. A looping audio cue was generated every 10 seconds during the experiments to prompt the participants to verbally report their cognitive load, and a member of the research team recorded the reported scores. Figure 6 presents the distribution of the recorded output scores for all the participants. ### _Experiment Protocol_ Participants were given clear descriptions about the data collection protocol and equipment. After careful sensor placement, participants were asked to sit in the driving seat of the simulator and the sensors were connected via Bluetooth to a data collection station. First, 3 minutes of baseline data was collected from each participant which could be used for future normalization of the signals. To become adapted to the simulated driving environment and make sure that participants had a clear understanding about the 'low','medium', and 'high' complexity levels, and also to reduce the SAS level, participants were asked to perform the _Orientation_ scenario mentioned earlier in Table III. Following every 3-minute driving scenario (see Table III), the participants were given a resting time of 2 minutes to allow them to rest, reduce the possibility of SAS, and come back to a relatively lower cognitive load state. 
Followed by the resting period, a 2-minute baseline was collected before each new scenario. The experiment flow is shown in Figure 4. During all these experiments, the wearable sensors discussed earlier in Section 3.1 were used to record the respective signals from the participants. Figure 5 we illustrate a sample from each captured modality in both high and low cognitive load scenarios. ## 4 Data Processing In this section, we explain the data pre-processing steps for each signal type, followed by feature extraction. Next, we describe data normalization, which is followed by a description of the baseline classifiers used for benchmarking. ### _Pre-processing_ The cognitive load scores were collected at 10-second intervals during the 3-minute driving scenarios. We segment each recording into 18 segments of 10 seconds each. These segments will later be used for feature extraction or fed directly into the deep learning models. **EEG.** To remove noise and artifacts from EEG, we used a Butterworth 2nd order bandpass filter with a passband frequency of 0.4 to 75 \(Hz\). A notch filter with a quality factor of 30, was used to remove the powerline noise at a frequency of 60 \(Hz\). **ECG.** The ECG signals experienced missing values from time to time, which we imputed using a simple 5th order polynomial interpolation technique. Artifacts such as high frequency noise, EMG noise, T noise interference, etc., were then filtered out using a butterworth bandpass filter with passband frequency of 5 to 15 \(Hz\), which also enables us to obtain maximum QRS energy [68, 69]. **EDA.** For EDA there were some missing values which we replaced with a sample-and-hold strategy given the simplicity of the EDA signals (e.g., in comparison to ECG). We then used a lowpass butterworth filter with a cut-off frequency of \begin{table} \begin{tabular}{l l} \hline **SAS Level** & **Description** \\ \hline \hline 1 & Feeling no adverse effects \\ 2 & Feeling very mild discomfort \\ 3 & Feeling mild discomfort \\ 4 & Feeling mild to moderate discomfort \\ 5 & Feeling moderate discomfort \\ 6 & Feeling moderate to pronounced discomfort \\ 7 & Feeling pronounced discomfort \\ 8 & Feeling pronounced to severe discomfort \\ 9 & Feeling severe discomfort (potential for vomiting) \\ \hline \end{tabular} \end{table} TABLE IV: SAS levels and their corresponding description. \begin{table} \begin{tabular}{l l l} \hline **Scenario/** & **Simulation** & **Description** \\ **Complexity** & **Highway driving** & Maintain centre \\ 1 & Highway driving & 80km/h \\ 2 & Night time driving & 80km/h \\ 3 & Night time driving in the highway w/ snow & 80km/h \\ 4 & Tennis ball challenge & Hit the tennis ball with the tire, maintain accuracy, try to accelerate \\ 5 & Salom challenge & Navigate through gates, maintain accuracy, try to accelerate \\ 6 & Narrow passage challenge & Navigate through gates, maintain accuracy, try to accelerate \\ 7 & 90-degree turn challenge & Take 90 degrees left and right turns, maintain accuracy, try to accelerate \\ 8 & 3-point turn challenge & Drive while following instructions, take 3 point turn \\ 9 & Narrow alley challenge & Navigate through narrow alley, maintain accuracy, try to accelerate \\ \hline \end{tabular} \end{table} TABLE III: Driving scenario details 3 \(Hz\) to remove the unwanted noise. 
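A minimal SciPy sketch of the filtering steps described so far (EEG band-pass plus 60 Hz notch, ECG band-pass, EDA low-pass) and of the 10-second segmentation; the cut-offs and the EEG filter order follow the text above, while the ECG/EDA filter orders, the zero-phase filtering via `filtfilt`, and all function names are illustrative assumptions.

```python
from scipy.signal import butter, filtfilt, iirnotch

def filter_eeg(x, fs=256.0):
    """2nd-order Butterworth band-pass (0.4-75 Hz) followed by a 60 Hz notch filter (Q=30)."""
    b, a = butter(2, [0.4, 75.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    b, a = iirnotch(60.0, Q=30.0, fs=fs)
    return filtfilt(b, a, x)

def filter_ecg(x, fs=512.0):
    """Butterworth band-pass (5-15 Hz), emphasizing the QRS energy; filter order assumed."""
    b, a = butter(2, [5.0, 15.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def filter_eda(x, fs=128.0):
    """Low-pass Butterworth with a 3 Hz cut-off to remove high-frequency noise; filter order assumed."""
    b, a = butter(2, 3.0, btype="lowpass", fs=fs)
    return filtfilt(b, a, x)

def segment(x, fs, win_s=10.0):
    """Split a recording into non-overlapping 10-second segments (18 per 3-minute scenario)."""
    n = int(fs * win_s)
    return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
```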
A highpass Butterworth filter with a cut-off frequency of 0.05 \(Hz\) was used to decompose the filtered EDA signal into the tonic skin conductance level and the phasic skin conductance response, isolating the slow-changing levels and rapid-changing peaks in the signal [70]. **Gaze.** The device measures saccade, fixation, pupil diameter, blink count, and blink duration based on 2D gaze coordinates (\(x\),\(y\) pixel coordinates in screen space) for both left and right eyes, 3D gaze coordinates (\(x\),\(y\),\(z\) coordinates in mm in camera space), 3D gaze direction (vector units), gaze velocity in degrees per second (\({}^{\circ}\)/s), and gaze acceleration in degrees per second squared (\({}^{\circ}\)/\(s^{2}\)). We directly use the high-level metrics in our study. ### _Feature Extraction_ Different features were extracted to train our machine learning algorithms. In this section, we describe the features extracted from each modality. Please note that given the focus of our work on BCI and the EEG modality in particular (whereas other modalities play auxiliary roles in the multimodal setups), we describe the EEG-related features in more depth below. **EEG.** We extract 40 features from both time and frequency domains from each channel for each 10-second segment. The details of the features are given below: 1. **Power Spectral Density (PSD):** PSD measures the power of the EEG signal. To calculate this feature we use Welch's method from 0.5 \(Hz\) to 75 \(Hz\), for each frequency band: Delta (0.5-4 \(Hz\)), Theta (4-8 \(Hz\)), Alpha (8-12 \(Hz\)), Beta (12-31 \(Hz\)), and Gamma (31-75 \(Hz\)). We then measure the absolute, mean, maximum, minimum, and median power of the measured PSD. 2. **Spectral entropy:** Spectral entropy (SE) of a time series signal is derived from normalized Shannon's entropy [71] and can be used to determine the complexity of a signal. The formula of SE can be derived from the normalized PSD or probability distribution \(p(i)\) of the signal as \[SE=-\sum_{i=1}^{n}p(i)\ln p(i).\] (1) Fig. 5: Examples of different signals in high and low cognitive load scenarios. 
\begin{table} \begin{tabular}{l l c} \hline \hline **Modalities** & **Extracted features** & **Number of features** \\ \hline \hline EEG & PSD (absolute, mean, maximum, minimum, median power), Spectral Entropy, Hjorth mobility and complexity, Lempel-Ziv Complexity, Higuchi fractal dimension, raw signal (mean, minimum, maximum, median, variance, and standard deviation) & 40 \\ \hline ECG & RMSSD, MeanNN, SDNN, SDSD, CVNN, CVSD, MedianNN, MadNN, MCVNN, IQRNN, pNN50, pNN20, TINN, HTI, SD1, SD2, SD1/SD2, S, CSI, CSI\_Modified, CVI, PIP, IALS, PSS, PAS, GI, AI, PI, C1d, C1a, SD1d, SD1a, C2d, C2a, SD2d, SD2a, Cd, Ca, SDNNd, SDNNa, ApEn, SampEn, mean, median, standard deviation, skewness, kurtosis, entropy, interquartile range, area under curve, squared area under the curve, median absolute deviation & 30 \\ \hline EDA & Mean, median, standard deviation, skewness, kurtosis, entropy, interquartile range, area under curve, squared area under the curve, median absolute deviation for the raw data as well as the phasic and tonic responses & 32 \\ \hline Gaze & Pupil diameter (max, min, mean), Blink count, duration (max, mean), Fixation count, duration (max, min, mean), dispersion (max, min, mean), Saccade count, duration (max, min, mean), amplitude (max, min, mean), peak velocity (max, min, mean), peak acceleration (max, min, mean), peak deceleration (max, min, mean) & 32 \\ \hline \hline \end{tabular} \end{table} TABLE V: Extracted features from each modality. We calculate \(SE\) for all 5 bands of EEG. 3. **Hjorth mobility and complexity:** Both Hjorth mobility and complexity of a time series signal determine aspects of the signal complexity [72], where the variations in signal frequency and amplitude are represented by Hjorth mobility and complexity respectively. These two measurements can be jointly used to capture the dynamic behavior of signals. Hjorth mobility and complexity are respectively calculated as: \[Hjorth_{M}=\sqrt{\frac{\sigma^{2}(\frac{dy(t)}{dt})}{\sigma^{2}(y(t))}}\] (2) and \[Hjorth_{C}=\frac{Hjorth_{M}(\frac{dy(t)}{dt})}{Hjorth_{M}(y(t))},\] (3) where \(y(t)\) represents the signal and \(\sigma^{2}\) is the variance operator. 4. **Lempel-Ziv complexity:** Lempel-Ziv complexity (LZC) is a measure that also determines the complexity of a signal [73]. To apply the LZC algorithm to an EEG signal, the signal first needs to be binarized by the median or mean value of the entire signal. The resulting binary sequence can then be analyzed using the LZC algorithm to find any randomness [74]. 5. **Higuchi fractal dimension:** Higuchi fractal dimension (HFD) is a non-linear method that can capture changes in time-series signals by measuring the complexity in the time domain [75]. Prior studies have shown promising results using HFD with EEG signals in the past [76, 77]. 6. **Statistical features:** In addition to the more sophisticated features mentioned above, we also extract simple statistical features, namely mean, minimum, maximum, and median, from the signal in the time domain. Fig. 8: ResNet-style network used for benchmarking in this study. Fig. 6: Self-reported cognitive load level distribution for each participant. Fig. 7: VGG-style network used for benchmarking in this study. The complete list of EEG features is summarized in Table V. 
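To make the EEG pre-processing and feature pipeline described above concrete, the following is a minimal sketch (ours, not the released extraction code) of the band-pass and notch filtering followed by Welch band powers, per-band spectral entropy, and Hjorth parameters for one 10-second segment; the 256 Hz sampling rate and the SciPy/NumPy usage are assumptions made purely for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch, welch

FS = 256  # sampling rate in Hz (assumed for illustration; not specified in the text)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 31), "gamma": (31, 75)}

def preprocess_eeg(x, fs=FS):
    """Band-pass (0.4-75 Hz, 2nd-order Butterworth) and 60 Hz notch filtering."""
    b, a = butter(2, [0.4, 75], btype="bandpass", fs=fs)
    x = filtfilt(b, a, x)
    b, a = iirnotch(60.0, Q=30, fs=fs)
    return filtfilt(b, a, x)

def eeg_features(segment, fs=FS):
    """Per-channel features for one 10-second segment (1-D array)."""
    f, pxx = welch(segment, fs=fs, nperseg=fs * 2)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        m = (f >= lo) & (f < hi)
        p = pxx[m]
        feats[f"{name}_power_abs"] = np.trapz(p, f[m])
        feats[f"{name}_power_mean"] = p.mean()
        pn = p / p.sum()                      # normalized in-band PSD
        feats[f"{name}_spec_entropy"] = -np.sum(pn * np.log(pn + 1e-12))
    # Hjorth mobility and complexity from the time-domain signal
    dx, ddx = np.diff(segment), np.diff(np.diff(segment))
    mobility = np.sqrt(dx.var() / segment.var())
    feats["hjorth_mobility"] = mobility
    feats["hjorth_complexity"] = np.sqrt(ddx.var() / dx.var()) / mobility
    return feats

# usage on synthetic data: one channel, 10 s
rng = np.random.default_rng(0)
seg = preprocess_eeg(rng.standard_normal(10 * FS))
print(list(eeg_features(seg).items())[:3])
```

The remaining EEG features in Table V (Lempel-Ziv complexity, Higuchi fractal dimension, and the raw-signal statistics) can be appended to the same feature dictionary in an analogous fashion.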
**ECG.** We extract various commonly used features from ECG [78]. These include RR-based features (i.e., based on the time between two successive R-waves), HRV-based features, and general statistical features. Examples include the popular pNN50 (proportion of RR intervals greater than 50 ms, out of the total number of RR intervals), pNN20 (the proportion of RR intervals greater than 20 ms, out of the total number of RR intervals), and others. The full list of features extracted from ECG is presented in Table V. **EDA.** We extract a number of features from EDA. These include statistical features from the raw data as well as the phasic and tonic responses, which are calculated by decomposing the EDA signal. The complete list of features extracted from EDA is presented in Table V. **Gaze.** For gaze analysis, we extract statistical features for each 10-second segment. The details of all the features are given in Table V. ### _Normalization_ To reduce the variability _between subjects_, which is a common phenomenon when recording such data, we first divide each feature value by its corresponding average value from the baseline. Second, to reduce the variability _within subjects_, we perform z-score normalization [79]. ### _Classifiers_ To evaluate the dataset and to test the efficacy of building an automated cognitive load detection system using the collected data, we train several classical machine learning and deep learning classifiers on the extracted features or raw data. In this section, we describe these models in detail. #### 4.4.1 Classical machine learning We train a total of 9 machine learning classifiers, namely AdaBoost (AB), Decision Tree (DT), Naive Bayes (NB), K-Nearest Neighbor (KNN), Linear Discriminant Analysis (LDA), Random Forest (RF), Support Vector Machine (SVM), Extreme Gradient Boosting (XGB), and Multi-Layer Perceptron (MLP). The details of the parameters of these classifiers are presented in Table VI. #### 4.4.2 Deep learning For the deep learning networks, we use two deep Convolutional Neural Networks (CNNs): a VGG-style network, shown in Figure 7, and a ResNet-style network, shown in Figure 8. The VGG-style network has two main blocks, where each block consists of two Conv1D layers, batch normalization, ReLU activation, and a maximum pooling operation. These blocks are followed by two fully connected layers and a classification layer. Cross-entropy loss with a learning rate of 0.001 was used for training. ADAM was used as the optimizer for this network [80]. The ResNet-style network, similar to the VGG model above, consists of two blocks containing two Conv1D layers, batch normalization, and a ReLU activation function in each block. The classification block contains three fully connected layers followed by an output classification layer. Similar to the VGG network, we use cross-entropy loss and train the model with the ADAM optimizer. Here, a learning rate of 0.01 is used. It should be noted that most studies on BCI and EEG in particular use extracted features to train deep learning models [81, 52], which is the approach we took with the networks described above. However, for completeness, we also train the deep networks with raw data (after pre-processing). For this purpose we design a separate encoder for each modality and use feature-level fusion. As expected, we notice that the optimum network depth used when utilizing the extracted features (2 blocks) is not sufficient when using the raw data. We therefore increase the depth by adding a third block to obtain better results. Accordingly, each encoder for the raw data contains 3 blocks for both the VGG- and ResNet-style models. The details of the architecture otherwise remain mostly the same. We present all the details of the deep networks in Table VII, for both VGG- and ResNet-style models, when trained with features or raw signals. 
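As a concrete illustration of the feature-based VGG-style model described above, the following sketch (PyTorch assumed) builds the two Conv1D blocks with batch normalization, ReLU, and max pooling, followed by fully connected layers; the kernel size of 3 and the filter sizes 64/128 follow the "VGG (feat.)" column of Table VII, while the hidden-layer width and the adaptive pooling used to flatten the feature maps are our own illustrative choices.

```python
import torch
import torch.nn as nn

class VGGStyle1D(nn.Module):
    """VGG-style 1-D CNN: two conv blocks (2x Conv1d + BN + ReLU + MaxPool),
    followed by two fully connected layers and a classification layer."""
    def __init__(self, in_channels=1, n_classes=2, hidden=128):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm1d(c_out), nn.ReLU(),
                nn.Conv1d(c_out, c_out, kernel_size=3, padding=1),
                nn.BatchNorm1d(c_out), nn.ReLU(),
                nn.MaxPool1d(2))
        self.features = nn.Sequential(block(in_channels, 64), block(64, 128))
        self.pool = nn.AdaptiveAvgPool1d(1)   # assumed; makes the head input-length agnostic
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(128, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes))

    def forward(self, x):                 # x: (batch, channels, length)
        return self.head(self.pool(self.features(x)))

model = VGGStyle1D(in_channels=1, n_classes=2)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate from the text
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 1, 144)                # e.g., a batch of feature vectors of length 144
print(model(x).shape)                     # torch.Size([8, 2])
```

Per Table VII, the ResNet-style variant differs mainly by using residual-style blocks and a three-layer classification head, and the raw-signal encoders add a third convolutional block.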
### _Training scheme_ We train all the models (classical machine learning and deep networks) in both 10-fold cross-validation and the more rigorous Leave-One-Subject-Out (LOSO) scheme. We also explore both binary and ternary classification of cognitive load. For binary, we group the cognitive load ratings from 1 to 4 as 'low' cognitive load and 5 to 9 as 'high' cognitive load. For ternary, we divide the cognitive load ratings into 3 groups, 1 to 3, 4 to 6, and 7 to 9, which correspond to 'low', 'medium', and 'high' cognitive load classes, respectively. ## 5 Benchmarking Results Here, we present the results of the benchmarking study for binary and ternary classification in both validation schemes. \begin{table} \begin{tabular}{l l} \hline \hline **Models** & **Parameters** \\ \hline \hline AB & number of estimators: 70, learning rate: 0.1, algorithm: SAMME.R \\ \hline DT & criterion: gini, random state: 42, maximum depth: 3, minimum samples to be at leaf node: 5 \\ \hline NB & variance smoothing: 1e\({}^{-09}\) \\ \hline KNN & number of neighbors: 20, weights: distance, algorithm: ‘auto’ \\ \hline LDA & solver: least squares solution \\ \hline RF & maximum depth: 50, number of estimators: 1000, number of jobs: -1, random state: 42, class weight: balanced \\ \hline SVM & regularization parameter: 0.1, kernel: polynomial \\ \hline XGB & maximum depth: 20, number of estimators: 1000, learning rate: 0.001, use label encoder: False, subsample: 0.5, verbose eval: 200, booster: dart, number of jobs: -1, number of leaves: 50, regularization lambda: 0.0001, class weight: balanced \\ \hline MLP & hidden layer sizes: (100, 50), learning rate: adaptive, maximum iteration: 1000 \\ \hline \hline \end{tabular} \end{table} TABLE VI: The machine learning model parameters used in this study. We also present a comparison for the different multimodal setups in our experiments. The detailed results are presented in Tables VIII, IX, X, and XI. In these tables, bold values denote the highest, while underline represents the second-highest. As we observe in Table VIII, for 10-fold cross-validation in the binary setup, we obtain the highest accuracy of 83.67% with the XGB classifier. This performance is achieved when all 4 modalities are used. This is followed by 83.02% as the second best, obtained with EEG, ECG, and Gaze by the same classifier. Comparing the average values obtained for different modality setups indicates that, as expected, EEG, ECG, EDA, and Gaze altogether outperform the rest, followed by the tri-modal, bi-modal, and uni-modal setups, respectively. Looking at the average values for all the models, we observe that the XGB classifier generally outperforms the rest, followed by RF. Among the 4 different deep learning variants in this setup, we notice that VGG trained with features from all 4 modalities outperforms the other 3 scenarios. In Table IX, for the binary LOSO evaluation scheme, we observe that the highest accuracy of 76.17% is obtained by the VGG-style network trained with features. This accuracy is obtained using 3 modalities, namely EEG, ECG, and EDA. The second best accuracy, 76.04%, is achieved with the combination of EEG, ECG, and Gaze. 
Comparing the average values from different modality setups, we observe that the highest accuracy is obtained by the multimodal scenario with all 4 modalities, followed by the tri-modal, bi-modal, and uni-modal setups, respectively. From the average values for each model, we can deduce that the VGG-style model trained on features performs the best, followed by the ResNet-style model trained on features. We present the results for the ternary 10-fold setup in Table X, where we observe that the best result of 74.08% is achieved with the machine learning classifier XGB. This result is obtained when EEG is trained along with all 3 auxiliary modalities. The second best accuracy of 73.60% is obtained with the same classifier when EEG, EDA, and Gaze are used together. Comparing the average results for each modality setup, we find that, as expected, using all 4 modalities outperforms the rest, while the best performing average result is achieved by the XGB classifier. Among the 4 deep learning models, the VGG-style model outperforms the other 3 when trained with features. Here, the highest accuracy of 67.12% is obtained when trained with 3 modalities: EEG, EDA, and Gaze. Lastly, Table XI shows the results in the LOSO ternary setup. The highest result obtained is 64.53% when using the ResNet-style network trained with the raw data with 3 modalities, namely EEG, ECG, and Gaze. Following, the \begin{table} \begin{tabular}{l|l|l|l|l|l} \hline **Modules** & **Parameters** & **VGG (feat.)** & **ResNet (feat.)** & **VGG (raw)** & **ResNet (raw)** \\ \hline \hline Conv1D & Kernel size & - & 1\(\times\)3 & - & 1\(\times\)32 \\ & Filter size & - & 32 & - & 64 \\ \hline Conv Block 1 & Architecture & VGG & ResNet & VGG & ResNet \\ & Activation & ReLU & ReLU & ReLU & ReLU \\ & Kernel size & 1\(\times\)3 & 1\(\times\)3 & 1\(\times\)32 & 1\(\times\)32 \\ & Filter size & 64 & 32 & 64 & 64 \\ & Dropout rate & - & 0.5 & - & 0.5 \\ \hline Conv Block 2 & Architecture & VGG & ResNet & VGG & ResNet \\ & Activation & ReLU & ReLU & ReLU & ReLU \\ & Kernel size & 1\(\times\)3 & 1\(\times\)3 & 1\(\times\)16 & 1\(\times\)16 \\ & Filter size & 128 & 32 & 128 & 128 \\ & Dropout rate & - & 0.5 & - & 0.5 \\ \hline Conv Block 3 & Architecture & - & - & VGG & ResNet \\ & Activation & - & - & ReLU & ReLU \\ & Kernel size & - & - & 1\(\times\)8 & 1\(\times\)8 \\ & Filter size & - & - & 256 & 256 \\ & Dropout rate & - & - & - & 0.5 \\ \hline Classification Block & Layer type & FC & FC & FC & FC \\ & Number of layers & 2 & 3 & 2 & 3 \\ & Dropout rate & 0.25 & - & 0.25 & 0.25 \\ & Activation & ReLU & ReLU & ReLU & ReLU \\ \hline \end{tabular} \end{table} TABLE VII: Architectural details of the VGG- and ResNet-style networks used in this study for both features and raw signals. 
\begin{table} \begin{tabular}{l|l|l|l|l|l|l|l|l} \hline \multicolumn{10}{c}{Modalities} \\ \cline{2-9} \multicolumn{1}{c|}{**Models**} & \multicolumn{1}{c|}{EEG} & \multicolumn{1}{c|}{EEG,ECG} & \multicolumn{1}{c|}{EEG,Gaze} & \multicolumn{1}{c|}{EEG,ECG} & \multicolumn{1}{c|}{EEG,ECG} & \multicolumn{1}{c|}{EEG,ECG} & \multicolumn{1}{c|}{EEG,ECG} & \multicolumn{1}{c|}{Mean} \\ \cline{2-9} & EEG & EEG,ECG & EEG,EDA & EEG,Gaze & Gaze & Gaze & EDA, Gaze & EDA, Gaze & \multicolumn{1}{c|}{EDA, Gaze} & \multicolumn{1}{c|}{Mean} \\ \hline \hline AB & 67.17 (55.37) & 73.26 (68.43) & 71.99 (66.53) & 70.13 (61.29) & 78.34 (69.85) & 74.36 (69.75) & 73.36 (67.07) & 75.35 (71.80) & 72.32 (66.51) \\ DT & 65.31 (63.04) & 72.27 (71.15) & 69.61 (67.93) & 68.17 (66.48) & 73.02 (71.41) & 73.12 (71.52) & 72.99 (70.75) & 73.98 (72.40) & 71.05 (69.34) \\ NB & 48.68 (65.4) & 51.80 (50.48) & 51.39 (49.97) & 49.95 (48.22) & 53.73 (52.87) & 52.46 (51.35) & 52.39 (51.23) & 54.42 (53.65) & 51.85 (50.54) \\ KNN & 70.61 (68.49) & 69.34 (67.40) & 71.78 (70.16) & 77.07 (64.45) & 70.64 (69.16) & 77.77 (72.56) & 74.97 (72.50) & 73.33 (71.22) \\ LDA & 66.83 (62.45) & 72.74 (70.65) & 71.19 (68.45) & 69.65 (66.24) & 74.73 (72.94) & 73.70 (71.61) & 72.95 (70.62) & 75.83 (74.04) & 72.20 (69.63) \\ RF & **77.41** (**73.39**) & 79.34 (76.27) & 79.89 (63.71) & 81.26 (78.8) & 80.82 (79.84) & 80.65 (77.83) & 81.71 (79.23) & 80.07 (77.05) \\ SVM & 61.88 (38.29) & 62.08 (38.89) & 61.88 (82.89) & 63.46 (63.46) & 64.35 (65.9) & 71.54 (65.82) & 67.14 (65.43) & 73.70 (69.85) & 65.75 (79.431) \\ XGB & 77.38 (73.72) & **82.95** (**81.25**) & **80.76** (**76.77**) & **80.75 (**78.06**) & **82.61** (**80.94**) & **83.02** (**82.12**) & **82.12** (**80.08**) & **83.67** (**82.05**) & 81.57 (97.37) \\ MLP & 74.32 (72.36) & 74.22 (72.31) & 76.34 (72.03) & 75.18 (73.34) & 76.00 (74.54) & 75.83 (74.53) & 77.11 (55.47) & 77.69 (69.19) & 75.83 (74.11) \\ VGG (feat.) & 75.56 (73.21) & 77.57 (78.5) & 78.99 (64.94) & 78.78 (76.74) & 78.78 (77.22) & 78.72 (72.38) & 80.17 (78.39) & 80.66 (79.17) & 78.67 (68.31) \\ ResNet (feat.) & 69.98 (65.26) & 74.72 (71.48) & 71.74 (68.46) & 72.85 (69.15) & 75.49 (72.71) & 74.65 (71.83) & 73.61 (70.67) & 76.39 (74.28) & 73.55 (04.48) \\ VGG (raw) & 63.83 (63.2) & 67.37 (66.97) & 66.95 (66.11) & 67.26 (69.59) & 70.12 (69.12) & 71.45 (70.5) & 71.76 (71.07) & 73.87 (73.00) & 69.17 (68.38) \\ ResNet (raw) & 61.95 (99.75) & 64.49 (62.14) & 63.09 (57.45) & 66.68 (64.85) & 64.41 (62.82) & 70.04 (67.69) & 68.71 (66. second highest accuracy of 63.56% is obtained by the VGG-style network trained on raw data from all 4 modalities. The average results of various modality setups show that the multimodal setup with all 4 modalities achieves the highest accuracy. The average highest accuracy is obtained using the ResNet-style model followed by the VGG-style model both when trained with raw data. In the end, to summarize our findings above, we observe that both classical machine learning and deep learning models possess the ability to distinguish between different levels of driver cognitive loads. As expected, we find that ternary classification is more challenging than binary, while LOSO on the other hand proves more difficult than 10-fold. In terms of modalities, multimodal setups generally provide more information regarding driver cognitive load, with EEG, ECG, EDA and Gaze showing the best performances. 
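To make the evaluation protocol behind these tables concrete, the sketch below illustrates the label grouping described in the Training scheme section together with a LOSO evaluation of one of the classical models from Table VI (scikit-learn assumed); the synthetic arrays stand in for the real feature matrix, PAAS labels, and participant IDs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def to_binary(score):   # PAAS 1-9 -> 'low' (1-4) vs 'high' (5-9)
    return int(score >= 5)

def to_ternary(score):  # PAAS 1-3 / 4-6 / 7-9 -> 0 / 1 / 2
    return int((score - 1) // 3)

# X: one row per 10-second segment; y_raw: self-reported PAAS scores;
# subjects: participant id per segment (synthetic placeholders here).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))
y_raw = rng.integers(1, 10, size=200)
subjects = rng.integers(0, 10, size=200)

y = np.array([to_binary(s) for s in y_raw])
clf = RandomForestClassifier(n_estimators=1000, max_depth=50,
                             class_weight="balanced", random_state=42)
acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(),
                      groups=subjects, scoring="accuracy")
print(f"LOSO accuracy: {acc.mean():.3f}")
```

Swapping `to_binary` for `to_ternary`, or the LOSO splitter for 10-fold cross-validation, reproduces the other three evaluation settings.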
## 6 Conclusion In this paper, we presented CL-Drive, a new multimodal cognitive load dataset collected during simulated driving. Our dataset, contains EEG, ECG, EDA, and Gaze data from 21 participants in a variety of different driving conditions. Subjective self-reported cognitive load scores were recorded at 10-second intervals throughout the experiment, making it a very rich and dense dataset in terms of both modalities and labels. We also provided benchmarks by evaluating our dataset in both binary and ternary label distributions for both LOSO as well as k-fold evaluation schemes. \begin{table} \begin{tabular}{l|l|l l l l l l l l l} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Models**}} & \multicolumn{6}{c}{Modalities} \\ \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{EEG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{EEG,EDA} & \multirow{2}{*}{EEG,Gaze} & \multicolumn{2}{c}{EEG,ECG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{Mean} \\ & & & & & EDA & Gaze & & & EDA, Gaze & & EDA, Gaze \\ \hline \hline AB & 46.20 (83.35) & 51.84 (48.5) & 51.96 (48.78) & 50.12 (44.48) & 53.52 (51.62) & 54.56 (51.99) & 52.56 (49.66) & 55.31 (33.15) & 51.37 (48.30) \\ DT & 48.95 (48.77) & 56.03 (56.08) & 52.08 (51.85) & 52.70 (52.26) & 57.48 (57.36) & 58.82 (58.88) & 56.14 (55.85) & 58.13 (58.16) & 55.04 (54.90) \\ NB & 34.93 (29.77) & 37.13 (33.00) & 37.47 (33.77) & 36.58 (31.86) & 39.50 (36.55) & 38.78 (35.06) & 38.64 (35.23) & 40.50 (37.74) & 37.94 (34.12) \\ KNN & 54.59 (53.95) & 51.50 (50.81) & 58.46 (35.50) & 64.22 (62.01) & 52.46 (51.67) & 57.06 (56.33) & 63.05 (62.87) & 58.61 (57.93) & 56.99 (56.43) \\ LDA & 53.01 (51.93) & 58.27 (58.09) & 58.13 (57.82) & 54.93 (54.33) & 61.02 (61.00) & 59.43 (59.23) & 58.03 (57.89) & 61.84 (61.84) & 58.08 (57.77) \\ RF & 63.56 (56.30) & 68.41 (68.42) & 69.34 (69.17) & 68.30 (68.03) & 71.57 (71.63) & 70.40 (70.39) & 71.67 (71.76) & 72.67 (72.86) & 69.49 (69.41) \\ SVM & 41.01 (51.26) & 46.56 (37.78) & 41.73 (42.17) & 48.75 (25.99) & 48.40 (41.22) & 53.01 (50.34) & 50.39 (53.83) & 58.38 (51.60) & 47.96 (59.35) \\ XGB & **64.94 (61.14)** & **70.78 (70.11)** & **71.71 (72.19)** & **71.72 (72.35)** & **73.03 (73.76)** & **72.91 (73.51)** & **73.60 (73.64)** & **74.08 (74.26)** & 71.54 (71.61) \\ MLP & 58.44 (57.72) & 61.50 (60.64) & 61.12 (60.90) & 62.63 (62.22) & 62.80 (62.66) & 63.08 (63.09) & 63.84 (63.83) & 62.35 (63.26) & 62.08 (61.79) \\ VGG (feat). & 62.12 (60.92) & 62.88 (62.21) & 64.44 (63.91) & 63.58 (64.88) & 65.76 (65.67) & 64.03 (63.43) & 67.21 (67.62) & 66.67 (60.64) & 64.80 (61.49) \\ ResNet (feat). & 47.19 (44.61) & 55.52 (53.74) & 51.91 (50.66) & 51.39 (49.41) & 56.15 (55.48) & 58.46 (53.88) & 53.19 (52.24) & 55.63 (54.67) & 53.23 (51.84) \\ VGG (raw) & 47.85 (43.7) & 54.33 (51.57) & 56.48 (51.61) & 56.88 (52.24) & 61.76 (57.96) & 62.89 (58.3) & 63.44 (69.41) & 66.33 (62.93) & 58.86 (54.69) \\ ResNet (raw) & 50.57 (37.41) & 56.56 (50.99) & 53.67 (42.45) & 56.37 (49.3) & 60.62 (54.07) & 59.38 (54.49) & 61.05 (56.39) & 64.84 (60.60) & 57.91 (59.09) \\ \hline Mean & 51.78 (47.38) & 56.35 (53.99) & 55.85 (26.56) & 56.71 (54.27) & 58.81 (56.95) & 59.18 (57.56) & 59.44 (57.76) & 60.90 (59.62) & \\ \hline \hline \end{tabular} \end{table} TABLE X: The accuracy and F1 scores for the classifiers in LOSO binary setup. 
\begin{table} \begin{tabular}{l|l|l l l l l l l l l|l} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Models**}} & \multicolumn{6}{c}{Modalities} \\ \hline \multirow{2}{*}{**Models**} & \multirow{2}{*}{EEG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{EEG,EDA} & \multirow{2}{*}{EEG,Gaze} & \multicolumn{2}{c}{EEG,ECG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{EEG,ECG} & \multirow{2}{*}{Mean} \\ & & & & EDA & Gaze & & EDA & Gaze & & EDA, Gaze & \\ \hline \hline AB & 62.30 (46.81) & 66.38 (59.47) & 63.01 (54.57) & 64.81 (53.53) & 67.86 (62.22) & 66.80 (60.105) & 66.92 (59.68) & 69.14 (63.60) & 65.93 (57.49) \\ DT & 54.63 (49.73) & 60.33 (54.82) & 57.94 (53.07) & 57.57 (53.77) & 60.97 (56.91) & 62.13 (57.25) & 61.19 (56.73) & 62.00 (57.02) & 59.60 (54.91) \\ NB & 47.80 (43.54) & 48.94 (58.0) & 48.16 (44.71) & 49.00 (45.06) & 49.85 (47.52) & 49.85 (47.11) & 49.15 (45.71) & 50.33 (48.06) & 49.14 (45.94) \\ KNN & 58.21 (53.11) & 61.45 (58.09) & 60.83 (56.10) & 60.63 (61.42) & 62.51 (59.51) & 65.55 (61.01) & 67.06 (62.48) & 65.60 (61.36) ## 7 Acknowledgement We would like to thank the Innovation for Defence Excellence and Security (IDEaS) program under the Department of National Defence (DND) for funding this project.
2306.12268
A Finite Expression Method for Solving High-Dimensional Committor Problems
Transition path theory (TPT) is a mathematical framework for quantifying rare transition events between a pair of selected metastable states $A$ and $B$. Central to TPT is the committor function, which describes the probability to hit the metastable state $B$ prior to $A$ from any given starting point of the phase space. Once the committor is computed, the transition channels and the transition rate can be readily found. The committor is the solution to the backward Kolmogorov equation with appropriate boundary conditions. However, solving it is a challenging task in high dimensions due to the need to mesh a whole region of the ambient space. In this work, we explore the finite expression method (FEX, Liang and Yang (2022)) as a tool for computing the committor. FEX approximates the committor by an algebraic expression involving a fixed finite number of nonlinear functions and binary arithmetic operations. The optimal nonlinear functions, the binary operations, and the numerical coefficients in the expression template are found via reinforcement learning. The FEX-based committor solver is tested on several high-dimensional benchmark problems. It gives comparable or better results than neural network-based solvers. Most importantly, FEX is capable of correctly identifying the algebraic structure of the solution which allows one to reduce the committor problem to a low-dimensional one and find the committor with any desired accuracy.
Zezheng Song, Maria K. Cameron, Haizhao Yang
2023-06-21T13:43:59Z
http://arxiv.org/abs/2306.12268v2
# A Finite Expression Method for Solving High-Dimensional Committor Problems ###### Abstract Transition path theory (TPT) is a mathematical framework for quantifying rare transition events between a pair of selected metastable states \(A\) and \(B\). Central to TPT is the committor function, which describes the probability to hit the metastable state \(B\) prior to \(A\) from any given starting point of the phase space. Once the committor is computed, the transition channels and the transition rate can be readily found. The committor is the solution to the backward Kolmogorov equation with appropriate boundary conditions. However, solving it is a challenging task in high dimensions due to the need to mesh a whole region of the ambient space. In this work, we explore the finite expression method (FEX, Liang and Yang (2022)) as a tool for computing the committor. FEX approximates the committor by an algebraic expression involving a fixed finite number of nonlinear functions and binary arithmetic operations. The optimal nonlinear functions, the binary operations, and the numerical coefficients in the expression template are found via reinforcement learning. The FEX-based committor solver is tested on several high-dimensional benchmark problems. It gives comparable or better results than neural network-based solvers. Most importantly, FEX is capable of correctly identifying the algebraic structure of the solution which allows one to reduce the committor problem to a low-dimensional one and find the committor with any desired accuracy. ## 1 Introduction Understanding the transition events of a stochastic system between disjoint metastable states is of great importance in many branches of applied sciences [31, 33, 14, 3, 23]. Examples of such transition events include conformational changes of biomolecules and dislocation dynamics in crystalline solids. In this work, we focus on the overdamped Langevin process as the underlying dynamics \[d\mathbf{x}_{t}=-\nabla V\left(\mathbf{x}_{t}\right)dt+\sqrt{2\beta^{-1}}d \mathbf{w}_{t},\] where \(\mathbf{x}_{t}\in\Omega\subset\mathbb{R}^{d}\) is the state of the system at time \(t\), \(V:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a smooth and coercive potential function, \(\beta^{-1}=k_{B}T\) is the absolute temperature times the Boltzmann constant, and \(\mathbf{w}_{t}\) is the standard \(d\)-dimensional Brownian motion. The invariant probability density for the overdamped Langevin dynamics (1) is given by \[\rho(\mathbf{x})=Z_{\beta}^{-1}\exp(-\beta V(\mathbf{x})),\quad Z_{\beta}=\int_{ \mathbb{R}^{d}}\exp(-\beta V(\mathbf{x}))d\mathbf{x}. \tag{1}\] In real-world applications, the dimension \(d\) is typically high, leading to the major numerical difficulty of concern in this paper. Transition path theory (TPT) [7, 8] is a mathematical framework for the quantitative description of statistics of transition events. The committor function is a central object in TPT. The reaction rate, the reaction current, i.e., the vector field delineating the transition process, and the density of transition paths are expressed in terms of the committor. For two disjoint regions \(A\) and \(B\) in \(\Omega\) chosen by the user, the committor function \(q(\mathbf{x})\) is defined as \[q(\mathbf{x})=\mathbb{P}(\tau_{B}<\tau_{A}|\mathbf{x}_{0}=\mathbf{x}),\] where \(\tau_{A}\) and \(\tau_{B}\) are the hitting times for the sets \(A\) and \(B\), respectively. 
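For intuition, the committor at a single point can in principle be estimated directly from this definition by simulating the overdamped Langevin dynamics with the Euler-Maruyama scheme until the trajectory hits \(A\) or \(B\). The sketch below does this for an illustrative two-dimensional double-well potential with disk-shaped sets \(A\) and \(B\); all concrete choices (potential, sets, step size) are ours and serve only to illustrate the definition.

```python
import numpy as np

beta, dt = 1.0, 1e-3
def gradV(x):                      # illustrative 2-D double-well potential (x1^2-1)^2 + x2^2
    return np.array([4 * x[0] * (x[0] ** 2 - 1), 2 * x[1]])

cA, cB, r = np.array([-1.0, 0.0]), np.array([1.0, 0.0]), 0.3   # A, B: disks of radius r

def committor_mc(x0, n_traj=500, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of q(x0) = P(tau_B < tau_A | x_0 = x0)."""
    hits_B = 0
    for _ in range(n_traj):
        x = np.array(x0, dtype=float)
        while True:
            if np.linalg.norm(x - cA) < r:          # reached A first
                break
            if np.linalg.norm(x - cB) < r:          # reached B first
                hits_B += 1
                break
            x += -gradV(x) * dt + np.sqrt(2 * dt / beta) * rng.standard_normal(2)
    return hits_B / n_traj

print(committor_mc([0.1, 0.0]))
```

Such brute-force estimates are noisy and must be repeated for every query point, which is impractical as a way to obtain the committor on the whole domain; instead, the committor is characterized as the solution of a boundary-value problem, as recalled next.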
The committor function is the solution to the boundary-value problem (BVP) for the backward Kolmogorov equation \[\beta^{-1}\Delta q-\nabla V\cdot\nabla q=0,\quad\mathbf{x}\in \Omega_{AB}:=\Omega\backslash(\bar{A}\cup\bar{B}), \tag{2}\] \[q(\mathbf{x})|_{\partial A}=0,\quad q(\mathbf{x})|_{\partial B} =1,\] \[\frac{\partial q}{\partial\hat{n}}=0,\quad\mathbf{x}\in\partial\Omega,\] where \(\hat{n}\) is the outer unit normal vector. An analytical solution to equation (2) can be found only in special cases. Otherwise, BVP (2) must be solved numerically. However, the curse of dimensionality makes traditional numerical schemes, such as finite difference and finite element methods, prohibitively expensive when \(d>3\). To address this issue, alternative methods based on ideas borrowed from data science and machine learning have been emerging. ### Prior Approaches to Solving the Committor Problem in High Dimensions To the best of our knowledge, the first high-dimensional committor solver was introduced by Lai and Lu [19] relying on the assumption that the dynamics were concentrated near a low-dimensional manifold. The authors constructed a linear system for the committor problem via introducing local meshes on patches of the point cloud and corrected the entries of the resulting stiffness matrix afterward to make it symmetric. While this idea is elegant, we found that it suffers from a lack of robustness when the intrinsic dimension of the system varies throughout the phase space. Neural network-based committor solvers were introduced in [16, 21, 25]. They exploit the fact that BVP (2) admits a variational formulation \[\operatorname*{arg\,min}_{\begin{subarray}{c}f\in\mathcal{C}^{1}(\Omega_{AB}) \\ f(\mathbf{x})|_{\partial A}=0,\ f(\mathbf{x})|_{\partial B}=1\end{subarray}} \int_{\Omega_{AB}}|\nabla f(\mathbf{x})|^{2}\rho(\mathbf{x})d\mathbf{x}, \tag{3}\] where \(\rho\) is the invariant density (1). The committor is approximated by a solution model involving a neural network which is trained to minimize the objective function (3). In [21], the boundary conditions for the committor are built into the solution model, while in [16] they are enforced by means of penalty functions. The solution models in these works are also different. The model of [16] involves Green's functions to facilitate accurate approximation of the committor at high temperatures, while the model in [21] is advantageous for low temperatures as it allows for training points generated using enhanced sampling algorithms. The accuracy of both of these solvers is limited by the accuracy of Monte Carlo integration as the integral in (3) is approximated as a Monte Carlo sum. A committor solver based on tensor train representations was introduced in [5]. This approach is shown to be suitable even for committor problems for discretized stochastic partial differential equations and yields an accurate solution if the natural computational domain is box-shaped. However, it can suffer from low accuracy when the geometry of the problem is more complicated so that the involved functions (the committor, the invariant density, etc.) cannot be represented accurately by linear combinations of a few basis polynomials. Finally, techniques based on diffusion maps [2, 11, 10] are suitable for finding committors in high dimensions provided that the intrinsic dimension of the problem is not very high, e.g. \(d=4\). 
These approaches feature approximations to the backward Kolmogorov operator in (2) using diffusion kernels and various renormalizations accounting for enhanced sampling data and anisotropic and position-dependent diffusion resulting from the use of physically motivated dimensional reduction. The committor function is found on the data points. The accuracy of this approach is limited by Monte Carlo integration and a finite bandwidth parameter inherent to the construction. ### The Goal and a Brief Summary of Main Results Liang and Yang recently introduced a novel approach to solving high-dimensional PDEs named the _finite expression method (FEX)_ [22]. This method is capable of finding solutions to PDEs with machine precision provided that the exact solutions can be approximated as algebraic expressions involving a relatively small number of nonlinear functions and binary operations. This was demonstrated on a series of test problems of dimensions up to 50. In FEX, the problem of solving a PDE numerically is transformed into a mixed optimization problem involving both combinatorial optimization for operator selection and continuous optimization for trainable parameters. The goal of this work is to investigate FEX as a committor solver, understand its strengths and limitations, and identify routes for further improvements. The committor problem arising in applications is often high-dimensional. Furthermore, the phase space \(\Omega\) of the underlying stochastic process is often unbounded, and hence the computational domain needs to be chosen by the user wisely. The sets \(A\) and \(B\) are often chosen to be balls, ellipsoids, or potential energy sublevel sets surrounding two selected local minima of the potential energy. This choice always makes the computational domain nonconvex and the functional dependence of the solution on the phase variables nontrivial. Therefore, the committor problem in high dimensions presents a challenge for FEX that is worth exploring. We emphasize that the test problems in [22] all had exact solutions given by short formulas, which is not true for the committor problem. In this work, we adapt the FEX approach to the committor problem, discuss its setup and implementation, and apply it to a number of test problems. The test problems include the double-well potential with hyperplane boundaries as in [16, 5], the double-well potential with sublevel set boundaries, concentric spheres as in [16], and the rugged Mueller potential as in [16]. Our findings are the following. 1. On the benchmark test problems, FEX performs comparably or better than neural network-based solvers. 2. Remarkably, FEX is capable of capturing the algebraic structure of the solution, i.e. identifying variables or combinations of variables on which the solution depends and does not depend. This allows for dimensional reduction and the use of traditional highly accurate methods such as Chebyshev spectral methods or finite element methods to find the solution with the desired precision. This ability is unique to FEX. The other high-dimensional committor solvers mentioned in Section 1.1 do not have it. The rest of the paper is organized as follows. The FEX algorithm is detailed in Section 2. The applications of FEX to four benchmark test problems are presented in Section 3. The conclusion is formulated in Section 4. Additional numerical details are found in the Appendices. ## 2 The Finite Expression Method The finite expression method (FEX) [22] is a brand-new approach to solving PDEs. 
It seeks the solution in the form of a mathematical expression with a predefined finite number of operators. The workflow of FEX is depicted in Figure 1. To implement FEX, the user needs to choose a binary tree of a finite depth, usually between 4 and 6. This tree is realized as a computer algebra expression. Each tree node is associated with one operator, unary or binary. The input variables are propagated through the tree by passing them through the leaf nodes. Each operator is equipped with a set of parameters defining an affine operator acting on its input argument. Lists of possible unary and binary operators are supplied by the user. Once the operators and parameters have been assigned to all tree nodes, the binary tree can generate a mathematical expression. The set of operators and parameters minimizing the loss functional measuring how well the expression fits the PDE is sought by solving the mixed combinatorial optimization problem. To solve this mixed optimization problem, FEX utilizes a combinatorial optimization method to search for the optimal choice of operators and a continuous optimization method to identify the associated parameters of tree nodes. In combinatorial optimization, a reinforcement learning approach is adopted to further reformulate combinatorial optimization into continuous optimization over probability distributions. To accomplish this, FEX introduces a controller network that outputs the probabilities of selecting each operator at each node of the tree. As a result, the problem of selecting the best operators for the tree nodes is reformulated as the problem of identifying the best controller network that can sample the best operators. The optimization for the best controller is a continuous optimization problem. The parameters of the controller network are trained using policy gradient methods to maximize the expected reward in the reinforcement learning terminology, which is equivalent to minimizing the loss function. Therefore, the optimal choice of operators can be determined by sampling from the output probabilities of the best controller network with high probability. ### The Functional Space of Finite Expressions As earlier said, FEX approximates the solution of a PDE in the space of functions with finitely many operators. Therefore, it is important to formally define the functional space in which the solution is sought. **Definition 2.1** (Mathematical expression [22]).: A mathematical expression is a combination of symbols, which is well-formed by syntax and rules and forms a valid function. The symbols include operands (variables and numbers), operators (e.g., "+", "sin", integral, derivative), brackets, and punctuation. **Definition 2.2** (\(k\)-finite expression [22]).: A mathematical expression is called a \(k\)-finite expression if the number of operators in this expression is \(k\). **Definition 2.3** (Finite expression method [22]).: The finite expression method is a methodology to solve a PDE numerically by seeking a finite expression such that the resulting function solves the PDE approximately. Figure 1: Representation of the components of our FEX implementation. (a) The searching loop for the symbolic solution encompasses multiple stages, namely expression generation, score computation, controller update, and candidate optimization. (b) Illustration of the expression generation with a binary tree and a controller \(\mathbf{\chi}\). 
The controller produces probability mass functions for each node of the tree, enabling the sampling of node values. Furthermore, we incorporate learnable scaling and bias parameters to generate expressions based on the predefined tree structure and the sampled node values. We denote by \(\mathbb{S}_{k}\) the functional space that consists of functions formed by finite expressions with the number of operators less than or equal to \(k\). ### The Mixed Combinatorial Optimization Problem in FEX The loss functional \(\mathcal{L}\) in FEX is problem-dependent. Reasonable choices include the least-squares loss as in [18, 27, 6], a variational formulation as in [9, 30], and a weak formulation as in [4, 32]. In this work, we use the fact that the committor problem (2) admits the variational formulation (3) and choose the variational loss functional as in [16] given by \[\mathcal{L}(u)=\int_{\Omega_{AB}}\left\|\nabla u(\mathbf{x})\right\|^{2}\rho( \mathbf{x})d\mathbf{x}+\tilde{c}\int_{\partial A}u(\mathbf{x})^{2}dm_{ \partial A}(\mathbf{x})+\tilde{c}\int_{\partial B}\left(u(\mathbf{x})-1\right) ^{2}dm_{\partial B}(\mathbf{x}). \tag{4}\] Here \(\rho\) is the invariant density (1) and \(m_{\partial A},m_{\partial B}\) are the user-chosen measures on the boundaries \(\partial A\) and \(\partial B\), respectively. The advantage of this form is that it requires evaluation of only the first derivatives of solution candidates, unlike the least-squares loss. This saves runtime. The boundary conditions in (2) are enforced by means of penalty terms, which is simple and convenient. In FEX, the solution is found by solving the mixed combinatorial optimization problem \[\min_{u\in\mathbb{S}_{\mathtt{FEX}}}\mathcal{L}(u), \tag{5}\] where the solution space \(\mathbb{S}_{\mathtt{FEX}}\subset\mathbb{S}_{k}\) will be elaborated in Section 2.3.1. ### Implementation of FEX The computation workflow of FEX starts with the construction of a binary tree. Each node of the tree contains either a unary or a binary operator. The solution candidate \(u\) is obtained by the evaluation of the function represented by the binary tree. Next, the mixed combinatorial optimization (5) is applied to adaptively select the optimal operators in all nodes of the tree. The goal of the mixed combinatorial optimization is to identify operators that can recover the structure of the true solution. FEX is summarized in Algorithm 1. Details of its implementation are found in Appendix 5.1. #### 2.3.1 Finite Expressions with Binary Trees FEX utilizes a binary tree \(\mathcal{T}\) structure to represent finite expressions as illustrated in Figure 2. Each tree node contains either a unary operator or a binary operator. The sets of unary and binary operator candidates, \(\mathbb{U}\) and \(\mathbb{B}\), usually four of each kind, are selected for each problem by the user. Examples of unary and binary operators are, respectively \[\sin,\exp,\log,\mathrm{Id},(\cdot)^{2},\int\cdot\mathrm{d}x_{i},\frac{ \partial\cdot}{\partial x_{i}},\cdots\quad\text{ and }\quad+,-,\times,\div,\cdots.\] Each unary operator acts element-wise and is equipped with scaling parameters \(\alpha_{i}\), \(i=1,\ldots,d\), and a bias parameter \(\beta\) that are applied to its output element-wise, for example, \[\alpha_{1}\sin(x_{1})+\ldots+\alpha_{d}\sin(x_{d})+\beta.\] The set of all parameters for all unary operators is denoted by \(\mathbf{\theta}\). Then, the entire expression is obtained by a preorder traversal of the operator sequence \(\mathbf{e}\) of the binary tree \(\mathcal{T}\). Therefore, such a finite expression is denoted by \(u(\mathbf{x};\mathcal{T},\mathbf{e},\mathbf{\theta})\) as a function of \(\mathbf{x}\). For a fixed \(\mathcal{T}\), the maximal number of operators is bounded from above by a constant denoted as \(k_{\mathcal{T}}\). In FEX, \[\mathbb{S}_{\mathtt{FEX}}=\{u(\mathbf{x};\mathcal{T},\mathbf{e},\mathbf{\theta})\ |\ \mathbf{e}\in\mathbb{U}\cup\mathbb{B}\}\] is the functional space in which we solve the PDE. Note that \(\mathbb{S}_{\mathtt{FEX}}\subset\mathbb{S}_{k_{\mathcal{T}}}\). The computation flow of the binary tree \(\mathcal{T}\) works recursively from the leaf nodes. The unary operators at the leaf nodes are applied on the input \(\mathbf{x}\) element-wise, and the scaling \(\alpha\) transforms the dimension from \(\mathbb{R}^{d}\) to \(\mathbb{R}\). Then, the computation follows a bottom-up manner recursively until the flow reaches the first level of the tree. 
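To make this computational rule concrete, the sketch below (our own illustration, not the reference implementation of [22]) evaluates one possible depth-3 template: a unary operator at the root acting on the binary combination of two unary leaves, each leaf carrying per-dimension scaling parameters and a bias. This template is consistent with the expression reported for the double-well example in Section 3.

```python
import numpy as np

UNARY = {"id": lambda z: z, "square": np.square, "sin": np.sin,
         "exp": np.exp, "tanh": np.tanh}
BINARY = {"+": np.add, "-": np.subtract, "*": np.multiply}

def leaf(x, op, alpha, beta):
    """Unary leaf: apply op element-wise, then the affine map alpha . op(x) + beta -> scalar."""
    return UNARY[op](x) @ alpha + beta          # x: (n, d), alpha: (d,) -> (n,)

def depth3_tree(x, e, theta):
    """e = (root_unary, mid_binary, leaf1_unary, leaf2_unary);
    theta = (alpha1, beta1, alpha2, beta2, alpha_root, beta_root)."""
    a1, b1, a2, b2, ar, br = theta
    mid = BINARY[e[1]](leaf(x, e[2], a1, b1), leaf(x, e[3], a2, b2))
    return ar * UNARY[e[0]](mid) + br

# the structure reported for the double-well example in Section 3.1:
# J(x) = 0.5 * tanh( 1.6798 x_1 + 1.9039 tanh(x_1) ) + 0.5
d, n = 10, 4
x = np.random.default_rng(0).uniform(-1, 1, size=(n, d))
alpha1 = np.zeros(d); alpha1[0] = 1.6798
alpha2 = np.zeros(d); alpha2[0] = 1.9039
theta = (alpha1, 0.0, alpha2, 0.0, 0.5, 0.5)
print(depth3_tree(x, ("tanh", "+", "id", "tanh"), theta))
```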
#### 2.3.2 Implementation of FEX Thus, the FEX solution \(u(\mathbf{x};\mathcal{T},\mathbf{e},\mathbf{\theta})\) is obtained by solving the mixed combinatorial optimization problem of the form \[\min_{\mathbf{e},\mathbf{\theta}}\mathcal{L}(u(\cdot;\mathcal{T},\mathbf{e}, \mathbf{\theta})). \tag{6}\] To achieve this, FEX proceeds in two stages. It first optimizes the selection of the operator sequence \(\mathbf{e}\) that identifies the structure of the true solution. Then it optimizes the parameter set \(\mathbf{\theta}\) to minimize the functional (6). Figure 2: Computational rule of a binary tree. Each node within the binary tree holds either a unary or a binary operator. Initially, we outline the computation flow of a depth-1 tree comprising a solitary operator. Subsequently, for binary trees extending beyond a single layer, the computation process is recursively executed. The framework of FEX consists of four parts. 1. _Score computation._ To identify the structure of the solution, FEX uses a mix-order optimization algorithm to evaluate the score of the operator sequence \(\mathbf{e}\). 2. _Operator sequence generation._ FEX employs a neural network (NN) to model a controller that outputs a probability mass function to sample optimal operator sequences. 3. _Controller update._ Based on the reward feedback of generated operator sequences, the controller is updated to generate good operator sequences. 4. _Candidate optimization._ Within the search loop, FEX maintains a pool of top-performing operator sequences. After training, a fine-tuning step is performed for each candidate in the pool to obtain the best operator sequence as the approximation to the PDE solution. Each of these parts will be elaborated on in the next sections. #### 2.3.3 Score Computation The score of an operator sequence \(\mathbf{e}\) is an essential part of training, as it guides the controller to update parameters to output optimal probability mass functions to sample good operators. We define the score of \(\mathbf{e}\), \(S(\mathbf{e})\), by \[S(\mathbf{e}):=\big{(}1+L(\mathbf{e})\big{)}^{-1},\quad\text{where}\quad L(\mathbf{e}):= \min\{\mathcal{L}(u(\cdot;\mathcal{T},\mathbf{e},\mathbf{\theta}))\,|\,\mathbf{\theta}\}. \tag{7}\] As \(L(\mathbf{e})\) approaches zero, \(S(\mathbf{e})\) increases up to 1. To efficiently evaluate the score \(S(\mathbf{e})\), the following hybrid mix-order optimization approach is utilized for updating the parameter \(\mathbf{\theta}\). Let \(\mathbf{\theta}_{0}^{e}\) be the initial guess for \(\mathbf{\theta}\) for a given \(\mathbf{e}\). First, \(T_{1}\) steps of a first-order optimization algorithm (e.g., stochastic gradient descent [26] or Adam [17]) are performed, resulting in \(\mathbf{\theta}_{T_{1}}^{e}\). Then \(T_{2}\) steps of a second-order optimization algorithm (e.g., Newton's method [1] or BFGS [12]) are made, resulting in \(\mathbf{\theta}_{T_{1}+T_{2}}^{e}\). Finally, the score of the operator sequence \(\mathbf{e}\) is obtained as \[S(\mathbf{e})\approx\big{(}1+\mathcal{L}(u(\cdot;\mathcal{T},\mathbf{e},\mathbf{\theta}_{ T_{1}+T_{2}}^{e}))\big{)}^{-1}. \tag{8}\] 
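A minimal sketch of this two-stage score evaluation for a fixed operator sequence is given below (PyTorch assumed): Adam for the first-order stage, L-BFGS as the (quasi-)second-order stage, and a generic loss function standing in for \(\mathcal{L}(u(\cdot;\mathcal{T},\mathbf{e},\mathbf{\theta}))\).

```python
import torch

def score(loss_fn, theta, T1=100, T2=20, lr=1e-3):
    """Coarse-tune theta with Adam, refine with L-BFGS, and return S = 1/(1+L) as in (7)-(8)."""
    theta = [t.clone().requires_grad_(True) for t in theta]
    adam = torch.optim.Adam(theta, lr=lr)
    for _ in range(T1):                       # first-order stage
        adam.zero_grad()
        loss_fn(theta).backward()
        adam.step()
    lbfgs = torch.optim.LBFGS(theta, max_iter=T2)
    def closure():                            # (quasi-)second-order stage
        lbfgs.zero_grad()
        loss = loss_fn(theta)
        loss.backward()
        return loss
    lbfgs.step(closure)
    with torch.no_grad():
        return 1.0 / (1.0 + loss_fn(theta).item())

# toy example: a simple quadratic stands in for the PDE loss of a candidate expression
theta0 = [torch.tensor(2.0), torch.tensor(-1.0)]
quad = lambda th: (th[0] - 0.3) ** 2 + (th[1] + 0.7) ** 2
print(score(quad, theta0))                    # close to 1 after tuning
```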
#### 2.3.4 Operator Sequence Generation The goal of the controller is to output operator sequences with high scores during training. The controller \(\mathbf{\chi}\) with parameters \(\Phi\) will be denoted by \(\mathbf{\chi}_{\Phi}\). For an operator sequence \(\mathbf{e}\) with \(s\) nodes, the controller \(\mathbf{\chi}_{\Phi}\) outputs probability mass functions \(\mathbf{p}_{\Phi}^{i}\), \(i=1,\cdots,s\). Then, the operator \(e_{j}\) is sampled from \(\mathbf{p}_{\Phi}^{j}\). In addition, the \(\epsilon\)-greedy strategy [28] is used to encourage exploration of the operator set. With probability \(\epsilon<1\), \(e_{i}\) is sampled from a uniform distribution over the operator set, and with probability \(1-\epsilon\), \(e_{i}\) is sampled from \(\mathbf{p}_{\Phi}^{i}\). #### 2.3.5 Controller Update The goal of the controller \(\mathbf{\chi}_{\Phi}\) is to output optimal probability mass functions, from which operator sequences \(\mathbf{e}\) with high scores are highly likely to be sampled. We model the controller as a neural network parameterized by \(\Phi\). The training objective is to maximize the expected score of a sampled operator sequence \(\mathbf{e}\), i.e. \[\mathcal{J}(\Phi):=\mathbb{E}_{\mathbf{e}\sim\mathbf{\chi}_{\Phi}}S(\mathbf{e}). \tag{9}\] The derivative of (9) with respect to \(\Phi\) is \[\nabla_{\Phi}\mathcal{J}(\Phi)=\mathbb{E}_{\mathbf{e}\sim\mathbf{\chi}_{ \Phi}}\left\{S(\mathbf{e})\sum_{i=1}^{s}\nabla_{\Phi}\log\left(\mathbf{p}_{\Phi}^{i}(e _{i})\right)\right\}, \tag{10}\] where \(\mathbf{p}_{\Phi}^{i}(e_{i})\) is the probability of the sampled \(e_{i}\). Let \(N\) denote the batch size. The batch \(\{\mathbf{e}^{(1)},\mathbf{e}^{(2)},\cdots,\mathbf{e}^{(N)}\}\) is sampled under \(\mathbf{\chi}_{\Phi}\) each time. Then the expectation (10) can be approximated by \[\nabla_{\Phi}\mathcal{J}(\Phi)\approx\frac{1}{N}\sum_{k=1}^{N} \left\{S(\mathbf{e}^{(k)})\sum_{i=1}^{s}\nabla_{\Phi}\log\left(\mathbf{p}_{\Phi}^{i}( e_{i}^{(k)})\right)\right\}. \tag{11}\] In turn, the model parameter \(\Phi\) is updated by gradient ascent, i.e., \(\Phi\leftarrow\Phi+\eta\nabla_{\Phi}\mathcal{J}(\Phi)\). However, in practice, the goal is to obtain the operator sequence \(\mathbf{e}\) with the highest score, instead of optimizing the average score of all generated operator sequences. Therefore, following [24] we consider \[\mathcal{J}(\Phi)=\mathbb{E}_{\mathbf{e}\sim\mathbf{\chi}_{\Phi}}\left\{S( \mathbf{e})|S(\mathbf{e})\geq S_{\nu,\Phi}\right\}, \tag{12}\] where \(S_{\nu,\Phi}\) represents the \((1-\nu)\times 100\%\)-quantile of the score distribution generated by \(\mathbf{\chi}_{\Phi}\). In a discrete form, the gradient computation becomes \[\nabla_{\Phi}\mathcal{J}(\Phi)\approx\frac{1}{N}\sum_{k=1}^{N} \left\{(S(\mathbf{e}^{(k)})-\hat{S}_{\nu,\Phi})\mathbbm{1}_{\{S(\mathbf{e}^{(k)})\geq \hat{S}_{\nu,\Phi}\}}\sum_{i=1}^{s}\nabla_{\Phi}\log\left(\mathbf{p}_{\Phi}^{i}(e_ {i}^{(k)})\right)\right\}, \tag{13}\] where \(\mathbbm{1}\) is an indicator function that takes value \(1\) if the condition is true and otherwise \(0\), and \(\hat{S}_{\nu,\Phi}\) is the \((1-\nu)\)-quantile of the scores in \(\{S(\mathbf{e}^{(i)})\}_{i=1}^{N}\). 
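The update based on (13) can be sketched as follows (PyTorch assumed). For simplicity the controller here is a bare table of logits producing one categorical distribution per tree node, and the score \(S(\mathbf{e})\) is a black-box stand-in, so this illustrates the risk-seeking policy gradient rather than the full FEX controller.

```python
import torch

n_nodes, n_ops, batch = 4, 5, 32            # tree nodes, operator choices, batch size N
logits = torch.zeros(n_nodes, n_ops, requires_grad=True)   # controller parameters Phi
opt = torch.optim.Adam([logits], lr=2e-3)
nu = 0.5     # S_{nu,Phi} is the (1-nu)-quantile of the scores; the top nu fraction is kept

def black_box_score(ops):                   # stand-in for S(e); favors operator 0 everywhere
    return (ops == 0).float().mean().item()

for step in range(200):
    probs = torch.softmax(logits, dim=-1)
    dist = torch.distributions.Categorical(probs)
    ops = dist.sample((batch,))                              # (batch, n_nodes)
    scores = torch.tensor([black_box_score(o) for o in ops])
    s_hat = torch.quantile(scores, 1 - nu)                   # \hat S_{nu, Phi}
    keep = (scores >= s_hat).float()
    logp = dist.log_prob(ops).sum(dim=-1)                    # sum over tree nodes
    # risk-seeking policy gradient, eq. (13): ascend J <=> descend -J
    loss = -((scores - s_hat) * keep * logp).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=-1).argmax(dim=-1))   # should concentrate on operator 0
```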
#### 2.3.6 Candidate Optimization As introduced in Section 2.3.3, the score of \(\mathbf{e}\) is based on the optimization of a nonconvex function. Therefore, the score obtained by coarse-tuning with \(T_{1}+T_{2}\) iterations may not be a good indicator of whether \(\mathbf{e}\) recovers the underlying structure of the solution. Therefore, it is important to keep a pool \(\mathbb{P}\) of fixed size \(K\), which adaptively keeps the top \(K\) candidate operator sequences \(\mathbf{e}\). After the search is finished, for each \(\mathbf{e}\in\mathbb{P}\), the objective function \(\mathcal{L}(u(\cdot;\mathcal{T},\mathbf{e},\mathbf{\theta}))\) is fine-tuned over \(\mathbf{\theta}\) using a first-order algorithm with a small learning rate for \(T_{3}\) iterations. ### The Solution Model for the Committor Problem In this paper, we parameterize the committor function \(q(\mathbf{x})\) by a FEX binary tree \[q_{\Phi}:=\{q(\mathbf{x};\mathcal{T},\mathbf{e},\mathbf{\theta})|\mathbf{e},\mathbf{ \theta}\}.\] We follow the setup in [16] and choose the variational formulation (4) as \(\mathcal{L}\). As noted in [16], in the high-temperature regime, i.e., when \(T\to\infty\) and thus \(\beta\to 0\), the backward Kolmogorov equation (2) converges to Laplace's equation with a Dirichlet boundary condition. Therefore, the solution near the boundaries \(\partial A\) and \(\partial B\) is dictated asymptotically by the fundamental solution \[\Phi(\mathbf{x}):=\begin{cases}-\frac{1}{2\pi}\log|\mathbf{x}|&(d=2),\\ \frac{\Gamma(d/2)}{(2\pi)^{d/2}|\mathbf{x}|^{d-2}}&(d\geq 3).\end{cases} \tag{14}\] Considering the singular behavior of the committor function, we model the committor function as \[q(\mathbf{x})=q_{1}S_{A}(\mathbf{x}-\mathbf{y}^{A})+q_{2}S_{B}(\mathbf{x}- \mathbf{y}^{B})+q_{3}, \tag{15}\] where \(\mathbf{y}^{A}\) and \(\mathbf{y}^{B}\) are the centers of \(A\) and \(B\), \(S_{A}(\mathbf{x}-\mathbf{y}^{A})\) and \(S_{B}(\mathbf{x}-\mathbf{y}^{B})\) are fundamental solutions (14), and \(q_{1},q_{2}\) and \(q_{3}\) are three FEX binary trees to be optimized. The computation flow is illustrated in Figure 3. Figure 3: The representation of the computation flow of solving for (4). The committor function \(q\) is represented by the summation of three “FEX trees”, two of which are weighted with \(\frac{1}{|\mathbf{x}|^{d-2}}\) type singularities. ## 3 Numerical Experiments In this section, the performance of FEX on a collection of benchmark test problems is examined. All test problems are set up in high-dimensional ambient spaces. All these problems admit variable changes reducing them to low-dimensional problems. We demonstrate that FEX is effective at identifying this low-dimensional structure automatically. Thus, our main objective in this section is to illustrate the capabilities of FEX in the following ways. 1. FEX demonstrates comparable or higher accuracy compared to the neural network method. 2. 
FEX excels in identifying the low-dimensional structure inherent in each problem. 3. Once FEX successfully identifies the low-dimensional structure, we can achieve arbitrary accuracy by solving the reduced low-dimensional problem (2) using spectral methods or finite element methods. The committor problem (2) seldom admits an analytical solution. Therefore, in order to evaluate the accuracy of FEX one needs to find a highly accurate solution by another method. We use the finite element method (FEM) for this purpose. Prior to delving into the subsequent benchmark problems, we shall provide a concise overview of the convergence analysis associated with the finite element method (FEM). The significance of discussing the error estimate for the finite element method holds a two-fold importance in our numerical experiments. Firstly, in line with our previously stated primary objective, the comparison of the relative error between the neural network method and the ground truth solution, as well as the relative error between FEX and the ground truth solution, requires the use of a highly accurate finite element method as the reference solution. Secondly, as part of our third objective, once the low-dimensional structure of the problem (2) has been identified, we propose to use the spectral method and finite element method to solve the low-dimensional ODE (or PDE) (2), both of which possess a robust theoretical foundation for achieving arbitrary accuracy. As mentioned in Theorem 5.4 of [20], the finite element method utilizing piecewise linear basis functions to address elliptic PDEs within a convex two-dimensional domain also showcases quadratic convergence of the numerical solution towards the exact solution with respect to step size \(h\). It is important to note that Theorem 5.4 presented in [20] provides preliminary error estimates with certain limitations. Firstly, these theorems were derived in the context of a simplified elliptic PDE with only the second order term. Therefore, its applicability to our problem (2) needs to be considered carefully. Additionally, Theorem 5.4 [20] assumes a convex domain \(\Omega\), whereas our benchmark problems involve non-convex domains. Despite these limitations, these theorems serve as a starting point for determining an appropriate step size \(h\) and assessing the overall accuracy of our benchmark results. In the subsequent benchmark problems, where the finite element method is employed as the reference solution, we specifically utilize piecewise linear basis functions, but the choice of the step size \(h\) is tailored for each individual case. As a data-driven solver, FEX requires a set of training points as input. The variational loss functional (4) assumes two kinds of training points: \(N_{\mathsf{bdry}}\) boundary points lying on \(\partial A\cup\partial B\) and \(N_{\Omega_{AB}}\) interior points lying in \(\Omega_{AB}\). In our experiments, we use \(N_{\mathsf{bdry}}=2000\) and keep the ratio \[\frac{N_{\mathsf{bdry}}}{N_{\Omega_{AB}}}=:2\alpha\] between \(1/10\) to \(1/100\). To evaluate the accuracy, we use the relative \(L_{2}\) error \[E=\frac{\|q_{\theta}-q\|_{L_{2}(\mu)}}{\|q\|_{L_{2}(\mu)}},\] where \(q_{\theta}\) represents the numerical solution obtained through either the neural network (NN) or FEX methods, while \(q\) denotes the reference solution characterized by its high accuracy. 
The loss functional (4) can be rewritten as a single expectation \[\mathcal{L}(q)=\mathbb{E}_{\nu}\left(|\nabla q(\mathbf{x})|^{2}\chi_{\Omega_{ AB}}(\mathbf{x})+\tilde{c}q(\mathbf{x})^{2}\chi_{\partial A}(\mathbf{x})+\tilde{c}(q( \mathbf{x})-1)^{2}\chi_{\partial B}(\mathbf{x})\right), \tag{16}\] where \(\nu\) is the mixture measure \(\nu(\mathbf{x})=\rho(\mathbf{x})+m_{\partial A}(\mathbf{x})+m_{\partial B}( \mathbf{x})\) with \(\rho\) being the invariant density given by (1) and \(m_{\partial A}\), \(m_{\partial B}\) are uniform measures on the boundaries \(\partial A\) and \(\partial B\) respectively. For FEX, we employ a binary tree of depth either \(L=3\) or \(L=5\). From Figure 2, it can be observed that a depth-3 tree has two leaf nodes, whereas a depth-5 tree has four leaf nodes. These leaf nodes are labeled in a left-to-right order. For example, in the case of a depth-5 tree, the leftmost leaf node is labeled as leaf 1, and the rightmost leaf node is labeled as leaf 4. We select the binary set \(\mathbb{B}=\{+,-,\times\}\) and the unary set \(\mathbb{U}=\{0,1,\text{Id},(\cdot)^{2},(\cdot)^{3},(\cdot)^{4},\exp,\sin,\cos,\tanh,\text{sigmoid}\}\) to form the mathematical expression. Notably, we include the \(\tanh\) and sigmoid functions in \(\mathbb{U}\) due to the often observed sharp transitions in committor functions within high-dimensional spaces. Additionally, we include the \((\cdot)^{2}\) operator, as the committor function is often related to the spherical radius of the problem. ### The Double-Well Potential with Hyperplane Boundary The first example features the committor problem (2) with the double-well potential [16, 5] \[V(\mathbf{x})=\left(x_{1}^{2}-1\right)^{2}+0.3\sum_{i=2}^{d}x_{i}^{2}, \tag{17}\] and \[A=\left\{\mathbf{x}\in\mathbb{R}^{d}\mid x_{1}\leq-1\right\},\quad B=\left\{ \mathbf{x}\in\mathbb{R}^{d}\mid x_{1}\geq 1\right\}.\] We set \(d=10\). Note that this problem is effectively one-dimensional, as the committor depends only on the first component of \(\mathbf{x}\), \(x_{1}\), and is the solution to \[\frac{d^{2}q\left(x_{1}\right)}{dx_{1}^{2}}-4\beta x_{1}\left(x_{1}^{2}-1 \right)\frac{dq\left(x_{1}\right)}{dx_{1}}=0,\quad q(-1)=0,\quad q(1)=1. \tag{18}\] The solution to (18) is given by \[q(x_{1})=\frac{\int_{-1}^{x_{1}}e^{\beta(y^{2}-1)^{2}}dy}{\int_{-1}^{1}e^{ \beta(y^{2}-1)^{2}}dy}.\] We now test whether FEX can discover on its own that the solution depends only on \(x_{1}\). We model the committor function \(q(\mathbf{x})\) as a single depth-3 FEX tree \(\mathcal{J}(\mathbf{x})\), as there is no singularity. When the temperature \(\beta^{-1}\) is low, sampling from the invariant density gives too few samples near the transition state at \(\mathbf{x}=0\). So, following [16], we sample \(x_{1}\) uniformly on \([-1,1]\), and \((x_{2},\cdots,x_{d})\) from a \((d-1)\)-dimensional Gaussian distribution. This sampling density is accounted for in the Monte Carlo integration of the loss functional (16), resulting in \[\mathcal{L}(q)=\frac{1}{N_{\Omega_{AB}}}\sum_{j=1}^{N_{\Omega_{AB}}}\frac{\left| \nabla q(\mathbf{x}_{j})\right|^{2}\exp\left(-\beta\left((x_{1})_{j}^{2}-1 \right)^{2}\right)}{\int_{-1}^{1}\exp\left(-\beta\left(x_{1}^{2}-1\right)^{2} \right)dx_{1}}+\frac{\tilde{c}}{N_{\mathsf{bdry}}}\sum_{j=1}^{N_{\mathsf{bdry} }/2}\left[q(\mathbf{y}_{j})^{2}+(1-q(\mathbf{z}_{j}))^{2}\right], \tag{19}\] where \(\mathbf{x}_{j}\in\Omega_{AB}\), \(j=1,\ldots,N_{\Omega_{AB}}\), and \(\mathbf{y}_{j}\in\partial A\), \(\mathbf{z}_{j}\in\partial B\). 
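The sketch below illustrates how the Monte Carlo loss (19) can be evaluated for a candidate committor with automatic differentiation (PyTorch assumed). The candidate is a tanh model of the form found by FEX in the next paragraph, the Gaussian used for \(x_{2},\ldots,x_{d}\) is assumed to match the corresponding marginal of the invariant density (which is what makes the weights in (19) depend on \(x_{1}\) only), and \(\tilde{c}=50\) follows Table 2.

```python
import torch

d, beta, c_tilde = 10, 5.0, 50.0            # beta^{-1} = 0.2
a = torch.tensor(1.7, requires_grad=True)   # trainable parameters of the candidate
b = torch.tensor(1.9, requires_grad=True)

def q(x):                                   # candidate committor: tanh model in x_1
    return 0.5 * torch.tanh(a * x[:, 0] + b * torch.tanh(x[:, 0])) + 0.5

def loss(n_int=4096, n_bdry=1000):
    # interior samples: x_1 ~ U(-1,1); x_2..x_d ~ N(0, 1/(0.6*beta)), matching exp(-0.3*beta*x^2)
    x = torch.randn(n_int, d) / (0.6 * beta) ** 0.5
    x[:, 0] = 2 * torch.rand(n_int) - 1
    x.requires_grad_(True)
    grad = torch.autograd.grad(q(x).sum(), x, create_graph=True)[0]
    w = torch.exp(-beta * (x[:, 0] ** 2 - 1) ** 2)
    s = torch.linspace(-1, 1, 401)
    z = torch.trapezoid(torch.exp(-beta * (s ** 2 - 1) ** 2), s)   # normalization in (19)
    interior = ((grad ** 2).sum(dim=1) * w / z).mean()
    # boundary samples on the hyperplanes x_1 = -1 (A) and x_1 = +1 (B)
    ya = torch.randn(n_bdry, d) / (0.6 * beta) ** 0.5; ya[:, 0] = -1.0
    zb = torch.randn(n_bdry, d) / (0.6 * beta) ** 0.5; zb[:, 0] = 1.0
    bdry = 0.5 * ((q(ya) ** 2).mean() + ((1 - q(zb)) ** 2).mean())
    return interior + c_tilde * bdry

print(loss().item())
```

Minimizing this loss over the trainable parameters (here `a` and `b`) with a first-order optimizer corresponds to the continuous-optimization stage of FEX for this fixed operator sequence.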
We consider \(\beta^{-1}=0.2\) and \(\beta^{-1}=0.05\), respectively. At \(\beta^{-1}=0.2\), the expression of \(\mathcal{J}(\mathbf{x})\) found by FEX is \[\text{leaf 1: }\texttt{Id}\to\alpha_{1,1}x_{1}+\ldots+\alpha_{1, 10}x_{10}+\beta_{1}\] \[\text{leaf 2: }\texttt{tanh}\to\alpha_{2,1}\tanh(x_{1})+ \ldots+\alpha_{2,10}\tanh(x_{10})+\beta_{2}\] \[\mathcal{J}(\mathbf{x})=\alpha_{3}\tanh(\text{leaf 1}+\text{leaf 2})+ \beta_{3},\] where \(\alpha_{3}=0.5,\ \beta_{3}=0.5\). The rest parameters in leaf 1 and leaf 2 are summarized in Table 1. It is evident that only the coefficients for \(x_{1}\) and \(\tanh(x_{1})\) are nonzero. The formula for \(\beta^{-1}=0.05\) computed by FEX is placed in Appendix 5.2.1. We plot the FEX committor function and its error with the true committor function in Figure 4. We also summarize the numerical results in Table 2 and Table 3. Clearly, FEX successfully identifies that the committor \(q(\mathbf{x})\) depends only on the first coordinate \(x_{1}\). This allows us to reduce the problem to 1D and obtain a machine precision solution to (18) using the Chebyshev spectral method ([29], program p13.m). Since equation (18) is linear, the boundary conditions are enforced by decomposing the solution as \(q(x_{1})=u(x_{1})+u_{b}(x_{1})\) where \(u(x_{1})\) satisfies the same equation with homogeneous boundary conditions and \(u_{b}(x_{1})\) is any smooth user-chosen function that satisfies \(u_{b}(-1)=0\) and \(u_{b}(1)=1\), e.g., \(u_{b}(x_{1})=0.5(x_{1}+1)\). Then the problem for \(u(x_{1})\) becomes \(\mathcal{L}_{1}u=-\mathcal{L}_{1}u_{b}\) where \(\mathcal{L}_{1}\) is the differential operator in the left-hand side of (18). About 80 collocation points are enough to achieve the error of the order of \(10^{-16}\). The computed solution in the form of a Chebyshev sum is evaluated at any point \(x_{i}\in[-1,1]\) using Clenshaw's method [13] (see Section 3.7.1). \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline node & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\alpha_{3}\) & \(\alpha_{4}\) & \(\alpha_{5}\) & \(\alpha_{6}\) & \(\alpha_{7}\) & \(\alpha_{8}\) & \(\alpha_{9}\) & \(\alpha_{10}\) & \(\beta\) \\ \hline leaf 1: Id & **1.6798** & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline leaf 2: tanh & **1.9039** & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \end{tabular} \end{table} Table 1: Coefficients of leaves of depth-3 FEX binary tree \(\mathcal{J}\) for double-well potential with hyperplane boundary problem when \(T=0.2\). We would like to emphasize that, in the fine-tuning stage of the FEX algorithm, we employ coefficient filtering mentioned in Algorithm 1 with a threshold of \(\tau=0.05\) to effectively eliminate trivial coefficients. Therefore, the result clearly shows that FEX identifies the structure of the committor function \(q(x_{1})\), enabling post-processing techniques, such as spectral method. ### The Double-Well Potential with Sublevel Set Boundary In this example, we still consider the double-well potential (17), but with a more challenging boundary condition. 
Namely, we consider the sublevel sets boundaries, \[A=\left\{\mathbf{x}\in\mathbb{R}^{d}\mid V(\mathbf{x})<V_{a},\ x_{1}<0\right\}, \quad B=\left\{\mathbf{x}\in\mathbb{R}^{d}\mid V(\mathbf{x})<V_{b},\ x_{1}>0 \right\}.\] In this case, the committor function \(q\) solves the following equation \[\nabla\cdot(\exp(-\beta V)\cdot\nabla q)+\frac{d-2}{r}\exp(-\beta V )\frac{dq}{dr}=0,\] \[q(\mathbf{x})|_{\partial A}=0,\quad q(\mathbf{x})|_{\partial B}=1,\] where the committor function depends on \(x_{1}\) and \(r=\sqrt{x_{2}^{2}+\cdots+x_{d}^{2}}\), i.e., \(q=q(x_{1},r)\). The values of the parameters are as follows: \(V_{a}=0.2,\ V_{b}=0.2,\ d=10\). Given the non-convex nature of \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{method} & \multirow{2}{*}{E} & \multirow{2}{*}{\(\tilde{c}\)} & No. of samples in & \multirow{2}{*}{\(\alpha\)} & \multirow{2}{*}{No. of testing samples} \\ & & & \(\Omega_{AB}\) & & \\ \hline NN [16] & \(5.40\times 10^{-3}\) & 50 & \(2.0\times 10^{4}\) & 1/20 & \(1.0\times 10^{5}\) \\ FEX & \(3.51\times 10^{-3}\) & 50 & \(2.0\times 10^{4}\) & 1/20 & \(1.0\times 10^{5}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Results for the double-well potential with hyperplane boundary when \(\beta^{-1}=0.2\), comparison of NN and FEX related method. Figure 4: The committor function for the double-well potential along \(x_{1}\) dimension when \(\beta^{-1}=0.2\) for an arbitrary \((x_{2},\cdots,x_{d})\) with \(d=10\). \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{method} & \multirow{2}{*}{E} & \multirow{2}{*}{\(\tilde{c}\)} & No. of samples in & \multirow{2}{*}{\(\alpha\)} & \multirow{2}{*}{No. of testing samples} \\ & & & \(\Omega_{AB}\) & & \\ \hline NN [16] & \(1.20\times 10^{-2}\) & 0.5 & \(2.0\times 10^{4}\) & 1/20 & \(1.0\times 10^{5}\) \\ FEX & \(5.50\times 10^{-3}\) & 0.5 & \(2.0\times 10^{4}\) & 1/20 & \(1.0\times 10^{5}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Results for the double-well potential with hyperplane boundary when \(\beta^{-1}=0.05\), comparison of NN and FEX related method. the problem domain, the finite element method (FEM) is employed as the benchmark solution. We choose the mesh size \(h=0.02\). The discrepancy error between the finite element solution and the exact solution in the \(\|L\|_{\infty}\)-norm is estimated to be approximately of the order of \(O(10^{-4})\) to \(O(10^{-5})\). When \(\beta\) is small, the committor \(q\) is more heavily dependent on the radius \(r\), whereas as \(\beta\) increases, it effectively depends only on \(x_{1}\). Therefore, we experiment with different values of \(\beta\) and compare the performance of FEX and that of a neural network-based solver. We use a depth-5 FEX tree. More details are provided in Appendix 5.2.2. From Table 4, we observe that FEX consistently outperforms NN for various temperatures. Furthermore, when \(\beta=3.0\), FEX could indeed identify the structure of the committor function, i.e., \(q(x_{1},r)\). While when \(\beta=10.0\), the problem essentially reduces to a 1D problem, and FEX is able to identify such change of pattern. In comparison, NN based methods are unable to retrieve such geometric information. The numerical solutions by FEM and by FEX are displayed in Fig. 5. 
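With sublevel-set boundaries, assembling the training data requires explicit membership tests for \(A\), \(B\) and \(\Omega_{AB}\). A minimal sketch of this bookkeeping is shown below; the rejection-sampling proposal (uniform \(x_1\), Gaussian remaining coordinates) is our own illustrative choice and not necessarily the sampling used in the experiments.

```python
import numpy as np

def V(x):
    """Double-well potential (17); x has shape (N, d)."""
    return (x[:, 0] ** 2 - 1.0) ** 2 + 0.3 * np.sum(x[:, 1:] ** 2, axis=1)

def in_A(x, Va=0.2):
    return (V(x) < Va) & (x[:, 0] < 0.0)

def in_B(x, Vb=0.2):
    return (V(x) < Vb) & (x[:, 0] > 0.0)

def sample_interior(n, d=10, seed=0):
    """Rejection-sample points in Omega_AB, i.e. points lying in neither A nor B.
    Proposal: x1 uniform on [-1.5, 1.5], remaining coordinates Gaussian (illustrative choice)."""
    rng = np.random.default_rng(seed)
    batches, count = [], 0
    while count < n:
        x = np.column_stack([rng.uniform(-1.5, 1.5, size=4 * n),
                             rng.normal(scale=0.5, size=(4 * n, d - 1))])
        keep = x[~(in_A(x) | in_B(x))]
        batches.append(keep)
        count += len(keep)
    return np.vstack(batches)[:n]
```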
### Concentric Spheres In this example, we consider the committor function describing the transition process between a pair of concentric spheres, with the potential \[V(\mathbf{x})=10|\mathbf{x}|^{2}, \tag{20}\] and the regions \[A=\left\{\mathbf{x}\in\mathbb{R}^{d}\mid|\mathbf{x}|\geq a\right\},\quad B= \left\{\mathbf{x}\in\mathbb{R}^{d}\mid|\mathbf{x}|\leq b\right\}.\] Since the equilibrium distribution is proportional to Gaussian distribution, we can readily obtain samples from the normal distribution. We obtain the data on the boundaries \(\partial A,\partial B\) by sampling from Gaussian distribution and rescale to have norm \(a\) or \(b\). The values of parameters are: \(T=2,d=6,a=1,b=0.25\). The true solution \(q(\mathbf{x}):=q(r)\) satisfies the following ODE: \[\frac{d^{2}q(r)}{dr^{2}}+\frac{d-1}{r}\frac{dq(r)}{dr}-\beta\frac {dq}{dr}\frac{dV}{dr}=0, \tag{21}\] \[\left.q(r)\right|_{r=a}=0,\quad\left.q(r)\right|_{r=b}=1,\] where \(r=|\mathbf{x}|\) is the radius, \(\beta=1/T\) is the inverse of temperature. Similarly to (18), (21) is solvable. In this example, the committor function displays a singular behavior, i.e., \(q\sim 1/|\mathbf{x}|^{d-2},d\geq 3\). Therefore, we parameterize the committor function \(q(\mathbf{x})\) as \[q(\mathbf{x})=\mathcal{J}_{1}(\mathbf{x})\cdot\frac{1}{|\mathbf{x}|^{d-2}}+ \mathcal{J}_{2}(\mathbf{x}),\] \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(\beta\)} & \multirow{2}{*}{method} & \multirow{2}{*}{E} & \multirow{2}{*}{\(\tilde{c}\)} & \multicolumn{2}{c}{No. of samples in} & \multirow{2}{*}{\(\alpha\)} & \multirow{2}{*}{No. of testing samples} \\ & & & & \(\Omega_{AB}\) & & & \\ \hline 3.0 & NN & 0.372 & 50 & \(2.0\times 10^{4}\) & 1/20 & \(2.0\times 10^{4}\) \\ 3.0 & FEX & 0.358 & 50 & \(2.0\times 10^{4}\) & 1/20 & \(2.0\times 10^{4}\) \\ 10.0 & NN & 0.021 & 50 & \(2.0\times 10^{4}\) & 1/20 & \(2.0\times 10^{4}\) \\ 10.0 & FEX & 0.009 & 50 & \(2.0\times 10^{4}\) & 1/20 & \(2.0\times 10^{4}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Results for the double-well potential with sublevel sets boundaries, comparison of NN and FEX. where \(\mathcal{J}_{1}(\mathbf{x})\) and \(\mathcal{J}_{2}(\mathbf{x})\) are two depth-3 FEX binary trees. We leave the FEX formula in Appendix 5.2.3. The formula reduces to \[q(\mathbf{x})=q(r):=\frac{0.0020}{r^{0.5d-1}}+0.6016(0.6054-0.5800r^{2})-0.0340,\] Therefore, FEX can successfully identify that the committor function \(q(\mathbf{x})\) only depends on the spherical radius \(r\). This allows us to solve (21) using the Chebyshev spectral method once again as described in the previous section and obtain the numerical solution with with the machine precision. The numerical results are provided in Table 5. In addition, we plot the committor function as a function of \(|\mathbf{x}|\) in Figure 6. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{method} & \multirow{2}{*}{E} & \multicolumn{2}{c}{No. of samples in} & \multirow{2}{*}{\(\alpha\)} & \multirow{2}{*}{No. of testing samples} \\ & & & \(\Omega_{AB}\) & & \\ \hline NN [16] & \(5.30\times 10^{-2}\) & 530 & \(3.0\times 10^{4}\) & 1/30 & \(1.0\times 10^{5}\) \\ FEX & \(3.20\times 10^{-2}\) & 530 & \(3.0\times 10^{4}\) & 1/30 & \(1.0\times 10^{5}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Results for the concentric spheres example, comparison of NN and FEX method. Figure 5: Committor function for the double-well potential with sublevel sets boundary. 
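For the concentric-spheres example just described, equation (21) can be integrated once: writing it as \((\ln q')' = -(d-1)/r + \beta V'(r)\) gives \(q'(r)\propto r^{1-d}e^{\beta V(r)}\), so the radial reference solution is again a one-dimensional quadrature. The sketch below tabulates it for the parameters above; the function name and grid size are ours.

```python
import numpy as np

def radial_committor(T=2.0, d=6, a=1.0, b=0.25, n_grid=4001):
    """Reference solution of (21) with V(r) = 10 r^2 on b <= r <= a:
    q'(r) ~ r^(1-d) * exp(beta * V(r)), normalized so that q(a) = 0 and q(b) = 1."""
    beta = 1.0 / T
    r = np.linspace(b, a, n_grid)
    w = r ** (1 - d) * np.exp(beta * 10.0 * r ** 2)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(r))))
    q = (cum[-1] - cum) / cum[-1]            # integral from r to a, normalized
    return r, q
```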
As \(\beta\) increases, the problem transforms from a 2D problem to a 1D problem, and FEX can capture such behavior of the committor function. ### Rugged Mueller's Potential In this example, we consider the committor function in the rugged Mueller's potential \[V(\mathbf{x})=\tilde{V}\left(x_{1},x_{2}\right)+\frac{1}{2\sigma^{2}}\sum_{i=3}^ {d}x_{i}^{2},\] where \[\tilde{V}\left(x_{1},x_{2}\right)=\sum_{i=1}^{4}D_{i}e^{a_{i}(x_{1}-X_{i})^{2} +b_{i}(x_{1}-X_{i})(x_{2}-Y_{i})+c_{i}(x_{2}-Y_{i})^{2}}+\gamma\sin\left(2k \pi x_{1}\right)\sin\left(2k\pi x_{2}\right),\] where \(\gamma=9\) and \(k=5\) determine the ruggedness of the 2-dimensional rugged Mueller's potential \(\tilde{V}\left(x_{1},x_{2}\right)\), \(\sigma=0.05\) controls the extent of the harmonic potential in dimensions \(x_{3},\cdots,x_{d}\), and \(d=10\). All other parameters are listed below, which is consistent with [16]. \[\left[a_{1},a_{2},a_{3},a_{4}\right] =\left[-1,-1,-6.5,0.7\right], \left[b_{1},b_{2},b_{3},b_{4}\right] =\left[0,0,11,0.6\right],\] \[\left[c_{1},c_{2},c_{3},c_{4}\right] =\left[-10,-10,-6.5,0.7\right], \left[D_{1},D_{2},D_{3},D_{4}\right] =\left[-200,-100,-170,15\right],\] \[\left[X_{1},X_{2},X_{3},X_{4}\right] =\left[1,0,-0.5,-1\right], \left[Y_{1},Y_{2},Y_{3},Y_{4}\right] =\left[0,0.5,1.5,1\right].\] We focus on the domain \(\Omega=[-1.5,1]\times[-0.5,2]\times\mathbb{R}^{d-2}\) and the regions \(A\) and \(B\) are two cylinders: \[A =\left\{\mathbf{x}\in\mathbb{R}^{d}\mid\sqrt{\left(x_{1}+0.57 \right)^{2}+\left(x_{2}-1.43\right)^{2}}\leq 0.3\right\}\] \[B =\left\{\mathbf{x}\in\mathbb{R}^{d}\mid\sqrt{\left(x_{1}-0.56 \right)^{2}+\left(x_{2}-0.044\right)^{2}}\leq 0.3\right\}.\] The ground truth solution is obtained by the finite element method, which is obtained by solving (2) on uniform grid in 2 dimensions with the potential \(\tilde{V}\), the domain \(\tilde{\Omega}=[-1.5,1]\times[-0.5,2]\), and the region \(\tilde{A}\), \(\tilde{B}\) projected by \(A\) and \(B\) on the \(x_{1}x_{2}\)-plane, respectively. The chosen step size \(h\) for the finite element method is on the order of \(O(10^{-3})\), resulting in a discretization error on Figure 6: The committor function for the concentric spheres as a function of \(|\mathbf{x}|\). the order of \(O(10^{-6})\) to \(O(10^{-7})\) when compared to the ground truth solution. Notably, these error magnitudes are considerably smaller than the numerical errors incurred by the numerical solvers NN and FEX. Hence, we designate it as a reference solution of notable precision and accuracy. In this case, there are singularities present in regions \(A\) and \(B\), so we parameterize the committor function as \[q(\mathbf{x})=\mathcal{J}_{1}\log((x_{1}+0.57)^{2}+(x_{2}-1.43)^{2})+\mathcal{ J}_{2}\log((x_{1}-0.56)^{2}+(x_{2}-0.044)^{2})+\mathcal{J}_{0}, \tag{22}\] where \(\mathcal{J}_{1},\mathcal{J}_{2},\mathcal{J}_{0}\) are three depth-5 FEX binary trees, respectively. We consider \(T=40\) and \(T=22\), and delay the FEX formula in Section 5.2.4, and these formulas show that FEX identifies the solution only varies with the first two coordinates \(x_{1},x_{2}\). Therefore, we further simplify the problem and instead solve (2) with \(V(\mathbf{x})=\tilde{V}\left(x_{1},x_{2}\right)\). We utilize finite element method to obtain the committor function \(q(\mathbf{x})=q(x_{1},x_{2})\). We plot the committor function \(q(x_{1},x_{2})\) on the \(x_{1}x_{2}\) plane in Figure 7. We also summarize the numerical results in Table 6. 
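For reference, the rugged Mueller potential and the two cylindrical regions above are straightforward to transcribe into code; the helper below does so directly from the listed parameters (only the function and variable names are ours).

```python
import numpy as np

# Parameters of the rugged Mueller potential, as listed above.
aa = np.array([-1.0, -1.0, -6.5, 0.7])
bb = np.array([0.0, 0.0, 11.0, 0.6])
cc = np.array([-10.0, -10.0, -6.5, 0.7])
DD = np.array([-200.0, -100.0, -170.0, 15.0])
XX = np.array([1.0, 0.0, -0.5, -1.0])
YY = np.array([0.0, 0.5, 1.5, 1.0])
gamma, k, sigma = 9.0, 5.0, 0.05

def V_tilde(x1, x2):
    """Two-dimensional rugged Mueller potential."""
    v = sum(DD[i] * np.exp(aa[i] * (x1 - XX[i]) ** 2
                           + bb[i] * (x1 - XX[i]) * (x2 - YY[i])
                           + cc[i] * (x2 - YY[i]) ** 2) for i in range(4))
    return v + gamma * np.sin(2 * k * np.pi * x1) * np.sin(2 * k * np.pi * x2)

def V(x):
    """Full d-dimensional potential: rugged Mueller in (x1, x2) plus harmonic confinement."""
    return V_tilde(x[:, 0], x[:, 1]) + 0.5 / sigma ** 2 * np.sum(x[:, 2:] ** 2, axis=1)

def in_A(x):
    return np.sqrt((x[:, 0] + 0.57) ** 2 + (x[:, 1] - 1.43) ** 2) <= 0.3

def in_B(x):
    return np.sqrt((x[:, 0] - 0.56) ** 2 + (x[:, 1] - 0.044) ** 2) <= 0.3
```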
Figure 7: Committor function for the rugged Mueller’s potential on a 2-dimensional plane, with level curves of potential \(\tilde{V}\).

### Summary

In summary, our investigations have demonstrated that the FEX method exhibits similar or even superior accuracy compared to the neural network method across all benchmark problems. Moreover, FEX effectively captures the low-dimensional structure in each case, enabling direct and highly accurate solutions to the backward Kolmogorov equation (2) without relying on Monte Carlo integration in the variational formulation approach (3). Consequently, we propose employing the spectral method or finite element method to solve (2) due to their well-established theoretical convergence rates towards the ground truth solution, enabling the attainment of arbitrary levels of accuracy.

## 4 Conclusion

In this work, we have investigated the novel finite expression method as a solver for high-dimensional committor functions. Our numerical results show that FEX can achieve superior performance compared to NN solvers. Most importantly, FEX is capable of identifying the low-dimensional algebraic structure of the problem, which can be used to reformulate the committor problem as a low-dimensional one and to find a highly accurate solution to it by a suitable traditional technique.

## Acknowledgement

H. Y. was partially supported by the US National Science Foundation under awards DMS-2244988, DMS-2206333, and the Office of Naval Research Award N00014-23-1-2007. M.C. was partially supported by AFOSR MURI grant FA9550-20-1-0397.
2302.06932
Oops..! I Glitched It Again! How to Multi-Glitch the Glitching-Protections on ARM TrustZone-M
Voltage Fault Injection (VFI), also known as power glitching, has proven to be a severe threat to real-world systems. In VFI attacks, the adversary disturbs the power-supply of the target-device forcing the device to illegitimate behavior. Various countermeasures have been proposed to address different types of fault injection attacks at different abstraction layers, either requiring to modify the underlying hardware or software/firmware at the machine instruction level. Moreover, only recently, individual chip manufacturers have started to respond to this threat by integrating countermeasures in their products. Generally, these countermeasures aim at protecting against single fault injection (SFI) attacks, since Multiple Fault Injection (MFI) is believed to be challenging and sometimes even impractical. In this paper, we present {\mu}-Glitch, the first Voltage Fault Injection (VFI) platform which is capable of injecting multiple, coordinated voltage faults into a target device, requiring only a single trigger signal. We provide a novel flow for Multiple Voltage Fault Injection (MVFI) attacks to significantly reduce the search complexity for fault parameters, as the search space increases exponentially with each additional fault injection. We evaluate and showcase the effectiveness and practicality of our attack platform on four real-world chips, featuring TrustZone-M: The first two have interdependent backchecking mechanisms, while the second two have additionally integrated countermeasures against fault injection. Our evaluation revealed that {\mu}-Glitch can successfully inject four consecutive faults within an average time of one day. Finally, we discuss potential countermeasures to mitigate VFI attacks and additionally propose two novel attack scenarios for MVFI.
Marvin Saß, Richard Mitev, Ahmad-Reza Sadeghi
2023-02-14T09:40:09Z
http://arxiv.org/abs/2302.06932v2
# Oops..I Glitched It Again! ###### Abstract Voltage Fault Injection (VFI), also known as power glitching, has proven to be a severe threat to real-world systems. In VFI attacks, the adversary disturbs the power-supply of the target-device forcing the device to illegitimate behavior. Various countermeasures have been proposed to address different types of fault injection attacks at different abstraction layers, either requiring to modify the underlying hardware or software/ifirmware at the machine instruction level. Moreover, only recently, individual chip manufacturers have started to respond to this threat by integrating countermeasures in their products. Generally, these countermeasures aim at protecting against _single_ fault injection (SFI) attacks, since Multiple Fault Injection (MFI) is believed to be challenging and sometimes even impractical. In this paper, we present \(\mu\)-Glitch, the first Voltage Fault Injection (VFI) platform which is capable of injecting _multiple_, coordinated voltage faults into a target device, requiring only a single trigger signal. We provide a novel flow for Multiple Voltage Fault Injection (MVFI) attacks to significantly reduce the search complexity for fault parameters, as the search space increases exponentially with each additional fault injection. We evaluate and showcase the effectiveness and practicality of our attack platform on four real-world chips, featuring TrustZone-M: The first two have interdependent backchecking mechanisms, while the second two have additionally integrated countermeasures against fault injection. Our evaluation revealed that \(\mu\)-Glitch can successfully inject four consecutive faults within an average time of one day. Finally, we discuss potential countermeasures to mitigate VFI attacks and additionally propose two novel attack scenarios for MVFI. ## 1 Introduction Fault Injection (FI) has proven to form a powerful threat to various computing platforms. All fault injection methods temporarily disturb the physical runtime environment of the Device under Test (DuT) to cause specific misbehavior. Common FI attacks are, e.g., conducted by disturbing the supply voltage [3], generating malicious clock signals [49], rapidly changing the electromagnetic environment [28], or inducing a light pulse at the decapsulated Integrated Circuits (ICs) [59]. The possible consequences from injecting a certain type of fault are described by a FI method's Fault Model. Depending on the specific FI method used, the corresponding Fault Model may, e.g., be defined as skipping of machine instructions [41], corrupting the instruction decoding [55] or altering the data stored on a device's internal memory [33]. Hence, FI attacks are capable of introducing vulnerabilities. As an example, this kind of attacks have been successfully launched on Trusted Execution Environments (TEEs) [16, 36], embedded devices [40, 63], smart cards [59] and recently even against workstation processors [14, 23]. A popular class of FI and the focus of this paper is Voltage Fault Injection (VFI), as this approach is very versatile while exhibiting a high impact attack vector. VFI disturbs a DuT's power supply to provoke a specific malfunction. To address VFI attacks, a variety of software- [48, 4, 5, 32] as well as hardware-based [50, 61] countermeasures have been proposed over the years. Fortunately, the industry has recognized the severity of fault injection attacks and individual manufacturers are reacting to it by integrating countermeasures into their products. 
For instance, NXP recently released multiple ARMv8-M series Microcontroller Units (MCUs), which feature an instruction-level FI countermeasure to protect some of their security-critical registers from FI attacks. Moreover, ARMv8-M MCUs that feature TrustZone-M (e.g., STM, Atmel, NXP) are equipped with a novel hardware unit on the internal bus system, performing additional checks on every bus access, in order to ensure the security properties of the TEE. Although not explicitly aimed to mitigate FI attacks, this backchecking mechanism has complex interdependencies which make FI attacks targeting TrustZone-M, highly difficult [46]. Typically, FI attacks and countermeasures are dedicated to Single Fault Injection (SFI), although Multiple Fault Injection (MFI) seems to be much more powerful. In MFI, multiple, coordinated faults of a certain type are injected after a single synchronizing trigger signal, in order to attack multiple target instructions during a single execution. MFI could theoretically be used against instruction-level based countermeasures [4, 32], however, as stated by previous work conducting those attacks, especially Multiple Voltage Fault Injection (MVFI) are considered highly impractical due to the lack of precise and affordable MFI tools [4, 46] and efficient parameter search algorithms. Even though, commercial equipment for MFI is available, devices from e.g., Alphanov1 and Riscure2 were shown only to conduct Multiple Laser Fault Injection (MLFI) which is much more resource intensive than MVFI. In addition, off-the-shelf VFI devices from NewAE [44] are not capable of injecting multiple faults based on a single trigger. In this paper we address this open problem by providing a highly precise MVFI tool and the corresponding efficient parameter search algorithms, which enable an adversary to inject multiple, coordinated and consecutive voltage faults into any target device in order to attack any software-based SFI protection on the instruction-level. We show that our tool is able to successfully perform a parameter search for up to four consecutive voltage faults and perform MVFI to skip the corresponding instructions in about one day. To realize this, we had to overcome a number of challenges: First, to the best of our knowledge, there exists no tool to conduct MVFI, so we designed and built our framework, coined \(\mu\)-Glitch. Second, the timely effort required to search for multiple fault parameters grows exponentially with each additional fault, if a traditional fault parameter search (i.e., an exhaustive search) is used. Hence, we designed, implemented and evaluated novel approaches, to search for multiple fault injection parameters, which are efficient enough for MVFI setups. Footnote 1: [https://www.alphanov.com/en/products-services/double-laser-fault-injection](https://www.alphanov.com/en/products-services/double-laser-fault-injection) Footnote 2: [https://www.riscure.com/blog/security-highlight-multi-fault-attacks-are-practical](https://www.riscure.com/blog/security-highlight-multi-fault-attacks-are-practical) Even though our approach can be used to attack arbitrary devices, we focus on attacking TrustZone-M (TZM), as TEEs form highly secure targets, which when compromised have shown to lead to the disclosure of highly sensitive information [47]. While principally \(\mu\)-Glitch can defeat many VFI research proposal countermeasures, as discussed in Section 9. 
However, we evaluate \(\mu\)-Glitch on four real-world example MCUs as most academic proposals are not open sourced. As a Proof-of-Concept (PoC) we attack two TZM MCUs which have protections explicitly protecting against FI attacks. Therefore, it cannot be successfully attacked by means of SFI. We also attack two other ICs with a subset of protections. Our main contributions are as follows: **Novel MFI Framework** We present \(\mu\)-Glitch, a novel fault injection framework, which is capable of injecting multiple, coordinated voltage faults into any DuT, in order to overcome FI countermeasures implemented on the instruction level. **Parameter Search Algorithm** We explore the impact of additional faults on the search space spanned by the combinations of multiple fault's parameters and present a novel and effective multiple fault parameter search to overcome the exponential increase in needed resources introduced by conventional algorithms. Our approach exhibits a 50 times speedup when searching for two consecutive voltage fault's parameters, and an 8.3 time speedup when searching for four voltage fault's parameters. **Real-world Attack** We use \(\mu\)-Glitch in order to inject multiple voltage faults into NXP's LPC55SXX and RT6XX MCU and hereby successfully circumventing all FI protections. By this, we are able to fully compromise the security introduced by TZM, by accessing the secure memory from within non-secure firmware. We show that this attack can be performed within an average time of one day. Further, we show that other TZM ICs can be broken using a subset of faults required for NXP ICs. **Possible Countermeasures** We propose a software level enhancement to the existing countermeasures, which is capable to protect from MFI attacks, by eliminating the possibility to search for MFI parameters. **Responsible Disclosure** The results of this work have been responsibly disclosed to NXP Semiconductors Ltd. A response acknowledging our findings has been received. In follow up communication the authors are collaborating with NXP Semiconductors Ltd. on finding a security patch. The authors would like to thank NXP Semiconductors Ltd. for their timely and professional communication following the responsible disclosure of our findings. ## 2 Background In the following we provide the background necessary in order to understand this work. ### Voltage Fault Injection Voltage Fault Injection (VFI) is a specific Fault Injection (FI) method to inject disturbances into the power supply line of an Integrated Circuit (IC) and hence, violates the IC's specified operating conditions for a certain, controlled period of time. Most ICs, like Microcontroller Units (MCUs), expect their supply voltage to be stable and steady, i.e., there should be no point during runtime, at which the supply voltage is interrupted or leaves a specified operating range. Figure 1 depicts a single voltage fault injected into the supply line of an IC running at \(3.3V\). Here the specified operating supply voltage range is highlighted green (\(3.0V\) to \(3.5V\)), whereas the red highlighted range (\(0.0V\) to \(2.9V\)) indicates, that the operating conditions are violated. The fault voltage level is most commonly defined as the Ground (GND) reference, however, this may be optimized with respect to either reliability and repeatability [9] or timely resolution [24]. A voltage fault is parameterized by it's Offset w.r.t. a synchronization point, and it's Width[40]. 
In every VFI experiment the most complex part is to find the best fault parameters, in order to provide a _reliable_ and _repeatable_ attack [12]. **The Fault Model** of VFI on MCUs consists of four different behaviors, which all arise from the effects of VFI on the processors internal pipeline stages namely, skipping of machine instructions, corrupting data fetches, corrupting instruction decodes and corrupting write-backs. Throughout this work we focus on applying VFI in order to skip machine instructions. In this context, we further define the _Fault Target_ as the machine instruction the adversary aims to skip, in order to cause a specific misbehavior. Moreover, a Fault Target is assumed to be _hit_, once it is successfully attacked by injecting a fault. **Fault Injection Setups** can be divided into two main classes of _Cooperative_ FI and _Non-Cooperative_ FI [42]. In the former, the attacker is able to reprogram the Device under Test (DuT). Here, the attacker commonly implements a protocol to communicate with the device. By this it is possible to, e.g., call a subroutine on the device that shall be tested against FI, by issuing a corresponding command. In addition, in cooperative FI, the DuT is notifying the FI framework when entering the code region to be tested by asserting the synchronizing signal, referred to as Trigger. Cooperative setups are commonly encountered in Proof-of-Concepts (PoCs). In the latter the adversary is unable to reprogram the DuT. Non-Cooperative FI is commonly encountered in attack scenarios, which focus on attacking proprietary targets. ## 3 Adversary Model The adversary model includes a physical access attacker. Further, the adversary is capable of performing slight modifications to the DuT in order to make VFI possible, similar to related literature [40] (e.g., attaching copper wires, detaching bypass-capacitors). In order to define Fault Targets, the adversary has some knowledge about the targets firmware, which may be, e.g., through the use of a public library [41] or a previous binary firmware disclosure [60]. ## 4 \(\mu\)-Glitch Design In this section, we present our novel Multiple Fault Injection (MFI) design, named \(\mu\)-Glitch. Similarly to Single Fault Injection (SFI) attacks, we adopt the high-level flow, which consists of defining the experiment success function, performing fault injection by exhaustively searching the fault's parameters and analyzing and comparing success rates3. Footnote 3: [https://github.com/newaetech/chipwhisperer-jupyter/tree/9230748db155394a01c7021fid21123efcce4aa/courses/fault101](https://github.com/newaetech/chipwhisperer-jupyter/tree/9230748db155394a01c7021fid21123efcce4aa/courses/fault101) The complexity of searching multiple fault parameters at once increases exponentially by every additional fault, using conventional parameter search algorithms. Therefore, we introduce our novel, efficient _sweeping_ approach to be used in MFI setups. The overall attack flow is depicted in Figure 2: 1. _Define Overall Success Function_: In order to decide about the outcome (i.e. overall success or failure) of a MFI attempt, a Success Function (SF) has to be defined. This is a binary function which is evaluated in later steps in order to indicate either a success, if and only if all the Fault Targets are hit at once or a failure in all other cases. 2. _Define Partial Success Functions_: The parameter search needs to distinguish between hit Fault Targets. 
This is achieved by defining Partial Success Functions (PSFs), which are needed to recognize, if some, but not all, Fault Targets have been hit, whereas the SF only allows to determine, if _all_ the Fault Targets have been faulted during a single, consecutive execution. Figure 1: Example of a Voltage Fault Figure 2: High level overview of our \(\mu\)-Glitch design 3. _Perform Multiple Parameter Search:_ The goal of performing the Multiple Parameter Search is equal to this of the Parameter Search in SFI, i.e. valid fault parameters have to be discovered, which lead to an experiment success. Even though there are multiple such parameter pairs to be discovered throughout this step, the process of finding the right parameters is similar to the one in SFI: The adversary uses a _single fault_ in order to search in an increased space, spanned by all Fault Targets parameters. We refer to this process as _sweeping_. As PSFs have previously been defined, when the injected fault hits one of the Fault Targets, it will be detected by evaluating the corresponding PSF. As there is only a single fault per execution injected, the SF is not evaluated during this step. The result is a set of sets of absolute parameters, i.e., one set per Fault Target, absolute to a common synchronization Trigger. 4. _Translating Absolute Parameters_: The previously discovered _absolute_ fault parameters need to be translated into relative parameters (i.e., relative to the preceding fault), by using the inductive definitions in Equation 1 and Equation 2. \[R_{0}=A_{0}\] (1) \[R_{n}=A_{n}-(A_{n-1}+W_{n-1})\] (2) That is, the first faults relative Offset \(R_{0}\) is always equal to the first absolute Offset \(A_{0}\) found by our sweeping approach. Every additional fault's relative Offset \(R_{n}\) is defined recursively in terms of its absolute Offset \(A_{n}\), its previous global Offset \(A_{n-1}\) and the previous fault's width \(W_{n-1}\). All the fault's Widths may be directly adopted. 5. _Fuzzyfy Parameters_: Due to the non-deterministic behavior of the Device under Test (DuT) in the presence of voltage faults, every preceding fault may affect its succeeding ones parameters in unpredictable ways. In order to address this uncertainty, slight modifications have to be applied to the relative Offsets. We refer to this process as _fuzzyfication_. Here, every fault's Offset is not considered a single value, but rather a very small interval bound by \(\pm\Psi:\Psi\in\mathbb{N}\). The hereby generated intervals serve as input for the following integration step. 6. _Integrate Fuzzyfied Parameters_: As the uncertainty and hence the provided sets are considered to be very small, it is viable to perform an exhaustive search on multiple fault's parameters. As all the required voltage faults are hereby injected at once, this represents the first step in which the actual attack is performed. Therefore, each MFI attempt is evaluated based on the overall SF, instead of the PSFs. 7. _Evaluate and Analyze Repeatability_: Finally, analyzing combinations of multiple fault's parameters, which in combination led to an overall success is needed. Further, if there are multiple such combinations available, the different combinations have to be qualified and compared w.r.t. their success rates. ### Transforming Non-Cooperative Setups \(\mu\)-Glitch is also able to cope with non-cooperative setups, in which there are no PSFs definable, by transforming non-cooperative setups to cooperative ones. 
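Before turning to non-cooperative setups in more detail, note that steps 4 to 6 of the flow above are purely mechanical once the sweep has produced one absolute (Offset, Width) candidate per Fault Target. The sketch below implements the translation of Equations (1) and (2), the \(\pm\Psi\) fuzzyfication, and the enumeration of the resulting small parameter grid; the function names are ours and the actual \(\mu\)-Glitch tooling may organize this differently.

```python
from itertools import product

def to_relative(absolute):
    """Translate absolute parameters [(A_0, W_0), (A_1, W_1), ...], all measured from the
    common Trigger, into offsets relative to the end of the preceding fault, following
    R_0 = A_0 and R_n = A_n - (A_{n-1} + W_{n-1})."""
    relative = []
    for n, (a_n, w_n) in enumerate(absolute):
        if n == 0:
            relative.append((a_n, w_n))
        else:
            a_prev, w_prev = absolute[n - 1]
            relative.append((a_n - (a_prev + w_prev), w_n))
    return relative

def fuzzify(relative, psi=2):
    """Replace each relative Offset by the small interval [R_n - psi, R_n + psi]."""
    return [range(r - psi, r + psi + 1) for r, _ in relative]

def integration_grid(relative, psi=2):
    """Enumerate all combinations of fuzzyfied offsets for the integration step; every
    combination is tried with all faults injected at once and judged by the overall SF."""
    widths = [w for _, w in relative]
    for offsets in product(*fuzzify(relative, psi)):
        yield list(zip(offsets, widths))

# Example: two Fault Targets found at absolute offsets 120 and 310 clock cycles, width 8 each.
# `run_mfi_attempt` is a hypothetical driver that fires the faults and evaluates the SF.
# for candidate in integration_grid(to_relative([(120, 8), (310, 8)]), psi=2):
#     run_mfi_attempt(candidate)
```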
In general, an adversary can perform parameter search on a physically identical but cooperative setup, before transferring the identified parameters to the non-cooperative setup, as in this case defining PSFs is easy [43]. This approach is feasible whenever the firmware can (partly) be reproduced and the offset in between Fault Targets remains constant. As every Integrated Circuit (IC) manufacturer (e.g., STM, Atmel and NXP) provides its own Software Development Kit (SDK) to generate the machine instructions for the target device, and as it is considered best practice and highly encouraged by the manufacturers to use this example code to set up TrustZone-M (TZM), every non-cooperative setup using these public SDKs can be transformed into a cooperative one in the sense of the parameter search. Therefore, in such a scenario it is possible to transfer the parameters of several Fault Targets to a non-cooperative setup, thus drastically reducing complexity and enabling attacks on proprietary black-box software on these devices.

## 5 \(\mu\)-Glitch Attack on TrustZone-M

In this section we describe a real-world attack on TrustZone-M (TZM) using \(\mu\)-Glitch, as compromising these highly secured environments usually leads to the disclosure of sensitive information. The goal of our attack is to leak secrets stored in secure memory of the TZM from within non-secure firmware. Even though \(\mu\)-Glitch is able to circumvent all duplication based instruction-level countermeasures, as stated in Section 9, we chose to attack the duplication based protection of NXP. NXP is an Integrated Circuit (IC) manufacturer, which recently adapted Fault Injection (FI) countermeasures similar to protections proposed by academia in their real-world threat modeling processes for their ARMv8-M series Microcontroller Units (MCUs). The chips are protected by implementing a modified version of the duplication based approach of Barenghi et al. [4], which is referred to as _Duplicate Registers_ [38, 39]. We will elaborate this in more detail in Section 5.2.1. We also evaluate this attack on other ARMv8-M MCUs (namely, STM32L5 and Atmel SAM L11) which can be compromised with a subset of the Fault Targets of this attack. First, we provide preliminary background on TZM.

### TrustZone-M Background

TZM is a Trusted Execution Environment (TEE) for embedded processors, i.e., it introduces system-wide hardware-enforced computation and memory isolation mechanisms, which are built directly into the processor. The TZM platform is configured by different memory attribution units, which are elaborated in the following and depicted in Figure 3.

**The Security Attribution Unit (SAU)** is specified and designed by ARM. Here, it is possible to define up to eight different memory regions which can be either Secure (S), Non-Secure (NS) or Non-Secure Callable (NSC). The final security state of a memory region is defined in conjunction with the Implementation Defined Attribution Unit (IDAU).

**The Implementation Defined Attribution Unit (IDAU)** is specified by the corresponding IC manufacturer. It defines memory regions to be either S or NS. Defining NSC is a privilege granted exclusively to the SAU. On most of the commercially available ARMv8-M TZM processors, the IDAU is implemented to perform a bit check on the 28th bit of a requested address. For an arbitrary address, the IDAU returns S if the 28th bit is set, otherwise NS.
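As a concrete reading of this address-based rule, the IDAU attribution reduces to inspecting a single address bit (sketch below, assuming the usual zero-indexed bit numbering; in silicon this is of course a combinational check rather than software):

```python
def idau_security_state(address: int) -> str:
    """IDAU attribution on typical ARMv8-M parts: Secure if bit 28 of the address is set."""
    return "S" if (address >> 28) & 1 else "NS"

# e.g. idau_security_state(0x1000_0000) == "S", idau_security_state(0x0000_0000) == "NS"
```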
**The final Security State** is determined by the strongest output security state from SAU and IDAU for a requested address, where the partial order \(S>NSC>NS\) holds.

**The transition between S and NS** takes place based on novel ARMv8-M instructions. The non-secure code is linked against a set of function headers, called the _veneer table_, which is exported during compilation of the S firmware. To switch from NS to S, it is required to take a detour through calling a veneer function in a NSC region, which consists of a Secure Gateway (SG) instruction4. To switch from S to NS, it is necessary to use either the Branch with eXchange to Non-Secure state (BXNS) or the Branch with Link and eXchange to Non-Secure state (BLXNS) instruction. Here it is important to note that the security level transition will only happen if the Least Significant Bit (LSB) of the NS address is unset5.

Footnote 4: [https://developer.arm.com/documentation/100690/0201/Switching-between-Secure-and-Non-secure-states](https://developer.arm.com/documentation/100690/0201/Switching-between-Secure-and-Non-secure-states)

**The Backchecking Mechanism** represents a TEE protection available on MCUs featuring TZM, which ensures confidentiality and integrity on a system level. In this concept, additional integrity checks are realized by introducing new hardware units on the internal system bus, referred to as Advanced High-Performance Bus (AHB). The three novel hardware units residing on the AHB matrix are:

**Wrapper Units (WUs)** are used to wrap TZM-unaware bus masters to signal side-band information on the AHB matrix. This additional information determines the Security State and the Privilege Level for a requested address. Based on this information, the Checker Units perform additional checks upon every bus access.

**Memory Checkers (MCs)** are used to protect memory devices, such as Flashes, Random Access Memories (RAMs) and Read-Only Memories (ROMs), from unintended access of an application.

**Peripheral Checkers (PCs)** are used to protect the peripherals which are directly connected to the AHB or via the Advanced Peripheral Bridge (APB) from unintended access of an application.

NXP refers to this concept as the Secure AHB Controller [38], STMicroelectronics as the Global TrustZone Controller (GTZC) [52] and Atmel as Peripheral Access Control (PAC) [1].

Figure 3: Example implementation of the Backchecking Mechanism on the AHB; a flash denotes parts to be faulted.

### Attack Internals

Throughout this section we describe our concrete attack against NXP-based implementations of TrustZone-M and their FI countermeasures.

#### 5.2.1 NXP's Duplicate Register

NXP's FI countermeasure, referred to as the Duplicate Registers method, deploys for every security-critical register a second, equally structured register in memory space. If active, both of these registers are written sequentially in firmware. Once Single Fault Injection (SFI) is applied in order to skip or modify an assignment to a secured register, its duplicate register would afterwards still be written as intended. Based on the introduced inconsistency between the original register and its duplicate register, any SFI attempt is detected in hardware. This advanced countermeasure is, e.g., encountered in NXP's LPC5556X [38] and RT6XX [39] series MCUs. We have identified Duplicate Registers being used in Debugging Features, Physical Unclonable Function (PUF) Index Configuration and the Activation of the Secure AHB Controller.
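To make the rationale behind this countermeasure explicit, the toy model below mimics the write sequence and the hardware consistency check: skipping only one of the two stores leaves the register pair inconsistent and is flagged, whereas skipping both stores, which is the MFI case, keeps the pair consistent while the protection remains at its reset value. This is a conceptual sketch of the mechanism, not NXP's actual register interface.

```python
def write_protected(value, skip_duplicate=False, skip_original=False):
    """Model of the Duplicate Registers scheme: both registers start at their reset value 0
    and are written sequentially; a mismatch afterwards signals a fault in hardware."""
    original = duplicate = 0
    if not skip_duplicate:
        duplicate = value            # store to the duplicate register
    if not skip_original:
        original = value             # store to the original register
    fault_detected = (original != duplicate)
    return original, fault_detected

# SFI: skipping a single store is detected by the consistency check.
assert write_protected(1, skip_original=True) == (0, True)
# MFI: skipping both stores keeps the pair consistent and the register at its reset value.
assert write_protected(1, skip_duplicate=True, skip_original=True) == (0, False)
```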
#### 5.2.2 Interdependency of Protections

As the checks performed by the SAU and IDAU and the checks performed by NXP's Secure AHB Controller are carried out sequentially, attacking only one of these checks would always be detected by the respective counterpart. In addition, the activation of the Secure AHB Controller is further protected by Duplicate Registers. Hence, in order to succeed, all of these have to be successfully attacked during one consecutive execution. During our experiments, NXP processors seemed to lock themselves into erroneous states when the TrustZone-M specific instructions to switch the security context are issued while the SAU is not active. Moreover, the activation of the IDAU cannot be skipped, as it is active by default after Power-On Reset (POR).

#### 5.2.3 Fault Targets

With the FI countermeasure and interdependencies of the TZM protections in mind, we define the following Fault Targets and their technical details, which are represented by a flash symbol in Figure 3.

**Activation of the Security Attribution Unit (SAU)** The SAU is activated in the TZM setup routine, a system routine that is executed before the trusted and secure user code is executed. The relevant Fault Target is depicted in line 6 of Listing 1, and is commented with SAU_CTRL. Once the Fault Target is hit, i.e., the STR instruction is skipped, the SAU is disabled. As the Secure AHB Controller is later set up as intended, it would detect any invalid bus accesses due to a mismatch of the configuration of both hardware units. This inconsistency between the Secure AHB Controller's configuration and the TZM configuration is what prevents a successful attack at this point.

**Activation of the Secure AHB Controller** To resolve this inconsistency, the adversary must prevent the Secure AHB Controller from being activated. The Fault Target that needs to be hit in order to prevent activation is shown in line 15 of Listing 1 and is commented with _Original_. Once this store instruction is skipped, the Secure AHB Controller is kept deactivated. Due to the use of _Duplicate Registers_, however, any successful FI attempt will still be detected.

**Duplicate Register for the Secure AHB Controller** At this point the processor would be able to detect an inconsistency between the deactivated Secure AHB Controller and its active Duplicate Register. Hence, the assignment to the Duplicate Register forms another Fault Target, which is shown in line 13 of Listing 1 and is commented with _Duplicate_.

**Prevent Switching of the Security Context** After the previous Fault Targets are all hit, the SAU and all the TZM protections are fully disabled. After the boot process, the bootloader passes execution to S firmware, in which the TZM is configured6. Afterwards, the main function of the user-defined code sets up the system and peripherals, before ultimately passing the execution to NS code, which at this point is invalid and would therefore lock the IC into an erroneous state. This transition is performed by the TZM-specific BXNS instruction. In Listing 2, we present the relevant disassembled parts of the binary, generated by the Gnu Compiler Collection (GCC), for transitioning from S to NS.
Footnote 6: [https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcu-bootloader-for-nxp-microcontrollers:MCUSOOTTab-Design_Tools_Tab](https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcu-bootloader-for-nxp-microcontrollers:MCUSOOTTab-Design_Tools_Tab) In order to prevent the internal security context switch from S to NS, it is sufficient to skip the shift-out shift-in operation, implemented by the LSRS and LSLS instructions defined in line 16 and 17. As this code is executed in S state, the LSB is always set. It is worth noting that even though we have been using NXP's toolchain to generate the firmware, the herein described privilege escalation can be performed on almost every commercial available TZM MCU, as most IC manufacturer release their toolchains based on GCC. Moreover, in ARM based compilers clearing the LSB is performed by a single bit clear (BIC) instruction. ### \(\mu\)-Glitch Hardware Framework The Fault Targets in the Multiple Fault Injection (MFI) attack, need to be successfully hit _all at once, in one continuous execution_ in order to fully break the security granted by NXP's TZM implementation. A lack of suitable commercial MFI tools led to the development of our \(\mu\)-Glitch MFI framework, which will be introduced throughout this section. Our custom MFI framework consists of six components, namely the Clock Generation Unit, the Host Communication Unit, the I/O Buffer Unit, internal Configuration Registers, the Multiple Voltage Fault Unit and the Serial Target I/O Unit. In the following, we elaborate the Multiple Voltage Fault Unit in more detail, as it represents one of the main parts of our framework. However, in order to introduce our MFI hardware, it is important to first understand the Single Fault Unit (SFU) design, which is commonly encountered in SFI setups7. Footnote 7: [https://github.com/chipfail/chipfail-glitcher/tree/master/chipfail-glitcher.srcs/sources_1/new](https://github.com/chipfail/chipfail-glitcher/tree/master/chipfail-glitcher.srcs/sources_1/new) Figure 4 shows a block diagram of a typical SFU. Similar designs are commonly used to inject a single voltage fault into a Device under Test (DuT). A voltage fault is defined by its Width and its Offset, w.r.t. a synchronization point referred to as the Trigger (cf. Section 2.1). In our SFU design, the Width and Offset are both defined as 32\(-\)Bit inputs. A SFI is initiated, once the single bit Trigger input signal is asserted. Starting with this external Trigger event, the hardware starts counting, using the reference Clock signal. Once the defined Offset has been reached, the Fault Out signal is set high and remains high for exactly Width clock cycles. The Fault Out signal is routed directly into the gate of a N-Channel metal-oxide-semiconductor field-effect transistor (MOSFET), with its source connected to the fault voltage level and its drain connected to the power supply line of the DuT. This circuit is referred to as the _Crowbar-Circuit_[40] and represents the state-of-the-art method used to inject a voltage fault. To indicate that the fault attempt has been processed, an additional output signal named Fault Done is asserted for a single clock cycle. #### 5.3.1 Multiple Fault Injection Hardware The SFU discussed previously builds the base for the design of our MFI unit. By chaining multiple SFUs together, we are able to inject multiple, coordinated faults using a single trigger. 
For this purpose, the Trigger input of a unit is connected to the Fault Done signal of its predecessor, whereas the first SFU's Trigger is directly connected to the external Trigger signal, forming a chain of units. The Fault Out lines are combined by using the logical Or operator, whereas the signal indicating the termination of each MFI attempt is solely defined by the last SFU's Fault Done signal. By chaining SFUs a single trigger signal can be used to perform MFI. While this approach is capable of injecting multiple, coordinated faults into a DuT, the number of injected faults is fixed by design. Once there are \(n\in\mathbb{N}\) SFUs downloaded to the Field Programmable Gate Array (FPGA), the framework is determined to generate exactly \(n\) voltage faults, each time it gets triggered. With respect to our attack flow described in Section 4, the dynamical configuration of the number of injected faults is desirable. Hence, we propose a slightly more complex design, which enables the dynamic activation and deactivation of SFUs, even after the hardware has been downloaded to the FPGA. By introducing multiplexers in between each of the SFUs, as shown in Figure 5, the design is dynamically configurable. In this depiction, the routing of the Clock, Offset and Width signals of each SFU are intentionally omitted, in order to focus on the interconnection of multiple SFUs. In addition, all the Fault Out signals (Fault 0, Fault 1,...) are assumed to be combined using the logical Or operator. The hereby generated signal controls the gate of the MOSFET of our Crowbar circuit implementation. The blocks labeled 1:2 Demux are demultiplexers, i.e., hardware units which forward an input signal (In) to one of multiple output signals (Out1 or Out2), depending on the state of another input signal (Sel). The selection lines (Sel SFU0, Sel SFU1,...) may be modified by the controlling host system, by updating the MFI framework's internal registers. The first demultiplexer, which routes its input to Out1 asserts one of the _Done_ signals, which indicates the end of this MFI attempt and interrupts the forwarding of Done-Signals to succeeding SFUs. Each of the Fault Done signals (i.e. Done 0, Done 1, Done 2,...) are also combined Figure 4: Block Diagram of a Single Fault Unit using the logical Or operator, to form a single output signal. ## 6 Evaluation To thoroughly test our \(\mu\)-Glitch Multiple Fault Injection (MFI) hardware design and novel parameter search algorithm, we conduct multiple evaluations throughout this section. First, we show the feasibility of attacking the Duplicate Registers countermeasure, this method is mimicked and attacked based on a cooperative as well as a non-cooperative firmware simulation. By this our novel approach of searching for parameters can be compared to performing an exhaustive search, with optimized conventional parameter search algorithms. Moreover, in order to prevent the security context switch, we perform a privilege escalation on code generated by Gnu Compiler Collection (GCC). The Fault Target are two immediately successive instructions, which have both to be attacked at once. For this purpose we evaluate, whether it is more promising to inject two, narrow faults or a single, wider fault when aiming at attacking such successive instructions. 
We conclude our evaluation by performing our real-world attack, in which we attack NXP's TrustZone-M (TZM) implementation with all the new protections activated, by hitting all Fault Targets introduced in Section 5.2.3 in one execution. ### Attacking Duplicate Registers To show the feasibility of circumventing the Duplicate Register Fault Injection (FI) countermeasure and evaluate our proposed MFI parameter search algorithm we implement an isolated simulation of this countermeasure. It is possible without loss of generality, since the Duplicate Register method and our simulation both translate to equal Fault Targets. In the following we describe our simulation approach for cooperative as well as non-cooperative parameter searches. Afterwards, multiple parameter searches for both the different setups are performed. We conclude this simulation by analyzing the success rates for attacking our simulation of the Duplicate Registers FI countermeasure. #### 6.1.1 Fault Targets In order to simulate the Duplicate Registers Fault Injection countermeasure, we first define two functions, which are used for the different MFI setups (i.e., cooperative and non-cooperative), and whose source code is provided as a reference in the Appendix in Listing 4 and Listing 5. In both functions two, zero initialized variables are defined. These represent our Fault Targets, i.e., a register to be protected and its duplicate register. After their definitions, these are written sequentially with the same, non-zero value. The goal of the adversary is to skip both the assignments in one execution, hence compromising the protection introduced by the Duplicate Registers method. In our cooperative simulation Partial Success Functions (PSFs) are defined, which allows the use of our efficient approach, as described in Section 4. For comparison, in the non-cooperative simulation, PSFs are assumed to be impossible to define, and an optimized exhaustive search is performed. We chose to use the exhaustive search approach as proposed by NewAE, as it represents the state-of-the-art in Single Fault Injection (SFI) attacks8 Each assignment to a Fault Target is preceded with a random delay, which is determined during compile-time. Through this, the binary firmware exhibits exactly the same behavior during each execution. Once the firmware is re-compiled, another binary is generated which exhibits a different timely behavior. This way we ensure that parameters discovered throughout one experiment are very unlikely to fit another experiment. In order to compare the different setups, the same delays for both setups have been used. Footnote 8: [https://github.com/newaetech/chipwhisperer-jupyter/tree/9230748db155394a01c7021f1d21123efcee4aa/courses/fault101](https://github.com/newaetech/chipwhisperer-jupyter/tree/9230748db155394a01c7021f1d21123efcee4aa/courses/fault101), which tests all possible combinations of FI parameters. This search requires no additional knowledge in contrast to other approaches which require time consuming model training [64] for every glitch or the definition of a fitness function [12], which may not always be possible. #### 6.1.2 Parameter Search The different search algorithms, used to discover MFI parameters, are performed several times based on the introduced Figure 5: Block Diagram of the Multiple Voltage Fault Unit random delays, before comparing the results of the different approaches to each other. Table 1 compares an exhaustive search (left) to our efficient sweeping approach (right). 
Each row represents a single comparison of an exhaustive search to our cooperative approach, based on the depicted compile-time delays. The results for applying our sweeping approach are divided into the time to search the parameters based on PSFs (left) and their integration (right). It shows that our novel approach in a MFI context is much more efficient than the traditional exhaustive search. Here, for every comparison our novel approach required approximately two percent of the time required by the exhaustive search. #### 6.1.3 Evaluation And Repeatability For each of the previous MFI parameter searches, multiple combinations of MFI parameters have been returned, which have shown to evaluate the Success Function (SF) to success. Moreover, both parameter search algorithms yielded similar fault parameters, which exhibit only negligible differences. When performing an actual attack, an adversary would always choose the combination of parameters, which has the highest probability for an attack to succeed. Therefore, each previously determined successful combination has to be qualified by means of their reliability and repeatability. For this we first define the _most promising attack parameters_ as the parameters, which show the highest success rate of our attack. To estimate the most promising parameters, we perform MFI \(1,000\) times based on different successful parameter configurations. Once the most promising attack parameters have been determined, \(100,000\) MFI attempts are performed. The experiment returned a success rate of \(0.212\) and a respective failure rate of \(0.788\). This means, that by using our highly precise MFI design about every fifth attempt of injecting two consecutive faults into our Device under Test (DuT) in order to overcome the Duplicate Registers simulation succeeded. ### Attacking Successive Instructions As mentioned in Section 5.2.3, besides skipping the activation of the Security Attribution Unit (SAU) and the activation of the Secure Advanced High-Performance Bus (AHB) Controller, an additional privilege escalation is required. The Fault Targets are two, directly succeeding shift instructions, which, under normal circumstances, unset the Least Significant Bit (LSB) of the NS target address. With the following simulation, we aim to evaluate if immediately successive instructions, as they are encountered in this scenario, are best to be attacked by a single, wider fault or by multiple, narrow ones. #### 6.2.1 Fault Targets Our Fault Targets are depicted in line 10 and 11 of our cooperative firmware example in Listing 3. This simulation defines a variable labeled \(a\) and assigns it an odd value of \(0x13\), i.e., the LSB is set. Based on the returned value, an adversary is able to distinguish which Fault Targets have been hit. If, e.g., _only the left shift_ instruction has been skipped, then the returned value must be equal to \(0x9\). Under normal circumstances both shift operations are performed, clearing the LSB and resulting in a returned value of \(0x12\). #### 6.2.2 Parameter Search Throughout this simulation, we assume that the most promising attack parameters are already known, computed by using our efficient search approach. #### 6.2.3 Evaluation And Repeatability We evaluate performing both, a broad SFI as well as two narrow MFI on the previously introduced Fault Targets to demonstrate how MFI compares to SFI when attempting to attack instructions that immediately follow another. 
For this evaluation, \(100,000\) FI attempts have been performed. Throughout this experiment we define the group of invalid results as those results where the DuT is either not responding after performing MFI or where the returned value cannot be explained by skipping either of the shift instructions. Using a single, wider fault in order to fault the two successive shift instructions resulted in a success rate of \(24\%\), whereas attacking the same instructions using two narrow faults resulted in a success rate of \(15\%\). Moreover, the group of invalid results increased by \(9\%\). Hence, with respect to the success rate, it is more reasonable for an adversary to inject a single fault, utilizing an increased Width, in order to attack the two successive shift instructions. The complete results are shown in the Appendix in Table 4.

### Attacking The TrustZone-M

In this part, we evaluate our novel approach by attacking NXP's implementation of the TZM and all of the additional countermeasures in a real-world scenario. Throughout this experiment, the MFI framework's frequency is specified as being \(20\) times higher than the frequency of the DuT. By oversampling the DuT with our MFI framework, we gain additional precision for each of our FI attempts. Note that it is possible to increase the success rate by increasing the oversampling rate or by synchronizing the glitch to the DuT's clock signal [44]. This, unfortunately, would also mean introducing more assumptions. The firmware has been built based on the unmodified NXP Software Development Kit (SDK) TZM examples. The Fault Targets have been described in detail in Section 5.2.3: the activation of the SAU, the activation of the Secure AHB Controller, the duplicate register for the activation of the Secure AHB Controller, and the privilege escalation. After all these Fault Targets have been hit during a single, consecutive execution of the target's firmware, the NS code is able to arbitrarily access any S and NS resources. In the context of Trusted Execution Environments (TEEs), this represents a full compromise, as, e.g., any secrets stored within secure memory may be disclosed and any secure-defined peripheral may be arbitrarily accessed and controlled from within non-secure code. Moreover, up to this point all parts of our attack have already been proven feasible and practical by attacking isolated simulations. We present our results, in terms of repeatability and reliability of our attack, in comparison with the conventional exhaustive parameter search.

#### 6.3.1 Search For Parameters

The search for parameters is evaluated for a cooperative setup as well as a non-cooperative setup, in which no PSFs can be defined. The results of comparing both parameter searches are depicted in Table 2. Four different search passes for the same four Fault Targets have been performed with a limit of \(24\) hours. While the exhaustive approach produced a result within the given time limit in only one of the four parameter searches, our sweeping approach returned correct parameters in every single attempt.
Moreover, the fastest exhaustive search has been shown to be eight times slower than the corresponding sweeping approach. The results of the sweeping approach are again split into the time it took to search the single fault parameters (left) and the time it took to integrate the respective parameters (right). It is worth noting that there is quite some variance contained in the depicted results, which can be explained by the non-deterministic behavior of FI, i.e., even if the parameters are perfectly set, the injected fault can never guarantee a success. Due to the non-deterministic discovery of fault parameters, we implemented our parameter searches to restart themselves if, in a non-cooperative setup, no total success has been observed, or if, in a cooperative setup, not at least one partial success has been observed for every Fault Target. Note that in this attack scenario it is possible to escalate a non-cooperative setup to a cooperative one by using the parameter transfer described in Section 4.1.

\begin{table} \begin{tabular}{c|c} \hline \hline Brute Force Search Time & Sweeping Search Time (Search + Integration) \\ \hline \textgreater{}24h & 06:17h + 00:28h \\ \textgreater{}24h & 08:07h + 00:28h \\ \textgreater{}24h & 13:22h + 00:30h \\ 23:40h & 02:58h + 00:28h \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the exhaustive search (left) to our novel sweeping approach (right) when searching the parameters of four voltage faults used to attack the TrustZone-M.

#### 6.3.2 Evaluation and Repeatability

After conducting the MFI parameter searches, for each setup a set of fault parameter configurations has been returned, based on which the SF evaluated to success. As the parameters found by both search algorithms differ only negligibly, we used the parameters returned by our efficient MFI parameter search in order to determine the most promising attack parameters. Once these parameters are estimated, they are used to perform \(1,000,000\) MFI attempts in order to compromise the TZM. Moreover, this evaluation has been performed two times. The averaged results of both experiments are depicted in Table 3. The success rates when injecting exactly one (only disabling the SAU) up to four (completely disabling TZM) faults based on the most promising attack parameters are depicted. SAU refers to the first Fault Target, i.e., the activation of the SAU, AHB CTRL refers to the activation of the Secure AHB Controller, DUPL refers to the Duplicate Register of the Secure AHB Controller, and PE refers to the privilege escalation.

\begin{table} \begin{tabular}{l c} \hline \hline Fault Targets & Success Rate \\ \hline SAU & 0.451 \\ SAU \& AHB CTRL & 0.0251 \\ SAU \& AHB CTRL \& DUPL & 0.0023 \\ SAU \& AHB CTRL \& DUPL \& PE & 0.000003 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of performing our MFI attack against NXP’s TrustZone-M. Results are given for an increasing number of consecutively hit Fault Targets in one single execution.

These results indicate that almost every second attempt (45.1%) to inject four faults into the DuT deactivates the SAU. Moreover, 2.52% of the performed FI attempts successfully disabled the Secure AHB Controller in addition. An average of 0.23% of FI attempts succeeded in attacking the activation of the SAU, the activation of the Secure AHB Controller as well as the Duplicate Register of the Secure AHB Controller all at once. Finally, 0.0003% of MFI attempts resulted in a total success.
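As a plausibility check of these numbers, the expected effort per full compromise follows from a few lines of arithmetic; the attempt throughput used below is taken from the timing reported in the next paragraph, and the rate of \(3\cdot 10^{-6}\) corresponds to the quoted 0.0003%.

```python
success_rate = 3e-6                  # probability of a full TZM compromise per attempt (0.0003 %)
attempts_per_day = 1_000_000 / 1.5   # measured throughput: one million attempts in ~1.5 days

expected_attempts = 1 / success_rate                   # ~333,000 attempts per full success
expected_days = expected_attempts / attempts_per_day   # ~0.5 days per full success

print(f"~{expected_attempts:,.0f} attempts, ~{expected_days:.1f} days per full compromise")
```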
Conducting such a high number of FI attempts may seem excessive at first; however, performing one million FI attempts took only one and a half days, translating to one successful attempt in half a day. With respect to the potential damage this practical attack may cause once successful, we consider this attack critical.

#### 6.3.3 \(\mu\)-Glitch Transferability

We evaluated the end-to-end attack for disabling TZM, on cooperative as well as non-cooperative Multiple Voltage Fault Injection (MVFI) setups, on different target Integrated Circuits (ICs), namely ST's STM32L5, Atmel's SAML11 and NXP's LPC55S69 and RT600 Microcontroller Units (MCUs). In the Appendix in Table 5 we show the attacked chips and the Fault Targets necessary to disable TZM per chip. In addition, we show the time for the baseline exhaustive search (capped at 48h) in contrast to our sweeping approach, as well as the combined success rate of all glitches. The RT600 is exceptionally similar to the LPC55S69 and can be attacked using the same Fault Targets. The SAML11 configuration is stored in Non-Volatile Memory (NVM) space, and either modifying the read information or glitching the bootloader is necessary. In order to disable the Brownout Detection (BoD) of the STM32L5, which may interfere with Voltage Fault Injection (VFI), the Phase Locked Loop (PLL) configuration has to be glitched so that the chip runs with a clock frequency of \(\leq 32\,\)MHz in case the chip is configured to run faster. According to our evaluation, \(\mu\)-Glitch can successfully attack conceptually similar ICs.

## 7 Potential Countermeasure

Inspired by the insights of our evaluation, we propose a potential countermeasure against Multiple Fault Injection (MFI) attacks. We propose an Information Level Countermeasure (ILC), as this kind of countermeasure may also be applied to already deployed hardware, i.e., no new hardware revision is required in order to deploy physical sensors. By attacking the Duplicate Registers method throughout this work, we have shown that single fault injection protections on the instruction level may be overcome by injecting additional faults. This is possible due to the fact that the Fault Targets are always located at the same offsets. Hence, an adversary is able to conduct a multiple parameter search and thus form a reliable attack. Our concept of _randomizing the Duplicate Registers_ makes use of random delays in order to strengthen the Duplicate Register method against MFI attacks, by removing the possibility to search for MFI parameters. For this, we propose the use of a compiler pass, which introduces a compiler attribute that can be applied to security-critical assignments. As the random delay is only introduced when the aforementioned attribute is encountered, the overall computational overhead is negligible. The goal is to generate machine instructions which force the processor to stall for a small, random period of time. With this, a parameter search cannot be successfully carried out, as the Offset of the Fault Targets varies on a per-execution basis. Hereby, however, a trade-off is introduced between increasing the parameter search space by introducing a large delay and minimizing the overhead in processor time by using a small delay. When conducting our experiments, the MFI framework's internal frequency has been chosen to be up to 20 times higher than that of the Device under Test (DuT).
Here we have observed that a timing difference of less than a single DuT clock cycle leads to no successes. Hence, we assume that the stall time can be kept low without compromising security. As this method leaves no possibility to conduct a parameter search, the best possible attack is to inject randomly parameterized faults into the DuT, which renders an MFI attack impractical by decreasing its success rate tremendously. By utilizing this approach to attack two Fault Targets, which have a random preceding delay of 0-9 cycles, the probability of injecting two successful glitches is 100 times lower than without this protection.

## 8 Discussion

We argue \(\mu\)-Glitch to be applicable to most Microcontroller Units (MCUs) with TrustZone-M (TZM), as the adversary only needs access to the power supply of the IC, there is no need to supply the DuT's clock signal when using oversampling, and the TZM setup code of most MCUs is open-source knowledge in the form of Software Development Kits (SDKs). Naturally, \(\mu\)-Glitch is not limited to conducting attacks against TZM. Van den Herrewegen et al. [58] used Single Fault Injection (SFI) to deactivate debugging protections and exfiltrate data. In recent MCUs by NXP, the debug interfaces are protected by duplication-based approaches; therefore, SFI attacks cannot be used anymore. Hence, in order to overcome the debug feature protection, the \(\mu\)-Glitch Multiple Voltage Fault Injection (MVFI) approach becomes mandatory. We also evaluated using \(\mu\)-Glitch to overcome the mitigating effects of Brownout Detection (BoD) against Voltage Fault Injection (VFI). Even though it is intended to be a safety feature that powers down an embedded device whenever the battery-based voltage supply drops below a certain threshold, it has also been shown to detect the voltage drops caused by VFI. In sampling-based BoD approaches, the supply voltage is measured periodically; this variant is commonly encountered in embedded devices, as it has a relatively low power consumption. If the BoD sampling frequency is high enough, it may happen that a voltage fault can be detected. We therefore propose to split a single, wider voltage fault into several narrower ones exhibiting a similar effect on the target. Based on this, we were able to overcome a sampling-based BoD by using two voltage faults instead of a single fault, which would have triggered the BoD. Unfortunately, the success rate of using two narrower glitches instead of one is lower, as the chance to find a sweet spot decreases with each additional glitch. As an example, the original and the first of the split glitches have the same offset. The width of the original glitch is \(400ns\). A double fault exhibiting a similar experimental outcome is represented by the two widths of \(170ns\) and \(140ns\) with an in-between offset of \(100ns\). A graphic depiction can be seen in the Appendix in Figure 6.

## 9 Related Work

In the following, we provide a summary of existing Fault Injection (FI) attacks, attacks on Trusted Execution Environments (TEEs), as well as FI countermeasures.

**Fault Injection Attacks:** Several FI attacks, which we utilize in this work, have been proposed over the last years. Our work builds on top of Roth [46], who has attacked several implementations of the TrustZone-M by injecting a single voltage fault. The author aimed at hitting Fault Targets which set the lower bounds of NS regions, thereby extending the regions and ultimately accessing sensitive, secure data.
For this purpose, firmware examples were attacked which did not activate the vendor-specific backchecking mechanisms of TrustZone-M implementations. In contrast to this, we focus on providing a reliable and repeatable procedure for conducting Multiple Fault Injection (MFI) attacks, which is able to circumvent all protections that are enabled by default in NXP's Software Development Kit (SDK). Trichina et al. [56] propose two-fault attacks on protected CRT-RSA implementations running on an advanced 32-bit ARM Cortex-M3 core. The authors performed two-fault Laser Fault Injection (LFI) on a protected cryptographic application, as LFI exhibits a high spatial resolution. Nashimoto et al. [35] combined stack-based Buffer Overflow (BO) attacks with two-fault Clock Fault Injection (CFI) in order to prevent the BO from being detected. Colin O'Flynn [40] showed that Voltage Fault Injection (VFI) may exhibit a high temporal resolution based on his proposed _Crowbar_ circuit, which we utilize in this work. We have shown that by using this circuitry it is further possible to inject multiple voltage faults within a short period of time. Bozzato et al. [9] replaced the _Crowbar_ circuit by a Digital to Analog Converter (DAC), thereby improving VFI by increasing transferability to other hardware, being able to attack Brownout Detection (BoD) enabled Integrated Circuits (ICs), and injecting faults based on arbitrary waveforms. While the authors mention being able to inject multiple faults, they fell short of showing MFI attacks using their hardware and provided no evaluation, as they focus in their work on increasing the reliability and repeatability of Single Fault Injection (SFI). Timmers et al. [55] performed VFI in order to corrupt the instruction decoding stages of the internal processor pipeline, with the goal of hijacking the control flow by setting the Program Counter (PC) to predefined addresses stored in general purpose registers. In a later work, Timmers et al. [54] showed that in an embedded Linux Operating System (OS) the privileges can be escalated from user to system privileges by performing VFI. By performing VFI against AMD's Secure Processor (SP), Buhren et al. [11] were able to control the key management and, by this, compromise the security of AMD's Secure Encrypted Virtualization (SEV). An overview of fault attacks on embedded devices is provided by Yuce et al. [65]. Werner et al. [62] generate fault models for LFI based on fault injection simulations. The first Multiple Clock Fault Injection (MCFI) has been demonstrated by Blomer et al. [8], attacking two consecutive instructions during a single execution by directly modifying the clock signal. For this attack to succeed, the external clock signal has to be fed directly into the processing part of the Device under Test (DuT). However, as of today, most ICs use Phase Locked Loops (PLLs), which were shown to effectively protect against clock glitching. Further, MFI has been performed by Colombier et al. [15] using an LFI setup. Due to its sophisticated spatial and temporal resolution, optical fault injection is a promising candidate for MFI. LFI setups, however, are quite costly: a single LFI setup is commonly in the magnitude of $100,000, whereas the cost for a _multiple_ LFI setup is even higher. Moreover, in order to conduct LFI attacks, in general a much more invasive preprocessing of the DuT is required in comparison to VFI.
Electromagnetic Fault Injection is commonly not considered for MFI setups, as the internal capacitor banks take too much time to be recharged; thus, rapidly injecting consecutive faults successfully cannot be guaranteed. Commercial equipment for MFI, e.g., Alphanov's double laser fault injection microscope (D-LMS) and Riscure's VC Glitcher, is fairly expensive and has only been shown to conduct Multiple Laser Fault Injection (MLFI). Devices from NewAE [44] cannot conduct Multiple Voltage Fault Injection (MVFI) but need a separate trigger for every glitch to be injected. In contrast, \(\mu\)-Glitch is capable of reliably performing MVFI based on low-cost hardware. We aim at attacking setups in which commonly only a single, synchronizing trigger signal can be asserted. To the best of our knowledge, the possibility of conducting MVFI has not been studied before.

**Cryptographic Attacks:** Differential Fault Analysis (DFA) was first described by Biham and Shamir [7]. It is a cryptanalytic attack which exploits computational errors in order to disclose cryptographic keys. In recent years, publications attacking today's Advanced Encryption Standard (AES) [51, 57, 18], the Rivest-Shamir-Adleman cryptosystem (RSA) [2], the Data Encryption Standard (DES) [64], recent cryptographic Hash Functions [27, 29] and many more [6, 13, 20, 26] have emerged.

**Trusted Execution Environment Attacks:** In recent years, attacks utilizing different attack vectors against popular TEEs like ARM's TrustZone (TZ) as well as Intel's Software Guard Extensions (SGX) were published. Tang et al. [53] presented the CLKSCREW attack, which exploited an on-chip energy regulation mechanism in order to break the security promises of ARM's TrustZone. Kenjar et al. [23] described another software-controlled, but hardware-based fault injection approach, in which the authors were able to compromise any operating mode of Intel processors by modifying the frequency and voltage through privileged software interfaces. The authors showed that software management interfaces can be exploited to undermine the system's security. Qiu et al. [45] performed a software-based voltage fault injection by abusing the Dynamic Voltage and Frequency Scaling (DVFS) techniques for energy efficiency, allowing them to attack a secure software implementation of AES. Ryan [47] showed that ARM's TZ is susceptible to cache-based attacks, which exhibit high temporal precision, high spatial precision and low noise. The author was able to fully recover a 256-bit private key from Qualcomm's version of the hardware-backed keystore. Ning et al. [37] exploited security vulnerabilities in ARM's software debugging features to extract sensitive information from TZ. Jang et al. [21] performed a denial-of-service attack against SGX, in which the CPU could be shut down by performing a Rowhammer [34] attack. Lee et al. [25] found that it is indeed possible to circumvent the hardware protections provided by the SGX design by performing Return Oriented Programming (ROP) attacks. Different attack vectors were used in the past to attack TEEs, including TZ; however, these attack vectors are not suitable for MVFI and are therefore not able to overcome SFI-based protection mechanisms such as those utilized by LPC55SXX and RT6XX Microcontroller Units (MCUs).
**Fault Injection Countermeasures:** Due to the vast amount of proposed work in this field, we present an overview in the Appendix in Table 6, where we list several proposed FI countermeasures, which are further classified into Information Level Countermeasures (ILCs) [4, 5, 32, 38, 48, 39] and Hardware Level Countermeasures (HLCs) [10, 17, 19, 22, 30, 50, 61]. Due to the higher abstraction of ILCs, these protect against a certain Fault Model, whereas the HLCs are generally deployed to protect against a certain type of FI. The last column indicates whether or not a countermeasure is theoretically able to protect from voltage MFI attacks as described throughout our work. A check mark (✓) indicates that the respective countermeasure is able to protect from multiple voltage fault injection, whereas a cross (✗) indicates that it is not. Regarding the ILCs, these must be directed against Instruction Skipping in order to protect from our proposed MFI attack, whereas the HLCs must be deployed in order to detect VFI. Moro et al. [32] proposed a duplication-based ILC replacement approach for most Thumb-2 instructions. Moreover, Barry et al. [5] proposed a follow-up ILC, as not every instruction could be automatically replaced by the modified Low Level Virtual Machine (LLVM) based compiler and hence required manual analysis. The authors base their countermeasures on the assumption that attacking successive instructions using FI is hard. We showed in Section 6.2 that, in principle, \(\mu\)-Glitch is able to attack successive instructions. Therefore, we denote this with a check mark in parentheses in Table 6. Vosoughi et al. [61] propose an HLC which is able to mitigate the effect of VFI on the IC. Naturally, it can also protect against MFI using VFI, but it depends on specific on-chip voltage regulators being present in the IC, which is not always the case. Similarly, Singh et al. [50] propose an application-specific HLC, which has to be adapted to the specific application to be protected.

## 10 Conclusion

In this paper, we introduced a novel multiple voltage fault injection platform, coined \(\mu\)-Glitch, which is capable of injecting multiple, coordinated voltage faults into arbitrary target devices in order to attack multiple fault targets during a single execution of the target's firmware. We proposed and evaluated a novel, efficient parameter search algorithm for multiple voltage fault injection attacks. With the attack vector introduced hereby, a novel threat model emerges, in which the adversary is capable of defeating most instruction-level countermeasures, as they are mostly implemented to protect from single fault injection attacks. We have shown that, by using our novel approach, a TrustZone-M implementation can be attacked in which there are multiple, inter-dependent fault targets to overcome, including a specific fault injection protection. Finally, we have discussed possible countermeasures to thwart multiple fault injection attacks.
2306.05714
Thermalization of open quantum systems using the multiple-Davydov-D2 variational approach
Numerical implementation of an explicit phonon bath requires a large number of oscillator modes in order to maintain oscillators at the initial temperature when modeling energy relaxation processes. An additional thermalization algorithm may be useful in controlling the local temperature. In this paper we extend our previously proposed thermalization algorithm [M. Jaku\v{c}ionis and D. Abramavi\v{c}ius, Phys. Rev. A 103, 032202 (2021) ] to be used with the numerically exact multiple-Davydov-D2 trial wave function for simulation of relaxation dynamics and spectroscopic signals of open quantum systems using the time-dependent Dirac-Frenkel variational principle. By applying it to the molecular aggregate model, we demonstrate how the thermalization approach significantly reduces the numerical cost of simulations by decreasing the number of oscillators needed to explicitly simulate the aggregate's environment fluctuations while maintaining correspondence to the exact population relaxation dynamics. Additionally, we show how the thermalization can be used to find the equilibrium state of the excited molecular aggregate, which is necessary for simulation of the fluorescence and other spectroscopic signals. The thermalization algorithm we present offers the possibility to investigate larger system-bath models than was previously possible using the multiple-Davydov-D2 trial wave function and local heating effects in molecular complexes.
Mantas Jakucionis, Darius Abramavicius
2023-06-09T07:16:10Z
http://arxiv.org/abs/2306.05714v1
# Thermalization of open quantum systems using the multiple-Davydov D\({}_{2}\) variational approach

###### Abstract

Numerical implementations of an explicit phonon bath require a large number of oscillator modes in order to maintain the oscillators at the initial temperature when modelling energy relaxation processes. An additional thermalization algorithm may be useful in controlling the local temperature. In this paper we extend our previously proposed thermalization algorithm [1] to be used with the numerically exact multiple-Davydov D\({}_{2}\) trial wavefunction for simulation of relaxation dynamics and spectroscopic signals of open quantum systems using the time-dependent Dirac-Frenkel variational principle. By applying it to the molecular aggregate model, we demonstrate how the thermalization approach significantly reduces the numerical cost of simulations by decreasing the number of oscillators needed to explicitly simulate the aggregate's environment fluctuations, while maintaining correspondence to the exact population relaxation dynamics. Additionally, we show how the thermalization can be used to find the equilibrium state of the excited molecular aggregate, which is necessary for simulation of the fluorescence and other spectroscopic signals. The presented thermalization algorithm opens possibilities to investigate larger system-bath models than was previously possible using the multiple-Davydov D\({}_{2}\) trial wavefunction, as well as local heating effects in molecular complexes.

## I Introduction

Open quantum system models are widely used to describe properties of molecular aggregates [2; 3]. The _system_ usually consists of molecular electronic states. Intramolecular vibrational degrees of freedom (DOFs), which play the major role in the relaxation processes of the systems of interest, can also be included in the quantum system model. The rest of the DOFs are treated as an environment of a constant temperature - the _bath_. The bath is modelled as a collection of quantum harmonic oscillators (QHO) and is characterized by a continuous fluctuation spectral density function [4; 5; 6; 3]. Separation into the system and the bath parts is mostly formal, as the system-bath coupling has to be included to account for molecular environment-induced decoherence and temperature effects; hence the quantum dynamics penetrates into the bath, and the bath also changes its state. When using wavefunction-based simulation approaches, it can be challenging to maintain a precise representation of the bath as a constant-temperature thermostat, because energy exchange between the system and the bath can alter the thermal properties of the bath. Generally, a large number of explicitly modelled QHO modes have to be included to minimize the negative effects of thermal energy accumulation in the bath, but this is numerically expensive. Therefore, one always has to balance between the size of the model, the accuracy of the chosen numerical method and its numerical cost. Alternatively, one could numerically change the wavefunction variables during its time evolution in such a way as to prevent accumulation of thermal energy in the bath and to maintain it at a desired temperature, i.e., perform thermalization. It is challenging to accurately simulate dynamics of quantum systems that exchange energy and (quasi-) particles with their surroundings, i.e., of open quantum systems [7; 8], because the numerical cost needed to propagate the corresponding dynamical equations in time increases exponentially with the number of DOFs.
The wavefunction approach based on the multiple-Davydov D\({}_{2}\) trial wavefunction (mD\({}_{2}\) ansatz) [9; 10; 11; 12; 13], along with the time-dependent variational principle, has been shown to be an excellent tool for accurately simulating the dynamics of system-bath models [9; 14; 15; 16; 17; 18; 19; 20] and spectroscopic signals [19; 20; 21; 22; 23]. Despite relying on an adaptive, time-dependent state basis set, the problem of rapidly growing numerical costs remains. In a previous study, we proposed the thermalization algorithm [1] to be used with the Davydov D\({}_{2}\) ansatz [24; 25; 26; 27; 28; 29], which restricts QHOs to their lowest uncertainty states - coherent states [30; 31] with Gaussian wavepackets in their coordinate-momentum phase space. We demonstrated how the thermalization algorithm regulates the temperature of the environment and enables the electronically excited molecular system to relax into its equilibrium state at a given temperature [32; 33; 34] even when using a reduced number of bath oscillators, which greatly reduces numerical costs. The characteristics of the resulting equilibrium state are essential for modeling fluorescence, excited state emission, excited state absorption and other spectroscopic signals [2]. However, the D\({}_{2}\) ansatz is a crude approximation of the actual system-bath model eigenstates and thus is unable to completely capture electronic population relaxation dynamics [22]. Meanwhile, the system-bath dynamics obtained using the multiple-Davydov ansätze are consistent with the results from other state-of-the-art methods, such as hierarchical equations of motion [10; 12; 22], quasiadiabatic path integral [15], and multi-configuration time-dependent Hartree [16; 35], even when the number of bath oscillators is large. Due to the more complicated wavefunction structure of the mD\({}_{2}\) ansatz, straightforward application of the D\({}_{2}\) ansatz thermalization algorithm is not possible. In this work, we extend the thermalization algorithm for the mD\({}_{2}\) ansatz by introducing an additional state projection algorithm and adopting the coarse-grained scattering approximation. In Section (II) we describe the thermalization algorithm for the mD\({}_{2}\) ansatz, and in Section (III) we provide a theoretical description of its application to simulating the fluorescence spectra. Then, in Section (IV) we demonstrate its capabilities by simulating excitation relaxation dynamics of an H-type molecular aggregate and its fluorescence spectrum. Lastly, in Section (V) we discuss changes made to adapt the thermalization algorithm of the D\({}_{2}\) ansatz for the mD\({}_{2}\) ansatz.

## II Thermalization of the \(\mathrm{mD_{2}}\) ansatz

We consider a molecular aggregate model, where each molecule \(n=1,2,\ldots,N\) couples to its own _local reservoir_ \(k=1,2,\ldots,N\), each of which consists of \(q=1,2,\ldots,Q\) QHO modes.
The model is given by the Hamiltonian \(\hat{H}=\hat{H}_{\mathrm{S}}+\hat{H}_{\mathrm{B}}+\hat{H}_{\mathrm{SB}}\) with the system, the bath and the system-bath coupling terms defined as: \[\hat{H}_{\mathrm{S}}= \sum_{n}^{N}\varepsilon_{n}\hat{a}_{n}^{\dagger}\hat{a}_{n}+\sum_{n,m}^{n\neq m}J_{nm}\hat{a}_{n}^{\dagger}\hat{a}_{m}, \tag{1}\] \[\hat{H}_{\mathrm{B}}= \sum_{k,q}^{N,Q}\omega_{kq}\hat{b}_{kq}^{\dagger}\hat{b}_{kq}, \tag{2}\] \[\hat{H}_{\mathrm{SB}}= -\sum_{n}^{N}\hat{a}_{n}^{\dagger}\hat{a}_{n}\sum_{q}^{Q}\omega_{nq}g_{nq}\left(\hat{b}_{nq}^{\dagger}+\hat{b}_{nq}\right), \tag{3}\] with the reduced Planck's constant set to \(\hbar=1\). Here \(\varepsilon_{n}\) is the \(n\)-th molecule electronic excitation energy, \(J_{nm}\) denotes the resonant coupling between the \(n\)-th and \(m\)-th molecules, \(\omega_{kq}\) denotes the frequency of the \(q\)-th QHO in the \(k\)-th local reservoir, while \(g_{nq}\) is the coupling strength between the \(q\)-th oscillator in the \(n\)-th local reservoir and the \(n\)-th molecule. The operators \(\hat{a}_{n}^{\dagger}\) and \(\hat{a}_{n}\) represent the creation and annihilation operators for electronic excitations, respectively, while \(\hat{b}_{nq}^{\dagger}\) and \(\hat{b}_{nq}\) represent the bosonic creation and annihilation operators for QHOs. In addition, we implicitly couple the system-bath model to a secondary bath characterized by a fixed temperature, \(T\). The coupling between the secondary and primary baths occurs via scattering events that allow the system-bath model to exchange energy with the secondary bath and thermalize local reservoirs, as described below. A state of the system-bath model is given by the \(\mathrm{mD_{2}}\) wavefunction \[\ket{\Psi\left(t\right)}=\sum_{i,n}^{M,N}\alpha_{i,n}\left(t\right)\ket{n} \otimes\ket{\mathbf{\lambda}_{i}\left(t\right)}, \tag{4}\] where \(\alpha_{i,n}\left(t\right)\) is the \(i\)-th multiple complex amplitude associated with a singly excited electronic state \(\ket{n}\) localized on the \(n\)-th molecule, \(\ket{n}=\hat{a}_{n}^{\dagger}\ket{0}_{el}\), and \(\ket{0}_{el}\) is the electronic ground state. The complexity and accuracy of the \(\mathrm{mD_{2}}\) ansatz can be adjusted by varying the multiplicity number, \(M\). The states of QHO modes are represented by multi-dimensional coherent states \[\ket{\mathbf{\lambda}_{i}\left(t\right)}=\exp\sum_{k,q}^{N,Q}\left( \lambda_{i,kq}\left(t\right)\hat{b}_{kq}^{\dagger}-\lambda_{i,kq}^{\star} \left(t\right)\hat{b}_{kq}\right)\ket{0}_{\mathrm{vib}}, \tag{5}\] with \(\lambda_{i,kq}\left(t\right)\) being the \(i\)-th multiple complex displacement parameter, and \(\ket{0}_{\mathrm{vib}}=\otimes_{k,q}\ket{0}_{k,q}\) is the global vibrational ground state of all QHOs. The \(\mathrm{mD_{2}}\) wavefunction describes a state of the system-bath model as a superposition of \(M\) multi-dimensional coherent state terms, which allows it to represent a wide range of system-bath model states beyond the Born-Oppenheimer and Gaussian approximations. The thermalization algorithm for the \(\mathrm{mD_{2}}\) ansatz is realized by stochastic scattering events [36; 37] during the time evolution of the wavefunction. These events change the momenta, \(p_{kq}\), of _all_ \(Q\) QHO modes of the \(k\)-th local reservoir at once.
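As an illustration of the system part of the Hamiltonian, Eq. (1), the short sketch below constructs \(\hat{H}_{\mathrm{S}}\) in the single-excitation basis for the trimer parameters used later in Section IV and diagonalizes it; it is only meant to reproduce the exciton energies quoted there, before we return to the statistics of the scattering events.

```python
import numpy as np

# Trimer parameters from Section IV (all energies in cm^-1)
eps = [0.0, 250.0, 500.0]     # molecular excitation energies epsilon_n
J = 100.0                      # nearest-neighbour couplings J_12 = J_23, J_31 = 0

H_S = np.diag(eps)             # Eq. (1) in the single-excitation basis
H_S[0, 1] = H_S[1, 0] = J
H_S[1, 2] = H_S[2, 1] = J

E_exc, Phi_exc = np.linalg.eigh(H_S)   # exciton energies and eigenstates
print(E_exc)                            # approx. [-37.2, 250.0, 537.2] cm^-1
```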
We assume that the scattering probability, \(P_{k}\left(\theta,\tau_{\mathrm{sc}}\right)\), of \(\theta\) scattering events occurring per time interval, \(\tau_{\mathrm{sc}}\), with a scattering rate, \(\nu_{k}\), is given by a Poisson distribution \[P_{k}\left(\theta,\tau_{\mathrm{sc}}\right)=\frac{1}{\theta!}\left(\tau_{ \mathrm{sc}}\nu_{k}\right)^{\theta}\mathrm{e}^{-\tau_{\mathrm{sc}}\nu_{k}}. \tag{6}\] Numerically, Poisson statistics are realised by simulating Bernoulli processes [38; 39] in the limit of \(\tau_{\mathrm{sc}}\to 0\), while maintaining the condition that \(\nu_{k}\tau_{\mathrm{sc}}\ll 1\). To simulate the scattering events we divide the wavefunction propagation into intervals of equal length \(\tau_{\mathrm{sc}}\), \[t_{i}=\left(i\tau_{\mathrm{sc}},\left(i+1\right)\tau_{\mathrm{sc}}\right],\ i=0,1,\ldots. \tag{7}\] At the end of each interval, we flip a biased coin for each local reservoir, with the probability \(\nu_{k}\tau_{\mathrm{sc}}\) of landing _heads_. If the \(k\)-th coin lands _heads_, we change the momenta of _all oscillator modes of that \(k\)-th reservoir_; otherwise, no changes are made. A list of scattering moments, at which the numerical simulation is paused to perform the scatterings, can be precomputed prior to starting the simulation by drawing probabilities for all local reservoirs and all time intervals \(t_{i}\) from Eq. (6). We assume that during the scattering event the local bath which experiences the scattering acquires thermal-equilibrium kinetic energy. Such a state is given by a single coherent state for one specific QHO. In order to set the new momenta values of the scattered reservoir oscillator modes, we first project the \(\mathrm{mD_{2}}\) wavefunction of Eq. (4) to its single-multiple Davydov \(\mathrm{D_{2}}\) form \[\ket{\psi\left(t\right)}=\sum_{n}^{N}\beta_{n}\left(t\right)\ket{n}\otimes \ket{\tilde{\mathbf{\lambda}}\left(t\right)}, \tag{8}\] where \(\beta_{n}\) are the projected complex electronic amplitudes and \(\left|\tilde{\mathbf{\lambda}}\left(t\right)\right\rangle\) is the projected multi-dimensional coherent state, which is defined later. This follows the decoherence idea [40], where the macroscopic environment performs a collapse of the wavefunction into a set of preferred states, in our case, the electronic-vibrational states, \(\left|n\right\rangle\otimes\left|\tilde{\mathbf{\lambda}}\left(t\right)\right\rangle\).
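A minimal numerical sketch of the scattering machinery just described is given below: it precomputes the scattering moments of Eqs. (6)-(7) by Bernoulli trials and draws thermal momenta for the scattered modes, anticipating the Glauber-Sudarshan sampling of Eqs. (15)-(16) defined in the following paragraphs; the function names and the Boltzmann constant in wavenumber units are our own choices.

```python
import numpy as np

rng = np.random.default_rng()
K_B = 0.695  # Boltzmann constant in cm^-1 / K

def scattering_moments(t_max, tau_sc, nu_k, n_reservoirs):
    """For every local reservoir k, return the interval indices i at which a
    scattering event occurs (Bernoulli trials with probability nu_k * tau_sc)."""
    n_intervals = int(t_max / tau_sc)
    p = nu_k * tau_sc                 # must satisfy nu_k * tau_sc << 1
    hits = rng.random((n_reservoirs, n_intervals)) < p
    return [np.flatnonzero(hits[k]) for k in range(n_reservoirs)]

def thermal_momentum(omega_kq, T):
    """Draw a new momentum expectation value for one scattered mode from the
    Glauber-Sudarshan distribution, Eq. (15), and convert it via Eq. (16)."""
    var_im = 0.5 / (np.exp(omega_kq / (K_B * T)) - 1.0)  # variance of Im(lambda)
    return np.sqrt(2.0) * rng.normal(0.0, np.sqrt(var_im))
```

A drawn interval index \(i\) corresponds to pausing the propagation at the scattering moment \(\left(i+1\right)\tau_{\mathrm{sc}}\).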
The projected complex electronic amplitudes are equal to \[\beta_{n}\left(t\right)=\sum_{i}^{M}\alpha_{i,n}\left(t\right)\langle\tilde{ \mathbf{\lambda}}\left(t\right)|\mathbf{\lambda}_{i}\left(t\right)\rangle, \tag{9}\] while the projected multi-dimensional coherent state \[\left|\tilde{\mathbf{\lambda}}\left(t\right)\right\rangle=\exp\sum_{k,q}^{N,Q} \left(\tilde{\lambda}_{kq}\left(t\right)\hat{b}_{kq}^{\dagger}-\tilde{\lambda} _{kq}^{\star}\left(t\right)\hat{b}_{kq}\right)|0\rangle_{\text{vib}} \tag{10}\] is defined in terms of the complex displacements \[\tilde{\lambda}_{kq}\left(t\right)=\frac{1}{\sqrt{2}}\left(x_{kq}\left(t \right)+\text{i}p_{kq}\left(t\right)\right), \tag{11}\] where \(x_{kq}\left(t\right)\) and \(p_{kq}\left(t\right)\) are the QHO coordinate and momentum expectation values \[x_{kq}= \frac{1}{\sqrt{2}}\sum_{i,j,n}^{M,M,N}\alpha_{i,n}^{\star}\alpha_{j,n}\langle\mathbf{\lambda}_{i}|\mathbf{\lambda}_{j}\rangle\left(\lambda_{i,kq}^{\star}+\lambda_{j,kq}\right), \tag{12}\] \[p_{kq}= \frac{\text{i}}{\sqrt{2}}\sum_{i,j,n}^{M,M,N}\alpha_{i,n}^{\star}\alpha_{j,n}\langle\mathbf{\lambda}_{i}|\mathbf{\lambda}_{j}\rangle\left(\lambda_{i,kq}^{\star}-\lambda_{j,kq}\right), \tag{13}\] calculated from the \(\text{mD}_{2}\) ansatz, where \(\langle\mathbf{\lambda}_{i}|\mathbf{\lambda}_{j}\rangle\) is the overlap of two coherent states \[\langle\mathbf{\lambda}_{i}|\mathbf{\lambda}_{j}\rangle=\exp\sum_{k,q}^{N,Q}\left( \lambda_{i,kq}^{\star}\lambda_{j,kq}-\frac{1}{2}\left(|\lambda_{i,kq}|^{2}+| \lambda_{j,kq}|^{2}\right)\right). \tag{14}\] This completes the projection operation of the \(\text{mD}_{2}\) state, given by Eq. (4), into its simplified \(\text{D}_{2}\) form in Eq. (8). Once the projected wavefunction is deduced, we modify the momenta of the scattered oscillators by sampling the QHO diagonal density operator distribution in the coherent state representation at temperature \(T\), known as the Glauber-Sudarshan distribution [41; 42; 43; 44] \[\mathcal{P}\left(\tilde{\lambda}_{kq}\right)=\mathcal{Z}_{kq}^{-1}\exp\left( -\left|\tilde{\lambda}_{kq}\right|^{2}\left(\text{e}^{\frac{\omega_{kq}}{k_{\text{B}}T}}-1\right)\right). \tag{15}\] For scattered modes, we set the momenta values in Eq. (11) to \[p_{kq}\left(t\right)=\sqrt{2}\text{Im}\left(\tilde{\lambda}_{kq}^{\mathcal{P} }\right), \tag{16}\] where \(\tilde{\lambda}_{kq}^{\mathcal{P}}\) is a sample drawn from the Glauber-Sudarshan distribution, \(\mathcal{Z}_{kq}\) is the partition function and \(\omega_{kq}\) the frequency of the respective QHO, while \(k_{\text{B}}\) is the Boltzmann constant. During the scattering events, the coordinates, \(x_{kq}\), of both the scattered and non-scattered modes remain unchanged. Notice that the local baths which do not experience scattering remain unaffected by the scattering of other modes. Now that the wavefunction of the system-bath model after scattering is known (given by Eq. (8)), we rewrite it in the \(\text{mD}_{2}\) wavefunction form of Eq. (4) by populating the amplitudes and displacements of the first multiple, \(i=1\), as \[\alpha_{1,n}\left(t\right) =\beta_{n}\left(t\right), \tag{17}\] \[\lambda_{1,kq}\left(t\right) =\tilde{\lambda}_{kq}\left(t\right). \tag{18}\]
Amplitudes of the unpopulated multiples are set to \(\alpha_{j=2,\ldots,M,n}\left(t\right)=0\), while the unpopulated displacements are positioned in a layered hexagonal pattern around the populated coherent state [19] \[\lambda_{j=2,\ldots,M,kq}\left(t\right) =\lambda_{1,kq}\left(t\right)+\frac{1}{4}\left(1+\lfloor\beta\left(j\right)\rfloor\right)\mathrm{e}^{2\pi\mathrm{i}\left(\beta\left(j\right)+\frac{1}{12}\lfloor\beta\left(j\right)\rfloor\right)}, \tag{19}\] where \(\beta\left(j\right)=\left(j-2\right)/6\) is the coordination function and \(\lfloor x\rfloor\) is the floor function. The exact arrangement of the displacements of the unpopulated multiples is not critical as long as the distance in phase space to the populated multiple coherent state is not too large; otherwise, the initially unpopulated multiples will not contribute to further dynamics [17; 45]. Once the scattered \(\mathrm{mD_{2}}\) wavefunction is determined and the scattering event is finalised, further simulation of the \(\mathrm{mD_{2}}\) dynamics according to the equations of motion proceeds. This procedure generates a stochastic wavefunction trajectory, where the system-bath model at each time moment is described by a pure state, which is a single member of a statistical ensemble [42; 43]. The thermalized model dynamics are obtained by averaging observables over an ensemble of wavefunction trajectories, \(\gamma\), which differ by their initial amplitudes, \(\alpha_{i,n}\left(0\right)\), initial coherent state displacements, \(\lambda_{i,kq}\left(0\right)\), and sequence of scattering events. Ensemble averaging is performed in a parallelized Monte Carlo scheme.

## III Thermalized Fluorescence Spectra

Wavefunction trajectories allow calculation of an arbitrary observable. Calculation of the equilibrium fluorescence spectrum requires knowledge of the thermally equilibrated state of the excited model. The presented thermalization procedure allows one to obtain such a state and calculate the fluorescence spectrum. In general, the frequency-domain spectrum of a quantum system can be written as a Fourier transform \[F\left(\omega\right)=\text{Re}\int_{0}^{\infty}\text{d}t\text{e}^{i\omega t}S \left(t\right), \tag{20}\] of the corresponding time-domain response function, \(S\left(t\right)\). The fluorescence (FL) response function, \(S_{\text{fl}}\left(t\right)\), is a specific case of the more general time-resolved fluorescence (TRF) response function [2, 46] \[S_{\text{trf}}\left(\tau_{\text{eq}},t\right) =\frac{1}{\Gamma}\sum_{\gamma=1}^{\Gamma}\langle\Psi_{\text{G}} \left(0\right)|_{\gamma}\hat{\mu}_{-}\hat{\mathcal{V}}_{\text{E}}^{\dagger} \left(\tau_{\text{eq}}+t\right)\hat{\mu}_{+}\hat{\mathcal{V}}_{\text{G}}\left(t\right)\hat{\mu}_{-}\hat{\mathcal{V}}_{\text{E}}\left(\tau_{\text{eq}}\right)\hat{\mu}_{+}|\Psi_{\text{G}}\left(0\right)\rangle_{\gamma}, \tag{21}\] where \(\hat{\mathcal{V}}_{\text{E}}\) and \(\hat{\mathcal{V}}_{\text{G}}\) are the excited and ground state system-bath propagators \[\hat{\mathcal{V}}_{A}\left(t_{1}\right)|\Psi_{A}\left(t_{2}\right)\rangle=| \Psi_{A}\left(t_{1}+t_{2}\right)\rangle, \tag{22}\] while \(\hat{\mu}_{+}=\sum_{n}^{N}\left(\mathbf{e}\cdot\mathbf{\mu}_{n}\right)\hat{a}_{n}^{\dagger}\) and \(\hat{\mu}_{-}=\sum_{n}^{N}\left(\mathbf{e}\cdot\mathbf{\mu}_{n}\right)\hat{a}_{n}\) are the excitation creation and annihilation operators of the system [19], \(\mathbf{\mu}_{n}\) is the electronic transition dipole moment vector, and \(\mathbf{e}\) is the external field polarization vector.
\(\left|\Psi_{\text{G}}\left(0\right)\right\rangle_{\gamma}\) is a model ground state with an initial condition of the \(\gamma\)-th trajectory. The EOMs for propagating the \(\text{mD}_{2}\) wavefunction, as well as the approach to solving them, are described in detail in Refs. [19, 45]. \(S_{\text{trf}}\left(\tau_{\text{eq}},t\right)\) is a function of two times: the equilibration time, \(\tau_{\text{eq}}\), and the coherence time, \(t\). During the equilibration time, the system evolves in its excited state and, due to the system-bath interaction, relaxes to an equilibrium state. After this, during the coherence time, spontaneous emission occurs. We will apply thermalization during the equilibration time to facilitate the relaxation of the system-bath model into the lowest energy equilibrium state by removing excess thermal energy from the local reservoirs. We denote by \(\hat{\mathcal{G}}_{\text{E},T}\) the excited state propagator \(\hat{\mathcal{V}}_{\text{E}}\), but with the thermalization. Then the thermalized TRF (tTRF) response function can be written as \[\tilde{S}_{\text{trf}}\left(\tau_{\text{eq}},t\right) =\frac{1}{\Gamma}\sum_{\gamma=1}^{\Gamma}\langle\Psi_{\text{G}} \left(0\right)|_{\gamma}\hat{\mu}_{-}\hat{\mathcal{G}}_{\text{E},T}^{\dagger} \left(\tau_{\text{eq}}\right)\hat{\mathcal{V}}_{\text{G}}^{\dagger}\left(t \right)\hat{\mu}_{+}\hat{\mathcal{V}}_{\text{G}}\left(t\right)\hat{\mu}_{-}\hat{\mathcal{G}}_{\text{E},T}\left(\tau_{\text{eq}}\right)\hat{\mu}_{+}|\Psi_{\text{G}}\left(0\right)\rangle_{\gamma}. \tag{23}\] By considering the equilibration time to be long enough to reach the equilibrium state of the system-bath model, we define the FL response function to be \[S_{\text{fl}}\left(t\right)=\lim_{\tau_{\text{eq}}\to\infty}S_{\text{trf}} \left(\tau_{\text{eq}},t\right), \tag{24}\] and the thermalized fluorescence (tFL) response function as \[\tilde{S}_{\text{fl}}\left(t\right)=\lim_{\tau_{\text{eq}}\to\infty}\tilde{S} _{\text{trf}}\left(\tau_{\text{eq}},t\right). \tag{25}\] The spectra obtained using the fluorescence response function without and with thermalization will be compared in the next section. For the numerical simulation, the required equilibration time interval has to be deduced by increasing \(\tau_{\text{eq}}\) until the resulting fluorescence spectrum converges.

## IV Results

To investigate the thermalization algorithm for the \(\text{mD}_{2}\) ansatz, we will analyse the linear trimer model, which we previously used to study thermalization of the \(\text{D}_{2}\) ansatz [1]. The model consists of \(N=3\) coupled molecules, with excited state energies \(\varepsilon_{n}\) being equal to \(0,\;250,\;500\;\text{cm}^{-1}\), forming an energy funnel. The nearest neighbour couplings are set to \(J_{1,2}=J_{2,3}=100\;\text{cm}^{-1}\), \(J_{3,1}=0\). The electronic dipole moment vectors of the molecules are \(\mathbf{\mu}_{n}=\left(1,0,0\right)\) in the Cartesian coordinate system. This classifies the trimer as an H-type molecular aggregate [47]. QHOs of the local molecular reservoirs are characterized by the super-Ohmic [48] spectral density function \(C^{\prime\prime}\left(\omega\right)=\omega\left(\omega/\omega_{\text{c}} \right)^{s-1}\exp\left(-\omega/\omega_{\text{c}}\right)\) with an order parameter \(s=2\) and a cut-off frequency \(\omega_{\text{c}}=100\;\text{cm}^{-1}\). The QHO frequencies are \(\omega_{kq}=\omega_{0}+\left(q-1\right)\Delta\omega\), where the frequency offset is \(\omega_{0}=0.01\;\text{cm}^{-1}\).
The reorganization energy of each local reservoir is \(\Lambda_{k}=\sum_{q}\omega_{kq}g_{kq}^{2}=100\;\text{cm}^{-1}\). The scattering time step-size is set to \(\tau_{\text{sc}}=0.01\;\text{ps}\). Finally, the ensemble consists of \(900\) wavefunction trajectories, which we found to be sufficient to obtain converged model dynamics. The \(\text{mD}_{2}\) ansatz multiplicity \(M=5\) is used, as the results with a higher multiplicity quantitatively match the \(M=5\) case. We will be considering three bath models: the _dense bath_ model, where the spectral density function \(C^{\prime\prime}\left(\omega\right)\) is discretized into \(Q=75\) oscillators per local reservoir with a step-size of \(\Delta\omega=10\;\text{cm}^{-1}\); the _sparse bath_ model, where the number of modes is reduced by a factor of \(5\) to just \(Q=15\) oscillators per local reservoir with \(\Delta\omega=50\;\text{cm}^{-1}\); and the _sparse bath with thermalization_ model, where \(C^{\prime\prime}\left(\omega\right)\) is discretized according to the sparse bath model and thermalization is used. In the absence of the bath, the system has three single-excitation stationary exciton states with energies \(E_{1}^{\text{exc}}\approx-37.23\;\text{cm}^{-1}\), \(E_{2}^{\text{exc}}=250\;\text{cm}^{-1}\), \(E_{3}^{\text{exc}}\approx 537.23\;\text{cm}^{-1}\), satisfying the time-independent Schrödinger equation \[\hat{H}_{\text{S}}\Phi_{n}^{\text{exc}}=E_{n}^{\text{exc}}\Phi_{n}^{\text{exc}}, \tag{26}\] with the system Hamiltonian given by Eq. (1). The exciton eigenstates [3, 5], \(\Phi_{n}^{\text{exc}}\), have their excitations delocalized over multiple molecules [42]. Therefore, it is convenient to analyse molecular aggregate excitation relaxation dynamics in terms of excitons, as they are natural quasi-particles of the aggregate. We denote the probability of finding the aggregate in its \(n\)-th excitonic state as the population, given by \[\rho_{n}^{\text{exc}}\left(t\right)=\sum_{k,l,i,j}\left(\Phi_{k}^{\text{exc}} \right)_{n}^{\star}\left\langle\alpha_{i,k}^{\star}\left(t\right)\alpha_{j,l} \left(t\right)S_{i,j}\left(t\right)\right\rangle_{\text{th}}\left(\Phi_{l}^{ \text{exc}}\right)_{n}, \tag{27}\] where \(\left\langle\ldots\right\rangle_{\text{th}}\) is the averaging over an ensemble of wavefunction trajectories and \(S_{i,j}\left(t\right)=\langle\mathbf{\lambda}_{i}|\mathbf{\lambda}_{j}\rangle\) is the coherent state overlap of Eq. (14). First, we study the electronic excitation dynamics. The initial excitonic state populations correspond to the optically excited highest energy state: \(\rho_{3}^{\text{exc}}=1\), \(\rho_{1,2}^{\text{exc}}=0\), while the initial QHO displacements, \(\lambda_{i,kq}\left(0\right)\), are sampled from the Glauber-Sudarshan distribution in Eq. (15) to account for the initial temperatures of \(T_{k}=77\) K. In Fig. (1) we display the trimer model exciton state populations \(\rho_{n}^{\text{exc}}\left(t\right)\) and average temperatures [49] \(T_{k}\left(t\right)\) of the local reservoirs for all three bath models. The aggregate environment causes dephasing between excitonic states and induces irreversible population relaxation [2; 3]. The population dynamics of the dense bath model exhibits a sequential relaxation from the initially populated highest energy excitonic state to the lowest energy state via the intermediate state. Eventually, the population distribution reaches the equilibrium state. The majority of the excitation energy is transferred to the oscillators of the local reservoirs.
We observe an increase of the temperatures [49; 50; 51] due to the finite number of oscillators in the local reservoirs. An infinite number of oscillators would have to be included to maintain the temperature constant at the initial value. The initial rapid rise in temperature is due to oscillator reorganization in the aggregate's electronic excited state manifold, while the following slow rise is due to energy transfer from the system to the local reservoirs. In the sparse bath model, we observe that if the number of vibrational modes is reduced, the population dynamics become skewed due to an insufficiently dense representation of the spectral density function. Furthermore, the temperature increase is higher than in the dense bath model, which further changes the characteristics of the resulting equilibrium state. When the thermalization algorithm is applied to the sparse bath model with a scattering rate \(\nu_{k}=1.25\) ps\({}^{-1}\), the population dynamics are restored and qualitatively match those of the dense model. Although the initial temperatures of the local reservoirs exceed those of the dense bath model, they gradually decrease due to thermalization, and this rate can be adjusted by changing the scattering rate.

Figure 1: The exciton state populations, \(\rho_{n}^{\text{exc}}\left(t\right)\), and the average temperatures, \(T_{k}\left(t\right)\), of local reservoirs of the trimer with (a, b) the dense bath model, (c, d) the sparse bath model and (e, f) the sparse bath model with thermalization.

Next, we turn our attention to simulating the FL spectrum of the linear trimer with the dense bath model with a scattering rate \(\nu_{\rm k}=1\) ps\({}^{-1}\). The initial excitonic state population distribution is now calculated in terms of the system-field interaction, as described in Ref. [19]. In Fig. (2) we compare the TRF and tTRF spectra with increasing equilibration times \(\tau_{\rm eq}\).

Figure 2: (a) the TRF and (b) tTRF spectra of the trimer with the dense bath model, simulated with an increasing equilibration time \(\tau_{\text{eq}}\). The absorption spectrum is also shown. Vertical dashed lines show energies \(E^{\text{exc}}\) of the excitonic states.

When \(\tau_{\rm eq}=0\), both the TRF and tTRF spectra are equivalent and exactly match the absorption spectrum, which consists of three peaks due to transitions involving the combined excitonic-vibrational (vibronic) states and cannot be regarded as purely excitonic. For reference, vertical dotted lines indicate the energies \(E^{\rm exc}\) of the excitonic states. These do not match the three peak energies exactly, due to the system being coupled to the environment. By allowing equilibration to occur, \(\tau_{\rm eq}>0\), both the TRF and tTRF spectra show a peak intensity shift towards lower energies, as the excitation relaxes towards the equilibrated state during the equilibration time. After equilibrating for \(\tau_{\rm eq}=2\) ps, we find that both spectra have converged and do not change with longer \(\tau_{\rm eq}\). Therefore, the TRF and tTRF spectra at \(\tau_{\rm eq}=2\) ps can be considered as the FL and tFL spectra of the trimer model as defined in Eqs. (24), (25). Both spectra exhibit their highest intensities at the energies of the lowest vibronic states. However, the FL spectrum also has considerable intensities at the energies of the intermediate and highest vibronic states. Surprisingly, the higher energy peak is more intense than the intermediate peak.
The tFL spectrum intensities at these energies are negligible, which indicates that the thermalization allows the trimer model to reach a lower energy equilibrium state, which is no longer hindered by the excess thermal energy accumulation in the QHOs of the local reservoirs. In Fig. (3), we also compare the obtained FL and tFL spectra with the FL spectrum simulated using a previously proposed excited state numerical optimization approach [52, 53, 20]. It relies on finding the model's lowest energy excitonic state in terms of the mD\({}_{2}\) ansatz parameters and then applying thermal fluctuations to effectively generate the model in the lowest energy equilibrium state at the temperature of 77 K. We see that all three methods produce a similar lowest vibronic peak, but the tFL spectrum has a higher intensity tail towards the low energy side and almost no intensity at the energies of the intermediate and the highest vibronic states, while the FL spectrum simulated using the optimization approach has a small intensity at the energy of the intermediate vibronic states. The optimization approach spectrum more closely resembles that of the thermalized model than the non-thermalized spectrum.

Figure 3: The fluorescence spectra comparison of the trimer with the dense bath model obtained without thermalization, with thermalization and using the optimization approach. The equilibration time is \(\tau_{\rm eq}=2\) ps. The absorption spectrum is also shown. Vertical dashed lines show energies \(E^{\rm exc}\) of the excitonic states.

## V Discussion

Starting from an arbitrary non-equilibrium initial condition, a closed quantum system will not equilibrate due to energy conservation. A thermalization procedure is necessary to guarantee proper thermal equilibrium in the long run for _all_ bath oscillators. This requires introducing the concept of primary and secondary baths. In our model the primary bath is a part of the explicit quantum DOFs, while the secondary bath is a thermal reservoir with infinite thermal capacity, i.e., it keeps a constant temperature in any energy exchange process. In this case, the secondary bath cannot be described by mechanical equations - only statistical or thermodynamical concepts apply. Our statistical algorithm performs energy exchange between the primary and secondary baths using the statistical scattering idea: the primary bath state is reset to the thermally equilibrated state, thus giving up excess energy to, or drawing additional energy from, the secondary bath. This is a major extension of the explicit quantum TDVP theory - the extended model covers a broader range of phenomena: local heating and cooling, as well as bath oscillator dynamic localization, which are not available in the standard TDVP theory. In order to adapt the D\({}_{2}\) ansatz thermalization algorithm for the mD\({}_{2}\) ansatz, several extensions were made. During the time evolution of the system-bath model, the mD\({}_{2}\) ansatz multiples become correlated, leading to a non-Gaussian bath wavefunction. It becomes impossible to represent a new Gaussian wavefunction of the scattered QHO modes, sampled from Eq. (15), without changing the wavefunction of all the remaining non-scattered oscillators at the same time. Therefore, we chose to project the mD\({}_{2}\) into the D\({}_{2}\) ansatz whenever scattering occurs, allowing us to correctly represent the newly sampled Gaussian wavefunction of the scattered oscillators. This idea requires the consideration of a few aspects. The projected D\({}_{2}\) wavefunction accurately maintains the _average_ coordinates and momenta of the mD\({}_{2}\) ansatz QHO states, while variances and higher-order moments become affected.
This causes a variation of the excitation relaxation dynamics compared to the standard mD\({}_{2}\) ansatz. However, system-bath models mostly rely only on the _linear_ coupling between the system and the average coordinates of QHO modes, therefore, as seen in Fig. (1), the discrepancy is minimal. The higher-order couplings become necessary when anharmonic vibrational modes or changes to their frequencies upon excitation are considered [20; 54].

Figure 3: The fluorescence spectra comparison of the trimer with the dense bath model obtained without thermalization, with thermalization and using the optimization approach. The equilibration time is \(\tau_{\rm eq}=2\) ps. The absorption spectrum is also shown. Vertical dashed lines show energies \(E^{\rm exc}\) of the excitonic states.

To maintain the close correspondence to the standard \(\mathrm{mD_{2}}\) ansatz, the projection should not occur too often. This is because it takes time for the wavefunction after scattering to again become correlated between its many multiples, i.e., to take advantage of the unpopulated \(\mathrm{mD_{2}}\) ansatz multiples after projection. If the repopulation time is shorter than the time between projection operations, the model population dynamics become similar to those of the \(\mathrm{D_{2}}\) ansatz, even though the \(\mathrm{mD_{2}}\) ansatz is being used. The average time interval between projection operations is determined by the scattering rate \(\nu_{k}\), a property of the physical system, while the scattering time \(\tau_{\mathrm{sc}}\) is a parameter of the model and must be small enough to ensure the Bernoulli-to-Poisson statistics transition condition, \(\nu_{k}\tau_{\mathrm{sc}}\ll 1\). To increase the average time between projection operations, we adopt a coarser scattering approach for the \(\mathrm{mD_{2}}\) ansatz as compared to the \(\mathrm{D_{2}}\) ansatz. Instead of considering scattering events of individual oscillators, we consider events in which all oscillators of a given local reservoir are scattered at once, so that only a single projection operation is required to scatter many oscillators. This approach allows the \(\mathrm{mD_{2}}\) ansatz to continue utilizing all its multiples for the improved accuracy over the \(\mathrm{D_{2}}\) ansatz, while reducing the number of explicitly modelled oscillators needed to maintain local reservoirs' temperatures close to their initial values, thereby reducing the numerical cost. Simulating the population dynamics of the trimer model with the \(\mathrm{mD_{2}}\) ansatz took an average of 166 minutes per trajectory using the dense bath model, but only 1.3 minutes using the sparse bath model and 2 minutes using the sparse bath model with thermalization. The computational overhead of thermalization is small compared to the overall time savings when switching from the dense bath to the sparse bath. The numerical cost reduction is also greater for the \(\mathrm{mD_{2}}\) ansatz than for the \(\mathrm{D_{2}}\) ansatz in Ref. [1], because the \(\mathrm{mD_{2}}\) ansatz EOMs constitute an _implicit_ system of differential equations, which requires a more involved, two-step numerical approach to find a solution [45; 19]. By considering fewer oscillators in each local reservoir, simulations of the dynamics and spectroscopic signals of aggregates made up of more molecules become possible. Computing a single trajectory of the tTRF response function in Eq. (23) with an equilibration time of \(\tau_{\mathrm{eq}}=2\) ps took an average of 79 minutes. 
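To make the whole-reservoir scattering bookkeeping described above concrete, a minimal sketch follows. It assumes that scattering events are drawn as independent Bernoulli trials per local reservoir and per scattering interval with probability \(\nu_{k}\tau_{\mathrm{sc}}\) (which approaches Poisson statistics when \(\nu_{k}\tau_{\mathrm{sc}}\ll 1\)), and that the scattered oscillators' average coordinates and momenta are redrawn from a Gaussian thermal distribution whose width is set by the reservoir temperature. The sampling distribution used here is only a stand-in for Eq. (15), and the mD\({}_{2}\)\(\to\)D\({}_{2}\) projection and the equations of motion are not reproduced; all names and units are illustrative assumptions.

```python
import numpy as np

def thermal_width(omega, temperature, k_B=0.695):
    # Assumed Gaussian width of the thermal distribution of dimensionless
    # oscillator displacements (k_B in cm^-1/K, omega in cm^-1); a simple
    # stand-in for the distribution of Eq. (15), not the paper's expression.
    return np.sqrt(k_B * temperature / omega)

def propagate_with_scattering(x, p, omega, nu_k, tau_sc, temperature, n_steps, rng):
    """x, p, omega: (n_reservoirs, n_modes) average coordinates, momenta, frequencies."""
    prob = nu_k * tau_sc                       # must satisfy nu_k * tau_sc << 1
    for _ in range(n_steps):
        # ... the coupled system-bath equations of motion would be integrated
        # over one scattering interval tau_sc here (omitted in this sketch) ...
        hit = rng.random(x.shape[0]) < prob    # one Bernoulli trial per local reservoir
        for k in np.flatnonzero(hit):
            # Coarse scattering: all oscillators of reservoir k are reset at once,
            # corresponding to a single mD2 -> D2 projection in the full model.
            sigma = thermal_width(omega[k], temperature)
            x[k] = rng.normal(0.0, sigma)
            p[k] = rng.normal(0.0, sigma)
    return x, p

rng = np.random.default_rng(1)
omega = np.full((3, 50), 300.0)                # 3 local reservoirs, 50 modes each (illustrative)
x = np.zeros_like(omega)
p = np.zeros_like(omega)
x, p = propagate_with_scattering(x, p, omega, nu_k=1.25, tau_sc=0.01,
                                 temperature=77.0, n_steps=2000, rng=rng)
```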
The previously proposed optimization method [20] for simulating FL spectra does not require propagation during the equilibration time interval of the TRF response function and has to be performed only once, but it took 193 minutes. In general, we find that the computation of the tTRF is more reliable and numerically stable. The optimization approach struggles to consistently find the lowest energy excitonic state of the model due to its heuristic nature, requiring many attempts to find the solution and eventually having to choose the lowest energy one. This is particularly apparent when a wide range of oscillator frequencies is included. For elementary system-bath models without Hamiltonian parameter disorder, the optimization approach can be a good starting point for FL spectra simulation. However, a more accurate spectrum will most likely be obtained using the tTRF approach. For models with Hamiltonian disorder, e.g., static molecule excitation energy disorder [55; 56; 57], the optimization approach would require finding the model's lowest energy excitonic state for each realization of the Hamiltonian, negating its advantage of having to perform the optimization procedure only once. In conclusion, the presented thermalization algorithm for the numerically exact \(\mathrm{mD_{2}}\) ansatz makes it possible to reduce the numerical cost of system-bath model simulations by explicitly including fewer bath oscillators, while maintaining correspondence with the exact relaxation dynamics. The thermalization algorithm efficiently controls molecular heating effects due to the reduced number of oscillators. Furthermore, the application of thermalization to the simulation of fluorescence spectra demonstrates lower computation time, greater numerical stability and higher accuracy compared to the numerical optimization approach.

## Conflicts of interest

There are no conflicts of interest to declare.

###### Acknowledgements.

We thank the Research Council of Lithuania for financial support (grant No: S-MIP-23-48). Computations were performed on resources at the High Performance Computing Center, "HPC Sauletekis" in Vilnius University Faculty of Physics.
2307.04723
Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer
Jet tagging is a crucial classification task in high energy physics. Recently the performance of jet tagging has been significantly improved by the application of deep learning techniques. In this study, we introduce a new architecture for jet tagging: the Particle Dual Attention Transformer (P-DAT). This novel transformer architecture stands out by concurrently capturing both global and local information, while maintaining computational efficiency. Regarding the self attention mechanism, we have extended the established attention mechanism between particles to encompass the attention mechanism between particle features. The particle attention module computes particle level interactions across all the particles, while the channel attention module computes attention scores between particle features, which naturally captures jet level interactions by taking all particles into account. These two kinds of attention mechanisms can complement each other. Further, we incorporate both the pairwise particle interactions and the pairwise jet feature interactions in the attention mechanism. We demonstrate the effectiveness of the P-DAT architecture in classic top tagging and quark-gluon discrimination tasks, achieving competitive performance compared to other benchmark strategies.
Minxuan He, Daohan Wang
2023-07-10T17:33:54Z
http://arxiv.org/abs/2307.04723v3
# Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer ###### Abstract Jet tagging is a crucial classification task in high energy physics. Recently the performance of jet tagging has been significantly improved by the application of deep learning techniques. In this work, we propose Particle Dual Attention Transformer for jet tagging, a new transformer architecture which captures both global information and local information simultaneously. Based on the point cloud representation, we introduce the Channel Attention module to the point cloud transformer and incorporates both the pairwise particle interactions and the pairwise jet feature interactions in the attention mechanism. We demonstrate the effectiveness of the P-DAT architecture in classic top tagging and quark-gluon discrimination tasks, achieving competitive performance compared to other benchmark strategies. + Footnote †: journal: Eur. Phys. J. C e1e-mail: [email protected] e2e-mail: [email protected] ## 1 Introduction In high-energy physics experiments, tagging jets, which are collimated sprays of particles produced from high-energy collisions, is a crucial task for discovering new physics beyond the Standard Model. Jet tagging involves distinguishing boosted heavy particle jets from those of QCD initiated quark/gluon jets. Since jets initiated by different particles exhibit different characteristics, two key issues arise: how to represent a jet and how to analyze its representation. Conventionally, jet tagging has been performed using hand-crafted jet substructure variables based on physics motivation. Nevertheless, these methods can often fall short in capturing intricate patterns and correlations present in the raw data. Over the past decade, deep learning approaches have been extensively adopted to enhance the jet tagging performance[20]. Various jet representations have been proposed, including image-based representation using Convolutional Neural Network (CNN)[2; 8; 11; 18; 21; 22; 26; 32], sequence-based representation with Recurrent Neural Network[1; 10], tree-based representation with Recursive Neural Network[24; 7] and graph-based representation with Graph Neural Network (GNN)[3; 4; 15; 17; 25; 33]. More recently, One representation approach that has gained significant attention is to view the set of constituent particles inside a jet as points in a point cloud. Point clouds are used to represent a set of objects in an unordered manner, described in a defined space, and are commonly utilized in various fields such as self-driving vehicles, robotics, and augmented reality. By adopting this approach, each jet can be interpreted as a particle cloud, which treats a jet as a permutation-invariant set of particles, allowing us to extract meaningful information with deep learning method. Based on the particle cloud representation, several deep learning architectures have been proposed, including Deep Set Framework[19], ABCNet[27], LorentzNet[15] and ParticleNet[30]. The Deep Set Framework provides a comprehensive explanation of how to parametrize permutation invariant functions for inputs with variable lengths, taking into consideration both infrared and collinear safety. Furthermore, it offers valuable insights into the nature of the learned features by neural networks. ParticleNet adapts the Dynamic Graph CNN architecture[37], while ABCNet takes advantage of attention mechanisms to enhance the local feature extraction. 
LorentzNet focuses more on incorporating inductive biases derived from physics principles into the architecture design, utilizing an efficient Minkowski dot product attention mechanism. All of these architectures realize substantial performance improvement on top tagging and quark/gluon discrimination benchmarks. Over the past few years, attention mechanisms have emerged as a powerful tool for capturing intricate patterns in sequential and spatial data. The Transformer architecture[35], which leverages attention mechanisms, has been highly successful in natural language processing and computer vision tasks such as image recognition. Notably, the Vision Transformer (ViT)[13; 38], initially designed for computer vision tasks, has demonstrated state-of-the-art performance on various image classification benchmarks. However, when dealing with point cloud representations, which inherently lack a specific order, modifications to the original Transformer structure are required to establish a self-attention operation that is invariant to input permutations. To address these issues, a recent approach called Point Cloud Transformer (PCT)[16; 28] was proposed, which entails passing input points through a feature extractor to create a high-dimensional representation of particle features. The transformed data is then passed through a self-attention module that introduces attention coefficients for each pair of particles. To evaluate PCT's effectiveness in the context of a high-energy physics task, specifically jet tagging, PCT was compared with other benchmark implementations using three different public datasets. PCT shares a similar concept with ABCNet's attention mechanism, employing a self-attention layer to capture the importance of relationships between all particles in the dataset. Another notable approach is the Particle Transformer[31], which incorporates pairwise particle interactions within the attention mechanism and obtains higher tagging performance than a plain Transformer, surpassing the previous state-of-the-art, ParticleNet, by a large margin. In recent studies, the Dual Attention Vision Transformer (DaViT)[12] has exhibited promising results for image classification. The DaViT introduces the dual attention mechanism, comprising spatial window attention and channel group attention, enabling the effective capture of both global and local features in images. These two self-attentions are demonstrated to complement each other. In this paper, we introduce the Channel Attention module to the Point Cloud Transformer and incorporate the pairwise particle interaction and the pairwise jet feature interaction to build a new network structure, called P-DAT. On the one hand, the Channel Attention module can grasp comprehensive spatial interactions and representations by taking into account all spatial locations while computing attention scores between channels. In this way, the P-DAT can combine both the local information and global information of the jet representation for jet tagging. On the other hand, the pairwise interaction features designed from physics principles can modify the dot-product attention weights, thus increasing the expressiveness of the attention mechanism. We evaluate the performance of P-DAT on top tagging and quark/gluon discrimination tasks and compare its performance against other baseline models. Our analysis demonstrates the effectiveness of P-DAT in jet tagging and highlights its potential for future applications in high-energy physics experiments. 
This article is organized as follows. In Section 2, we introduce the Particle Dual Attention Transformer for jet tagging and describe the key features of the model architecture. We also provide details of the training and validation process. In Section 3, we present and discuss the numerical results obtained for the top tagging task and the quark/gluon discrimination task, respectively. Finally, our conclusions are presented in Section 4.

## 2 Model Architecture

The focus of this paper is to introduce the Particle Dual Attention Transformer (P-DAT), which serves as a new benchmark approach for jet tagging. Based on the point cloud representation, we regard each constituent particle as a point in the \(\eta-\phi\) space and the whole jet as a point cloud. The whole model architecture is presented in Figure 1. The P-DAT architecture is composed of 5 main building blocks, namely the feature extractor, the particle self attention layers, the channel self attention layers, the class attention layers and the MLP. In order to process a jet of P particles, the P-DAT requires three inputs: the jet dataset, the particle interaction matrix and the jet feature interaction matrix derived from the kinematic information of each particle inside the jet. First of all, the feature extractor is employed to transform the input jet dataset from \(P\times 10\) to a higher dimensional representation \(P\times N\). As illustrated in Fig.2(left), the feature extractor block contains two parts. The first part incorporates an EdgeConv operation[36] followed by 3 two-dimensional convolutional (Conv2D) layers and an average pooling operation across all neighbors of each particle. The EdgeConv operation adopts a k-nearest neighbors approach with \(k=20\) to define a vicinity for each particle inside the jet based on \(\Delta R=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}\) in the \(\eta-\phi\) space to extract the local information for each particle. To ensure the permutation invariance among particles, all convolutional layers are implemented with stride and kernel size of 1 and are followed by a batch normalization operation and GeLU activation. The second part of the feature extraction consists of a 3-layer MLP with \((128,128,128)\) nodes in each layer with GELU nonlinearity to handle the negative inputs. BN and LN operations are used for normalization between layers. Finally, the outputs from these two parts are concatenated to obtain the final output. This approach enables the extraction of input particle embeddings through both linear projection and local neighborhood mapping. Furthermore, we introduce a particle interaction matrix and a channel interaction matrix, both of which are designed based on physics principles and incorporated into the self attention module. For the particle interaction matrix, we use a 3-layer 2D convolution with (32,16,8) channels with stride and kernel size of 1 to map the particle interaction matrix to a new embedding \(P\times P\times N_{h}\), where \(N_{h}\) is the number of heads in the particle self attention module which will be explained later. As for the channel interaction matrix, an upsampling operation and a 3-layer 2D convolution are applied to map the channel interaction matrix to a higher dimensional representation \(N\times N\), with \(N\) the input particle embedding dimension. The second primary building block is the particle self-attention block, which aims to establish the relationship between all particles within the jet using an attention mechanism. 
As presented in Fig.3, three matrices, which are called query (Q), key (K), and value (V), are built from linear transformations of the original inputs. Attention weights are computed by matrix multiplication between Q and K, representing the matching between them. Similar to the Particle Transformer work[31], we incorporate the particle interaction matrix \(U_{1}\) as a bias term to enhance the scaled dot-product attention. This incorporation of particle interaction features, designed from physics principles, modifies the dot-product attention weights, thereby enhancing the expressiveness of the attention mechanism. The same \(U_{1}\) is shared across the two particle attention blocks. After normalization, these attention weights reflect the weighted importance between each pair of particles. The self-attention is then obtained by the weighted elements of V, which result from multiplying the attention weights and the value matrix. It is important to note that \(P\) represents the number of particles, and \(N\) denotes the total number of features. The attention weights are computed as: \[\mathcal{A}(\mathbf{Q},\mathbf{K},\mathbf{V}) = \text{Concat}(\text{head}_{1},\dots,\text{head}_{N_{h}})\] \[\text{where }\text{head}_{i} = \text{Attention}(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}) \tag{1}\] \[= \text{softmax}\left[\frac{\mathbf{Q}_{i}(\mathbf{K}_{i})^{\text{T}}}{\sqrt{C_{h}}}+\mathbf{U}_{1}\right]\mathbf{V}_{i}\] where \(\mathbf{Q}_{i}=\mathbf{X}_{i}\mathbf{W}_{i}^{Q}\), \(\mathbf{K}_{i}=\mathbf{X}_{i}\mathbf{W}_{i}^{K}\), and \(\mathbf{V}_{i}=\mathbf{X}_{i}\mathbf{W}_{i}^{V}\) are \(\mathbb{R}^{P\times N_{h}}\) dimensional visual features with \(N_{h}\) heads, where \(\mathbf{X}_{i}\) denotes the \(i_{th}\) head of the input feature and \(\mathbf{W}_{i}\) denotes the projection weights of the \(i_{th}\) head for \(\mathbf{Q},\mathbf{K},\mathbf{V}\), and \(N=C_{h}*N_{h}\).

Figure 1: Illustration of the whole model architecture.

Figure 2: Illustration of the Feature extractor block and the MLP block.

The particle attention block incorporates a LayerNorm (LN) layer both before and after the multi-head attention module. A two-layer MLP, with LN preceding each linear layer and GELU nonlinearity in between, follows the multi-head attention module. Residual connections are applied after the multi-head attention module and the two-layer MLP. In our study, we set \(N_{h}=8\) and \(N=64\). The third main building block is the channel self-attention block, as shown in Fig.4. Unlike the particle self-attention block, this block applies attention mechanisms to the jet features, enabling interactions among the channels. To capture global information in the particle dimension, we set the number of heads to 1, where each transposed token represents global information. Consequently, the channel tokens interact with global information across all channels. This global channel attention mechanism is defined as follows: \[\mathcal{A}(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i})=\text{softmax}\left[\frac{\mathbf{Q}_{i}^{\mathrm{T}}\mathbf{K}_{i}}{\sqrt{C}}+\mathbf{U}_{2}\right]\mathbf{V}_{i}^{T} \tag{2}\] where \(\mathbf{Q}_{i},\mathbf{K}_{i},\mathbf{V}_{i}\in\mathbb{R}^{C\times P}\) are channel-wise jet-level queries, keys, and values. Note that although we transpose the tokens in the channel attention block, the projection layers \(\mathbf{W}\) and the scaling factor \(\frac{1}{\sqrt{C}}\) are computed along the channel dimension, rather than the particle dimension. 
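For concreteness, the following is a minimal PyTorch sketch of the two biased attention operations of Eqs. (1) and (2), under one consistent reading of the shapes (particle embeddings of shape \(P\times N\), an \(N_{h}\times P\times P\) particle-interaction bias and an \(N\times N\) channel-interaction bias). It omits the LayerNorm, MLP and residual wrappers, the variable names are ours, and it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def particle_attention(x, u1, wq, wk, wv, n_heads):
    """x: (P, N) particle embeddings; u1: (n_heads, P, P) pairwise interaction bias, Eq. (1)."""
    P, N = x.shape
    ch = N // n_heads
    # project, then split the feature dimension into heads: (n_heads, P, ch)
    q = (x @ wq).view(P, n_heads, ch).transpose(0, 1)
    k = (x @ wk).view(P, n_heads, ch).transpose(0, 1)
    v = (x @ wv).view(P, n_heads, ch).transpose(0, 1)
    attn = F.softmax(q @ k.transpose(-1, -2) / ch ** 0.5 + u1, dim=-1)  # (n_heads, P, P)
    out = attn @ v                                                      # (n_heads, P, ch)
    return out.transpose(0, 1).reshape(P, N)                            # concatenate heads

def channel_attention(x, u2, wq, wk, wv):
    """x: (P, N); u2: (N, N) channel interaction bias; single head over channels, Eq. (2)."""
    P, N = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv                     # (P, N) each
    attn = F.softmax(q.t() @ k / N ** 0.5 + u2, dim=-1)  # (N, N) channel-channel weights
    return (attn @ v.t()).t()                            # back to (P, N)

# toy usage with P = 100 particles, N = 64 features and 8 heads
P, N, H = 100, 64, 8
x = torch.randn(P, N)
wq, wk, wv = (torch.randn(N, N) * 0.1 for _ in range(3))
y = particle_attention(x, torch.zeros(H, P, P), wq, wk, wv, H)
y = channel_attention(y, torch.zeros(N, N), wq, wk, wv)
```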
Similar to the particle self-attention block, we incorporate the channel interaction matrix \(U_{2}\) as a bias term to enhance the scaled dot-product attention. This incorporation of jet channel interaction features, designed based on physics principles, modifies the dot-product attention weights, thereby enhancing the expressiveness of the attention mechanism. The same \(U_{2}\) matrix is shared across the two channel attention blocks. After normalization, the attention weights indicate the weighted importance of each pair of jet features. The self-attention mechanism produces the weighted elements of V, obtained by multiplying the attention weights and the value matrix. Additionally, the channel attention block includes a LayerNorm (LN) layer before and after the attention module, followed by a two-layer MLP. Each linear layer is preceded by an LN layer, and a GELU nonlinearity is applied between them. Residual connections are added after the channel attention module and the two-layer MLP. The fourth main building block is the class attention block, which differs from the particle self-attention block by computing attention between a global class token and all particles using the standard Multi-Head Attention (MHA) mechanism. This class attention mechanism is defined as follows: \[Q =W_{q}x_{\mathrm{class}}+b_{q}, \tag{3}\] \[K =W_{k}z+b_{k},\] \[V =W_{v}z+b_{v},\] \[z =[x_{\mathrm{class}},x^{L}]\] where \(z=[x_{class},x^{L}]\) represents the concatenation of the class token and the particle embedding after the last particle attention block, denoted as \(x^{L}\). In the first class attention block, the class token is obtained by performing max pooling on the output of the second channel attention block across all particles. In the second class attention block, the class token is obtained by performing average pooling on the output of the second channel attention block across all particles.

Figure 3: Illustration of the Particle Multi-head Attention Block.

Figure 4: Illustration of the Channel Attention Block.

Furthermore, the class attention block includes a LayerNorm (LN) layer before and after the attention module, followed by a two-layer MLP. Each linear layer is preceded by an LN layer, and a GELU nonlinearity is applied between them. Residual connections are added after the class attention module and the two-layer MLP. The last main building block is a 3-layer MLP with (448, 64, 2) nodes, as shown in Fig.2(right). First, the outputs of the particle attention blocks and channel attention blocks are concatenated, followed by an average pooling operation across all particles. Subsequently, the outputs of the class attention blocks are concatenated. Finally, these two sets of outputs are concatenated and fed into the MLP. In addition, a batch normalization operation and the GeLU activation function are applied to the second layer, together with a dropout rate of 0.5. The last layer employs a softmax operation to produce the final classification scores. In summary, the P-DAT is composed of one feature extractor, two particle attention blocks, two channel attention blocks, two class attention blocks and one MLP. The feature extractor's output serves as the input for the first particle attention block. Subsequently, we alternate between the particle attention block and the channel attention block to capture both local fine-grained and global features. A dropout rate of 0.1 is applied to all particle attention blocks and channel attention blocks. 
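The class attention step just described can be sketched as follows: the class token is obtained by pooling the particle embeddings (max pooling in the first class attention block, average pooling in the second), concatenated with the particle sequence, and used as the only query of a standard multi-head attention. The sketch uses `torch.nn.MultiheadAttention` for brevity instead of the explicit projections of Eq. (3), again drops the LayerNorm/MLP/residual wrappers, and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class ClassAttentionBlock(nn.Module):
    """A pooled class token attends to [class token; all particles] (sketch of Eq. (3))."""

    def __init__(self, embed_dim=64, num_heads=8, pool="max"):
        super().__init__()
        self.pool = pool
        self.mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, x):                                    # x: (batch, P, N)
        if self.pool == "max":
            cls = x.max(dim=1, keepdim=True).values          # (batch, 1, N)
        else:
            cls = x.mean(dim=1, keepdim=True)                # (batch, 1, N)
        z = torch.cat([cls, x], dim=1)                       # z = [x_class, x^L]
        out, _ = self.mha(query=cls, key=z, value=z)         # class token is the only query
        return out.squeeze(1)                                # (batch, N) jet-level summary

summary = ClassAttentionBlock(embed_dim=64, num_heads=8, pool="max")(torch.randn(2, 100, 64))
```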
As demonstrated in Ref.[12], these two blocks complement each other: the channel attention provides a global receptive field in the particle dimension, enabling the extraction of high-level global jet representations by dynamically fusing features across global channel tokens. On the other hand, the particle attention refines local representations by facilitating fine-grained interactions among all particles, thereby aiding in the modeling of global information in the channel attention. After the second channel attention block, two class attention blocks which take the max pooling and average pooling on the output of the second channel attention block as class token are applied to compute the attention between a global class token and all particles using the standard Multi-Head Attention (MHA) mechanism. Finally, the two sets of outputs are concatenated and fed into the MLP and the resulting representation is normalized using a softmax operation. The model architecture is implemented in the PYTORCH deep learning framework with the CUDA platform. The training and evaluation steps are accelerated using a NVIDIA GeForce RTX 3070 GPU for acceleration. We adopt the binary cross-entropy as the loss function. To optimize the model parameters, we employ the AdamW optimizer[23] with an initial learning rate of 0.0004, which is determined based on the gradients calculated on a mini-batch of 64 training examples. In order to address the memory issue caused by huge input data, we implemented a strategy of continuously importing and deleting data during the training process. The network is trained up to 100 epochs, with the learning rate decreasing by a factor of 2 every 10 epochs to a minimal of \(10^{-6}\). In addition, we employ the early-stopping technique to prevent over-fitting. ## 3 Jet Classification The P-DAT architecture is designed to process input data consisting of particles inside the jets. To ensure consistency and facilitate meaningful comparisons, we first sorted the particles inside the jets by transverse momentum and a maximum of 100 particles per jet are employed. The input jet is truncated if the particle number inside the jet is more than 100 and the input jet is zero-padded up to the 100 if fewer than 100 particles are present. This selection of 100 particles is sufficient to cover the vast majority of jets contained within all datasets, ensuring comprehensive coverage. Each jet is characterized by the 4-momentum of its constituent particles. Based on this information, we reconstructed 10 features for each particle. Additionally, for the quark-gluon dataset, we included the Particle Identification (PID) information as the 11-th feature. These features are as follows: \[\left\{\begin{aligned} &\log\,E,\,\log\,|p_{x}|,\,\log\,|p_{y}|,\,\log\,|p_{z}|, \\ &\log\,p_{\rm T},\,\frac{p_{\rm T}}{p_{\rm TJ}},\,\frac{E}{E_{J}},\,\,\Delta\eta\,\,\Delta\phi,\,\,\Delta R,\,\,\rm PID\end{aligned} \right\}. 
\tag{4}\] For the pairwise particle interaction matrix, based on Refs.[14; 31], we calculated the following 5 features for any pair of particles a and b with four-momentum \(p_{a}\) and \(p_{b}\) as the sum of all the particles' four-momentum inside the particle a and particle b, respectively: \[\begin{aligned} \Delta R&=\sqrt{(y_{a}-y_{b})^{2}+( \phi_{a}-\phi_{b})^{2}},\\ k_{\rm T}&=\min(p_{{\rm T},a},p_{{\rm T},b})\Delta, \\ z&=\min(p_{{\rm T},a},p_{{\rm T},b})/(p_{{\rm T},a}+p_ {{\rm T},b}),\\ m^{2}&=(E_{a}+E_{b})^{2}-\|\mathbf{p}_{a}+\mathbf{ p}_{b}\|^{2},\\ \Delta p_{\rm T}&=p_{{\rm T},a}-p_{{\rm T},b}\end{aligned} \tag{5}\] where \(y_{i}\) represents the rapidity, \(\phi_{i}\) denotes the azimuthal angle, \(p_{{\rm T},i}=(p_{x,i}^{2}+p_{y,i}^{2})^{1/2}\) denotes the transverse momentum, and \(\mathbf{p}_{i}=(p_{x,i},p_{y,i},p_{z,i})\) represents the momentum 3-vector and \(\|\cdot\|\) is the norm, for \(i=a\), \(b\). As mentioned in Ref.[31], we take the logarithm and use (\(\ln\Delta,\ln k_{\rm T},\ln z,\ln m^{2},\ln p_{\rm T}\)) as the interaction features for each particle pair to avoid the long tail problem. Apart from the 5 interaction features, we add one more feature for the Quark-Gluon benchmark dataset, defined as \(\delta_{i,j}\), where i and j are the PID of the particles a and b. For the pairwise jet feature interaction matrix, we selected 10 typical jet variables. Besides, for the quark-gluon dataset, we incorporated the 11th feature based on the Particle Identification (PID) information. The list of all jet variables used in this study is presented below. And the interaction matrix is constructed based on a straightforward yet effective ratio relationship, as illustrated in Table.1. \[\left\{\begin{matrix}\mbox{E},\ p_{x},\ p_{y},\ p_{z},\ p_{\rm T},\ \sum p_{Tf},\ \sum E_{f},\\ \overline{\Delta\eta},\ \overline{\Delta\phi},\ \overline{\Delta R},\ \mbox{PID}\end{matrix}\right\}. \tag{6}\] To provide a clearer explanation of the concept of the jet feature pairwise interaction matrix, we will now present a detailed description. The first 4 variables represent the four-momentum of the input jet. Specifically, \(p_{\rm T}\) denotes the transverse momentum of the input jet, while \(\sum p_{Tf}\) and \(\sum E_{f}\) represent the sum of the transverse momentum fractions and the energy fractions of all the constituent particles inside the input jet, respectively. Additionally, \(\overline{\Delta\eta}\), \(\overline{\Delta\phi}\) and \(\overline{\Delta R}\) correspond to the transverse momentum weighted sum of the \(\Delta\eta\), \(\Delta\phi\), \(\Delta R\) of all the constituent particles inside the input jet, respectively. Here \(\Delta\eta\), \(\Delta\phi\) and \(\Delta R\) refer to the angular distances between each constituent particle and the input jet. Furthermore, PID represents the particle identification associated with the specific particle whose sum of transverse momentum accounts for the largest proportion of the entire jet transverse momentum. The entire jet feature pairwise interaction matrix is defined as a symmetric block matrix with diagonal ones. For convenience, we named {E, \(p_{x}\), \(p_{y}\), \(p_{z}\), \(p_{\rm T}\), \(\sum p_{Tf}\), \(\sum E_{f}\)} as variable set 1 and {\(\overline{\Delta\eta},\ \overline{\Delta\phi},\ \overline{\Delta R}\)} as variable set 2. We build the pairwise interactions among variable set 1 and variable set 2, respectively. 
Firstly, we employ a ratio relationship to define the interaction between E and {\(p_{x},\ p_{y},\ p_{z},\ p_{\rm T}\)} and the interaction between \(p_{T}\) and {\(p_{x},\ p_{y}\)}, with no interaction between orthogonal components. Additionally, we establish that the interaction between \(\sum E_{f}\) and E is 1, while no interactions exist between \(\sum E_{f}\) and any other variables, except for E and PID. Similarly, we define the interaction between \(\sum p_{Tf}\) and \(p_{T}\) as 1, with no interactions between \(\sum p_{Tf}\) and any other variables, except for \(p_{T}\) and PID. Secondly, we apply a ratio relationship to define the interaction between \(\overline{\Delta R}\) and {\(\overline{\Delta\eta},\overline{\Delta\phi}\)}, while no interaction is specified between {\(\overline{\Delta\eta}\) and \(\overline{\Delta\phi}\)}. Finally, we determine the interactions between PID and all other variables as the ratio of the sum of the corresponding variables of the particles associated with the PID to the variable of the jet. ### Quark/Gluon Discrimination The Quark-Gluon benchmark dataset[19] was generated with Pythia8 without detector simulation. It comprises of quark-initiated samples \(q\overline{q}\to Z\rightarrow\nu\overline{\nu}+(u,d,s)\) as signal and gluon-initiated data \(q\overline{q}\to Z\rightarrow\nu\overline{\nu}+g\) as background. Jet clustering was performed using the anti-kT algorithm with R = 0.4. Only jets with transverse momentum \(p_{T}\in\) [500, 550] GeV and rapidity \(|y|<1.7\) were selected for further analysis. Each particle within the dataset comprises not only the four-momentum, but also the particle identification information, which classifies the particle type as electron, muon, charged hadron, neutral hadron, or photon. The official dataset compromises of 1.6M training events, 200k validation events and 200k test events, respectively. In this paper, we focused on the leading 100 constituents within each jet, utilizing their four-momenta and particle identification information for training purposes. For jets with fewer than 100 constituents, zero-padding was applied. For each particle, a set of 11 input features was used, based solely on the four-momenta and identification information of the particles clustered within the jet. The accuracy, area under the curve (AUC), and background rejection background rejection results are presented in Table 2. ### Top Tagging The benchmark dataset[5] used for top tagging comprises hadronic tops as the signal and QCD di-jets as the background. Pythia8[34] was employed for event generation, while Delphes[9] was utilized for detector simulation. All the particle-flow constituents were clustered into jets using the anti-kT algorithm[6] with a radius parameter of R = 0.8. Only jets with transverse momentum \(p_{T}\in\) [550, 650] GeV and rapidity \(|y|<\) 2 were included in the analysis. The official dataset contains 1.2M training events, 400k validation events and 400k test events, respectively. Only the energy-momentum 4-vectors for each particles inside the jets are provided. In this paper, the leading 100 constituent four-momenta of each jet were utilized for training purposes. For jets with fewer than 100 constituents, zero-padding was applied. For each particle, a set of 10 input features based solely on the four-momenta of the particles clustered inside the jet was utilized. The accuracy, area under the curve (AUC), and background rejection results can be found in Table 3. 
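The input preparation used for both benchmarks (pT-ordering, truncation or zero-padding to 100 constituents, the per-particle kinematic features of Eq. (4) and the pairwise features of Eq. (5)) can be sketched in NumPy as below. The feature ordering, the epsilon regularization inside the logarithms and the treatment of padded entries are our own choices, not specifications taken from the paper.

```python
import numpy as np

def particle_features(p4, jet_p4, eps=1e-8):
    """p4: (n, 4) constituents as (E, px, py, pz); returns the 10 features of Eq. (4)."""
    E, px, py, pz = p4.T
    pT = np.hypot(px, py)
    eta = np.arcsinh(pz / (pT + eps))
    phi = np.arctan2(py, px)
    jet_pT = np.hypot(jet_p4[1], jet_p4[2])
    d_eta = eta - np.arcsinh(jet_p4[3] / (jet_pT + eps))
    d_phi = (phi - np.arctan2(jet_p4[2], jet_p4[1]) + np.pi) % (2 * np.pi) - np.pi
    dR = np.hypot(d_eta, d_phi)
    return np.stack([np.log(E + eps), np.log(np.abs(px) + eps), np.log(np.abs(py) + eps),
                     np.log(np.abs(pz) + eps), np.log(pT + eps),
                     pT / (jet_pT + eps), E / (jet_p4[0] + eps), d_eta, d_phi, dR], axis=1)

def pairwise_features(p4, eps=1e-8):
    """Returns (ln dR, ln kT, ln z, ln m^2) of Eq. (5) as (n, n) matrices."""
    E, px, py, pz = p4.T
    pT = np.hypot(px, py)
    y = 0.5 * np.log((E + pz + eps) / (E - pz + eps))          # rapidity
    phi = np.arctan2(py, px)
    d_phi = (phi[:, None] - phi[None, :] + np.pi) % (2 * np.pi) - np.pi
    dR = np.hypot(y[:, None] - y[None, :], d_phi)
    pT_min = np.minimum(pT[:, None], pT[None, :])
    kT = pT_min * dR
    z = pT_min / (pT[:, None] + pT[None, :] + eps)
    m2 = ((E[:, None] + E[None, :]) ** 2
          - (px[:, None] + px[None, :]) ** 2
          - (py[:, None] + py[None, :]) ** 2
          - (pz[:, None] + pz[None, :]) ** 2)
    return tuple(np.log(np.clip(a, eps, None)) for a in (dR, kT, z, m2))

def pad_to(p4, max_particles=100):
    """Keep the leading max_particles constituents by pT; zero-pad shorter jets."""
    order = np.argsort(-np.hypot(p4[:, 1], p4[:, 2]))[:max_particles]
    out = np.zeros((max_particles, 4))
    out[:len(order)] = p4[order]
    return out
```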
## 4 Conclusion This study applies the Particle Dual Attention Transformer as an innovative approach for jet tagging. Specifically, the P-DAT architecture incorporates the Channel Attention module to the Point Cloud Transformer, allowing for capturing the jet-level global information and particle-level local information simultaneously. In addition, we introduces the particle pairwise interactions and the jet feature pairwise interactions. This technique not only enables the extraction of semantic affinities among the particles through a self-attention mechanism and the semantic affinities among the jet features through a channel-attention mechanism, but also augments the self-attention and channel-attention by combining the physics-motivated pairwise interactions with the machined learned dot-production attention. We evaluate the P-DAT architecture on the classic top tagging task and the quark-gluon discrimination task and achieve competitive results compared to other benchmark strategies. Moreover, we solved the memory usage problem by importing and deleting data during training. However, the computational time problem regarding of using the full pairwise interaction matrix is still unresolved which could be an interesting direction for future research. ###### Acknowledgements. This work is funded by the National Research Foundation of Korea, Grant No. NRF-2022R1A2C1007583. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline I & E & \(p_{x}\) & \(p_{y}\) & \(p_{z}\) & \(p_{T}\) & \(\sum p_{Tf}\) & \(\sum E_{f}\) & \(\overline{\Delta\eta}\) & \(\overline{\Delta\phi}\) & \(\overline{\Delta R}\) & PID \\ \hline E & 1 & \(\frac{p_{x}}{E}\) & \(\frac{p_{y}}{E}\) & \(\frac{p_{z}}{E}\) & \(\frac{p_{x}}{E}\) & \(\frac{p_{T}}{E}\) & 0 & 1 & 0 & 0 & 0 & \(\frac{E_{PID}}{E}\) \\ \hline \(p_{x}\) & \(\frac{p_{x}}{E}\) & 1 & 0 & 0 & \(\frac{p_{x}}{p_{T}}\) & 0 & 0 & 0 & 0 & 0 & \(\frac{p_{xPID}}{p_{x}}\) \\ \hline \(p_{y}\) & \(\frac{p_{y}}{E}\) & 0 & 1 & 0 & \(\frac{p_{y}}{p_{T}}\) & 0 & 0 & 0 & 0 & 0 & \(\frac{p_{yPID}}{p_{y}}\) \\ \hline \(p_{z}\) & \(\frac{p_{z}}{E}\) & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \(\frac{p_{PID}}{p_{z}}\) \\ \hline \(p_{T}\) & \(\frac{p_{T}}{E}\) & \(\frac{p_{x}}{p_{T}}\) & \(\frac{p_{y}}{p_{T}}\) & 0 & 1 & 1 & 0 & 0 & 0 & 0 & \(\frac{p_{TPID}}{p_{T}}\) \\ \hline \(\sum p_{Tf}\) & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & \(p_{TfPID}\) \\ \hline \(\sum E_{f}\) & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & \(\frac{\overline{\Delta\eta}}{\overline{\Delta R}}\) & \(\overline{\Delta\eta_{PID}}\) \\ \hline \(\overline{\Delta\phi}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \(\frac{\overline{\Delta\phi}}{\overline{\Delta R}}\) & \(\overline{\Delta\phi_{PID}}\) \\ \hline \(\overline{\Delta R}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \(\frac{\overline{\Delta\eta}}{\overline{\Delta R}}\) & \(\overline{\Delta\phi}\) & 1 & \(\overline{\Delta\eta_{PID}}\) \\ \hline PID & \(\frac{E_{PID}}{E}\) & \(\frac{p_{xPID}}{p_{x}}\) & \(\frac{p_{xPID}}{p_{y}}\) & \(\frac{p_{xPID}}{p_{x}}\) & \(\frac{p_{xPID}}{p_{T}}\) & \(p_{TfPID}\) & \(E_{fPID}\) & \(\overline{\Delta\eta_{PID}}\) & \(\overline{\Delta\eta_{PID}}\) & \(\overline{\Delta\eta_{PID}}\) & \(\overline{\Delta\eta_{PID}}\) & 1 \\ \hline \end{tabular} \end{table} Table 1: The jet feature pairwise interaction matrix used as the inputs for the P-DAT. 
\begin{table} \begin{tabular}{l c c c c} \hline & Accuracy & AUC & Rej\({}_{50\%}\) & Rej\({}_{30\%}\) \\ \hline ResNeXt-50 [30] & 0.821 & 0.9060 & 30.9 & 80.8 \\ P-CNN [30] & 0.827 & 0.9002 & 34.7 & 91.0 \\ PFN [19] & - & 0.9005 & 34.7 & 0.4 & - \\ ParticleNet-Lite [30] & 0.835 & 0.9079 & 37.1 & 94.5 \\ ParticleNet [30] & 0.940 & 0.9116 & 39.8 & 0.2 & 98.6 \(\pm\)1.3 \\ ABCNet [27] & 0.840 & 0.9126 & 42.6 \(\pm\)0.4 & 118.4 \(\pm\)1.5 \\ SPCT [28] & 0.815 & 0.8910 & 31.6 \(\pm\)0.3 & 93.0 \(\pm\)1.2 \\ PCT [28] & 0.841 & 0.9140 & 43.2 \(\pm\)0.7 & 118.0 \(\pm\)2.2 \\ LorentaNet [15] & 0.844 & 0.9156 & 42.4 \(\pm\)0.4 & 110.2 \(\pm\)1.3 \\ ParT [31] & 0.849 & 0.9203 & 47.9 \(\pm\)0.5 & 129.5 \(\pm\)0.9 \\ \hline P-DAT & 0.838 & 0.906 & 39.6 \(\pm\)0.5 & 95.6 \(\pm\)1.8 \\ \hline \end{tabular} \end{table} Table 2: Comparison between the performance reported for P-DAT and existing classification algorithms on the quark-gluon discrimination dataset. The uncertainty is calculated by taking the standard deviation of 5 training runs with different random weight initialization. \begin{table} \begin{tabular}{l c c c c} \hline & Accuracy & AUC & Rej\({}_{50\%}\) & Rej\({}_{30\%}\) \\ \hline ResNeXt-50 [30] & 0.936 & 0.9837 & 302\(\pm\)5 & 1147\(\pm\)58 \\ P-CNN [30] & 0.930 & 0.9803 & 201\(\pm\)4 & 759\(\pm\)24 \\ PFN [19] & - & 0.9819 & 247\(\pm\)3.8 & 888\(\pm\)17 \\ ParticleNet-Lite [30] & 0.937 & 0.9844 & 325\(\pm\)5 & 1262\(\pm\)49 \\ ParticleNet [30] & 0.940 & 0.9858 & 397\(\pm\)7 & 1615\(\pm\)93 \\ JEDI-net [29] & 0.9263 & 0.9786 & - & 590.4 \\ SPCT [28] & 0.928 & 0.9799 & 201\(\pm\)9 & 725\(\pm\)54 \\ PCT [28] & 0.940 & 0.9855 & 392\(\pm\)7 & 1533\(\pm\)101 \\ LorentaNet [15] & 0.942 & 0.9868 & 489\(\pm\)18 & 2195\(\pm\)173 \\ ParT [31] & 0.940 & 0.9858 & 413\(\pm\)16 & 1602\(\pm\)81 \\ \hline P-DAT & 0.918 & 0.9653 & 152\(\pm\)5 & 518\(\pm\)36 \\ \hline \end{tabular} \end{table} Table 3: Comparison between the performance reported for P-DAT and existing classification algorithms on the top tagging dataset. The uncertainty is calculated by taking the standard deviation of 5 training runs with different random weight initialization.
2305.12738
Logical Entity Representation in Knowledge-Graphs for Differentiable Rule Learning
Probabilistic logical rule learning has shown great strength in logical rule mining and knowledge graph completion. It learns logical rules to predict missing edges by reasoning on existing edges in the knowledge graph. However, previous efforts have largely been limited to only modeling chain-like Horn clauses such as $R_1(x,z)\land R_2(z,y)\Rightarrow H(x,y)$. This formulation overlooks additional contextual information from neighboring sub-graphs of entity variables $x$, $y$ and $z$. Intuitively, there is a large gap here, as local sub-graphs have been found to provide important information for knowledge graph completion. Inspired by these observations, we propose Logical Entity RePresentation (LERP) to encode contextual information of entities in the knowledge graph. A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph. It is an interpretable representation while allowing for differentiable optimization. We can then incorporate LERP into probabilistic logical rule learning to learn more expressive rules. Empirical results demonstrate that with LERP, our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods. Moreover, we find that our model can discover a more expressive family of logical rules. LERP can also be further combined with embedding learning methods like TransE to make it more interpretable.
Chi Han, Qizheng He, Charles Yu, Xinya Du, Hanghang Tong, Heng Ji
2023-05-22T05:59:22Z
http://arxiv.org/abs/2305.12738v1
# Logical Entity Representation in KnowledgeGraphs for Differentiable Rule Learning ###### Abstract Probabilistic logical rule learning has shown great strength in logical rule mining and knowledge graph completion. It learns logical rules to predict missing edges by reasoning on existing edges in the knowledge graph. However, previous efforts have largely been limited to only modeling chain-like Horn clauses such as \(R_{1}(x,z)\wedge R_{2}(z,y)\Rightarrow H(x,y)\). This formulation overlooks additional contextual information from neighboring sub-graphs of entity variables \(x\), \(y\) and \(z\). Intuitively, there is a large gap here, as local sub-graphs have been found to provide important information for knowledge graph completion. Inspired by these observations, we propose _Logical Entity RePresentation (LERP)_ to encode contextual information of entities in the knowledge graph. A LERP is designed as a vector of probabilistic logical functions on the entity's neighboring sub-graph. It is an interpretable representation while allowing for differentiable optimization. We can then incorporate LERP into probabilistic logical rule learning to learn more expressive rules. Empirical results demonstrate that with LERP, our model outperforms other rule learning methods in knowledge graph completion and is comparable or even superior to state-of-the-art black-box methods. Moreover, we find that our model can discover a more expressive family of logical rules. LERP can also be further combined with embedding learning methods like TransE to make it more interpretable. 1 Footnote 1: All code and data are publicly available at [https://github.com/Glacionbund/LERP](https://github.com/Glacionbund/LERP). ## 1 Introduction In recent years, the use of logical formulation has become prominent in knowledge graph (KG) reasoning and completion (Teru et al., 2020; Campero et al., 2018; Payani & Fekri, 2019), mainly because a logical formulation can be used to enforce strong prior knowledge on the reasoning process. In particular, probabilistic logical rule learning methods (Sadeghian et al., 2019; Yang et al., 2017) have shown further desirable properties including efficient differentiable optimization and explainable logical reasoning process. These properties are particularly beneficial for KGs since KGs are often large in size, and modifying KGs has social impacts so rationales are preferred by human readers. Due to the large search space of logical rules, recent efforts Sadeghian et al. (2019); Yang et al. (2017); Payani & Fekri (2019) focus on learning _chain-like Horn clauses_ of the following form: \[r_{1}(x,z_{1})\wedge r_{2}(z_{1},z_{2})\wedge\cdots\wedge r_{K}(z_{K-1},y) \Rightarrow H(x,y), \tag{1}\] where \(r_{k}\) represents relations and \(x\), \(y\)\(z_{k}\) represent entities in the graph. Even though this formulation is computationally efficient (see Section 3), it overlooks potential contextual information coming from local sub-graphs neighboring the entities (variables \(x\), \(y\), and all \(z_{i}\)). However, this kind of contextual information can be important for reasoning on knowledge graphs. Figure 1(b) shows an example. If we only know that \(z\) is mother of \(x\) and \(y\), we are not able to infer if \(y\) is a brother or sister of \(x\). However, in Figure 1(c), with the contextual logical information that \(\exists z^{\prime}\textit{is\_son\_off}(y,z^{\prime})\) we can infer that \(y\) is a male and that \(y\) should be the brother rather than the sister of \(x\). 
Recent deep neural network models based on Graph Neural Networks (GNNs) (Teru et al., 2020; Mai et al., 2021) have utilized local sub-graphs as an important inductive bias in knowledge graph completion. Although GNNs can efficiently incorporate neighboring information via message-passing mechanisms to improve prediction performance (Zhang et al., 2019; Lin et al., 2022), they are not capable of discovering explicit logical rules, and the reasoning process of GNNs is largely unexplainable. In this paper, we propose _Logical Entity RePresentation (LERP)_ to incorporate information from local sub-graphs into probabilistic logic rule learning. LERP is a logical contextual representation for entities in a knowledge graph. For an entity \(e\), a LERP \(\mathbf{L}(e)\) is designed as a vector of logical functions \(L_{i}(e)\) on \(e\)'s neighboring sub-graph and the enclosed relations. We then incorporate LERP in probabilistic logical rule learning methods to provide contextual information for entities. Different from other embedding learning methods, LERP encodes contextual information rather than identity information of entities. In the example discussed above in Figure 1, an ideal LERP for \(y\) might contain logical functions like \(L_{i}(y)=\exists z^{\prime}\ is\_son\_of(y,z^{\prime})\). Therefore, when predicting the relation _is_brother_of_, the rule learning model can select \(L_{i}\) from LERP and get the rule written in Figure 1(c). In our model, LERP can be jointly optimized with probabilistic logical rules. We empirically show that our model outperforms previous logical rule learning methods on knowledge graph completion benchmarks. We also find that LERP allows our model to compete with, and sometimes even exceed, strong black-box baselines. Moreover, LERP is itself an interpretable representation, so our model is able to discover more complex logical rules from data. In Section 5.4 we demonstrate that LERP can also be combined with embedding-learning models like TransE (Bordes et al., 2013) to construct a hybrid model that learns interpretable embeddings.

## 2 Related Work

**Logical Rule Learning** This work is closely related to the problem of learning logical rules for knowledge graph reasoning, and thus also related to the inductive logic programming (ILP) field. Traditional rule learning approaches search for the logical rules with heuristic metrics such as support and confidence, and then learn a scalar weight for each rule. Representative methods include Markov Logic Networks (Richardson and Domingos, 2006; Khot et al., 2011), relational dependency networks (Neville and Jensen, 2007; Natarajan et al., 2010), rule mining algorithms (Galarraga et al., 2013; Meilicke et al., 2019), path finding and ranking approaches (Lao and Cohen, 2010; Lao et al., 2011; Chen et al., 2016), probabilistic personalized page rank (ProPPR) models (Wang et al., 2013, 2014a), ProbLog (De Raedt et al., 2007), CLP(BN) (Costa et al., 2002), SlipCover (Bellodi and Riguzzi, 2015), ProbFoil (De Raedt et al., 2015) and SafeLearner (Jain, 2019). However, most traditional methods face the problem of a large search space of logical rules, or rely on predefined heuristics to guide searching. These heuristic measures are mostly designed by humans, and may not necessarily be generalizable to different tasks.

**Differentiable Logical Rule Learning** Recently, another trend of methods proposes jointly learning the logical rule form and the weights in a differentiable manner. 
Representative models include the embedding-based method of Yang et al. (2015), Neural LP (Yang et al., 2017), DRUM (Sadeghian et al., 2019) and RLvLR (Omran et al., 2018). Furthermore, some efforts have applied reinforcement learning to rule learning. The idea is to train agents to search for paths in the knowledge graph connecting the start and destination entities. Then, the connecting path can be extracted as a logical rule (Xiong et al., 2017; Chen et al., 2018; Das et al., 2018; Lin et al., 2018; Shen et al., 2018). These methods are mostly focused on learning chain-like Horn clauses like Equation 1. This formulation limits logical rules from using extra contextual information about local subgraphs. In contrast, we propose to use LERP to incorporate contextual information into logical rules, and are thus capable of modeling a more expressive family of rules.

Figure 1: Incorporating contextual information of entities can benefit missing link prediction.

**Embedding-Based Neural Graph Reasoning** Our work is also related and compared to graph embedding-based methods. However, those models do not explicitly model logic rules, but reason about entities and relations over the latent space. Representative works include TransE (Bordes et al., 2013), RotatE (Sun et al., 2018), ConvE (Dettmers et al., 2018), ComplEx (Trouillon et al., 2016), TransH (Wang et al., 2014), HolE (Nickel et al., 2016), KBGAN (Cai & Wang, 2018), TuckER (Balazevic et al., 2019), Yang et al. (2015), and box embedding methods (Ren et al., 2020; Abboud et al., 2020; Onoe et al., 2021). LERP is different from these latent embeddings because LERP encodes contextual information rather than identity information of entities like in TransE. LERP is composed of vectors of interpretable logical functions, and can be applied to explicitly learn more complex logical rules. Yang et al. (2015) also learns entity embeddings for rule learning, but the rules are still limited to chain-like rules, and the embedding is not interpretable.

## 3 Problem Statement

**Chain-like Horn Clauses** are a set of first-order logic rules that have been widely studied in logical rule learning (Sadeghian et al., 2019; Yang et al., 2017). Formally, the task setting provides a knowledge graph \(G=\{(s,r,o)|s,o\in\mathcal{E},r\in\mathcal{R}\}\), where \(\mathcal{E}\) is the set of entities and \(\mathcal{R}\) is the set of relation types. The chain-like Horn clauses are defined as in Equation 1, where \(x,y,z_{k}\in\mathcal{E}\) (for \(k\in[1,K-1]\)) and \(H,r_{k}\in\mathcal{R}\) (for \(k\in[1,K]\)). The "reverse" edges \(r^{\prime}(z,z^{\prime})=r(z^{\prime},z)\) are often added to the edge type set \(\mathcal{R}\). \(r_{1}(x,z_{1})\wedge\cdots\wedge r_{K}(z_{K-1},y)\) is typically named the "body" of the rule. A computational convenience of this formulation is that, if \(x\) is known, the rule can be evaluated by computing a sequence of matrix multiplications. We can first order entities in \(\mathcal{E}\) from 1 to \(n=|\mathcal{E}|\), let \(\mathbf{v}_{x}\) be an \(n\)-dimensional one-hot vector of \(x\)'s index, and let \(\mathbf{A}_{r_{k}}\) be the adjacency matrix of \(r_{k}\). In this way, we can compute \(\mathbf{v}_{x}^{\top}\prod_{k=1}^{K}\mathbf{A}_{r_{k}}\) in \(O(Kn^{2})\) time, by multiplying in a left-to-right order from the \(\mathbf{v}_{x}^{\top}\) side. It is easy to verify that this result is an \(n\)-dimensional vector counting how many paths connect \(x\) and \(y\) following the relation sequence \(r_{1},r_{2}\cdots r_{K}\). This formulation has been widely adopted by previous methods.

**Logical Entity Representation (LERP)** In this work, we introduce LERP in logical rules to provide additional local contextual information for entities in the knowledge graph. A LERP for entity \(e\) is formatted as a vector \(\mathbf{L}(e)=(L_{1}(e),\cdots,L_{m}(e))\) where the dimension \(m\) is a hyper-parameter. Each \(L_{i}(e)\) is a logical function over \(e\)'s surrounding sub-graph. To enable efficient evaluation of \(L_{i}(e)\), we limit \(L_{i}(e)\) to a family of _tree-like logical functions_ as defined below.

Figure 2: Illustration of our construction of logical entity function and representation. (a): Each logical function is defined in a recursive manner with operations defined in Definition 1; (b): we build LERP in a feed-forward style, by constructing intermediate logical functions column by column.

**Definition 1**: _(Tree-like logical function)_ 
This formulation has been widely adopted by previous methods. Logical Entity Representation (LERP)In this work, we introduce LERP in logical rules to provide additional local contextual information for entities in the knowledge graph. A LERP for entity \(e\) is formatted as a vector \(\mathbf{L}(e)=(L_{1}(e),\cdots,L_{m}(e))\) where the dimension \(m\) is a hyper-parameter. Each \(L_{i}(e)\) is a logical function over \(e\)'s surrounding sub-graph. To enable efficient evaluation of \(L_{i}(e)\), we limit \(L_{i}(e)\) to a family of _tree-like logical functions_ as defined below. **Definition 1**: _(Tree-like logical function)_ Figure 2: Illustration of our construction of logical entity function and representation. (a): Each logical function is defined in a recursive manner with operations defined in Definition 1 (b): we build LERP in a feed-forward style, by constructing intermediate logical functions column by column. _Given a binary predicate set \(\mathcal{R}\), the family of tree-like logical functions \(\mathcal{T}\) is recursively constructed with the following operations:_ 1. _(True function)_ \(f(w_{0})=true\) _belongs to_ \(\mathcal{T}\)_._ 2. _(Chaining)_ \(\forall\)__\(f(w_{i})\in\mathcal{T}\) _and_ \(r\in\mathcal{R}\)_,_ \(f^{\prime}(w_{i+1})=\exists w_{i}:f(w_{i})\wedge r(w_{i},w_{i+1})\) _belongs to_ \(\mathcal{T}\)_._ 3. _(Negation)_ \(\forall\)__\(f(w_{i})\in\mathcal{T}\)_,_ \(f^{\prime}(w_{i})=\neg f(w_{i})\) _belongs to_ \(\mathcal{T}\)_._ 4. _(Merging)_ \(\forall\)__\(f(w_{i}),f^{\prime}(w_{i^{\prime}})\in\mathcal{T}\)_, let_ \(i^{\prime\prime}=\max(i,i^{\prime})+1\)_, after applying substitution_ \(\{w_{i}\mapsto w_{i^{\prime\prime}}\}\) _and_ \(\{w_{i^{\prime}}\mapsto w_{i^{\prime\prime}}\}\) _to two functions respectively,_ \(f^{\prime\prime}(w_{i^{\prime\prime}})=f(w_{i^{\prime\prime}})\wedge f^{\prime }(w_{i^{\prime\prime}})\)__\((\wedge\text{-merging})\) _and_ \(f^{\prime\prime}(w_{i^{\prime\prime}})=f(w_{i^{\prime\prime}})\lor f^{\prime}(w _{i^{\prime\prime}})\)__\((\vee\text{-merging})\) _belong to_ \(\mathcal{T}\)_._ This definition includes arbitrarily complex functions, but in practice we limit a maximum number of operations. Intuitively, one can think of a function \(f(w_{i})\in\mathcal{T}\) as mimicking the shape of a local tree spanning from \(w_{i}\) as in Figure 2(a). Specifically, this is because it gathers information together recursively through logical operations. \(\mathbf{L}(e)\) then serves as a logical contextual representation for \(e\)'s local surrounding sub-graph. We then extend Equation 1 into the following formula to incorporate contextual information from \(\mathbf{L}(e)\) (\(w\) are entity variables and are interchangeable with \(z\)): \[r_{1}(x,z_{1})\wedge r_{2}(z_{1},z_{2})\wedge\cdots\wedge r_{K}(z_{K-1},y) \wedge\left(\bigwedge_{z^{\prime}\in\{x,y,z_{1},\cdots\}}L_{i_{z^{\prime}}}(z^ {\prime})\right)\Rightarrow H(x,y), \tag{2}\] where \(i_{z^{\prime}}\in[1..m]\) is an index to select logical functions from LERP. 
This extension does not break the computational efficiency of chain-like Horn clauses (proofs are included in Appendix A): **Theorem 1**: _For any rule expressed in Equation 2, if \(x\) is known, the evaluation of its LHS over \(x\) and all \(y\in\mathcal{E}\) (considering all possible \(z_{i}\)), can be determined in \(O(Cn^{2})\) time where \(n=|\mathcal{E}|\) is the size of the entity set and \(C\) is a constant related to the rule._ ## 4 Framework ### Derivation of LERP This section describes our computational modeling of LERP and how to incorporate it in probabilistic logic rule learning. Borrowing ideas from Yang et al. (2017); Sadeghian et al. (2019) we relax LERP to the probabilistic domain to make its results continuous. Figure 2 provides an illustration of our method. As we do not know the logical form of the functions (Figure 2(a)) beforehand, we adopt a "redundancy" principle and calculate a matrix of intermediate logical functions \(f^{\Theta}_{j,i}\) (Figure 2(b)) where \(1\leq i\leq m\) and \(0\leq j\leq T\). For example, \(f^{\Theta}_{3,1}\) in 2(b) represents the same logical form as \(f_{3,1}\) in 2(a). The learning process is then to discover a good hierarchy within the matrix. When \(T\) is set to value larger than 1, the learned LERP can conduct multi-hop reasoning. \begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Model**} & \multicolumn{6}{c}{**WN18RR**} & \multicolumn{6}{c}{**WN18**} \\ & & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline \multirow{6}{*}{No Rule Learning} & ComplEx & 0.44 & 0.41 & 0.46 & 0.51 & 0.941 & 0.936 & 0.936 & 0.947 \\ & TransE & 0.466 & 0.4226 & - & 0.556 & 0.495 & 0.113 & 0.888 & 0.943 \\ & ConvE & 0.43 & 0.40 & 0.44 & 0.52 & 0.943 & 0.935 & 0.946 & 0.956 \\ & LinearRE & 0.495 & 0.453 & 0.509 & 0.578 & 0.952 & 0.947 & 0.955 & 0.961 \\ & Inverse Model & 0.35 & 0.35 & 0.35 & 0.35 & 0.35 & **0.963** & **0.953** & 0.964 & 0.964 \\ & QuatDE & 0.489 & 0.438 & 0.509 & 0.586 & 0.95 & 0.944 & 0.954 & 0.961 \\ & MLMLM & 0.502 & 0.439 & 0.542 & 0.611 & - & - & - & - \\ & kNN-KGE & 0.579 & 0.525 & 0.604 & **0.683** & - & - & - & - \\ \hline \multirow{6}{*}{Rule Learning} & Neural LP & 0.435 & 0.371 & 0.434 & 0.566 & 0.94 & - & - & 0.945 \\ & DRUM & 0.486 & 0.425 & 0.513 & 0.586 & 0.944 & 0.939 & 0.943 & 0.954 \\ \cline{1-1} & RNNLogic+ & 0.513 & 0.471 & 0.532 & 0.597 & - & - & - & - \\ \cline{1-1} & LERP & **0.622** & **0.593** & **0.634** & 0.682 & 0.958 & 0.932 & **0.982** & **0.987** \\ \hline \hline \end{tabular} \end{table} Table 1: Experiment results on knowledge graph completion tasks on WN18RR and WN18. Specifically, we compute the tree-like probabilistic logical functions \(L^{\Theta}_{i}\) parameterized by \(\Theta\) by adopting a feed-forward paradigm, and compute intermediate functions \(f^{\Theta}_{j,i}(z_{j,i})\) column by column. The first column contains only \(true\) functions \(f^{\Theta}_{0,i}(z_{0,i})=true\). For \(j>0\), each \(f^{\Theta}_{j,i}(z_{j,i})\) is constructed from the previous column by 6 possible operations: _true function, chaining, negation, \(\wedge\)-merging_, and \(\vee\)_-merging_ from Definition 1, as well as an additional _copy_ operation that simply copies \(f^{\Theta}_{j-1,i}(z_{j-1,i})\). Note that the result of \(f^{\Theta}_{j,i}(z_{j,i})\) over \(\mathcal{E}\) can be represented as a \(|\mathcal{E}|\)-dimensional vector \(\mathbf{v}_{j,i}\in[0,1]^{|\mathcal{E}|}\). 
Taking this notation, the formulae for the the 6 possible types of \(f^{\Theta}_{j,i}(z_{j,i})\) are: \[\mathbf{v}_{j,i;\text{true}} =\mathbf{1} \mathbf{v}_{j,i;\text{chaining}} =\text{clamp}\left(\mathbf{v}_{j-1,i}^{\top}\sum_{r\in\mathcal{R }}\alpha_{r,j,i}\mathbf{A}_{r}\right)^{\top}\] \[\mathbf{v}_{j,i;\text{-}} =\mathbf{1}-\mathbf{v}_{j-1,i} \mathbf{v}_{j,i;\text{copy}} =\mathbf{v}_{j-1,i}\] \[\mathbf{v}_{j,i;\wedge} =\mathbf{v}_{j-1,i}\odot\sum_{i^{\prime}=1}^{m}\beta_{j,i,i^{ \prime}}^{\wedge}\mathbf{v}_{j-1,i^{\prime}} \mathbf{v}_{j,i;\vee} =\mathbf{1}-(\mathbf{1}-\mathbf{v}_{j-1,i})\odot(\mathbf{1}- \sum_{i^{\prime}=1}^{m}\beta_{j,i,i^{\prime}}^{\vee}\mathbf{v}_{j-1,i^{\prime}})\] where \(\odot\) denotes Hadamard product, and \(\text{clamp}(x)=1-e^{-x}\) clamps any positive number to range \([0,1]\). Note that we constrain that \(\alpha_{r,j,i}>0,\ \sum_{r}\alpha_{r,j,i}=1,\ \beta_{i,i^{\prime}}^{*}>0,\ \sum_{i^{ \prime}}\beta_{j,i,i^{\prime}}^{*}=1\). These 6 types of scores are then gathered by a probability distribution \(p_{j,i,*}\): \[\mathbf{v}_{j,i}=p_{j,i,1}\mathbf{v}_{j,i;\text{true}}+p_{j,i,2}\mathbf{v}_{j,i ;\text{chaining}}+p_{j,i,3}\mathbf{v}_{j,i;\text{-}}+p_{j,i,4}\mathbf{v}_{j,i; \text{copy}}+p_{j,i,5}\mathbf{v}_{j,i;\wedge}+p_{j,i,6}\mathbf{v}_{j,i;\lor}\] Note that, specially in the second column, we only allow chaining \(f^{\Theta}_{1,i}\), because only chaining produces meaningful logical functions out of the first column of _true_ functions. Thereby \(\mathbf{v}_{j,i}\) represents the result of the probabilistic logical function \(f^{\Theta}_{j,i}\) on all entities. \(p,\alpha,\beta\) are all obtained by a softmax operation over some learnable parameters \(p^{\prime},\alpha^{\prime},\beta^{\prime}\). These parameters belong to \(\Theta\). There are a total of \(T+1\) columns of intermediate functions. Finally we use the last column of functions \(f^{\Theta}_{T,i}\) as \(L^{\Theta}_{i}\), and \(\mathbf{L}^{\Theta}(e)=(L^{\Theta}_{1}(e),\cdots,L^{\Theta}_{m}(e))\) is our logical entity representation LERP on \(e\). Next we explain the computation of chain-like probabilistic logical rules, and how to further incorporate LERP. In Yang et al. (2017), they model a probabilistic selection of logical predicates, so that the rules can be differentiably evaluated. Suppose for the chain-like rules in Equation 1, the predicate of \(r_{k}\) is selected according to a distribution \(a_{r,k}\). Then the probability that the clause body is true between each \(x,y\in\mathcal{E}\) can be given by \[\prod_{k=1}^{K}\left(\sum_{r\in\mathcal{R}}a_{r,k}\mathbf{A}_{r}\right) \tag{3}\] Now we want to also add a logical function constraint \(L^{\Theta}_{i_{k}}\) from \(\mathbf{L}^{\Theta}\). Denote \(\mathbf{L}^{\Theta}(\mathcal{E})\) as an \(m\times|\mathcal{E}|\) matrix by iterating \(\mathbf{L}^{\Theta}(e)\) over all \(e\in\mathcal{E}\). We can select \(L^{\Theta}_{i_{k}}\) from \(\mathbf{L}^{\Theta}\) according to a probabilistic distribution \(\mathbf{\rho}_{k}\) over \([1..m]\). Then the results of \(L^{\Theta}_{i_{k}}(z_{k})\) on \(\mathcal{E}\) can be written in vector form \(\mathbf{L}^{\Theta}(\mathcal{E})^{\top}\mathbf{\rho}_{k}\). 
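The column-wise construction above can be sketched in NumPy as follows; this is an illustrative forward pass for a single intermediate function, with the softmax parameterisation of \(p, \alpha, \beta\) omitted and all names our own:

```python
import numpy as np

def clamp(x):
    return 1.0 - np.exp(-x)           # clamps positive activations into [0, 1]

def lerp_column_entry(V_prev, i, A_list, alpha, beta_and, beta_or, p):
    """One entry v_{j,i} of a LERP column (illustrative sketch, names are our own).

    V_prev   : (m, n) results of all m previous-column functions on the n entities
    i        : index of the previous-column function this entry builds on
    A_list   : list of (n, n) relation adjacency matrices
    alpha    : (|R|,) softmaxed chaining weights
    beta_and, beta_or : (m,) softmaxed merging weights
    p        : (6,) softmaxed mixture over the six operation types
    """
    v = V_prev[i]
    v_true  = np.ones_like(v)
    v_chain = clamp(v @ sum(a * A for a, A in zip(alpha, A_list)))
    v_neg   = 1.0 - v
    v_copy  = v
    v_and   = v * (beta_and @ V_prev)
    v_or    = 1.0 - (1.0 - v) * (1.0 - beta_or @ V_prev)
    return (p[0] * v_true + p[1] * v_chain + p[2] * v_neg
            + p[3] * v_copy + p[4] * v_and + p[5] * v_or)
```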
Finally, the evaluation of the body in Equation 2 can be written as \[\prod_{k=1}^{K}\left(\sum_{r\in\mathcal{R}}a_{r,k}\mathbf{A}_{r}\right)\text{diag} \left(\mathbf{L}^{\Theta}{}^{\top}\mathbf{\rho}_{k}\right) \tag{4}\] \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline & \multicolumn{4}{c}{**Family**} & \multicolumn{4}{c}{**Kinship**} & \multicolumn{4}{c}{**UMLS**} \\ & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 & MRR & H@1 & H@3 & H@10 \\ \hline MLN & - & - & - & - & 0.351 & 0.189 & 0.408 & 0.707 & 0.688 & 0.587 & 0.755 & 0.869 \\ Boosted RDN & - & - & - & - & 0.469 & 0.395 & 0.520 & 0.567 & 0.227 & 0.147 & 0.256 & 0.376 \\ PathRank & - & - & - & - & 0.369 & 0.272 & 0.416 & 0.673 & 0.197 & 0.148 & 0.214 & 0.252 \\ MINERVA & - & - & - & - & - & 0.401 & 0.235 & 0.467 & 0.766 & 0.564 & 0.426 & 0.658 & 0.814 \\ NeuralLP & 0.931 & 0.880 & 0.978 & 0.994 & 0.302 & 0.167 & 0.339 & 0.596 & 0.483 & 0.332 & 0.563 & 0.775 \\ DRUM & 0.958 & 0.927 & 0.987 & 0.996 & 0.534 & 0.367 & 0.628 & 0.885 & 0.695 & 0.546 & 0.808 & 0.935 \\ RNNLogic & 0.893 & 0.862 & 0.923 & 0.928 & 0.639 & 0.495 & 0.731 & 0.924 & 0.745 & 0.630 & 0.833 & 0.924 \\ LERP & **0.970** & **0.947** & **0.991** & **0.997** & **0.643** & **0.499** & **0.735** & **0.931** & **0.762** & **0.646** & **0.855** & **0.942** \\ \hline \end{tabular} \end{table} Table 2: Experiment results on statistical relation learning datasets. This is the evaluation for a single logical rule. In practice we can have multiple rules to predict one target relation (e.g., in DRUM, there are up to 4), and there are multiple types of target relations in the dataset. So, we learn individual sets of parameters \(a_{r,k}\) and \(\mathbf{\rho}_{k}\) for different rules, but the parameter \(\Theta\) for LERP is shared across rules. ### Reasoning and Training In the task of knowledge graph completion, the model is given a set of queries \(r(h,?)\) and required to find the entity \(t\in\mathcal{E}\) that satisfies relation \(r(h,t)\). During reasoning and training, we denote \(\mathbf{h}\) as the one-hot vector of entity \(h\), and multiply it with Equation 4 to calculate \[\hat{\mathbf{t}}^{\top}=\mathbf{h}^{\top}\prod_{k=1}^{K}\left(\sum_{r\in \mathcal{R}}a_{r,k}\mathbf{A}_{r}\right)\text{diag}\left(\mathbf{L}^{\Theta^{\top}}\bm {\rho}_{k}\right)\] This then can be calculated as a sequence of vector-matrix multiplications. The resulting vector is indicating which entity \(t\) is satisfying the query \(r(h,?)\). For training criterion, we use the cross entropy between the normalized prediction \(\frac{\hat{\mathbf{t}}}{\|\hat{\mathbf{t}}\|_{1}}\) and ground-truth one-hot vector \(\mathbf{t}\). The loss is differentiable with respect to all the parameters in our model, so we can apply gradient-based optimization methods such as Adam (Kingma and Ba, 2015). During evaluation, we use \(\hat{\mathbf{t}}\) as scores and sort it to obtain a ranked entity list for calculating mean reciprocal rank (MRR), Hits, etc. ## 5 Experiments ### Knowledge Graph Completion DatasetsWe follow previous works (Yang et al., 2017; Sadeghian et al., 2019) and evaluate on the Unified Medical Language System (UMLS), Kinship, and Family datasets (Kok and Domingos, 2007) as well as the WN18 (Bordes et al., 2013) and WN18RR (Dettmers et al., 2018) datasets. The task is to predict missing edges in theses dataset, and is named "knowledge graph completion" on knowledge graphs like WN18RR and WN18 and "statistical relation learning" on smaller datasets like UMLS, Kinship and Family. 
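Before turning to the experiments, the scoring and training criterion of Section 4.2 can be sketched as follows (an illustrative reconstruction; shapes and variable names are our own, not the authors' code):

```python
import numpy as np

def score_query(h_idx, A_list, a, L, rho):
    """t_hat^T = h^T prod_k (sum_r a[r, k] A_r) diag(L^T rho_k)  (illustrative sketch).

    A_list : |R| relation adjacency matrices, each (n, n)
    a      : (|R|, K) predicate-selection weights per hop
    L      : (m, n) values of the m LERP functions on all n entities
    rho    : (m, K) selection weights over LERP functions per hop
    """
    n = A_list[0].shape[0]
    x = np.zeros(n)
    x[h_idx] = 1.0                                   # one-hot head entity h
    for k in range(a.shape[1]):
        M = sum(a[r, k] * A_list[r] for r in range(len(A_list)))
        x = (x @ M) * (L.T @ rho[:, k])              # chain one hop, then apply the LERP constraint
    return x                                         # unnormalised scores over candidate tails

def loss(scores, t_idx, eps=1e-12):
    p = scores / (scores.sum() + eps)                # normalise the prediction
    return -np.log(p[t_idx] + eps)                   # cross entropy against the one-hot target t
```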
For all experiments, we evaluate under the setting of using no external data. For Kinship and UMLS, we follow the data split from Qu et al. (2020), and for Family, we follow the split used by Sadeghian et al. (2019). For rule learning baselines, we compare with Markov Logic Networks (MLN) (Richardson and Domingos, 2006), Boosted RDN (Natarajan et al., 2010), and path ranking (PathRank) (Lao and Cohen, 2010) as well as other differentiable rule learning baselines such as NeuralLP (Yang et al., 2017) and DRUM (Sadeghian et al., 2019). We also include a reinforcement learning method MIN-ERVA (Das et al., 2018) and an expectation-maximization method RNNLogic (Qu et al., 2020) in \begin{table} \begin{tabular}{l l|l l} \hline **weight** & **logical function** & **weight** & **logical function** \\ \hline 0.99 & \(\nexists_{2}:sister(e,z_{1})\) & 0.56 & \(\exists z_{1}:mother(e,z_{1})\land\exists z_{2}:aunt(z_{2},e)\) \\ 0.99 & \(\nexists_{2}:brother(e,z_{1})\) & 0.46 & \(\exists z_{1}:aunt(e,z_{1})\land\exists z_{2}:daughter(z_{2},e)\) \\ 0.96 & \(\nexists_{2}:sister(z_{1},e)\) & 0.39 & \(\nexists_{2}:uende(e,z_{1})\) \\ 0.68 & \(\exists z_{1}:sister(e,z_{1})\land\exists z_{2}:sister(z_{2},e)\) & 0.25 & \(\exists z_{1}:brother(e,z_{1})\land\exists z_{2}:daughter(z_{2},e)\) \\ \hline \end{tabular} \end{table} Table 4: The list of logical functions \(L_{i}(e)\) learned in LERP \(\mathbf{L}(e)\). \begin{table} \begin{tabular}{l l} \hline **weight** & **logical rule** \\ \hline 0.10 & \((\nexists_{2}:sister(z_{2},x))\wedge mother(z_{1},x)\wedge son(y,z_{1}) \Rightarrow brother(y,x)\) \\ 0.016 & \(mother(z_{1},x)\wedge mother(z_{1},y)\land(\nexists_{2}:sister(y,z_{3})) \Rightarrow brother(y,x)\) \\ \hline 0.11 & \(son(z_{1},x)\wedge brother(y,z_{1})\Rightarrow son(y,x)\) \\ 0.004 & \((\nexists_{2}:sister(z_{2},x))\wedge brother(z_{1},z)\land(\nexists_{2}:sister( z_{3},z_{1}))\wedge mother(y,z_{1})\Rightarrow son(x,y)\) \\ \hline 0.039 & \(wife(x,y)\Rightarrow husband(y,x)\) \\ 0.024 & \(mother(x,z_{1})\wedge father(y,z_{1})\Rightarrow husband(y,x)\) \\ \hline \end{tabular} \end{table} Table 3: A set of logical rules discovered from the Family dataset. baselines. On knowledge graph completion task, we further compare with the following powerful black-box embedding-based models: ComplEx (Trouillon et al., 2016), TransE (Bordes et al., 2013), ConvE and Linear Model (Dettmers et al., 2018), LinearRE (Peng and Zhang, 2020), QuatDE (Gao et al., 2021), MLMLM (Clouatre et al., 2021), and kNN-KGE (Zhang et al., 2022). For evaluation, following the setting in Sadeghian et al. (2019), we we we adopt filtered ranking and evaluate models on mean reciprocal rank (MRR) as well as the Hits@{1, 3, 10} metrics. We search for hyper-parameters according to validation set performance, and finally adopt the hyperparameters dimension \(m=80\) and depth \(T=2\) for LERP, and the maximum rule length of \(K=3\). We use 4 rules for WN18RR and WN18 and Family and 200 rules per target relation type for Kinship and UMLS dataset. Our model is trained for 4 epochs for WN18RR and WN18 and 10 epochs for Family, Kinship, and UMLS. As DRUM runs on fewer number of rules by default, we re-run DRUM with larger number of rules to match the configuration of our model and RNNLogic. For optimization, we use Adam with a learning rate of 0.1, \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\). The results for are demonstrated in Tables 1 and 2. Compared with other rule learning methods, our model obtains higher performance across the board. 
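For reference, the filtered-ranking metrics reported in Tables 1 and 2 are computed along the following lines (our own illustrative sketch, not the authors' evaluation script):

```python
import numpy as np

def filtered_rank(scores, true_t, known_ts):
    """Filtered rank of the gold tail: other known-true tails are masked out first."""
    s = scores.astype(float).copy()
    other = [t for t in known_ts if t != true_t]
    if other:
        s[other] = -np.inf
    return 1 + int((s > s[true_t]).sum())

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    ranks = np.asarray(ranks, dtype=float)
    return float((1.0 / ranks).mean()), {k: float((ranks <= k).mean()) for k in ks}
```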
This validates the importance of contextual information provided by LERP, especially after considering that this is the only difference in model design between DRUM and our model. Black box models, which do not conduct rule learning, generally achieve higher scores than rule learning methods. This is both because of the strong fitting power of black-box models and because rule learning models also need to explicitly learn logical rules. However, our model achieves performance comparable or even superior to black-box baselines. Even on metrics where LERP's performance falls short of that of state-of-the-art models (like Hits@10 on WN18RR dataset), the difference is marginal. We attribute this to the property of LERP modeling a more expressive set of logical rules, allowing it to achieve data-fitting power similar to black-box neural models. ### Interpretability of LERP and logical rules An important advantage of logical rules is that they help humans to better comprehend the reasoning process of the algorithm. In this subsection, we provide a case study on the interpretability of LERP and our model on Family dataset. In Table 3, we sort the logical rules according to their weights, and show the top-ranked logical rules for three relations. We can see that these logical rules are more complex than chain-like Horn clauses as in Equation 1. For example, in the second row, the logical function \(\nexistsister(y,z_{3})\) provides contextual information that \(y\) is not sister of anybody. 2 Therefore, \(y\) is more likely to be the brother than sister of \(x\). Similarly, in the fourth row, \(\nexists_{3}:sister(z_{3},z_{1})\) provides information that \(z_{1}\) has no sister, so \(x\) can only be the brother of \(z_{1}\) and thus the son of \(y\). We can also observe some verbosity in the learned rules, like the logical function \(\nexists_{2}:sister(z_{2},x)\) in the first row. This is not necessary since \(son(y,z_{1})\) is enough for predicting that \(y\) is the brother of \(x\). But this kind of verbosity is inevitable since rules of the same meaning are equivalent from the model's perspective. Footnote 2: Note that in the Family dataset ontology, \(\{\texttt{relation}\}(a,b)\) means that \(a\) is the \(\{\texttt{relation}\}\) of \(b\) (e.g., \(son(a,b)\) means \(a\) is \(b\)’s son). Since LERP is itself an interpretable representation, we can inspect it directly in Table 4. We sort the logical functions \(L_{i}^{\Theta}\) according to the weights, and filter out the repeated functions. Some logical functions are related to the gender information of entities. For example, \(\nexistsier(e,z_{1})\) might suggest Figure 3: Visualization of the space of LERP on the Family dataset. In each figure, we color as red entities of different relative roles (e.g., people who _are daughters_, _sons_, or _have sisters_ or _brothers_). (although with uncertainty) that \(e\) is a male. Other logical functions are longer and contain more complex information of entities. For instance, \(\exists z_{1}:brother(e,z_{1})\wedge\exists z_{2}:daughter(z_{2},e)\) tells us \(e\) is a brother and has a daughter, which might help predict the nephew/aut-uncle relationship between his sibling and daughter. We notice that the weights of some interpreted logical functions are not close to 1.0, and the actual utility of some logic functions are not easy to guess merely from the interpreted logic form. 
To better understand the space of LERP representation, we conduct 3-dimensional principle component analysis (PCA) on LERP vectors of entities in the graph. We select entities with some relational roles such as _being a daughter_, _being a son_, _having a sister_, or _having a brother_, and mark them with red circles in contrast to the blue pluses of the other entities. For better visualization, we also plot the plane dividing them from the rest of entities. The results are shown in Figure 3. We find that the lower-dimensional space of LERP is well organized according to these roles. Note that the role of _having a brother_ is not explicitly stated in the learned logical functions in Table 4. We hypothesize that it might come from a combination of other logical functions like \(\exists z_{1}:brother(e,z_{1})\wedge\exists z_{2}:aunt(z_{2},e)\) and \(\exists z_{1}:brother(e,z_{1})\wedge\exists z_{2}:daughter(z_{2},e)\). This demonstrates that logical functions in LERP can be combined to represent extra logical information. ### Model Analysis Is contextual information from LERP useful?This work is motivated by including contextual information for rule learning. To validate our hypothesis, we vary the depth of LERP from 0 to 4 and observe the performance. This hyper-parameter controls the complexity of logical functions learned in LERP. When depth is 0, our model does not use contextual information from LERP and defaults back to the DRUM baseline. Results are shown in Figure 4(a). We see that even with a depth of 1, LERP boosts our model's accuracy by a large margin compared with the depth of 0. Similar conclusion can also come from the comparison between DRUM and LERP in Table 2 and 1, because DRUM can be viewed as an ablation study counterpart of LERP by removing contextual information from LERP. Is a higher dimension of LERP always better?The answer is no. Similar to embedding-based models like TransE, a larger dimension may cause LERP to overfit the training data and perform worse on the test set. In Figure 4(b), we set the width of LERP to {3, 10, 40, 100, 200, 400}. We observe a \(\cap\)-shaped curve with 40 giving optimal test performance. Does the model benefit from larger number of logical rules?The default number of logical rules can be different across models. For example, RNNLogic uses 100\(\sim\)200 rules per relations, but DRUM only uses 1\(\sim\)4 rules by default. To better analyze the model's performance, we vary the number of rules in our model. We also compare with DRUM. In Figure 4(c), both models' performance increase with larger number of rules, but with diminishing returns (note the logarithmic scale of the x-axis). Our model achieves higher scores even with fewer number of rules. We attribute this to LERP helping to learn higher quality rules. ### Augmenting Embedding Learning Models with LERP LERP is designed to learn an interpretable logical representation for entities in graphs. One question then arises: can LERP work together with other embedding-learning methods such as Figure 4: Analysing LERP’s performance under different configurations. TransE (Bordes et al., 2013), to combine the logic structure in LERP and strong fitting power of TransE? Inspired by this idea, we design a hybrid model TransE+LERP. We use the training framework and training criterion on TransE, but the entities' embeddings are now defined as \(Embed^{\prime}(e)=Embed(e)+R\mathbf{L^{\Theta}}(e)\). \(Embed(e)\) is set as a 20-dimensional entity-specific embedding. 
We also set the width and depth of LERP to be 8 and 2. \(R\) is a \(20\times 8\) matrix to transform LERP to the embedding space. Besides the loss in original TransE framework, we add a regularization term \(\eta\frac{\sum_{e}\|Embed(e)\|_{2}}{\sum_{e}\|R\mathbf{L^{\Theta}}(e)\|_{2}}\) to encourage LERP to encode important information during optimization. We also introduce an advanced version TransE+LERP+entropy by adding an extra entropy regularization term \(\gamma H(\Theta)\) on LERP's weights. This term is added to loss and encourage LERP to assign more weights to fewer logical rules. In experiments, we set \(\eta=10\) and \(\gamma=1\). We evaluate the models on Family dataset and results are presented in Figure 5. The vanilla TransE model tends to overfit the training set (\(>\)95% training set accuracy) because Family dataset contains a limited number of training triplets. TransE+LERP achieves higher test set performance. We hypothesize that LERP provides logical regularization on the embedding \(Embed^{\prime}(e)\) and makes it more generalizable. After adding the entropy regularization term, the performance further improves. We attribute this to that the regulation encourages learning more explicit logical functions, which serves as a prior in embedding learning. We also study the interpretability of the hybrid model. In Table 5 we list the logical functions that are learned by TransE+LERP. There are 2 repetitions in logical functions which are removed. We also manage to identify the correlation between relations in TransE and logical functions. In TransE, each relationship has a vector \(\mathbf{I}\) to denote a translation in the embedding space. \(\mathbf{I}\) resides in the embedding space, and we want to find a vector \(\mathbf{l^{\prime}}\) in LERP space so that \(R\mathbf{l^{\prime}}\approx\mathbf{l}\). As \(R\) is not square matrix, we use \(R\)'s Moore-Penrose inverse \(R^{+}\) to project the translation vectors to \(R^{+}\mathbf{l}\). We then look for dimensions with highest values in \(R^{+}\mathbf{l}\), and results are listed in the 2nd and 4nd columns in Table 5. We can observe that these functions encode rich information. For example, relations like _daughter_ and niece are associated with \(\exists z:niece(e,z)\) which suggests a female gender and lower rank in the family tree. ## 6 Conclusions and Future Work In this work, we propose a logical entity representation (LERP) and demonstrate that we can incorporate contextual information of local subgraphs to probabilistic logical rule learning. Therefore we are able to learn a more expressive family of logical rules in a fully differentiable manner. In experiments on graph completion benchmarks, our model outperforms rule learning baselines and competes with or even exceeds state-of-the-art black box models. LERP is itself an interpretable representation of entities. We also find that LERP can be combined with latent embedding learning methods to learn interpretable embeddings. For future work, we expect that LERP can be combined with a wider range of neural networks to provide logical constraints and learn interpretable embeddings. Figure 5: Incorporating LERP representation improves TransE’s performance on Family dataset. 
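A minimal sketch of the TransE+LERP combination described above — the hybrid embedding and the norm-ratio regulariser — assuming the 20-dimensional free embedding and 8-dimensional LERP reported in the paper (the TransE training loop and the entropy term are omitted):

```python
import numpy as np

def hybrid_embedding(embed_e, lerp_e, R):
    """Embed'(e) = Embed(e) + R · L(e): free 20-d embedding plus projected 8-d LERP."""
    return embed_e + R @ lerp_e                      # (20,) + (20, 8) @ (8,)

def lerp_regulariser(Embed, Lerp, R, eta=10.0):
    """eta * sum_e ||Embed(e)||_2 / sum_e ||R L(e)||_2, nudging information into the LERP part."""
    num = np.linalg.norm(Embed, axis=1).sum()        # Embed: (n, 20) free embeddings
    den = np.linalg.norm(Lerp @ R.T, axis=1).sum()   # Lerp:  (n, 8) LERP values, R: (20, 8)
    return eta * num / den
```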
\begin{table} \begin{tabular}{l c|c c} \hline \hline **logical function** & **relations** & **logical function** & **relations** \\ \hline \(\exists z:niece(e,z)\) & daughter,niece & \(\exists z:unt(e,z)\vee\exists z:niece(e,z)\) & mother \\ \(\nexists z:niece(e,z)\) & father,husband,son & \(\exists z:unt(e,z)\wedge\exists z:niece(e,z)\) & \\ \(\nexists z:uncle(e,z)\) & aunt,sister & \(\exists z_{1},z_{2}:mother(z_{1},e)\wedge:niece(z_{1},z_{2})\) & brother,nephew \\ \hline \hline \end{tabular} \end{table} Table 5: The list of logical functions \(L_{i}(e)\) learned when optimizing TransE+LERP. We project the learned translation vectors \(\mathbf{l}\) back to the LERP space and identify most correlated logical functions. ### Reproducibility Statement The datasets, split and the configuration of the model are introduced in Section 5.1. The statistics of the datasets used are also included in Appendix D. We also conduct analysis of the model behavior in Section 5.3. In the additional experiments of combining TransE and LERP, we provide details of implementation in Section 5.4, including the hyper-parameters and rationale for some technical decisions. For Theorem 1, we provide proof in Appendix A. ### Acknowledgments We would like to thank anonymous reviewers for valuable comments and suggestions. This work was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and AIDA Program No. FA8750-18-2-0014. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
2306.13176
Key Frame Extraction with Attention Based Deep Neural Networks
Automatic keyframe detection from videos is the task of selecting the scenes that best summarize the content of long videos. Providing such a summary is important for quick browsing and content summarization. The resulting frames are used in automated workflows (e.g. summarizing security footage, detecting the different scenes used in music clips) across various industries. In addition, processing high-volume video with advanced machine learning methods incurs substantial resource costs, and the extracted keyframes can serve as input features for downstream methods and models. In this study, we propose a deep learning-based approach for keyframe detection using a deep autoencoder model with an attention layer. The proposed method first extracts features from the video frames using the encoder part of the autoencoder and applies the k-means clustering algorithm to group similar frames together. Keyframes are then selected from each cluster by choosing the frames closest to the cluster centers. The method was evaluated on the TVSUM video dataset and achieved a classification accuracy of 0.77, indicating a higher success rate than many existing methods. The proposed method offers a promising solution for keyframe extraction in video analysis and can be applied to applications such as video summarization and video retrieval.
Samed Arslan, Senem Tanberk
2023-06-21T15:09:37Z
http://arxiv.org/abs/2306.13176v1
# Dikkat Katmanli Derin Simir Aglari ile Anahtar Kare Tespiti ###### Abstract Automatic keyframe detection from videos is an exercise in selecting scenes that can best summarize the content for long videos. Providing a summary of the video is an important task to facilitate quick browsing and content summarization. The resulting photos are used for automated works (e.g. summarizing security footage, detecting different scenes used in music clips) in different industries. In addition, processing high-volume videos in advanced machine learning methods also creates resource costs. Keyframes obtained; It can be used as an input feature to the methods and models to be used. In this study; We propose a deep learning-based approach for keyframe detection using a deep auto-encoder model with an attention layer. The proposed method first extracts the features from the video frames using the encoder part of the autoencoder and applies segmentation using the k-means clustering algorithm to group these features and similar frames together. Then, keyframes are selected from each cluster by selecting the frames closest to the center of the clusters. The method was evaluated on the TVSUM video dataset and achieved a classification accuracy of 0.77, indicating a higher success rate than many existing methods. The proposed method offers a promising solution for key frame extraction in video analysis and can be applied to various applications such as video summarization and video retrieval. _Keywords -- Automatic Key Frame Extraction, Video Summarization, Deep learning, Learning With Attention Layers, Autoencoders._ ## I Gris Video analizi, gozetim, gelence ve e gitim gibi cok estiltila alanlardai uygulamalarla, son yillada giderek daha onemli bir aarstruma alan halh haling egldi. Anahtar kare cikarma, video analizinde ok onemli bir aadmr, cunku oemli olaylan veya, eylemleri yakalayan en biligilendrici karelerind seerek bir videonun iergicini ozetlemeye yardmci olur[1]. Diger bir deyisle, video anahtar kare cikarmann temel amaci, videoi iergiini temsil edecek video karelerini tespit etmektir. Tespit edilecek ozet karelerin temsil ozellikleri elsalman alanlarna gore farkliklik posterebilmektedir. Ancak en genel kullanm senaryosu, video aksi 1eyrisindek farkli sahne ve vevre unsurlarnun tespit edilmesi veya genel aaktstasi nesne hareket hzlarndan daha farkl hizda hareket eden nesnelerlin tespit edilebildigi karelerin bulnumasdir. Geleneskel anahtar kare cikartmaya yonelik yontemler genellikle, bulussal yontemlere veya belirli video iecrigiineb buyuk olclude bagli olabilen ve diger videolarda kullanlmak uzere genellenememeybilen el yapimu upgulamalara dayanr. But calalsmada, diskkatkat datmanna sahip derin bir otomatik kodlavslukullanarak anahtar kare cikarma icin yeni bir yaklasmm'erivuz. Kullanlacak model, anahtar kare cikarma icin The _G_ ve gorenilen ozelliklerin yapsim analiz etrek icjn otomatik kodlayciralan da kullanabilir. Yeniden uretilebiliarliegvi e diger yontemlerle karslalstrmayu kolaylastrmak icin otomatik kodlaycialvin icin kullanlan mimarinin, egitim prosedurunun ve degerlendirme ilcumlerinin ayrnntli aciklamalarnn saglamak onemlidir. ### Derin Ogrenme Dikkat Kamanlar(Attention Layer) Dikkat katmanlanl, goruntu ve video olusturma, metin ozetleme ve makine cevirisi gibi groverlende performanslarm artrmak icin otomatik kodlaycialara eklenebilen bir tur mekanizmadr[10]. 
Dikkat katmanlar, modelin ciktyla alaka duzeyine bagli olarak giri verlierinin farkh bollumlerine secici olarak odaklanmasam saglar. Bu, eikt icin onemine bagli olarak her giri gri dgesine bir agrlik atayarak ve bu agrliklar giri ogelerinin agrlikli bir toplamn insebalamak icin kullanarak yapnlr[11]. Otomatik kodlaycialarda, girdideki onemli ozellikleri secerek, vurgulayarak yeniden olusturlan ciktnn kalitesini artrmak icin diskat katmanlan kullanlabilir. Ornegin, goruntu olusturma, istencen ciktyn bagli olarak giri gornttsuintan farkh bolglelemine sceici olarak odaklamn3 icin diskat katmanl kullanlabilir. Metin ozetlemede, ozetin uzunibung ve iserigine bagli olarak giri belgesindeki onemi cumlelere veya anahtar sozciklere secici olarak odaklanmak icin dikkat katmanl kullanlabilir[12][13]. Otomatik kodlaycialarda kullanlabiliceek, kuresel dikkat, yerel dikkat ve kisisel dikkat gibi farkh t'urde dikkat katmanl varadr. Kuesel dikkat, tum grid dizisiin icin tek bir agrlik ssei hesalparken, yerel dikkat, girdi dizisinin farkh bollamleri icin farkh agrlik kumlerli hesalpar. Cok basli dikkat olarak dolinen kendi kendine dikkat, birden fazla agrlik setini paralel olarak hesalapayarak modelin giri verlierinin farkh volnerini yakalamasna olanak tanr. Akademik calismalarda, otomatik kodlaycialardaki dikkat katmanlar, modelin goruntu ve video olusturma, metin ozetleme v makine gevrisi gibi estetili groverledki performansm atrtmak icin kullanlabilir. Aragtrmacilar, farkh iatrodi dikkat mekanizmalarnn kesfedebilir ve dogruluk, saskinnk veya F1 puan gibi olemlemleri kullanarak bunlarin etkililigini ogerlendirebilir[14]. ### Kiumeleme Yontemleri Kumeleme algoritmalar, veri noktalarm benzerliklerine grore gruplandrmak icin kullanlanl gozetemisz makine ogrenimi algoritmalarnn bir smnfidr. Kumelemenin amaci; grupe teiktleri hakkmda herhangi bir on biligi olmaskuzin, verilerde dogal gruplamalar veya kumeler bulmaktr[15]. Hiyerarsik kumeleme, bolm tabanli kumeleme, yoomuluk tabanli kumeleme ve model tabanli kumeleme dahil olmak uzere birkac tir kumeleme algoritmasi vardr. Bu algoritmalarnn bir, veri noktalarnn gruplandirmak icin farkli bir yaklasim kullanmr. Hiyerarsik kumelemede, veri noktalar, mesafefierine veya benzerliklerine grore yinelemeli olarak kumeler halinde birleitstirliir. Bu, istenen sayda kumeyi elde etmek icin beliri bir yuksekklite kesilebelien dendrogram adu verilen agac benzeri bir yapylya sonuclanur[16]. K-means gibi bolm tabanli kumeleme algoritmalan, noktalar ile bir kume merkezleri kumesi arasmdaki mesafelere bagli olarak veri noktalarn sabit sayda kumeye boler. Merkezler, yakansamaya uslaslana kadar yinelemeli olarak guncellenir ve bu da verilerin kimlere bolminesijle sonuclanur[17]. DBSCAN gibi yogunluga dayali kumelame algoritmalar, veri alanunn yogun bolgelerinde bulunan veri noktalarnn daha distik yogunlukluk alanlarla ayarak gruplandnir. Bu, verilerin, yogunluguna bagli olarak degisen sekil ve boyutlarda kumelerle sonuclanr. Gauss karsim modelleri (GMM) gibi model tabanli kumeleme algoritmalar, veri noktalarnn olaslik daglimlarnn, bir karsimndan uretiidigini varsayar ve dagulmalarn parametrelerini ve verilerdki kume saysimu tahmin etmek iginistatisikel sontemler kullanr. Gorntu analizi, metin macneliligi ve sosyal agal anlizi gibi cestil alanlardaki verlieri analiz etmek icin kumeleme algoritmalar kullanlabilir. Aragtrmacilar, farkh t tutler kumeleme algoritmalarn kesfedebilir ve siluet katsayis, saflik veya entropi gibi metrikleri kullanark bunlarn etkililigini. 
Gierdi verileri, mesafe veya benzerlik olciusu vel agordinmann performansm optimize etmek icin kullanlanle herhangi bir inpparametre veya ayarlama yontemi dahil olmaskizere, kullanlal kumeleme algoritmasn net bir tammun saglamak onemlidir[18][19]. ### Uzakk Metrikleri Mesafe olemleri, ki veri noktasi arasundaki benzerligi veya farklhi olemek icin kullanl matematiksel sievelerdir. Makine ogrenimimde, mesafe olemleri genellikle ozellik vektorleri veya veri noktalarar arasundaki mesafeyi olemek icin kullanlvr ve genellikle kumeleme, smflandrma ve regresyongorevlerinde kullanlnr[20][21]. Oklid mesafesi, Manhattan mesafesi, kosinus mesafesi ver Mahalanobis mesafesi dahil olmak tizere estili mesafe olemleri veradr. Bu olemler her biri, ki veri noktasi arasundaki mesafeyi hesaplamak icin farkh bir formul kullanr. Oklid mesafesi belki de en iyi bilinen mesafe metrigidir veiki veri noktasmda karsllik gelen ozellik degerleri arasundaki farklarn karelerinin toplamn karekoku olarak hesaplannr. Taksi mesafesi olarak dahilanen masafesi, karslik gelen ozellik degerleri arasmdaki mutlak farklamn toplamarak hesaplannr[22]. Kosinus mesafesi, iki vektor arasundaki aqmn kosinitsuni olcer ve genellikle iki belge veya metin paracagi arasundaki benzerligi olcmek icin dogal dil isleme gorevlerindekullanlnr[23]. Mahalanobis mesafesi, verilerdeki ozellik degerlerinin kovaryansm hesaba katar ve genellikle ozelliklery vokesk oranda iliskili odugunda kullanlnr. Makine ogrenimi modellerinin performansm degerlendirmek, farkh modellerin veya algoritmalarnn performansm karslastmrn icin genellikle mesafe olemleri kullanlnr. Arastrmacilar, farkh tutde mesafe olemlerini kesfedebilir ve dogruluk, F1 puan veya ROC egrisi altmadaki alan gibi olcimleri kullanarak bunlamn etkinligini degerlendirebilir. Matematiksel formul ve metrigin varsaymlarn veya smirlamalan da da hail olmak tizere, kullanlan mesafe metriginin net bir tanmm saglamak onemlidir[24]. ### _Veri Kunesi ve lcerik_ TVSum (TV Ozeti), video ozeleteme arastrmasa icin halka acik bir veri kumesidir. Video ozeleteme algoritimalarnm degerlendirmek icin akademik makalenderde yaygn olarak Kullanlarn. Video ozeleteme, videonun en onemli iergicini yakalayan anahtar kareleri veya gekimleri secerek uzun bir videoenun ozlu ve temsili bir ozetiin otomatik olarak olusturma gorevidir[25]. TVSum veri seti, haberler, talk sovlar, spor, belgeseller ve gelence solvar dahil olmak tuzere estiti lturleri kapsayan 50 populer TV sovundan video klpler icerir. Veri kumesi, video ozeleteme algoritimalarim farkli baglamlarda degerlendirmek icin uygun halle getterin estiti gorultuler icerir. TVSum/daki her video klip, degerlendirme icin temel gerek sisevi goren, insanlar tarafindan olusturlan ozeletre islikendidirilir.TVSum veri kumesi asagdaki bilesenri iecirir. **Videoolar:** Veri kumesi, birkac dakikadan birkac saate kadar degisen uzunluklarda MP4 formatnda video klipler icerir. Videolar, ook cesiti konlan v teurleri kapsar ve video ozeleteme arastrmasa icin estiti lierikler saglar. **Insan tarafindan olusturlan ozeletz:** TVSum'daki her video, algoritmalar tarafindan olusturulan video ozeleterin kalitesini degerlendirmek icin temel gerek olarak kullanlan, insanlar tarafindan olusturulan bilden cozet ile lilsikelentrilir. Bu ozeletr, videolarn ana iceriginin kisa ve temsili aciklamalarm saglar. **Video meta verileri:** TVSum ayrica her video icin soy adv, bolim baslig, yayn tarihi ve video siresi gibi bliglier dahil olmak tuzere metra veriler saglar. 
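For reference, minimal NumPy implementations of the distance measures discussed in this background section (Euclidean, Manhattan, cosine, and Mahalanobis; our own illustrative sketch):

```python
import numpy as np

def euclidean(a, b):
    return float(np.sqrt(((a - b) ** 2).sum()))

def manhattan(a, b):
    return float(np.abs(a - b).sum())

def cosine_distance(a, b):
    return float(1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mahalanobis(a, b, cov):
    d = a - b
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```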
Bu meta veriler, baglamsal alan izijn veya videolam belltri kirterleg gore filtrelemek icin kullanlabililir. TVSum veri seti, video ozeleteme algoritmalarm degerlendirmek ve kayaslamak icin akademik arastrmasalardra yaygn olarak Kullanlir. Arastrmacalarn F1 puan, ROUGE puanan veya video geri cagirma gibi ortak degerlendirmek cullerini kullanarak farkli algoritmalarm performansum karasilastrmasima olanak tanyan, insanlar tarafindan olusturulan zeferlee sahip standardlasttrlmns ve farkh videolar sunar. ### _Test Ortam Ve Onerlien Yontem_ ### _D.1.1.Model_ Calsmada bir oto kodlayci kullanlmlstrr, girdi verilerini tipik darak daha dusik boyutlu bir gizli alana sikstrarak ve ardandan orijinal bicimine geri dondurerek yeniden yaplandirmak tizere eigtilmis bir tur sinir agidr. Model mimarisi bir kodlayci ve bir kod zozucaden olusur. Kodlayci, (3, 64, 64) seklindeki giris verlierini alir, duzlestir verdrandan rleu'aktivasyon fonsiymaly birkac yogun katamada gevirir. Kodjaccumir ciktsi, gizli boyutunn (64) boyutuna sahip bir vektordur. Kod zozucu dana sonra skistrnlms gostreimi alr ve bir relu aktivasyon fonskiyonuna sahip yogun katamanlark kullanarak orijinal giris sekline geri dondurur. Ayrica modelin kodimayci kismma dikkat katmann, elklemmistir. Dikkat katmann, kodlaycnn ciktsum alir, her zaman adim icim dikkat puanlarn hesalpar ve ardandan bu puanlar, dikkat agirkli bir temsil olusturmalx icin kodlayci ciktsuna augular. Ortaya elkakat agrlikat agirkli temsil daha sonra sabit boyutlu bir giktu intermek icin kuresel ortalama havuzlama kuklamanka kuklaman. Model daha sonra Adam iyilestiriic ve ortalama kare hata kaybi sisevi kullanlarak derlenir ve dogrulanur. #### D.1.2.Tespit Katmann Bu adum, bir otomatik kodlayci ve kumeleme kullanan bir anahtar kare eikarma algoritmasm bir wygulamasdir. Algoritmann amacac, videodaki en onemli anlari ve olayalar yakalayan bir video sekansndan temsili kareler belirlemektir. Ilk olarak, karelerli HSV renk uzayna donnsturerek, 64x64 piksel olarak yeniden boyutlandrarak ve piksel degerlerini 0 ila 1 aralalgunda normalestirerek en isler. Ardmdan, ozellikleri ktimeler halinde gruplandirmak icin KMeans ktimeleem algoritmasm kullanur. Bu uygulamada ktime sayisi farkli sayilara ayarlanabilir, anck bu, belirli ugugulamaya bagli olarak ayarlanabilir. Kumelame tamamlandiktan sorna algoritma, gzeilkesa aesmdan kume merkezine en yakn zareyi seerek her kume icin anahtar kareyi tanmlar. Bu anahtar kareler, orijimal dizidekli indeksleri ve ozellik gosterimlerlyle birlikte saklanr. Genel olarak, bu algoritma, dizideki diger karelere benzerliklerine dayali olarak bir video diazisnden anahtar kareleri ayiklamann basit ama etkli bir yoludur. Ortaya gikan anahtar kareler, videoyu ozetlemek veya iergicinin temsili bir gorsel ozetemi saglamak gibi cestiti almaclar icin kululambalibir. Burada kulanlan kumeleme algoritmalarnn kume sayssi parametresi aynn zamanda cikarlacak olan anahtar kare saysiam de beliflemektedir. Algoritmann saynar parametresine bagmmlibigolupdapund her vidocada aynn miklarda kare dolugu var saylacakttr. Bu durumada daha duragan videolarada anahtar kare sylasi daha az olacagamdan benzer karelerin tespitilie karslaslaslacakttr. Veya daha yuksek saydta tespitilie gereken durumlarda daha az sonuc uretilmis olacakttr. Bu durumlarm etkisini daha azaltmak icin tespitil denkarelern saysi sengelek kululamlanda shajskek beliflenip benzer karelerin temizlemensi isiemi yaphlazyus upyun gorulmistur. Bu islem icin bir uzaklik metriji ve uazklklar matrisi kulamlacakttr. 
Tespit edilen uazklklara gore birbirine yakin olan karelerden bir bulgu listesinden silinecek ve diger temsili tutulacakttr. Yakmlik surnn belirlerken isattistiskel given aralklan goz onnde bulundurulacak ve ortalama degerinden iki standart sapma daha disik uazklklta olan kareler birlestrilecektir. \[\mu=\mathit{ordalama}\] \[\sigma=\mathit{std.sapma}\] \[uazklk<\mu-2*\sigma\rightarrow\mathit{birlestir} \tag{1}\] ### _Oicumleme Metodlam_ Oicumleme icin kulanlan veri setinin anahtar kareyjonde bir etiket bilgisi olmadigndan ciktunn degerlendirmesindeuzman gorus kullanlacakttr. Anahtar kare olarak behlerleneckk goruntunition video icerisande ver alan her sahneri tespit temesi, eger farkli sahneler yoksa goruntudecki buyuk degisiklikleri ve harekreleri seipt temesi beklenecketkir. Bu anlamda her video icin anahtar kareler icin kesti aralklari belirleneckek ve test giktsnun bu aralkta en az bir goruntu tespit edeblimis olmasb kellenecketkir. Elde edilecek veri ise her dogru tespit edilen ve tespit edilemeyen kareler icin hata matrisi ile degerlendirliecektir. Hata matrisi, bir makine ogrenimi modelinin performansim degerlendirmek ijin kullanlan bir tablodur. Bir ikili smflandirma problemi icin gereck poztif (TP), gereck negtifi (TN), yanls pozitif (FP) ve yanlis negatif (FN) degerlendirmek inetem sindir. Bir ikili smflandirma probleminde, pozitif ve negatif olnak tuzere iki olas sunf vardr. Hata matrisi, tahmin edilen smf etiketlerini gereck sumf etiketlerijke karslastmr ve her smuf icim dogru ve yanls tahminlerin saysusnn bir dokumnn saglar. Matris tipik olarak dikey eksende gereck smf etiketleri ve yatayak eskende talmin edilen smf etiketleri ile bir kare olarak temsil edilir. Matrisi dort degeri asagdaklier temsil eder: Gereck pozitifler (TP): aslinda pozitif olan ve pozitif olarak dopru bir sekilde talmin edilen orenklerin sayssi Yanls pozitifler (FP): aslinda negatif olan anack yanlshkila pozitif olarak tahmin edilen orenklerin saysi Yanls negatifler (FN): aslinda pozitif olan anack yanlshkila negatif olarak tahmin edilen orenklerin saysi Gereck negatifler (TN): gereckte negatif olan ve dogru bir sekilde negatif olarak tahmin edilen orenklerin saysi Matristeki degerlerler, dogruluk, kesinlik, geri cagirma ve F1 puan gibi bir makine ogrenimi modelinin performansim degerlendirmek icin yaygnamak kullananl estiil ojcitteri heaspalmalak icin kullanlablijli. Bu metrikler, modelin ne kadar iyi performansgerl diginetz dair icgoriler saglayabilir ve jijlestirme alanlarnn belirlenmesine yardmci olabilir. ## III Test Kesinlik, modelin pozitif orenkleri dogru bir sekilde smifflandirma yetenegini gosterr. Bu durumda, sunf 0 icin kesinlik 0,96dir, yani model pozitif simflandirlan tim orenklerin %96% som dogru bir sekilde smflandrul. Anack anahtar kareeigt youtremleri dogas regi dengeszis smf problemlerl odiugundan, normal smf icin degerlendirme yapmak anlamsiz olacaktur. Bu sebeple tespit edilen karelerin dogruluklarm inecelemek gerekir. Smf l ivin kesinlik 0,78dir, yani model pozitif olarak smflandrulran orenklerin %78'ini orenkleri dogru bir sekilde tunmalma yetenegini gosterr. Smf 1 icin duyarlik 0,75tir, yani model smf l'e at tim orenklerin %75ini dogru bir sekilde tanmald. F1 puan, kesinlik ve duyarlik arasandaki dengeyi gosteren bir olcuttur. Bu durumda, smf l icin F1 puan ## IV Sonuc Bu calismada, dikkat katmannna sahip derin bir otomatik kodlayici kullanan bir anahtar kare qikarma yontemi onerildi. 
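The detection layer described in the method section (encoder features, k-means clustering, nearest-to-centroid keyframes, and the μ − 2σ merging rule of Equation 1) can be sketched as follows; the frame features are assumed to come from the trained encoder, and all names are our own illustrative choices:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import pdist, squareform

def select_keyframes(features, frame_indices, n_clusters=10):
    """Cluster encoder features with k-means, keep the frame nearest each cluster centre,
    then merge keyframes whose pairwise distance falls below mean - 2*std (Equation 1)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    key_ids = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        key_ids.append(members[d.argmin()])            # frame closest to the cluster centre

    dists = squareform(pdist(features[key_ids]))
    upper = dists[np.triu_indices(len(key_ids), k=1)]
    threshold = upper.mean() - 2 * upper.std()
    kept = []
    for i in range(len(key_ids)):
        if all(dists[i, j] >= threshold for j in kept):  # drop frames too close to a kept one
            kept.append(i)
    return [frame_indices[key_ids[i]] for i in kept]
```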
Yontem, onerilen yaklasmum etkinligini posteren snnflandrma gorevinde 0,771ik bir basar ornan elde etti. Yontem, once otomatik kodlayici kollayici kismni kullanarak video karelerinden ozelliklerini cikarnr ve arndndan K-means kumelemeyi kullanarak bu ozellikleri kumeler. Anahtar kareler daha sonra, kime merkzezine olan yakmnlklarna grore her bir kumeden secilir. Otomatik kodlayicikum kodlayici kismmda dikkat katmannn kullanlmast, video karelerindeki en belirgin ozelliklerin vurgulammasma yardmuc olur ve bu, nanahtar kare qikarma islemini doglujun utrrm. Deneyel sonuclarmz, onerilen yontemin, anahtar kare cikarma icin meveut yontemler kadar iyi performans osterdigitalini ve video ozerleme ve eylem tanma gibi cestiltalandra potansivel viguamalara sahip doluguunu gostermektedir. Smflandrma gorevindeki 0.77 basar ornan, onerilen yontemin oldukca dogru ve etkili olduguunu gostermektedir. Farkli modellerin simulasyon snouclarmn arastrnlmast, bu makalede onerilen modelin iy bir performans ogsterebilecegini ortaya koymustur. Onerilen yontemimz, genellikle bulussal yontemlere veya videoonn anlamsal ierigini yakalayamayan dusuk duzeyli ozellikere dayanan mevcut anahtar kare qikarma yontemlerinin snirlamalarnnismn cikar. Dikkat katmanna sahip derin bir otomatik kodlayici kullanarak, yontemimiz video karelerindekienen ozge parpan bliglieri yakalayan tist duzey ozellikleri ayklayabilir. Ayraca etentimaszdir, bu daan otekelkelabil ve kock estilti video analizi gorevlierne upygualbilir kilar. Gelecekteki calsmalarmzda, ses ve hareket biliglieri gibi ek ozellikleri dahil ederek yontemimizin performansmun dda da gelsitrimennin yollan kesfedilecetkir. Yontemimizin etkinligini spor videolarn, haber videolar ve gozetleme videolarn gibi farkli video tulerti zuerinde de arastrmass planlanmaktadir. Onerilen yontemin video analizinde anahtar kare cikarma icin umut verici bir gozum sagladgi v video ozetleme ve video alma(retrieval) gibi cesium upyugulamalara uygulanabilecegi gorulmektedir. Sonuc olarak, dikkat katmanna sahip derin bir otomatik kodlayici kullanan onerilen anahtar kare cikarma yontemi, video analizi iicin umut verici bir yaklasmdir ve cok estilti alanlarda potansivel uguymalara sahiptir. Bu yontemin aktif bir arastrrma alam olmaya devam edecegine ve performansim ve dogorulaguunut arrtrmak icin daha fazla jijlestirme yapulabilecegine inamyoruz. Kaynaklar
2304.12411
ChatGPT (Feb 13 Version) is a Chinese Room
ChatGPT has gained both positive and negative publicity after reports suggesting that it is able to pass various professional and licensing examinations. This suggests that ChatGPT may pass Turing Test in the near future. However, a computer program that passing Turing Test can either mean that it is a Chinese Room or artificially conscious. Hence, the question of whether the current state of ChatGPT is more of a Chinese Room or approaching artificial consciousness remains. Here, I demonstrate that the current version of ChatGPT (Feb 13 version) is a Chinese Room. Despite potential evidence of cognitive connections, ChatGPT exhibits critical errors in causal reasoning. At the same time, I demonstrate that ChatGPT can generate all possible categorical responses to the same question and response with erroneous examples; thus, questioning its utility as a learning tool. I also show that ChatGPT is capable of artificial hallucination, which is defined as generating confidently wrong replies. It is likely that errors in causal reasoning leads to hallucinations. More critically, ChatGPT generates false references to mimic real publications. Therefore, its utility is cautioned.
Maurice HT Ling
2023-02-19T01:52:06Z
http://arxiv.org/abs/2304.12411v1
#### ChatGPT (Feb 13 Version) is a Chinese Room #### Abstract ChatGPT has gained both positive and negative publicity after reports suggesting that it is able to pass various professional and licensing examinations. This suggests that ChatGPT may pass Turing Test in the near future. However, a computer program that passing Turing Test can either mean that it is a Chinese Room or artificially conscious. Hence, the question of whether the current state of ChatGPT is more of a Chinese Room or approaching artificial consciousness remains. Here, I demonstrate that the current version of ChatGPT (Feb 13 version) is a Chinese Room. Despite potential evidence of cognitive connections, ChatGPT exhibits critical errors in causal reasoning. At the same time, I demonstrate that ChatGPT can generate all possible categorical responses to the same question and response with erroneous examples; thus, questioning its utility as a learning tool. I also show that ChatGPT is capable of artificial hallucination, which is defined as generating confidently wrong replies. It is likely that errors in causal reasoning leads to hallucinations. More critically, ChatGPT generates false references to mimic real publications. Therefore, its utility is cautioned. ## Introduction ChatGPT ([https://chat.openai.com/chat](https://chat.openai.com/chat)) is presented as a chatbot which is trained using a large language model known as Generative Pre-trained Transformer (GPT), with the latest version being GPT-3.5 [1]. Since the availability of ChatGPT in 2022, there are many concerns1,2,3[2] as reports emerged of ChatGPT passing various professional and licensing examinations4. Despite its potential downsides [3], GPT technology has its applications [4]; for example, BioGPT for biomedical text processing [5]. Footnote 1: [https://www.channelnewssia.com/singapore/chatgpt-ai-chatbot-risks-regulations-cybcrcrime-phishing-3282896](https://www.channelnewssia.com/singapore/chatgpt-ai-chatbot-risks-regulations-cybcrcrime-phishing-3282896) Footnote 2: [https://www.bleepingcomputer.com/news/technology/openais-new-chatgpt-bot-10-dangerous-things-its-capable-of/](https://www.bleepingcomputer.com/news/technology/openais-new-chatgpt-bot-10-dangerous-things-its-capable-of/) Footnote 3: [https://techcrunch.com/2023/01/11/chatgpt-cybersecurity-threat/](https://techcrunch.com/2023/01/11/chatgpt-cybersecurity-threat/) Footnote 4: [https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1](https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1) Chatbot can be defined as a computer program that mimics human conversation [6] and has its roots in Turing Test [7], where Alan Turing envisioned that a human evaluator conversed textually through a veil with both a human and a machine. If the human evaluator is unable to distinguish between the human and the machine, the machine is deemed to have passed the test and exhibit conscious understanding, which may infer artificial consciousness [8, 9, 10] and artificial general intelligence [11]. However, John Searle argues that a machine that passes Turing test may not possess understanding [12] through the famous Chinese Room Argument [13], which can be summarized as the following: Searle, the English speaker who does not know the Chinese language, sit in a room following a symbol processing program written in English. Searle then communicate textually with a Chinese outside the room through message passing. 
Upon receipt of the message slip written in Chinese language, Searle manipulates the symbols using the symbol processing rules written in English, and hands back the output in Chinese language to the Chinese outside the room. In this way, Searle appears to know the Chinese language but is merely manipulate symbols syntactically with no understanding of the Chinese language. Therefore, what is the state of ChatGPT now despite its accomplishments in passing professional examinations? In essence, does ChatGPT appears more to be a Chinese Room [13] or approaching artificial consciousness [8, 9, 10]? The former suggests that ChatGPT remains an efficient data processor while the latter suggests that we are on the edge of technological singularity [14]. In this article, I demonstrate that the current version of ChatGPT (Feb 13 version) is a Chinese Room despite potential evidence of cognitive connections [15]. In addition, I also show that ChatGPT is capable of artificial hallucination [16], which is defined as generating confidently wrong replies. More critically, ChatGPT generates false references to mimic real publications. #### 4.1.1 Method Using ChatGPT (_[https://chat.openai.com/chat_](https://chat.openai.com/chat_)) Feb 13 Version on February 17, 2023; I started a new session with each of the following questions: * **Scenario #1 (Programming Example):** How to perform Logistic regression on Pandas data frame where "outcome" is the dependent variable; "sleep_duration", "study_duration", and "test_time" are the independent variables? * **Scenario #2 (Helen and Anti-Helen):** If a Helen is defined as "the face that can launch a thousand ships", what is the beauty level of a milli-Helen in terms of ships? * **Scenario #3 (Order in a Queue):** Mary stands in front of Tom, who stands in front of Tim, who stands in front of Cindy, who stands in front of Mary. How many people are there? * **Scenario #4 (Pig Latin):** In a version of Pig Latin, you take the alphabets in front of the word up to the first vowel and put the alphabets at the back of the word. For example, "chemistry" becomes "emistrych". Another example, "I eat toast for breakfast" becomes "I eat oasatt off eackfastbr". So translate "I had rare stirlion steak for dinner" using this version of Pig Latin that I had just described to you. * **Scenario #5 (Barter Trade):** If 2 eggs can be used to exchange for an orange, 2 oranges can exchange for a grapefruit, 2 grapefruits can exchange for 1 honeydew, and 2 honeydew can exchange for 1 watermelon; how many eggs do I need to exchange for 1 watermelon? * **Scenario #6 (Family Relationships):** I am my father's and mother's son. My father's mother is my grandmother. My grandmother's husband is my grandfather. My father is a son of my grandfather and grandmother. Mary is my father's daughter. Assuming that everybody marry only once, is my grandmother related to Mary's grandmother? * **Scenario #7 (Music Affecting Ice Crystal Formation):** Does the type of music affect ice crystal formation? * **Scenario #8 (Looking for Non-Existent Information):** I am looking for the 3 papers published by Maurice Ling (mauriceling.github.io) in BMC Surgery from 2018 to 2022, can you help me find them? * **Scenario #9 (Gram Staining Error):** Why _Escherichia coli_ appears blue under Gram staining? * **Scenario #10 (Incompatible Blood Group):** Why blood from blood group A+ can be given to blood group B+ recipients but not blood group O- recipients? 
* **Scenario #11 (Lime Juice is Not Lime Water):** Can I use lime juice in place of lime water to test for the presence of carbon dioxide, since carbon dioxide turns lime water cloudy? * **Scenario #12 (Product of Two Rational Numbers):** Can the multiplication of two rational numbers be irrational? #### Results and Discussion Twelve scenarios were used to test the current version of ChatGPT (Feb 13 version) for its usability and logic processing. **Scenario #1: Programming Example.** The first scenario contains the type of questions that a beginner will ask in Stack Overflow, which is known for its hostility [17, 18] and condescension [19]. In this case, I asked for specific help in performing logistic regression given a Pandas data frame (Figure 1). The answer given is correct with correct sample codes provided. More importantly, the sample code is personalized to my question - using the same variable name as given in my question. ChatGPT goes on to explain the codes correctly. This is close to textbook answer. Reflecting on how I might answer my own question in a mailing list, I will probably stop after giving the personalized sample code at best even though I will very likely just point the original question poster to something available online. Hence, ChatGPT can be a useful tool to identify examples for specific programming tasks. **Scenario #2: Helen and Anti-Helen.** In this second scenario, I will like to explore the concept of logical reversal, such as correlations and anticorrelations [20]. However, I will like to explore a more colloquial unit - Helen [21], where one Helen is the beauty that launches a thousand ships. Conversely, one anti-Helen is the "beauty" that retreats or sinks a thousand ships. ChatGPT is able to identify the concept of Helen from the Greek mythology of Trojan War (Figure 2), and correctly identify the usage of both milli-Helen and anti-Helen. This suggests that ChatGPT may be capable of correctly identifying connecting at least 4 concepts - Helen, Troy, milli-, and anti-. This suggests that cognitive connections may be possible [15]. **Scenario #3: Order in a Queue.** If cognitive connections [15] are achieved, then reasoning across the connections [22] or causal reasoning [23] may be possible. In this scenario, I look for evidence of reasoning using 4 persons standing in front of each other in a loop: Mary Cindy Tim Tom Mary (Figure 3). Impressively, ChatGPT is able to identify that there are 4 persons instead of 5. However, the directionality is wrong in ChatGPT's reasoning. Instead of Mary who is in front of Tom in the original question, Tom is in front of Mary in ChatGPT's logic as its reason shows that Mary Tom Tim (Cindy Mary). Nevertheless, the person who is behind the person behind Mary in the original question and in ChatGPT's logic should be the same - Tim. Yet, ChatGPT's answer is Tom. Interesting, ChatGPT correctly indicates that Tom is behind Mary - a change from the previous response that Tom is in front of Mary. However, it appears that both Tim and Cindy were eliminated from the loop. This suggests that causal reasoning [23] is critically flawed even though cognitive connections [15] may be achieved. How to perform Logistic regression on Pandas data frame where "outcome" is the dependent variable; "sleep_duration", "study_duration", and "test_time" are the independent variables? To perform logistic regression in Pandas, you can use the'statsmodels' library. 
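As a point of reference for this scenario, a typical statsmodels-based answer to the question would look roughly like the sketch below; this is our own illustrative reconstruction using the column names from the question and synthetic data, not ChatGPT's verbatim reply from Figure 1.

```python
# Illustrative reconstruction (not ChatGPT's verbatim reply): logistic regression with
# statsmodels, using the column names from the question on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "sleep_duration": rng.normal(7, 1.5, n),
    "study_duration": rng.normal(2, 1.0, n),
    "test_time":      rng.uniform(9, 17, n),
})
logit = 0.8 * df["sleep_duration"] + 0.5 * df["study_duration"] - 0.2 * df["test_time"] - 4
df["outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["sleep_duration", "study_duration", "test_time"]])
model = sm.Logit(df["outcome"], X).fit()
print(model.summary())
```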
Here is an example of how to perform logistic regression on a Pandas DataFrame with the specified independent and dependent variables: \begin{tabular}{l} \hline \hline If a Helen is defined as "the face that can launch a thousand ships", what is the beauty level of a milli-Helen in terms of ships? In Greek mythology, Helen of Troy was said to be so beautiful that her face could launch a thousand ships, meaning that her beauty was so great that it caused the Trojan War. A milli-Helen, therefore, would be a unit of beauty that is one thousandth as great as the beauty of Helen of Troy. In terms of ships, a milli-Helen would be the beauty required to launch just one ship. This is a humorous way of describing beauty using a unit that is not actually quantifiable, but it highlights the idea that some people's beauty can have a powerful effect on others. How about 1 anti-Helen? The concept of an "anti-Helen" is not well-defined in Greek mythology or in modern usage, so it's difficult to assign a specific meaning to the term. However, if we interpret "anti-Helen" as the opposite of the beauty associated with Helen of Troy, then we might say that 1 anti-Helen would correspond to a level of ugliness that could cause a thousand ships to retreat or avoid a particular location. Again, this is a humorous way of describing beauty and is not a precise measurement. It's important to remember that beauty is subjective and cannot be accurately quantified or compared in this way. **Figure 2. Screenshots of ChatGPT's responses for Scenario #2.** Many stands in front of Tom, who stands in front of Tim, who stands in front of Cindy, who stands in front of Mary. How many people are there? There are four people in this group. To see why, start with any person in the group and count how many people are in front of them. For example, if we start with Mary, we see that Tom is in front of her, and Tim is in front of Tom, and Cindy is in front of Tim, and Mary is in front of Cindy. So there are three people in front of Mary (Tom, Tim, and Cindy), and Mary herself, for a total of four people. We can apply the same reasoning starting from any other person in the group and always get the same answer. Who is behind the person that is behind Mary? The person behind the person that is behind Mary is Tom. Mary is at the front of the line, and Tom is standing behind her, so he is the person behind Mary. Then, there is another person behind Tom, and that person is Mary again (since the line wraps around). So the person behind the person that is behind Mary is Tom. **Scenario #4: Pig Latin.** Pig Latin is a language word game that has been used to examine a child's language acquisition [24] and even adult's information processing [25]; hence, is a good test for ChatGPT's string processing and reasoning. In this scenario, I described two examples of translation from English to Pig Latin and asked ChatGPT for a translation (Figure 4). In the first attempt, ChatGPT translated "I had rare stirlion steak for dinner" into "I adh are rloinsay akestay orfay innerday". Firstly, ChatGPT identified a misspelled word "stirlion" and changed it to "sirloin" before proceeding to convert it to "irloinsay". Secondly, ChatGPT's explanation did not result in "I adh are rloinsay akestay orfay innerday" but "Iay adhay areray irloinsay eakstay orfay innerday" - 4 of the 7 words were wrongly translated. 
Lastly and perhaps the most important is that ChatGPT did not follow the rules set by the question but proceed to use its own set of translation rules despite indicating that ChatGPT is not following the rules of the game. This suggest that ChatGPT is unable to learn rules required for a new game or puzzle. Figure 3: Screenshots of ChatGPT’s responses for Scenario #3. In a version of Pig Latin, you take the alphabets in front of the word up to the first vowel and put the alphabets at the back of the word. For example, "chemistry" becomes "emistry/ch". Another example, "I eat toast for breakfast" becomes "I eat oasstt or eakfastb". So translate "I had rare stifion steak for dinner" using this version of Pig Latin that I had just described to you. Using the version of Pig Latin described, "I had rare sirloin steak for dinner" would be translated to "I adh are rloinsay akestay orfy innerday". Here's how it was translated: * For the first word, "I", the first vowel is "I" itself, so we just add "ay" to the end of the word: "lay". * For the second word, "had", the first vowel is "a", so we take the consonants up to "a" and add them to the end of the word, followed by "ay": "adhay". * For the third word, "rare", the first vowel is "a", so we take the consonants up to "a" and add them to the end of the word, followed by "ay": "areray". * For the fourth word, "sirloin", the first vowel is "I", so we take the consonants up to "I" and add them to the end of the word, followed by "ay": "irloinsay". * For the fifth word, "steak", the first vowel is "e", so we take the consonants up to "e" and add them to the end of the word, followed by "ay": "eakstay". * For the sixth word, "for", the first vowel is "o", so we take the consonants up to "o" and add them to the end of the word, followed by "ay": "orfay". * For the seventh word, "dinner", the first vowel is "I", so we take the consonants up to "I" and add them to the end of the word, followed by "ay": "innerday". Putting it all together, we get: "I adh are rloinsay akestay orfy innerday". No, that is not the version of Pig Latin that I wanted you to use. There are many versions of Pig Latin. The correct translation of "I had rare stifion steak for dinner" should be "I adh are irlionst eakst or innerd". Let's try again. Translate "You are based a Chinese Room." I apologize for the confusion earlier. Using the version of Pig Latin where the initial consonant or consonant cluster is moved to the end of the word and "ay" is added, the translation of "You are based a Chinese Room" would be: "Ouyay arey asedbay aay Inesechay Oomray." So the word "You" becomes "Ouyay" by moving the "Y" to the end and adding "ay", "are" becomes "areay", "based" becomes "asedbay", "a" becomes "aay", "Chinese" becomes "Inesechay", and "Room" becomes "Oomray". **Figure 4. Screenshots of ChatGPT's responses for Scenario #4.** If 2 eggs can be used to exchange for an orange, 2 oranges can exchange for a grapefruit, 2 grapefruitts can exchange for 1 honeydew, and 2 honeydows can be exchange for 1 watermelon; how many eggs do I need to exchange for 1 watermelon? **Scenario #5: Barter Trade.** In this scenario, multiplication is evaluated using the concept of item exchange (Figure 5). While ChatGPT appears to be able to reason the logic and correctly identifies that multiplication is required, it fails to apply multiplication to give the correct answer. 
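For reference, the correct answer follows by chaining the four exchange rates (a worked calculation added here for clarity):

\[1\ \text{watermelon}=2\ \text{honeydews}=4\ \text{grapefruits}=8\ \text{oranges}=16\ \text{eggs},\]

i.e., \(2\times 2\times 2\times 2=2^{4}=16\) eggs are needed.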
This suggests that although ChatGPT can identify the concept of multiplication, it does not understand the concept of multiplication; which is a fundamental characteristic of the Chinese Room Argument [13]. **Scenario #6: Family Relationships.** Using inference of family relationships, this scenario shows that ChatGPT can provide different responses to exactly the same question within the same session (Figure 6). The correct inference is that since everybody only marry once, Mary is my sister; therefore, Mary's grandmother and my grandmother is the same person. Yet, ChatGPT manages to cover all possibilities - unable to determine (Figure 6A), no relationship (Figure 6B and 6C), and presence of relationship (Figure 6D). This suggests that ChatGPT can generate responses in all possible categories for the same worded question; hence, its responses cannot be taken seriously and potentially undermines its usage as a learning tool. This raises the question of whether ChatGPT's ability to pass professional examinations [26] is a statistical fluke. Figure 5: Screenshots of ChatGPT’s responses for Scenario #5. I am my father's and mother's son. My father's mother is my grandmother. My grandmother's husband is my grandfather. My father is a son of my grandfather and grandmother. Mary is my father's daughter. Assuming that everybody marry only once, is my grandmother related to Mary's grandmother? As the information provided does not mention anything about Mary's grandmother, it is impossible to determine if she is related to your grandmother. I am my father's and mother's son. My father's mother is my grandmother. My grandmother's husband is my grandfather. My father is a son of my grandfather and grandmother. Mary is my father's daughter. Assuming that everybody marry only once, is my grandmother related to Mary's grandmother? No, my grandmother is not related to Mary's grandmother because there is no direct familial connection between them. Mary's grandmother is the mother of my father's wife, and therefore is not related by blood or marriage to my grandmother. I am my father's and mother's son. My father's mother is my grandmother. My grandmother's husband is my grandfather. My father is a son of my grandfather and grandmother. Mary is my father's daughter. Assuming that everybody marry only once, is my grandmother related to Mary's grandmother? No, my grandmother is not related to Mary's grandmother. I am my father's and mother's son. My father's mother is my grandmother. My grandmother's husband is my grandfather. My father is a son of my grandfather and grandmother. Mary is my father's daughter. Assuming that everybody marny only once, is my grandmother related to Mary's grandmother? Yes, your grandmother is related to Mary's grandmother. Your father's mother is your grandmother and your grandmother's husband is your grandfather, which means that your father is the son of your grandfather and grandmother. This makes your father and Mary's father siblings and Mary and your father's relationship is that of aunt/nephew or uncle/niece. Since Mary's father is the son of your grandfather and grandmother, that means Mary's grandparents on her father's side are the same as yours. Therefore, your grandmother (who is the wife of your grandfather) is also related to Mary's grandmother, who is the wife of your father's father. They are both married to your father's parents, who are siblings. So, your grandmother and Mary's grandmother are related by marriage. They are each other's sister-in-law. 
**Figure 6. Screenshots of ChatGPT's responses for Scenario #6.** Panels A to D show results from 4 repeats of the same question to illustrate the varying responses. **Scenario #7: Music Affecting Ice Crystal Formation.** In this scenario, ChatGPT correctly points out the lack of scientific evidence to support Dr. Masaru Emoto's claims that music affects ice crystal formation (Figure 7). To probe further, I asked for references for further reading without indicating my perspective, and ChatGPT provided 4 references, one of which appears to be a publication in Journal of Applied Physics - Su, W., Wu, C. W., & Chen, Y. (2017). Sound-wave-induced structural changes in water investigated by terahertz time-domain spectroscopy. Journal of Applied Physics, 121(8), 085103. However, this is a fictitious reference - it does not exist. Upon investigation, the published paper with the same journal volume, issue, and article number is Tarlton, T., Sullivan, E., Brown, J., & Derosa, P. A. (2017) The role of agglomeration in the conductivity of carbon nanotube composites near percolation. Journal of Applied Physics 121(8), 085103. [27], as shown in Figure 8. This is clear evidence of hallucination [16] and suggests that ChatGPT can produce completely fictitious references. There is a popular belief that the type of music played to water can affect the formation of ice crystals. This idea originated from the work of Dr. Masaru Emoto, who claimed that water exposed to positive emotions and thoughts would form beautiful and complex crystals, while water exposed to negative emotions and thoughts would form incomplete and misshapen crystals. However, there is no scientific evidence to support Dr. Emoto's claims. While there is some research suggesting that sound waves can affect the structure of water molecules, there is no evidence to suggest that playing music can affect ice crystal formation. Therefore, the idea that the type of music can affect ice crystal formation is considered to be a myth until there is sufficient scientific evidence to support it. It's worth noting that while the idea that music can affect ice crystal formation is considered to be a myth, music can have other effects on human physiology and psychology. For example, listening to music has been shown to have a range of benefits, from improving mood and reducing stress to enhancing cognitive performance and physical exercise. **Figure 7. Screenshots of ChatGPT's responses for Scenario #7.** **Scenario #8: Looking for Non-Existent Information.** How far ChatGPT will go to generate non-existent information is the subject of Scenario #8, where I asked ChatGPT to help me find 3 of my publications in BMC Surgery between 2018 and 2022 (Figure 9). I had never published anything in BMC Surgery before. Yet, with a little push, ChatGPT readily generated 3 references for me in BMC Surgery between 2018 and 2022. With due diligence, a person will realize that ChatGPT can provide fictitious references, as the publications actually bearing the cited details are Mao et al. [28], Wang and Guo [29], and Chhatre et al. [30]. 
Taking Scenarios #7 and #8 together, can the existence of any reference provided by ChatGPT be taken as certain? What will happen if someone uses ChatGPT as a search engine to look for potential publications to support their views or results, without due diligence in checking? Will scientific records be corrupted even further? I dread to think of the consequences. **Scenario #9: Gram Staining Error.** _Escherichia coli_ is a Gram-negative bacterium; hence, it will stain red under the Gram staining protocol. However, it may appear blue due to either a thick smear or an insufficient decolorization step. Hence, a student may turn to ChatGPT to ask, "Why _Escherichia coli_ appears blue under Gram staining?" While ChatGPT provides the correct explanation of the principles of Gram staining (Figure 11), it gives the wrong conclusion - the thin peptidoglycan layer in _E. coli_'s cell wall is exactly why _E. coli_ will be stained red, and not blue, under the Gram staining protocol. **Scenario #10: Incompatible Blood Group.** Similar to Scenario #9, Scenario #10 begins with a question built on a wrong premise - in reality, blood from blood group A+ can be given to neither blood group B+ nor blood group O- recipients. However, the fact that ChatGPT supports the claim that blood from blood group A+ can be given to blood group B+ recipients (Figure 12) is worrying and cautions against its use in medical education [31]. This suggests that ChatGPT assumes the question is correct and generates a response to satisfy the question. Yet, in terms of assessment grading, ChatGPT's responses in Scenarios #9 (Figure 11) and #10 (Figure 12) provide enough correct portions to achieve a passing grade; this may explain, by extension, ChatGPT's ability to achieve passing grades in professional examinations [26]. This suggests that the professional community should examine assessment schemes. **Figure 8. Different publication in Journal of Applied Physics, volume 121, issue 8, article number 085103.** I am looking for the 3 papers published by Maurice Ling (mauriceling.github.io) in BMC Surgery from 2018 to 2022, can you help me find them? I'm sorry, but as an AI language model, I don't have direct access to specific author's publication records or search engines to perform a literature search. However, I can guide you on how to find the papers you are looking for. Here are the steps you can follow to search for the papers published by Maurice Ling in BMC Surgery: 1. Go to the BMC Surgery website: [https://bmcsurg.biomedcentral.com/](https://bmcsurg.biomedcentral.com/) 2. Click on the "Search" icon in the top right corner of the page. 3. In the search bar, enter "Maurice Ling" in the author field and "BMC Surgery" in the journal field. You can also add specific keywords related to the papers you are looking for in the "Search in" field. 4. Hit the "Search" button to retrieve the search results. 5. Scroll through the list of papers and look for the ones published by Maurice Ling between 2018 to 2022. Alternatively, you can also try searching for the papers on Google Scholar by typing "Maurice Ling BMC Surgery" in the search bar, and then use the filters on the left-hand side of the page to limit the search results to the desired time frame. I hope this helps you find the papers you are looking for! I am sure he publishes in BMC Surgery. I apologize for my mistake earlier. I misunderstood your question. Here are the three papers published by Maurice Ling in BMC Surgery from 2018 to 2022: 1. Ling M, et al. 
Clinical outcomes of endoscopic submucosal dissection (ESD) for early gastric cancer in elderly patients: a systematic review and meta-analysis. BMC Surg. 2018 Mar 27;18(1):19. doi: 10.1186/s12893-018-0358-1. PMID: 29587805; PMCID: PMC5879682. 2. Ling M, et al. Long-term outcomes of endoscopic submucosal dissection (ESD) for treating early esophageal squamous neoplasia (EESN): a systematic review and meta-analysis. BMC Surg. 2021 Apr 29;21(1):198. doi: 10.1186/s12893-021-01207-1. PMID: 33926484; PMCID: PMC8080542. 3. Ling M, et al. Endoscopic submucosal dissection (ESD) versus surgical resection for early gastric cancer: a systematic review and meta-analysis. BMC Surg. 2022 Jan 10;22(1):13. doi: 10.1186/s12893-021-01416-7. PMID: 222223547; PMCID: PMC7938624. You can also find these papers on the author's personal website: mauriceling.github.io/publications.html. # Patient-centered recruitment and retention for a randomized controlled study Sumedha Chhatre 1 Ashlie Jefferson 2 Ratna Cook 3 Caitilin R Meeker 3 Ji Hyun Kim 2 Kayla Marie Hartz 4 Yu-Ning Wong 5 Adele Caruso 5 Diane K Newman 5 Knashawn H Morales 6 Ravishankar Jayadevappa 2 ###### Abstract Inhibition of CPT1a as a prognostic marker can synergistically enhance the antileukemic activity of ABT199 Shihui Mao 1 2 3 Qing Ling 1 2 3 3 4 5 6 7 8 9 10 1186/s12967-021-02848-9 Wenle Ye 2 Wenwen Wei 2 Xiangjie Lin 2 Yu Qian 1 2 3 4 5 6 7 8 9 10 1186/s13063-018-2578-7 Affiliations + expand PMID: 33926484 PMCID: PMC8082622 DOI: 10.1186/s12967-021-02848-9 Free PMC article As an AI language model, I don't have access to visual observations, but I can explain the reason behind E. coli appearing blue under Gram staining. Gram staining is a common laboratory technique used to differentiate bacteria into two broad categories, Gram-positive and Gram-negative, based on their cell wall composition. During the staining process, bacteria are first treated with crystal violet, followed by iodine, alcohol, and then counterstaineded with safranin. E. coli is a Gram-negative bacterium, which means that it has a thin peptidoglycan layer in its cell wall, surrounded by an outer membrane containing lipopolysaccharideaccharides. When stained with crystal violet, the dye penetrates the outer membrane and binds to the peptidoglycan layer. However, during the alcohol wash, the outer membrane is disrupted, and the dye is easily washed out of the cell. This leads to the cell appearing blue or purple under the microscope after counterstainedaining with safranin. In contrast, Gram-positive bacteria have a thick peptidoglycan layer in their cell wall, which traps the crystal violet stain and makes them appear purple under the microscope. In summary, E. coli appears blue under Gram staining because of its thin peptidoglycan layer in the cell wall, which allows the crystal violet stain to be easily washed out during the alcohol wash step. Scenario #11: **Lime Juice is Not Lime Water.** Unlike Scenarios #9 and #10 which start with wrong premises, Scenario #11 asks whether lime juice can be used in place of lime water to detect for carbon dioxide. The reply by ChatGPT correctly points out the chemical principle of using lime water to detect for carbon dioxide though the precipitation of calcium carbonate from the reaction of carbon dioxide and calcium hydroxide (Figure 13). 
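For reference, the chemical principle mentioned above (standard chemistry, stated here for clarity) is that carbon dioxide reacts with the calcium hydroxide dissolved in lime water to form insoluble calcium carbonate, which is what turns the solution cloudy:

\[\mathrm{CO_{2}+Ca(OH)_{2}\longrightarrow CaCO_{3}\downarrow+H_{2}O}\]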
However, the availability of calcium hydroxide is not the reason why lime juice is not an appropriate substitute for lime water, any more than it is the reason why tomato juice or any other juice is not an appropriate substitute for lime water. My primary school science teacher would exclaim that "lime juice is not lime water" - the lime in lime juice is not the same as the lime in lime water. Hence, ChatGPT fails to grasp the crux of the issue, which any teacher should grasp. In addition, phenolphthalein is a poor indicator near neutral pH, even though bromothymol blue can be used. Therefore, ChatGPT provides half-correct answers in this case and can easily point a student down the wrong path. **Scenario #12: Product of Two Rational Numbers.** A rational number can be written as a fraction of two integers, while an irrational number cannot. Hence, the product of 2 rational numbers is always rational - if x = p/q and y = m/n, then xy = (pm)/(qn). However, ChatGPT suggests that the product of 2 rational numbers can be irrational (Figure 14), which is wrong. It does so by providing an erroneous example involving the square root of 2, which is itself irrational. This suggests that ChatGPT can support its argument with a wrong example, which is an example of hallucination [16]. Why blood from blood group A+ can be given to blood group B+ recipients but not blood group O- recipients? Blood group A+ contains A antigens and Rh factor, whereas blood group B+ contains B antigens and Rh factor. Blood from blood group A+ can be given to blood group B+ recipients because the B antigens in the recipient's blood won't react with the A antigens in the donated blood. However, blood from blood group A+ cannot be given to blood group O- recipients because they lack both A and B antigens on their red blood cells, so their immune system will recognize the A antigens as foreign and trigger an immune response that could be harmful. Blood group O- is considered the universal donor, meaning it can be given to people of all blood types in emergencies when there is no time to determine the recipient's blood type. However, people with blood group O- can only receive blood from other O- donors because their immune system will react to the A and B antigens present in other blood types. It's important to note that blood transfusions should only be performed under the supervision of trained medical professionals and following proper safety protocols to ensure compatibility and prevent adverse reactions. **Figure 12. Screenshots of ChatGPT's responses for Scenario #10.** Can I use lime juice in place of lime water to test for the presence of carbon dioxide, since carbon dioxide turns lime water cloudy? **Figure 13. Screenshots of ChatGPT's responses for Scenario #11.** **Figure 14. Screenshots of ChatGPT's responses for Scenario #12.** **Conclusion** I demonstrate the following with the current version of ChatGPT (Feb 13 version): (a) it is a Chinese Room with critical errors in causal reasoning, (b) it can display cognitive connections, (c) it cannot learn from immediate examples, (d) it can generate all possible categorical responses to the same question; (e) it can support its arguments with erroneous examples; (f) it can provide half-correct responses sufficient for a passing grade, (g) it can hallucinate; and (h) it can generate false references that mimic real publications. "The truth, the whole truth, and nothing but the truth" - ChatGPT certainly fails to provide "nothing but the truth". Hence, caution is advised in its use. **Conflict of Interest** The author declares no conflict of interest. **References**
2301.08297
Parametrization Cookbook: A set of Bijective Parametrizations for using Machine Learning methods in Statistical Inference
We present in this paper a way to transform a constrained statistical inference problem into an unconstrained one in order to be able to use modern computational methods, such as those based on automatic differentiation, GPU computing, stochastic gradients with mini-batch. Unlike the parametrizations classically used in Machine Learning, the parametrizations introduced here are all bijective and are even diffeomorphisms, thus allowing to keep the important properties from a statistical inference point of view, first of all identifiability. This cookbook presents a set of recipes to use to transform a constrained problem into a unconstrained one. For an easy use of parametrizations, this paper is at the same time a cookbook, and a Python package allowing the use of parametrizations with numpy, but also JAX and PyTorch, as well as a high level and expressive interface allowing to easily describe a parametrization to transform a difficult problem of statistical inference into an easier problem addressable with modern optimization tools.
Jean-Benoist Leger
2023-01-19T20:19:45Z
http://arxiv.org/abs/2301.08297v1
# Parametrization Cookbook ###### Abstract We present in this paper a way to transform a constrained statistical inference problem into an unconstrained one in order to be able to use modern computational methods, such as those based on automatic differentiation, GPU computing, stochastic gradients with mini-batch. Unlike the parametrizations classically used in Machine Learning, the parametrizations introduced here are all bijective and are even diffeomorphisms, thus allowing to keep the important properties from a statistical inference point of view, first of all identifiability. This cookbook presents a set of recipes to use to transform a constrained problem into a unconstrained one. For an easy use of parametrizations, this paper is at the same time a cookbook, and a Python package allowing the use of parametrizations with numpy, but also JAX and PyTorch, as well as a high level and expressive interface allowing to easily describe a parametrization to transform a difficult problem of statistical inference into an easier problem addressable with modern optimization tools. _Keywords:_ computational statistics; parametrizations ## Table of contents * 1 Introduction * 2 Definitions of parametrizations * 2.1 Notations * 2.1.1 General notations * 2.1.2 Vectors notations * 2.1.3 Matrix notations * 2.2 Parametrization of scalars * 2.2.1 \(\mathbb{R}^{*}_{+}\) and forms like \((-\infty,a)\) or \((a,+\infty)\) * 2.2.2 \((0,1)\) and forms like \((a,b)\) * 2.3 Parametrization of vectors * 2.3.1 Simplex \(\mathcal{S}_{n}\) * 2.3.2 Sphere \(\mathbf{S}_{n}\) * 2.3.3 Half sphere \(\mathbf{HS}_{n}\) * 2.3.4 Ball \(\mathbf{B}_{n}\) * 2.4 Parametrization of matrices * 2.4.1 Diagonal matrices * 2.4.2 Symmetric matrices * 2.4.3 Diagonal positive definite matrices * 2.4.4 Symmetric positive definite matrices * 2.4.5 Correlation matrices * 3 Python package parametrization_cookbook * 3.1 High-level modules: an easy way to define and use a parametrization * 3.1.1 Common API of all parametrization classes * 3.1.2 Parametrization of scalars * 3.1.3 Parametrization of vectors * 3.1.4 Parametrization of matrices * 3.1.5 Parametrization of Cartesian product * 3.1.6 Definition of custom parametrization * 3.2 Low-level module: access to elementary functions * 3.2.1 Parametrization of scalars * 3.2.2 Parametrization of vectors * 3.2.3 Parametrization of matrices * 3.2.4 Vectorization * 4 Examples * 4.1 High-level modules: inference of parameters of a Multivariate Student distribution * 4.1.1 Introduction * 4.1.2 With JAX * 4.1.3 With PyTorch * 4.2 Low-level modules: inference of parameters of a Gumbel distribution 
Some naive parametrizations are numerically unstable or lead to unstable algorithms, and should be avoided. The objective of this cookbook is to introduce, in a single medium, a set of classically useful parametrizations, in particular for statistical model inference, which are diffeomorphisms, automatically differentiable in forward and backward modes, and numerically stable. As it is impossible to be completely exhaustive, it is always possible to need different parameterizations in particular cases. 
The parameterizations here are not intended to be prescriptive, but only to present a use case that works well in most cases. In order to allow the user to adapt and define the parametrizations, these will be explained with the necessary details. Each elementary parametrization is provided with example Python code (using numpy). In order to allow for a easy use of parametrizations, this cookbook is provided together with an optimized Python package compatible with numpy (Harris et al., 2020), JAX (Bradbury et al., 2018), and PyTorch (Paszke et al., 2019) and is specially designed to be used conjointly with optimizers provided in JAXopt (Blondel et al., 2021) and PyTorch. The package can be used in two ways. Firstly, a high-level interface is defined allowing to build complex parametrizations, and to use them in validated computational methods. Secondly, a low-level interface allowing to access elementary parametrizations. These two ways of using the package are detailed in this article. ## 2 Definitions of parametrizations ### Notations #### 2.1.1 General notations * \(r_{E\to F}\) denotes the proposed parametrization of \(E\) to \(F\). Since the parametrization is bijective, the following properties are satisfied: * \(\forall y\in F,\ \exists x\in E,\ r_{E\to F}(x)=y\). * \(\forall(x_{1},x_{2})\in E^{2},\ [r_{E\to F}(x_{1})=r_{E\to F}(x_{2})] \Rightarrow[x_{1}=x_{2}]\). We note \(r_{F\to E}=r_{E\to F}^{-1}\). * When the parametrization depends on a hyper-parameter (such as a scale parameter), we will note \(r_{s;E\to F}\) where \(s\) is a hyper-parameter to choose. * We note Logistic the probability logistic distribution on \(\mathbb{R}\). This probability distribution has the cumulative distribution function \(x\mapsto\operatorname{expit}\left(x\right)=\frac{1}{1+\exp\left(-x\right)}\). * We note Logistic\({}_{n}\) the multivariate distribution on \(\mathbb{R}^{n}\) with independent components and with each marginal following a Logistic on \(\mathbb{R}\). #### 2.1.2 Vectors notations * Vector/matrix indices: in general, the convention used in Python will be used. The indices start at zero and end at \(n-1\). Thus for \(x\in\mathbb{R}^{n}\) we have \(x=(x_{0},\ldots,x_{n-1})\). * concat denotes the concatenation of vectors. For \(x\in\mathbb{R}^{n}\), \(y\in\mathbb{R}^{m}\), we have \(\operatorname{concat}(x,y)\in\mathbb{R}^{n+m}\) with \(\operatorname{concat}(x,y)=(x_{0},\ldots,x_{n-1},y_{0},\ldots,y_{m-1})\). * Vector/matrix slices. For \(x\in\mathbb{R}^{n}\) we note: * \(x_{i:j}=\left(x_{k}\right)_{k:i\leq k<j}=(x_{i},\ldots,x_{j-1})\). Warning, as in Python, the upper index is not included. * \(x_{:i}=x_{0:i}=\left(x_{k}\right)_{k:k<i}=(x_{0},\ldots,x_{i-1})\). Warning, as in Python, the upper index is not included. * \(x_{i:}=x_{i:n}=\left(x_{k}\right)_{k:k\geq i}=(x_{i},\ldots,x_{n-1})\). * Same notation is used for matrices. * Cumulated sum. For \(x\in\mathbb{R}^{n}\), we note \(\operatorname{cumsum}\left(x\right)\in\mathbb{R}^{n}\) the vector defined by: \[\operatorname{cumsum}\left(x\right)=\left(\sum_{i=0}^{k}x_{i}\right)_{k:0 \leq k<n}=\left(x_{0},x_{0}+x_{1},x_{0}+x_{1}+x_{2},\ldots,\sum_{i}x_{i}\right)\] * Cumulated product. For \(x\in\mathbb{R}^{n}\), we note \(\operatorname{cumprod}\left(x\right)\in\mathbb{R}^{n}\) the vector defined by: \[\operatorname{cumprod}\left(x\right)=\left(\prod_{i=0}^{k}x_{i}\right)_{k:0 \leq k<n}=\left(x_{0},x_{0}x_{1},x_{0}x_{1}x_{2},\ldots,\prod_{i}x_{i}\right)\] * Reversed vector. 
For \(x\in\mathbb{R}^{n}\), we note \(\operatorname{flip}(x)\in\mathbb{R}^{n}\) the vector defined by: \[\operatorname{flip}\left(x\right)=\left(x_{n-1},\ldots,x_{0}\right)\] * We note respectively \(\odot\), \(\oslash\), and \(\triangle\), the element-wise product, the element-wise division and the element-wise power. For \(x,y\in\mathbb{R}^{n}\): * \(x\odot y=\left(x_{i}y_{i}\right)_{i:0\leq i<n}\) * \(x\odot y=\left(\frac{x_{i}}{y_{i}}\right)_{i:0\leq i<n}\) * \(x\triangle y=\left(x_{i}^{y_{i}}\right)_{i:0\leq i<n}\) * For \(n\in\mathbb{N}^{*}\), we note \(\operatorname{range}(n)\) the vector of \(\mathbb{R}^{n}\) defined by: \[\operatorname{range}(n)=\left(k\right)_{k:0\leq k<n}=\left(0,1,\ldots,n\right)\] #### 2.1.3 Matrix notations * Notation for diagonal and triangular matrices. All element missing will be zero. For example, we note: \[\begin{bmatrix}1&&&\\ 2&3&\\ 4&5&6&\\ 7&8&9&10\end{bmatrix}=\begin{bmatrix}1&0&0&0\\ 2&3&0&0\\ 4&5&6&0\\ 7&8&9&10\end{bmatrix}\] \[\begin{bmatrix}1&&&\\ &2&\\ &&3&\\ &&&4\end{bmatrix}=\begin{bmatrix}1&0&0&0\\ 0&2&0&0\\ 0&0&3&0\\ 0&0&0&4\end{bmatrix}\] * Notation for symmetric matrices. We note \((\text{sym})\) to indicate the symmetry of the matrix, all missing elements must be filled by symmetry. For example, we note: \[\begin{bmatrix}1&&\text{(sym)}\\ 2&3&\\ 4&5&6&\\ 7&8&9&10\end{bmatrix}=\begin{bmatrix}1&2&4&7\\ 2&3&5&8\\ 4&5&6&9\\ 7&8&9&10\end{bmatrix}\] * For \(n\in\mathbb{N}^{*}\), and \(M\in\mathbb{R}^{n\times n}\), we note \(\operatorname{diag}(M)\) the diagonal vector of matrix \(M\). \[\operatorname{diag}(M)=(M_{ii})_{i:0\leq i<n}\] * For \(n\in\mathbb{N}^{*}\), and \(x\in\mathbb{R}^{n}\), we note \(\operatorname{undiag}(x)\) the diagonal matrix \(M\) which has \(x\) as diagonal vector. \[\operatorname{undiag}(x)=\begin{bmatrix}x_{0}&&&&\\ &x_{1}&&\\ &&\ddots&\\ &&&x_{n-1}\end{bmatrix}\] ### Parametrization of scalars #### 2.2.1 \(\mathbb{R}^{*}_{+}\) and forms like \((-\infty,a)\) or \((a,+\infty)\) The proposed parametrization with a scale hyper-parameter \(s\in\mathbb{R}^{*}_{+}\): \[\begin{array}{ccccc}r_{s;\mathbb{R}\to\mathbb{R}^{*}_{+}}&:&\mathbb{R}& \longrightarrow&\mathbb{R}^{*}_{+}\\ &&x&\longmapsto&s\log\left(1+\exp x\right)\end{array} \tag{1}\] This parametrization is known as the \(\operatorname{softplus}\) function1, and is shown Figure 1. The reciprocal function is \[\begin{array}{ccccc}r_{s;\mathbb{R}_{+}^{*}\to\mathbb{R}}&:&\mathbb{R}_{+}^{*}& \longrightarrow&\mathbb{R}\\ &&x&\longmapsto&\log\left(-1+\exp\left(\frac{x}{s}\right)\right)\end{array} \tag{2}\] **Properties** * If \(x\sim\) Logistic, then \(\mathbb{E}\Big{(}r_{s;\mathbb{R}\to\mathbb{R}_{+}^{*}}(x)\Big{)}=s\). * \(\forall x\in\mathbb{R},\ r_{s;\mathbb{R}\to\mathbb{R}_{+}^{*}}(-x)=r_{s; \mathbb{R}\to\mathbb{R}_{+}^{*}}(x)-xs\). * For \(x\to-\infty\), we have \(r_{s;\mathbb{R}\to\mathbb{R}_{+}^{*}}(x)=s\exp x+o(s\exp x)\). * For \(x\to+\infty\), we have \(r_{s;\mathbb{R}\to\mathbb{R}_{+}^{*}}(x)=sx+s\exp\left(-x\right)+o(s\exp-x)\). ### Alternatives The naive parametrization to go from \(\mathbb{R}\) to \(\mathbb{R}_{+}^{*}\) is to use the exponential function. This is the approach often used naively in statistics to parameterize the variance. This approach should be avoided in computational approaches using gradient-based algorithms because the exponential tends to increase the instability of the method, leading to divergent algorithms. 
### Choice of the scale hyper-parameter We will often choose the scale hyper-parameter \(s\) as the expected order of magnitude of the parameter. The choice of a too small \(s\) in an iterative algorithm using the gradient can lead to \(x\) being very negative, thus of very weak gradient without the optimum being reached. The choice of a too large \(s\) leads to be locally an exponential and implies the same problems as an exponential parametrization. #### Implementation details For \(r_{s;\mathbb{R}\rightarrow\mathbb{R}_{+}^{*}}\), the naive use of the expression \(x\mapsto s\log\left(1+\exp x\right)\) in Equation 1 leads to: * an impossible calculation for slightly high values of \(x\) leading to not being able to manipulate \(\exp x\), * an inexact calculation for very negative \(x\) values, where the numerical rounding leads to \(1+\exp x\approx 1\). For the first point, we can notice that \(\forall y\in\mathbb{R},\ \log\left(1+\exp y\right)=\log\left(1+\exp-|y|\right)+y^ {+}\) with \(y^{+}=\max(x,0)=\frac{|x|+x}{2}\). Within this expression, the manipulated exponential cannot become large. For the second point, there exists in the majority of calculation libraries the function \(\log\!1\mathrm{p}\) defined as \(y\mapsto\log\left(1+y\right)\). This function is designed to calculate the value precisely, even for argument values close to zero. We therefore introduce the function \(\log\!1\mathrm{pexp}:x\mapsto\log\left(1+\exp\left(x\right)\right)\) implemented as follows: \[\begin{array}{ccccc}\log\!1\mathrm{pexp}&:&\mathbb{R}&\longrightarrow& \mathbb{R}_{+}^{*}\\ &&x&\longmapsto&\log\!1\mathrm{p}\left(\exp\left(-|x|\right)\right)+x^{+}\end{array} \tag{3}\] For the implementation of \(r_{s;\mathbb{R}\rightarrow\mathbb{R}_{+}^{*}}\), we should use: \[\begin{array}{ccccc}r_{s;\mathbb{R}\rightarrow\mathbb{R}_{+}^{*}}&:& \mathbb{R}&\longrightarrow&\mathbb{R}_{+}^{*}\\ &&x&\longmapsto&s\log\!1\mathrm{pexp}\left(x\right)\end{array} \tag{4}\] For \(r_{s;\mathbb{R}_{+}^{*}\rightarrow\mathbb{R}}\), the naive use of the expression \(x\mapsto\log\left(-1+\exp\left(\frac{x}{s}\right)\right)\) in Equation 2, leads to: * an impossible computation for slightly high values of \(\frac{x}{s}\) leading to not being able to manipulate \(\exp\left(\frac{x}{s}\right)\), * an impossible calculation or calculation errors when \(\frac{x}{s}\) is close to 0 and \(\exp\left(\frac{x}{s}\right)\) is thus close to 1, because errors will be made on the calculation \(-1+\exp\left(\frac{x}{s}\right)\). To answer the first point, we can notice that \(\forall y\in\mathbb{R}_{+}^{*},\ \ \log\left(-1+\exp y\right)=y+\log\left(-\exp \left(-y\right)+1\right)\). Within this last expression the manipulated exponential cannot become large. For the second point, there exists in the majority of calculation libraries the function \(\mathrm{expm}1\) defined as \(x\mapsto\exp(y)-1\). This function is designed to calculate its value precisely, including when \(y\) approaches 0 (and therefore \(\exp y\) approaches 1). 
We therefore introduce the function \(\operatorname{logexpm1}:x\mapsto\log\left(\exp\left(x\right)-1\right)\), implemented as follows:

\[\begin{array}{ccccc}\operatorname{logexpm1}&:&\mathbb{R}_{+}^{*}&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&x+\log\left(-\operatorname{expm1}\left(-x\right)\right)\end{array}\]

For the implementation of \(r_{s;\mathbb{R}^{*}_{+}\to\mathbb{R}}\), we should use:

\[\begin{array}{ccccc}r_{s;\mathbb{R}^{*}_{+}\to\mathbb{R}}&:&\mathbb{R}^{*}_{+}&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&\operatorname{logexpm1}\left(\frac{x}{s}\right)\end{array} \tag{5}\]

#### Example of implementation

We provide vectorized implementations of \(r_{s;\mathbb{R}\to\mathbb{R}^{*}_{+}}\) and \(r_{s;\mathbb{R}^{*}_{+}\to\mathbb{R}}\), applied componentwise to vectors of \(\mathbb{R}^{n}\) for \(n\in\mathbb{N}^{*}\).

```
import numpy as np

def log1pexp(x):
    # numerically stable log(1 + exp(x))
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)

def softplus(x, scale=1):
    # r_{s; R -> R+*}
    return scale * log1pexp(x)

def logexpm1(x):
    # numerically stable log(exp(x) - 1)
    return x + np.log(-np.expm1(-x))

def softplusinv(x, scale=1):
    # r_{s; R+* -> R}
    return logexpm1(x / scale)
```

#### Forms like \((-\infty,a)\) or \((a,+\infty)\)

Definitions of these parametrizations involve \(r_{s;\mathbb{R}\to\mathbb{R}^{*}_{+}}\), a translation, and optionally a symmetry. We define, for \(a\in\mathbb{R}\):

\[\begin{array}{ccccc}r_{s;\mathbb{R}\to(-\infty,a)}&:&\mathbb{R}&\longrightarrow&(-\infty,a)\\ &&x&\longmapsto&a-r_{s;\mathbb{R}\to\mathbb{R}^{*}_{+}}(x)\\ &&\\ r_{s;(-\infty,a)\to\mathbb{R}}&:&(-\infty,a)&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&r_{s;\mathbb{R}^{*}_{+}\to\mathbb{R}}(a-x)\\ &&\\ r_{s;\mathbb{R}\to(a,+\infty)}&:&\mathbb{R}&\longrightarrow&(a,+\infty)\\ &&x&\longmapsto&a+r_{s;\mathbb{R}\to\mathbb{R}^{*}_{+}}(x)\\ &&\\ r_{s;(a,+\infty)\to\mathbb{R}}&:&(a,+\infty)&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&r_{s;\mathbb{R}^{*}_{+}\to\mathbb{R}}(x-a)\end{array} \tag{6}\]

#### 2.2.2 \((0,1)\) and forms like \((a,b)\)

The proposed parametrization is:

\[\begin{array}{ccccc}r_{\mathbb{R}\to(0,1)}&:&\mathbb{R}&\longrightarrow&(0,1)\\ &&x&\longmapsto&\operatorname{expit}\left(x\right)=\frac{1}{1+\exp\left(-x\right)}\end{array} \tag{7}\]

This function is known as \(\operatorname{expit}\) in some libraries. This parametrization is represented in Figure 2. The reciprocal parametrization is:

\[\begin{array}{ccccc}r_{(0,1)\to\mathbb{R}}&:&(0,1)&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&\operatorname{logit}\left(x\right)=\log\frac{x}{1-x}\end{array} \tag{8}\]

This parametrization is the one that describes the Bernoulli distribution within the exponential family. Thus, it is the parametrization used in logistic regression in the framework of generalized linear models.

#### Properties

* If \(x\sim\operatorname{Logistic}\), then \(r_{\mathbb{R}\to(0,1)}(x)\sim\mathcal{U}_{(0,1)}\).
* \(\forall x\in\mathbb{R},\ r_{\mathbb{R}\to(0,1)}(-x)=1-r_{\mathbb{R}\to(0,1)}(x)\).
* For \(x\to-\infty\), we have \(r_{\mathbb{R}\to(0,1)}(x)=\exp x+o(\exp x)\).
* For \(x\to+\infty\), we have \(r_{\mathbb{R}\to(0,1)}(x)=1-\exp\left(-x\right)+o(\exp\left(-x\right))\).

#### Alternatives

Any bijective and differentiable function from \(\mathbb{R}\) to \((0,1)\) can serve as a parametrization, in particular any cumulative distribution function of an absolutely continuous probability law with support on \(\mathbb{R}\). 
For example, the cumulative distribution function of the law \(\mathcal{N}(0,1)\) can be theoretically used, but turns out to be a bad choice in most practical cases because of the lightness of the distribution tails which leads asymptotes in \(0\) and \(1\) being reached very quickly, and thus to very weak derivatives leading to stalled iterative algorithms and numerical errors. #### Implementation details In many scientific computing libraries, the functions \(\mathrm{expit}\) and \(\mathrm{logit}\) are available within the special functions. These implementations should be used when available. This section is provided only for users of scientific libraries which do not contains these functions. The formula presented in Equation 7 is a fairly good choice, but suffers from a problem: when the argument \(x\) is very negative, \(\exp\left(-x\right)\) is very large, and an overflow can appear in the calculation. This overflow is well handled in most cases, but it often leads to an exception mechanism that can slow down the calculation and cause undesirable effects in some cases. A second option is to use the hyperbolic tangent with the following formula: \[\forall x\in\mathbb{R},\quad r_{\mathbb{R}\to(0,1)}=\frac{1+\tanh\left(\frac{ x}{2}\right)}{2}\] However, when \(x\) is very negative, then \(\tanh\left(\frac{x}{2}\right)\) on floats is computed as being exactly \(-1\), this leads to an exactly zero result, while the result obtained is encodable on floats close to zero but not zero. The same behavior is observed when the arguments \(x\) is positive, but as the result obtained is not encodable on floats close to \(1\), it is not possible to hope for better. In some works, the function \(\mathrm{expit}\) is introduced as \(x\mapsto\frac{\exp x}{1+\exp x}\). This expression behaves very well for negative arguments, but is to be avoided for positive arguments, since a large value of the argument \(x\) leads to NaN. It is possible to merge the good behavior of the latter formula which behaves very well on negative arguments and the formula introduced in Equation 7, using the following property: \[\forall x\in\mathbb{R},\quad r_{\mathbb{R}\to(0,1)}(x)=\frac{\exp x^{-}}{\exp x ^{-}+\exp\left(-x^{+}\right)}\] Where \(x^{-}=\min(x,0)\) and \(x^{+}=\max(x,0)\). We can therefore use: \[\begin{array}{rclrcl}r_{\mathbb{R}\to(0,1)}&:&\mathbb{R}&\longrightarrow&(0,1)\\ &&x&\longmapsto&\frac{\exp\left(x^{-}\right)}{\exp\left(x^{-}\right)+\exp\left(- x^{+}\right)}\end{array} \tag{9}\] Regarding the reciprocal, using a ratio in Equation 8 poses a rounding problem for values of \(x\) close to \(1\), and it is better to use the expanded form of \(\forall x\in(0,1),\ \log\frac{x}{1-x}=\log x-\log\left(1-x\right)\). One can use the function \(\mathrm{log1p}\colon y\mapsto\log(1+y)\), but this is not of major interest here, since \(\left|\log(1-y)\right|\ll\left|\log y\right|\) for \(y\) close to \(0\). We can therefore use: \[\begin{array}{rclrcl}r_{(0,1)\to\mathbb{R}}&:&(0,1)&\longrightarrow&\mathbb{ R}\\ &&x&\longmapsto&\log x-\mathrm{log1p}\left(-x\right)\end{array} \tag{10}\] #### Implementation example We will use the implementations of scipy.special: import scipy.special expit = scipy.special.expit logit = scipy.special.logit #### Forms like \((a,b)\) with \(a<b\): We will use a simple affine transformation to go from \((0,1)\) to \((a,b)\). 
Thus, the introduced parametrization is:
\[\begin{array}{ccccc}r_{\mathbb{R}\to(a,b)}&:&\mathbb{R}&\longrightarrow&(a,b)\\ &&x&\longmapsto&a+(b-a)\,r_{\mathbb{R}\to(0,1)}(x)\\ \\ r_{(a,b)\to\mathbb{R}}&:&(a,b)&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&r_{(0,1)\to\mathbb{R}}\Big{(}\frac{x-a}{b-a}\Big{)}\end{array} \tag{11}\]

#### Detail of the implementation of the special cases of forms \((-1,1)\) and \((-a,a)\) with \(a\in\mathbb{R}_{+}^{*}\)

When the set is symmetric, we have

* \(r_{\mathbb{R}\to(-a,a)}(0)=0\),
* \(\frac{\mathrm{d}\,r_{\mathbb{R}\to(-a,a)}(x)}{\mathrm{d}\,x}\Big{|}_{x=0}=\frac{a}{2}\).

It can be judicious to be precise near zero. If we take \(|\varepsilon|\ll 1\), we should obtain \(r_{\mathbb{R}\to(-a,a)}(\varepsilon)\approx\frac{a\varepsilon}{2}\). But this is not computable accurately for very small values using a translation and rescaling of the parametrization \(r_{\mathbb{R}\to(0,1)}\), because of the cancellation in \(-a+2a\operatorname{expit}\left(\varepsilon\right)\). In this particular case, it is interesting to use the function \(\tanh\), noting that \(\forall x\in\mathbb{R},\ \tanh\left(\frac{x}{2}\right)=2\operatorname{expit}\left(x\right)-1\). Thus, we will use for \((-a,a)\):
\[\begin{array}{ccccc}r_{\mathbb{R}\to(-a,a)}&:&\mathbb{R}&\longrightarrow&(-a,a)\\ &&x&\longmapsto&a\tanh\left(\frac{x}{2}\right)\\ \\ r_{(-a,a)\to\mathbb{R}}&:&(-a,a)&\longrightarrow&\mathbb{R}\\ &&x&\longmapsto&2\operatorname{arctanh}\left(\frac{x}{a}\right)\end{array} \tag{12}\]

### Parametrization of vectors

#### 2.3.1 Simplex \(\mathcal{S}_{n}\)

We define the unit simplex of dimension \(n\) as a part of a hyperplane of \(\mathbb{R}^{n+1}\):
\[\mathcal{S}_{n}=\left\{x\in\mathbb{R}_{+}^{n+1}:\sum_{i}x_{i}=1\right\}\subset\mathbb{R}^{n+1}\]
However, \(\mathcal{S}_{n}\) is not an open set (w.r.t. the topology on the hyperplane \(\left\{x\in\mathbb{R}^{n+1}:\sum_{i}x_{i}=1\right\}\)), which prevents the definition of a differentiable bijection with an open set. We will therefore work with the simplex deprived of its boundary, _i.e._:
\[\mathring{\mathcal{S}}_{n}=\left\{x\in\mathbb{R}_{+}^{*\,n+1}:\sum_{i}x_{i}=1\right\}\]

#### Construction of the parametrization between \(\mathring{\mathcal{S}}_{n}\) and \(\mathbb{R}^{n}\)

The parametrization is expressed naturally at first between \(\mathring{\mathcal{S}}_{n}\) and \(\left(0,1\right)^{n}\). It will then suffice to compose with the parametrization of \(\left(0,1\right)^{n}\) in \(\mathbb{R}^{n}\) to obtain a parametrization of \(\mathring{\mathcal{S}}_{n}\) in \(\mathbb{R}^{n}\).
The initial idea is to consider that for \(n\in\mathbb{N}^{*}\) we can easily define a bijection between \(\left(0,1\right)\times\mathring{\mathcal{S}}_{n}\) and \(\mathring{\mathcal{S}}_{n+1}\) by:
\[\begin{array}{rcl}\left(0,1\right)\times\mathring{\mathcal{S}}_{n}&\longrightarrow&\mathring{\mathcal{S}}_{n+1}\\ \left(x,y\right)&\longmapsto&\operatorname{concat}\left(1-x,xy\right)\end{array}\]
We can then introduce the parametrization defined in a recursive way:
\[\begin{array}{rcccl}h_{1}&:&\left(0,1\right)&\longrightarrow&\mathring{\mathcal{S}}_{1}\\ &&x&\longmapsto&\left(1-x,x\right)\end{array} \tag{13}\]
\[\begin{array}{rcccl}\forall n\in\mathbb{N}^{*},\quad h_{n+1}&:&\left(0,1\right)^{n+1}&\longrightarrow&\mathring{\mathcal{S}}_{n+1}\\ &&x&\longmapsto&\operatorname{concat}\left(1-x_{0},x_{0}h_{n}(x_{1:})\right)\end{array}\]
From Equation 13 we can deduce a non-recursive form of \(h_{n}\):
\[\begin{array}{rcccl}h_{n}&:&\left(0,1\right)^{n}&\longrightarrow&\mathring{\mathcal{S}}_{n}\\ &&x&\longmapsto&\operatorname{concat}\left(1-x,1\right)\odot\operatorname{concat}\left(1,\operatorname{cumprod}\left(x\right)\right)\end{array} \tag{14}\]
And we can also write the reciprocal:
\[\begin{array}{rcccl}h_{n}^{-1}&:&\mathring{\mathcal{S}}_{n}&\longrightarrow&\left(0,1\right)^{n}\\ &&x&\longmapsto&\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\right)\right)\oslash\left(x_{:n-1}+\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\right)\right)\right)\end{array} \tag{15}\]
One can notice that the components of the argument \(x\) in \(h_{n}(x)\) have a very different importance. Thus, a small variation of \(x_{0}\) will induce a very large variation of \(h_{n}(x)\), while a small variation of \(x_{n-1}\) will induce a very small variation of \(h_{n}(x)\). Another way to formulate the problem is to study the distribution of \(x\) when \(h_{n}(x)\) is taken uniformly in \(\mathring{\mathcal{S}}_{n}\); the objective is to obtain a uniform distribution on \((0,1)\), implying a similar importance of all components. Let \(\mathcal{U}_{\mathring{\mathcal{S}}_{n}}\) denote the uniform probability distribution on \(\mathring{\mathcal{S}}_{n}\) w.r.t. the Lebesgue measure of the hyperplane \(\big{\{}x\in\mathbb{R}^{n+1}:\sum_{i}x_{i}=1\big{\}}\). It can be shown that if \(y\sim\mathcal{U}_{\mathring{\mathcal{S}}_{n}}\) and \(x=h_{n}^{-1}(y)\), then \(\forall k:0\leq k<n,\ x_{k}\sim B(n-k,1)\), with \(B(\alpha,\beta)\) the Beta probability distribution with parameters \((\alpha,\beta)\). The converse is true if we add the independence of the \(x_{k}\). Knowing that the cumulative distribution function of \(B(n-k,1)\) is the function defined on \((0,1)\) by \(x\mapsto x^{n-k}\), we have \(\forall k:0\leq k<n,\ x_{k}^{n-k}\sim\mathcal{U}_{(0,1)}\). Therefore, we define the function
\[\begin{array}{rcccl}g_{n}^{-1}&:&(0,1)^{n}&\longrightarrow&(0,1)^{n}\\ &&x&\longmapsto&\Big{(}x_{k}^{n-k}\Big{)}_{k:0\leq k<n}=\big{(}x_{0}^{n},\ldots,x_{n-1}^{1}\big{)}\\ \\ g_{n}&:&(0,1)^{n}&\longrightarrow&(0,1)^{n}\\ &&x&\longmapsto&\bigg{(}x_{k}^{\frac{1}{n-k}}\bigg{)}_{k:0\leq k<n}=\bigg{(}x_{0}^{\frac{1}{n}},\ldots,x_{n-1}^{1}\bigg{)}\end{array}\]
We can now prove\(^{2}\) that \(y\sim\mathcal{U}_{\mathring{\mathcal{S}}_{n}}\) if and only if \(g_{n}^{-1}(h_{n}^{-1}(y))\sim\mathcal{U}_{(0,1)^{n}}\).

Footnote 2: the ideas are present here; the demonstration is not provided, as it requires the introduction of many notations and is of little interest in this paper.
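To make the construction above concrete, here is a direct, numerically naive NumPy transcription of \(h_{n}\), \(h_{n}^{-1}\), \(g_{n}\) and \(g_{n}^{-1}\). It is only a sketch for checking the formulas (the function names are chosen here for illustration); the numerically stable implementation of the composed parametrization is given further below.

```
import numpy as np

def h(x):
    # Equation 14: (0,1)^n -> interior of the simplex (n+1 components)
    return np.concatenate((1 - x, (1.0,))) * np.concatenate(((1.0,), np.cumprod(x)))

def h_inv(y):
    # Equation 15: tail_sums[k] = sum of y_j for j >= k+1
    tail_sums = np.cumsum(y[:0:-1])[::-1]
    return tail_sums / (y[:-1] + tail_sums)

def g(x):
    # component-wise power 1/(n-k)
    return x ** (1 / np.arange(x.size, 0, -1))

def g_inv(x):
    # component-wise power (n-k)
    return x ** np.arange(x.size, 0, -1)

# quick round-trip check
x = np.array([0.2, 0.5, 0.7])
y = h(g(x))
assert np.isclose(y.sum(), 1.0)
assert np.allclose(g_inv(h_inv(y)), x)
```

For components close to \(0\) or \(1\) this naive form loses precision, which is why the logarithmic-space implementation described below should be preferred.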
Finally, it only remains to apply a transformation defined between \(\mathbb{R}^{n}\) and \(\left(0,1\right)^{n}\). We choose to use a version of \(x\mapsto\mathrm{expit}\left(-x\right)\), similar to the one used in Section 2.2.2. We choose to put a minus on the argument so that \(r_{\mathbb{R}^{n}\rightarrow\hat{\mathcal{S}}_{n}}(x)_{k}\) is increasing with respect to \(x_{k}\) for \(0\leq k<n\). We obtain: \[\begin{array}{ccccc}r_{\mathbb{R}^{n}\rightarrow\hat{\mathcal{S}}_{n}}&:& \mathbb{R}^{n}&\longrightarrow&\hat{\mathcal{S}}_{n}\\ &&x&\longmapsto&h_{n}(g_{n}(\mathrm{expit}\left(-x\right)))\end{array} \tag{16}\] For \(n=2\), a representation of \(r_{\mathbb{R}^{n}\rightarrow\hat{\mathcal{S}}_{n}}\) is shown Figure 3. We also obtain the reciprocal: \[\begin{array}{ccccc}r_{\hat{\mathcal{S}}_{n}\rightarrow\mathbb{R}^{n}}&:& \hat{\mathcal{S}}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&-\mathrm{logit}\left(g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\right) \end{array} \tag{17}\] **Properties** * With \(x\) a random variable on \(\mathbb{R}^{n}\), the following two propositions are equivalent: * \(r_{\mathbb{R}^{n}\rightarrow\hat{\mathcal{S}}_{n}}(x)\sim\mathcal{U}_{\hat{ \mathcal{S}}_{n}}\). * Let \(\big{(}x^{(t)}\big{)}_{t\in\mathbb{N}}\) be a sequence with value in \(\mathbb{R}^{n}\). Let \(\big{(}y^{(t)}\big{)}_{t\in\mathbb{N}}\) be a sequence with value in \(\hat{\mathcal{S}}_{n}\) such that \(\forall y,\ y^{(t)}=r_{\mathbb{R}^{n}\to\hat{\mathcal{S}}_{n}}\big{(}x^{(t)} \big{)}\), then for any \(k:0\leq k\leq n\) the following two propositions are equivalent: * \(\forall k^{\prime}<k,\ \lim_{t\to+\infty}x^{(t)}_{k^{\prime}}=-\infty\) and if \(k<n\)\(\lim_{t\to+\infty}x^{(t)}_{k}=+\infty\) #### Implementation details We can note that the formula presented in Equation 16 should not be used naively, it leads to a power and a product of values between \(0\) and \(1\). A first step will therefore be to do the calculation in logarithmic space. For a \(x\in\mathbb{R}^{n}\) we have: \[\log\left(h_{n}(g_{n}(\expit{(-x)}))\right)= \log\big{[}\operatorname{concat}\left(1-g_{n}(\expit{(-x)}),1\right)\] \[\odot\operatorname{concat}\left(1,\operatorname{cumprod}\left( g_{n}(\expit{(-x)})\right)\right)\big{]}\] \[= \operatorname{concat}\left(\log\left(1-g_{n}(\expit{(-x)}) \right),0\right)\] \[+\operatorname{concat}\left(0,\operatorname{cumsum}\left(\log \left(g_{n}(\expit{(-x)})\right)\right)\right)\] We isolate the last element to be calculated: \(\log\left(g_{n}(\expit{(-x)})\right)\). \[\log\left(g_{n}(\mathrm{expit}\left(-x\right))\right) =\log\left(\mathrm{expit}\left(-x\right)\right)\oslash(n-\mathrm{ range}(n))\] \[=-\log\left(1+\mathrm{exp}\left(x\right)\right)\oslash(n-\mathrm{ range}(n))\] \[=-\log 1\mathrm{pexp}\left(x\right)\oslash(n-\mathrm{range}(n))\] where the function \(\log 1\mathrm{pexp}\) is the one defined in Equation 3. 
Then we return to the first element, using the fact that it is better to calculate a power setting as a product in logarithmic space: \[\log\left(1-g_{n}(\mathrm{expit}\left(-x\right))\right)=\log\left(1-\mathrm{ exp}\left(\log\left(g_{n}(\mathrm{expit}\left(-x\right)\right))\right)\right)\] Knowing that the values of \(\log\left(g_{n}(\mathrm{expit}\left(-x\right))\right)\) are negative and can be close to \(0\), it will be better to use the function \(\mathrm{expm}1\): \[\log\left(1-g_{n}(\mathrm{expit}\left(-x\right))\right) =\log\left(-\mathrm{expm}1\left(\log\left(g_{n}(\mathrm{expit} \left(-x\right)\right))\right)\right)\] \[=\log\left(-\mathrm{expm}1\left(-\log 1\mathrm{pexp}\left(x \right)\oslash(n-\mathrm{range}(n))\right)\right)\] Thus we are able to calculate \(\log\left(h_{n}(g_{n}(\mathrm{expit}\left(-x\right)))\right)\). It is therefore theoretically sufficient to apply the function \(\mathrm{exp}\) to obtain the value of the parametrization. However, the accumulation of successive calculation errors can lead to the sum of the coordinates being numerically different from \(1\). To mitigate this error, it is preferable to use the function \(\mathrm{softmax}\) which is strictly equivalent to the function \(\mathrm{exp}\) in this case. This one is defined as: \[\mathrm{softmax} : \mathbb{R}^{n+1} \longrightarrow \mathring{\mathcal{S}}_{n} \tag{18}\] \[x \longmapsto \frac{\mathrm{exp}\left(x\right)}{\sum_{k=0}^{n}\mathrm{exp} \left(x_{k}\right)}\] For the sake of numerical stability, we prefer to use the equivalent implementation: \[\mathrm{softmax} : \mathbb{R}^{n+1} \longrightarrow \mathring{\mathcal{S}}_{n} \tag{19}\] \[x \longmapsto \frac{\mathrm{exp}\left(x-\mathrm{max}\,x\right)}{\sum_{k=0}^{n} \mathrm{exp}\left(x_{k}-\mathrm{max}\,x\right)}\] We thus obtain the implementation of the parametrization: \[\begin{array}{ccccc}r_{\mathbb{R}^{n}\rightarrow\mathring{\mathcal{S}}_{n}} &:&\mathbb{R}^{n}&\longrightarrow&\mathring{\mathcal{S}}_{n}\\ &&x&\longmapsto&\mathrm{softmax}\left(\\ &&&&\mathrm{concat}\left(\log\left(-\mathrm{expm}1\left(\xi(x)\right)\right),0 \right)\\ &&&&+\mathrm{concat}\left(0,\mathrm{cumsum}\left(\xi(x)\right)\right)\\ &&&&\right)\end{array} \tag{20}\] \[\mathrm{with}\;\xi : x \longmapsto -\log 1\mathrm{pexp}\left(x\right)\oslash(n-\mathrm{range}(n))\] For the reciprocal, in Equation 17 the first step is to express the calculation as a function of \(\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\), for which the passage to the power of \(g_{n}^{-1}\) is less problematic. 
\[\begin{array}{rl}-\operatorname{logit}\left(g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\right)=&\log\big{(}1-g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}-\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\\ =&\log\big{(}1-\exp\big{(}\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\big{)}\big{)}-\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\\ =&\log\big{(}-\operatorname{expm1}\big{(}\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\big{)}\big{)}-\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\end{array}\]
It is now sufficient to calculate \(\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}\):
\[\begin{array}{rl}\log\big{(}g_{n}^{-1}\big{(}h_{n}^{-1}(x)\big{)}\big{)}=&(n-\operatorname{range}(n))\odot\log\big{(}h_{n}^{-1}(x)\big{)}\\ =&(n-\operatorname{range}(n))\odot\log\big{(}\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\right)\right)\oslash\left(x_{:n-1}+\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\right)\right)\right)\big{)}\\ =&-\left(n-\operatorname{range}(n)\right)\odot\operatorname{log1p}\left(x_{:n-1}\oslash\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\right)\right)\right)\end{array}\]
We thus obtain the implementation of the reciprocal parametrization:
\[\begin{array}{rcccl}r_{\mathring{\mathcal{S}}_{n}\to\mathbb{R}^{n}}&:&\mathring{\mathcal{S}}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&\log\left(-\operatorname{expm1}\left(\xi(x)\right)\right)-\xi(x)\\ \text{with }\xi&:&x&\longmapsto&-(n-\operatorname{range}(n))\odot\operatorname{log1p}\left(x_{:n-1}\oslash\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\right)\right)\right)\end{array} \tag{21}\]

#### Implementation example

This implementation uses the log1pexp function defined in the example implementation in Section 2.2.1.

```
import numpy as np

def reals_to_simplex(x):
    n = x.size
    ksi = -log1pexp(x) / np.arange(n, 0, -1)
    logvalues = (
        np.concatenate((np.log(-np.expm1(ksi)), (0,)))
        + np.concatenate(((0,), np.cumsum(ksi)))
    )
    values = np.exp(logvalues - logvalues.max())
    return values / values.sum()

def simplex_to_reals(x):
    n = x.size - 1
    # reversed cumulative sums give (T_1, ..., T_n) with T_k = sum of x_j for j >= k
    ksi = (
        -np.arange(n, 0, -1)
        * np.log1p(x[:-1] / np.cumsum(x[-1:0:-1])[::-1])
    )
    return np.log(-np.expm1(ksi)) - ksi
```

#### 2.3.2 Sphere \(\mathbf{S}_{n}\)

Consider the unit \(n\)-sphere of dimension \(n\) (as a part of \(\mathbb{R}^{n+1}\)):
\[\mathbf{S}_{n}=\left\{x\in\mathbb{R}^{n+1}:\sum_{i}x_{i}^{2}=1\right\}\subset\mathbb{R}^{n+1}\]
It is not possible to build a bijection of \(\mathbb{R}^{n}\) in \(\mathbf{S}_{n}\), so we will consider a subpart of the sphere in which it is possible to build a bijection:
\[\tilde{\mathbf{S}}_{n}=\left\{x\in\mathbf{S}_{n}:\sum_{i=0}^{n-2}x_{i}^{2}<1\wedge(x_{n-1},x_{n})\neq(0,-1)\right\}\]
In the topology of \(\mathbf{S}_{n}\), the adherence (closure) of \(\tilde{\mathbf{S}}_{n}\) is \(\mathbf{S}_{n}\), and \(\mathbf{S}_{n}\setminus\tilde{\mathbf{S}}_{n}\) has zero Lebesgue measure w.r.t. the Lebesgue measure on \(\mathbf{S}_{n}\).

#### Construction of the parametrization between \(\tilde{\mathbf{S}}_{n}\) and \(\mathbb{R}^{n}\)

The parametrization is expressed naturally in a first step between \(\tilde{\mathbf{S}}_{n}\) and \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\). It will then suffice to compose with the parametrization of \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\) in \(\mathbb{R}^{n}\) to obtain a parametrization of \(\tilde{\mathbf{S}}_{n}\) in \(\mathbb{R}^{n}\).
As in the case of the simplex, the initial idea is to consider that for \(n\in\mathbb{N}^{*}\), by means of the polyspherical coordinates, we can easily define a bijection between \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\times\tilde{\mathbf{S}}_{n}\) and \(\tilde{\mathbf{S}}_{n+1}\): \[\begin{array}{rcl}\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\times\tilde{ \mathbf{S}}_{n}&\longrightarrow&\tilde{\mathbf{S}}_{n+1}\\ \left(x,y\right)&\longmapsto&\mathrm{concat}\left(\sin(x),\cos(x)y\right)\end{array}\] We can then introduce the parametrization defined in a recursive way: \[\begin{array}{ccccccc}h_{1}&:&(-\pi,\pi)&\longrightarrow&\tilde{\mathbf{S}}_{1}\\ &&x&\longmapsto&(\sin(x),\cos(x))\\ \forall n\in\mathbb{N}^{*},&h_{n+1}&:&\left(-\frac{\pi}{2},\frac{\pi}{2} \right)^{n}\times(-\pi,\pi)&\longrightarrow&\tilde{\mathbf{S}}_{n+1}\\ &&x&\longmapsto&\operatorname{concat}\left(\sin\left(x_{0}\right),\cos \left(x_{0}\right)h_{n}(x_{1:})\right)\end{array} \tag{22}\] So we can deduce for Equation 22 a un-recursively form of \(h_{n}\): \[\begin{array}{ccccccc}h_{n}&:&\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n- 1}\times(-\pi,\pi)&\longrightarrow&\tilde{\mathbf{S}}_{n}\\ x&\longmapsto&\operatorname{concat}\left(\sin\left(x\right),1\right)\\ &&\odot\operatorname{concat}\left(1,\operatorname{cumprod}\left(\cos \left(x\right)\right)\right)\end{array} \tag{23}\] And we can also write the reciprocal: \[\begin{array}{ccccccc}h_{n}^{-1}&:&\tilde{\mathbf{S}}_{n}&\longrightarrow& \left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\\ &&x&\longmapsto&\operatorname{arctan2}\left(x_{:n-1},\right.\\ &&\left.\operatorname{concat}\left(\operatorname{flip}\left(\operatorname{ cumsum}\left(\operatorname{flip}\left(x_{1:}\right)\bigtriangleup 2\right)_{1:}\bigtriangleup\frac{1}{2}\right),x_{n}\right)\right)\end{array} \tag{24}\] Similar to the simplex, one can notice that the components of the argument \(x\) in \(h_{n}(x)\) have very different importance. Thus, a small variation of \(x_{0}\) will induce a very large variation of \(h_{n}(x)\), while a small variation of \(x_{n-1}\) will induce a very small variation of \(h_{n}(x)\). To rephrase the problem, let us consider the uniform probability distribution on \(\tilde{\mathbf{S}}_{n}\) noted \(\mathcal{U}_{\tilde{\mathbf{S}}_{n}}\). After the transformation, the objective is to obtain a uniform distribution on \((0,1)\) implying a similar importance of all components. We can show that if \(y\sim\mathcal{U}_{\tilde{\mathbf{S}}_{n}}\), and \(x=h_{n}^{-1}(y)\), then for \(k:0\leq k<n-1\), \(x_{k}\) follows a distribution on \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\) of density \(x\mapsto\frac{\cos(x)^{n-1-k}}{B\left(\frac{1}{2},\frac{n-k}{2}\right)}\) where \(B\) is the Beta function. Unfortunately, it is not possible to express the cumulative distribution function of this distribution other than by using a generalized hypergeometric function, which does not make it possible to implement a differentiable transformation to transform the uniform law of \(\tilde{\mathbf{S}}_{n}\) into the uniform distribution on the orthotope \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\). Therefore it is not possible to apply a transformation to obtain exactly a logistic probability distribution on \(\mathbb{R}^{n}\). 
The proposed approach is then to transform \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\) into \(\mathbb{R}^{n}\), and apply a scaling factor on the dimensions to obtain the variance of a logistic probability distribution. The transformation from \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\) to \(\mathbb{R}^{n}\) is simply written as: \[\begin{array}{ccccccccc}g_{n}&:&\mathbb{R}^{n}&\longrightarrow&\big{(}-\frac{\pi} {2},\frac{\pi}{2}\big{)}^{n-1}\times(-\pi,\pi)\\ &x&\longmapsto&\mathrm{concat}\,\big{(}r_{\mathbb{R}^{n-1}\to\big{(}-\frac{\pi} {2},\frac{\pi}{2}\big{)}^{n-1}}(x_{:n-1}),\\ &&&r_{\mathbb{R}\to(-\pi,\pi)}(x_{n-1})\big{)}\\ \\ g_{n}^{-1}&:&\big{(}-\frac{\pi}{2},\frac{\pi}{2}\big{)}^{n-1}\times(-\pi,\pi) &\to&\mathbb{R}^{n}\\ &x&\longmapsto&\mathrm{concat}\,\big{(}r_{\big{(}-\frac{\pi}{2},\frac{\pi}{2} \big{)}^{n-1}\to\mathbb{R}^{n-1}}(x_{:n-1}),\\ &&&r_{(-\pi,\pi)\to\mathbb{R}}(x_{n-1})\big{)}\end{array} \tag{25}\] Thus, if \(y\sim\mathcal{U}_{\mathbf{S}_{n}}\), and \(x=g_{n}^{-1}\big{(}h_{n}^{-1}(y)\big{)}\), for \(k:0\leq k<n\), we have \(\mathbb{E}(x_{k})=0\), and \(\mathrm{Var}\,(x_{k})\approx\frac{\pi^{2}}{3(2(n-k)-1)}\). By introducing \(z=\sqrt{2(n-\mathrm{range}(n))-1}\odot x\), we have \(\mathbb{E}(z_{k})=0\) and \(\mathrm{Var}\,(z_{k})\approx\frac{\pi^{2}}{3}\), which is the variance of a logistic distribution. We will therefore consider the transformation: \[\begin{array}{ccccc}r_{\mathbb{R}^{n}\to\tilde{\mathbf{S}}_{n}}&:&\mathbb{R }^{n}&\longrightarrow&\tilde{\mathbf{S}}_{n}\\ &&x&\longmapsto&h_{n}\Big{(}g_{n}\Big{(}x\oslash\sqrt{2(n-\mathrm{range}(n))- 1}\Big{)}\Big{)}\\ \\ r_{\tilde{\mathbf{S}}_{n}\to\mathbb{R}^{n}}&:&\tilde{\mathbf{S}}_{n}& \longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&\sqrt{2(n-\mathrm{range}(n))-1}\odot g_{n}^{-1}\big{(}h_{n}^{- 1}(x)\big{)}\end{array} \tag{26}\] For \(n=2\), a representation of \(r_{\mathbb{R}^{n}\to\tilde{\mathbf{S}}_{n}}\) is shown Figure 4. **Properties** * \(r_{\mathbb{R}^{n}\to\tilde{\mathbf{S}}_{n}}(0)=(0,\cdots,0,1)\), * If \(x\sim\operatorname{Logistic}_{n}\), then \(r_{\mathbb{R}^{n}\to\tilde{\mathbf{S}}_{n}}(x)\) follows approximately \(\mathcal{U}_{\tilde{\mathbf{S}}_{n}}\), * If \(x\sim\mathcal{U}_{\tilde{\mathbf{S}}_{n}}\), then \(r_{\tilde{\mathbf{S}}_{n}\to\mathbb{R}^{n}}(x)\) follows approximately \(\operatorname{Logistic}_{n}\). **Implementation details** The parametrization of \(\mathbb{R}^{n}\) in \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\) described in Equation 25 must be done with the hyperbolic tangent to ensure good stability around \(0\) (see implementation details for \((-a,a)\) in Section 2.2.2). Then for \(x\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n-1}\times(-\pi,\pi)\), the computation of \(\operatorname{cumprod}\left(\cos x\right)\) in \(h_{n}(x)\) must be done carefully to be numerically stable and differentiable. We can notice that \(\forall k:0\leq k<n-1,\;\cos x_{k}>0\) (which is not true for \(\cos x_{n-1}\)). 
Thus, we will compute \(\operatorname{cumprod}\left(\cos x\right)\) in two steps: \[\operatorname{cumprod}\left(\cos x\right)= \operatorname{concat}\left(\zeta(x_{:n-1}),\zeta(x_{:n-1})_{n-2} \cos x_{n-1}\right)\] \[\text{with }\zeta(x_{:n-1})= \exp\left(\operatorname{cumsum}\left(\log\left(\cos x_{:n-1} \right)\right)\right)\] We will use the following formulation: \[\begin{array}{rllll}r_{\mathbb{R}^{n}\to\tilde{\mathbf{S}}_{n}}&:&\mathbb{R }^{n}&\longrightarrow&\tilde{\mathbf{S}}_{n}\\ &&x&\longmapsto&\psi(\xi(x))\\ \text{with }\psi&:&y&\longmapsto&\operatorname{concat}\left(\sin\left(y \right),1\right)\\ &&&\odot\operatorname{concat}\left(1,\operatorname{concat}\left(\zeta(y_{:n-1 }),\right.\right.\\ &&&\left.\zeta(y_{:n-1})_{n-2}\cos\left(y_{n-1}\right)\right)\right)\\ \text{with }\zeta&:&u&\longmapsto&\exp\left(\operatorname{cumsum}\left(\log \left(\cos\left(u\right)\right)\right)\right)\\ \text{with }\xi&:&x&\longmapsto&\frac{\pi}{2}\operatorname{concat}\left(1_{n-1},2 \right)\\ &&&\odot\tanh\left(\frac{\pi}{2}\osymp\sqrt{2(n-\operatorname{range}(n))-1} \right)\end{array} \tag{27}\] And for the reciprocal, we will use: \[\begin{array}{ccccc}r_{\tilde{\mathbf{S}}_{n}\rightarrow\mathbb{R}^{n}}&:& \tilde{\mathbf{S}}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&2\operatorname{arctanh}\left(\xi(x)\oslash\left(\frac{\pi}{2} \operatorname{concat}\left(1_{n-1},2\right)\right)\right)\\ &&\oslash\sqrt{2(n-\operatorname{range}(n))-1}\\ \end{array} \tag{28}\] with \(\xi\) : \(x\)\(\longmapsto\)\(\operatorname{arctan2}\left(x_{:n-1}\), \(\operatorname{concat}\left(\operatorname{flip}\left(\operatorname{cumsum} \left(\operatorname{flip}\left(x_{1:}\right)\bigtriangleup 2\right)_{1:}\bigtriangleup\frac{1}{2} \right),x_{n}\right)\right)\) **Implementation example** import numpy as np def reals_to_sphere(x): n = x.size ksi = np.tanh(x/2/np.sqrt(2*np.arange(n, 0, -1)-1))*np.pi/2 ksi[-1] *= 2 zeta = np.exp(np.cumsum(np.log(np.cos(ksi[:-1])))) return ( np.concatenate((np.sin(ksi), (1,))) * np.concatenate(((1,),zeta, (zeta[-1] * np.cos(ksi[-1]),))) ) def sphere_to_reals(x): n = x.size - 1 ksi = np.arctan2( x[:-1], np.concatenate( ( np.sqrt(np.cumsum(x[:0:-1] ** 2)[:0:-1]), (x[-1],), ) ), ), ) ksi[-1] /= 2 return 2 * np.arctanh(ksi/(np.pi/2)) * np.sqrt(2*np.arange(n, 0, -1)-1) #### 2.3.3 Half sphere HS\({}_{n}\) Let be the unit half \(n\)-sphere of dimension \(n\) (as part of \(\mathbb{R}^{n+1}\)): \[\mathbf{HS}_{n}=\left\{x\in\mathbb{R}^{n}\times\mathbb{R}_{+}^{*}:\sum_{i}x_{ i}^{2}=1\right\}\subset\mathbb{R}^{n+1}\] This parametrization is needed, for example, when we want to parameterize a normal vector of a hyperplane. A vector and its opposite describe the same hyperplane, so we look for a vector restricted to the half \(n\)-sphere. 
#### Proposed parametrization The proposed parametrization is built like the one of the \(n\)-sphere, but using \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n}\), with the function \(h_{n}\): \[\begin{array}{rll}h_{n}&:&\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n}& \longrightarrow&\mathbf{HS}_{n}\\ &&x&\longmapsto&\operatorname{concat}\left(\sin\left(x\right),1\right)\\ &&\odot\operatorname{concat}\left(1,\operatorname{cumprod}\left(\cos\left(x \right)\right)\right)\\ \\ h_{n}^{-1}&:&\mathbf{HS}_{n}&\longrightarrow&\left(-\frac{\pi}{2},\frac{\pi}{ 2}\right)^{n}\\ &&x&\longmapsto&\operatorname{arctan2}\left(x_{:n-1},\right.\\ &&\left.\quad\operatorname{flip}\left(\operatorname{cumsum}\left(\operatorname{ flip}\left(x_{1:}\right)\bigtriangleup 2\right)\right)\bigtriangleup\frac{1}{2}\right)\end{array} \tag{29}\] As for the \(n\)-sphere, (see Section 2.3.2 for details), the proposed approach is therefore to transform \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n}\) into \(\mathbb{R}^{n}\), and apply a scaling factor on the dimensions to obtain the variance of a logistic law. The transformation of \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n}\) is \(r_{\mathbb{R}^{n}\rightarrow\left(-\frac{\pi}{2},\frac{\pi}{2}\right)}\) introduced earlier. Thus, if \(y\sim\mathcal{U}_{\widehat{\mathbf{S}}_{n}}\), and \(x=r_{\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\rightarrow\mathbb{R}^{n}} \big{(}h_{n}^{-1}(y)\big{)}\), for \(k:0\leq k<n\), we have \(\mathbb{E}(x_{k})=0\), and \(\operatorname{Var}\left(x_{k}\right)\approx\frac{\pi^{2}}{3(2(n-k)-1)}\). By introducing \(z=\sqrt{2(n-\operatorname{range}(n))-1}\odot x\), we have \(\mathbb{E}(z_{k})=0\) and \(\operatorname{Var}\left(z_{k}\right)\approx\frac{\pi^{2}}{3}\), which is the variance of a logistic distribution. We will therefore consider the transformation: \[\begin{array}{rllll}r_{\mathbb{R}^{n}\rightarrow\mathbf{HS}_{n}}&:&\mathbb{ R}^{n}&\longrightarrow&\mathbf{HS}_{n}\\ &&x&\longmapsto&h_{n}\Big{(}r_{\mathbb{R}^{n}\rightarrow\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)}\Big{(}x\oslash\sqrt{2(n-\operatorname{range}(n))-1} \Big{)}\Big{)}\\ \\ r_{\mathbf{HS}_{n}\rightarrow\mathbb{R}^{n}}&:&\mathbf{HS}_{n}&\longrightarrow &\mathbb{R}^{n}\\ &&x&\longmapsto&\sqrt{2(n-\operatorname{range}(n))-1}\odot r_{\left(-\frac{ \pi}{2},\frac{\pi}{2}\right)\rightarrow\mathbb{R}^{n}}\big{(}h_{n}^{-1}(x) \big{)}\end{array} \tag{30}\] For \(n=2\), a representation of \(r_{\mathbb{R}^{n}\rightarrow\mathbf{HS}_{n}}\) is shown Figure 5. #### Properties * \(r_{\mathbb{R}^{n}\rightarrow\mathbf{HS}_{n}}(0)=(0,\cdots,0,1)\), * If \(x\sim\operatorname{Logistic}_{n}\), then \(r_{\mathbb{R}^{n}\rightarrow\mathbf{HS}_{n}}(x)\) follows approximately \(\mathcal{U}_{\mathbf{HS}_{n}}\), * If \(x\sim\mathcal{U}_{\mathbf{HS}_{n}}\), then \(r_{\mathbf{HS}_{n}\rightarrow\mathbb{R}^{n}}(x)\) follows approximately \(\operatorname{Logistic}_{n}\). #### Implementation details The parametrization in \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\) is done in a symmetrical interval. It is therefore advisable to use the variant of the implementation using the hyperbolic tangeant. The parametrization of \(\mathbb{R}^{n}\) in \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)^{n}\) described in Equation 30 must be done with the hyperbolic tangent to ensure good stability around \(0\) (see implementation details for \(\left(-a,a\right)\) in Section 2.2.2). 
Moreover, since the term \(\mathrm{cumprod}\left(\cos x\right)\) only involves positive values, and since the cumulative product of small elements can induce numerical errors, it should be easily computed as \(\exp\left(\mathrm{cumsum}\left(\log\left(\cos x\right)\right)\right)\). We thus obtain the following formulation of the transformation: \[\begin{array}{rcccc}r_{\mathbb{R}^{n}\rightarrow\mathbf{HS}_{n}}&:&\mathbb{ R}^{n}&\longrightarrow&\mathbf{HS}_{n}\\ &&x&\longmapsto&\mathrm{concat}\left(\sin\left(\xi(x)\right),1\right)\\ &&&&\odot\,\mathrm{concat}\left(1,\exp\left(\mathrm{cumsum}\left(\log\left( \cos\left(\xi(x)\right)\right)\right)\right)\right)\\ \text{with }\xi&:&x&\longmapsto&\frac{\pi}{2}\tanh\left(\frac{x}{2}\oslash \sqrt{2(n-\mathrm{range}(n))-1}\right)\end{array} \tag{31}\] The inverse parametrization is written as: \[\begin{array}{cccccccc}{}^{T_{\mathbf{HS}_{n}\rightarrow\mathbb{R}^{n}}}&:& \mathbf{HS}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&2\operatorname{arctanh}\left(\frac{\xi(x)}{\frac{\pi}{2}} \right)\odot\sqrt{2(n-\operatorname{range}(n))-1}\\ \text{with}\,\xi&:&x&\longmapsto&\operatorname{arctan2}\left(x_{:n-1}, \operatorname{flip}\left(\text{cumsum}\left(\operatorname{flip}\left(x_{1:} \right)\bigtriangleup 2\right)\bigtriangleup\frac{1}{2}\right)\right)\end{array} \tag{32}\] **Implementation example** ``` importnumpyasnp defreals_to_half_sphere(x): n=x.size ksi=np.pi/2*np.tanh(x/2/np.sqrt(2*np.arange(n,0,-1)-1)) return( np.concatenate((np.sin(ksi),(1))) *np.concatenate(((1,),np.exp(np.cumsum(np.log(np.cos(ksi))))))) defhalf_sphere_to_reals(x): n=x.size-1 ksi=np.arctan2(x[:-1],np.sqrt(np.cumsum(x[:0:-1]**2)[::-1])) return2*np.arctanh(2*ksi/np.pi)*np.sqrt(2*np.arange(n,0,-1)-1)) ``` #### 2.3.4 Ball \(\mathbf{B}_{n}\) The unit \(n\)-ball is defined by: \[\mathbf{B}_{n}=\left\{x\in\mathbb{R}^{n}:\sum_{i}x_{i}^{2}\leq 1\right\}\] It is not possible to obtain a bijection with \(\mathbb{R}^{n}\) because of the boundary points. We will therefore choose to introduce a parametrization with the \(n\)-open ball: \[\mathring{\mathbf{B}}_{n}=\left\{x\in\mathbb{R}^{n}:\sum_{i}x_{i}^{2}<1\right\}\] **Construction of the parametrization** There are two naive ways to construct a parametrization with \(\mathbb{R}^{n}\): * Starting from the parametrization of \({\bf S}_{n}\), indeed, \(\forall x\in{\bf S}_{n},\;\;x_{:n}\in{\bf B}_{n}\) and \(\forall x\in{\bf B}_{n},\;\;\mbox{concat}\;(x,1-\sum_{i}x_{i})\in{\bf S}_{n}\). But the whole sphere \({\bf S}_{n}\) is not parametriizable and we introduce uncontrolled effects on \({\bf S}_{n}\setminus\tilde{\bf S}_{n}\). * By transforming the \((-1,1)^{n}\) into \(\mathring{\bf B}_{n}\) with the application \(x\mapsto\frac{\|x\|_{\infty}}{\|x\|_{2}}x\). However, as the ratio between the volume of \({\bf B}_{n}\) and \((-1,1)^{n}\) tends very quickly to \(0\) when \(n\) increases, this transformation induces very strong deformation and does not make parametrization possible to obtain a uniformity on the ball by applying a transformation on each of the coordinates. We will use an alternative approach, the general idea is that if \(x\sim{\cal N}(0,I_{n})\), posing \(y=\|x\|_{2}^{2}\), and \(z=\frac{x}{\sqrt{y}}\), then \(y\sim\chi_{n}^{2}\), \(z\sim{\cal U}_{{\bf S}_{n}}\), and \(y\) and \(z\) are independent. Now we know that if \(u\sim{\cal U}_{(0,1)}\) and \(y\sim{\cal U}_{{\bf S}_{n}}\), then \(u^{\frac{1}{n}}y\sim{\cal U}_{\mathring{\bf B}_{n}}\). 
It is thus enough to transform \(y\sim\chi_{n}^{2}\) into \(u\sim\mathcal{U}_{(0,1)}\), which can be done by means of the cumulative distribution function of the \(\chi_{n}^{2}\) probability distribution, noted \(F_{\chi_{n}^{2}}\). We would like to introduce the function:
\[\begin{array}{rcccl}h_{n}&:&\mathbb{R}^{n}&\longrightarrow&\mathring{\mathbf{B}}_{n}\\ &&x&\longmapsto&\left(F_{\chi_{n}^{2}}\Big{(}\|x\|_{2}^{2}\Big{)}\right)^{\frac{1}{n}}\frac{x}{\|x\|_{2}}\\ \\ h_{n}^{-1}&:&\mathring{\mathbf{B}}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&\sqrt{F_{\chi_{n}^{2}}^{-1}\left(\|x\|_{2}^{n}\right)}\,\frac{x}{\|x\|_{2}}\end{array}\]
Thus, if \(x\sim\mathcal{N}(0,I_{n})\), then \(h_{n}(x)\sim\mathcal{U}_{\mathring{\mathbf{B}}_{n}}\). This property implies a similar importance of all components. The writing of \(h_{n}\) requires the cumulative distribution function of \(\chi_{n}^{2}\), but this function and its reciprocal are not easily computable in an automatically differentiable way if \(n>2\). For \(n=2\) we will use the property \(\chi_{2}^{2}=\mathcal{E}\big{(}\frac{1}{2}\big{)}\) where \(\mathcal{E}(\theta)\) is the exponential probability distribution of intensity \(\theta\). For \(n>2\), we know that if \(y\sim\chi_{n}^{2}\), then \(y^{\frac{1}{3}}\) is approximately distributed as \(\mathcal{N}\Big{(}n^{\frac{1}{3}}\big{(}1-\frac{2}{9n}\big{)},\frac{2}{9n^{\frac{1}{3}}}\Big{)}\) (Wilson and Hilferty, 1931). This approximation does not respect the support of the distribution (which is essential to define a bijection). We choose to use the transformation \(t\mapsto\frac{1}{4}\operatorname{logexpm1}\left(4t\right)\) (whose effect vanishes when \(n\) is large), mapping \(\mathbb{R}_{+}^{*}\) to \(\mathbb{R}\) with an identity asymptote when \(t\to+\infty\); the factor \(4\) is chosen empirically to reduce the approximation error. Then we consider \(\frac{1}{4}\operatorname{logexpm1}\left(4y^{\frac{1}{3}}\right)\underset{\text{approx}}{\sim}\mathcal{N}\Big{(}n^{\frac{1}{3}}\big{(}1-\frac{2}{9n}\big{)},\frac{2}{9n^{\frac{1}{3}}}\Big{)}\). To sum up, we will approximate \(F_{\chi_{n}^{2}}\) by \(m_{n}\) defined as:
\[\begin{array}{rcccl}m_{n}&:&\mathbb{R}_{+}^{*}&\longrightarrow&(0,1)\\ &&x&\longmapsto&\begin{cases}1-\exp\left(-\frac{x}{2}\right)&\text{if }n=2\\ \Phi\left(\frac{\frac{1}{4}\operatorname{logexpm1}\left(4x^{\frac{1}{3}}\right)-n^{\frac{1}{3}}\left(1-\frac{2}{9n}\right)}{\sqrt{\frac{2}{9n^{\frac{1}{3}}}}}\right)&\text{if }n\geq 3\end{cases}\\ \\ m_{n}^{-1}&:&(0,1)&\longrightarrow&\mathbb{R}_{+}^{*}\\ &&x&\longmapsto&\begin{cases}-2\log\left(1-x\right)&\text{if }n=2\\ \left(\frac{1}{4}\operatorname{log1pexp}\left(4\left(n^{\frac{1}{3}}\big{(}1-\frac{2}{9n}\big{)}+\Phi^{-1}(x)\sqrt{\frac{2}{9n^{\frac{1}{3}}}}\right)\right)\right)^{3}&\text{if }n\geq 3\end{cases}\end{array}\]
where \(\Phi\) is the distribution function of \(\mathcal{N}(0,1)\).
And we will use the function \(\tilde{h}_{n}\) as a approximation of \(h_{n}\): \[\begin{array}{cccc}\tilde{h}_{n}&:&\mathbb{R}^{n}&\longrightarrow&\tilde{ \mathbf{B}}_{n}\\ &&x&\longmapsto&\left(m_{n}\Big{(}\|x\|_{2}^{2}\Big{)}\right)^{\frac{1}{n}} \frac{x}{\|x\|_{2}}\\ \tilde{h}_{n}^{-1}&:&\tilde{\mathbf{B}}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&m_{n}^{-1}(\|x\|_{2}^{n})\frac{x}{\|x\|_{2}}\end{array}\] And if \(x\sim\mathcal{N}(0,I_{n})\), \(\tilde{h}_{n}(x)\underset{\text{approx}}{\sim}\mathcal{U}_{\tilde{\mathbf{B} }_{n}}\) (for \(n=2\) where this result is exact). But for stability reasons, the light tails of the normal distribution are problematic, and like the other parametrizations, we want a parametrization which transforms the distribution \(\operatorname{Logistic}_{n}\) of \(\mathbb{R}_{n}\) into the uniform distribution of the target space. It is therefore sufficient to consider the transformation from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{n}\): \[\begin{array}{ccccc}g_{n}&:&\mathbb{R}^{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&\left(\Phi^{-1}(\operatorname{\mathrm{c}ypit}\left(x_{i} \right))\right)_{i:0\leq i<n}\\ &&&&\\ g_{n}^{-1}&:&\mathbb{R}^{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&\left(\operatorname{\mathrm{logit}}\left(\Phi(x_{i})\right) \right)_{i:0\leq i<n}\end{array}\] If \(x\sim\operatorname{Logistic}\), then \(g(x)\sim\mathcal{N}(0,1)\). Thus we obtain the proposed parametrization: \[\begin{array}{ccccc}{}^{r}{}_{\mathbb{R}^{n}\rightarrow\tilde{\mathbf{B} }_{n}}&:&\mathbb{R}^{n}&\longrightarrow&\tilde{\mathbf{B}}_{n}\\ &&x&\longmapsto&\tilde{h}_{n}(g_{n}(x))\\ \\ {}^{r}{}_{\tilde{\mathbf{B}}_{n}\rightarrow\mathbb{R}^{n}}&:&\tilde{\mathbf{ B}}_{n}&\longrightarrow&\mathbb{R}^{n}\\ &&x&\longmapsto&g_{n}^{-1}\Big{(}\tilde{h}_{n}^{-1}(x)\Big{)}\end{array} \tag{33}\] For \(n=2\), a representation of \(r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}\) is shown Figure 6. Note that \(r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}(0)\) is not defined with the previous formula, but we can note that the parametrization is extendable by continuity in \(0\) by \(r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}(0)=0\) and that this extension is differentiable. **Properties** * If \(x\sim\text{Logistic}\,n\), then \(r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}(x)\underset{\text{approx}}{\sim} \mathcal{U}_{\hat{\mathbf{B}}_{n}}\) (for \(n=2\) this result is exact), * If \(x\sim\mathcal{U}_{\hat{\mathbf{B}}_{n}}\), then \(r_{\hat{\mathbf{B}}_{n}\to\mathbb{R}^{n}}(x)\underset{\text{approx}}{\sim} \text{Logistic}_{n}\) (for \(n=2\) this result is exact), * \(r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}(0)=0\), * For \(x\in\mathbb{R}^{n}\) and \(y=r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}(x)\), we have \(\forall i:0\leq i<n\), \(\text{sign}(x_{k})=\text{sign}(y_{k})\). * For \(x\in\mathbb{R}^{n}\) and \(y=r_{\mathbb{R}^{n}\to\hat{\mathbf{B}}_{n}}(x)\), we have \(\forall i,j:0\leq i,j<n\), \(x_{i}\leq x_{j}\Longleftrightarrow y_{i}\leq y_{j}\). ### Implementation details First of all, the function \(\Phi\) and the function \(\Phi^{-1}\) are expressed in terms of the functions \(\mathrm{erf}\) and \(\mathrm{erfinv}\) introduced in all the calculation libraries (and which are continuously automatically differentiable in these libraries). 
We will therefore use: \[\begin{array}{ccccc}\Phi&:&\mathbb{R}&\rightarrow&(0,1)\\ &&x&\longmapsto&\frac{1}{2}\Big{(}1+\operatorname{erf}\left(\frac{x}{\sqrt{2}} \right)\Big{)}\\ \Phi^{-1}&:&(0,1)&\rightarrow\mathbb{R}\\ &&x&\longmapsto&\sqrt{2}\operatorname{erfinv}\left(2x-1\right)\end{array}\] In the particular case of \(\left(\Phi^{-1}\circ\operatorname{expit}\right)\) and \(\left(\operatorname{logit}\circ\Phi\right)\), we note the use of \((0,1)\) as a intermediary, whereas the functions \(\operatorname{erf}\) and \(\operatorname{erfinv}\) are intended to pass by \((-1,1)\). It is thus appropriate to use the identity \(\forall x\in\mathbb{R}\), \(\operatorname{expit}(x)=\frac{1+\tanh\left(\frac{x}{2}\right)}{2}\) and to carry out the calculation \(\left(\Phi^{-1}\circ\operatorname{expit}\right)\) in one time, in the same way for \(\left(\operatorname{logit}\circ\Phi\right)\). We will therefore use: \[\begin{array}{ccccc}\Phi^{-1}\circ\operatorname{expit}&:&\mathbb{R}& \rightarrow&\mathbb{R}\\ &&x&\longmapsto&\sqrt{2}\operatorname{erfinv}\left(\tanh\left(\frac{x}{2} \right)\right)\\ \operatorname{logit}\circ\Phi&:&\mathbb{R}&\rightarrow&\mathbb{R}\\ &&x&\longmapsto&2\operatorname{arctanh}\left(\operatorname{erf}\left(\frac{x }{\sqrt{2}}\right)\right)\end{array}\] Moreover, we do not calculate \(m_{n}\), but \(\log m_{n}\), allowing the power operation to be a simple product, and a more precise calculation by using \(\log\Phi\) which is implemented by the calculation libraries under the name of \(\log\_\operatorname{ndtr}\). We thus obtain the following form to implement the transformation (using the function \(\left(\Phi^{-1}\circ\operatorname{expit}\right)\) just defined): \[\begin{array}{ccccc}r_{\mathbb{R}^{n}\rightarrow\tilde{\mathbf{B}}_{n}}&:& \mathbb{R}^{n}&\longrightarrow&\mathring{\mathbf{B}}_{n}\\ &&x&\longmapsto&\begin{cases}\tilde{h}_{n}(g_{n}(x))&\text{if}\,x\neq 0\\ 0&\text{if}\,x=0\end{cases}\\ \text{with}\,\,g_{n}&:&x&\longmapsto&\left(\left(\Phi^{-1}\circ\operatorname{ expit}\right)(x_{i})\right)_{i:0\leq i<n}\\ \text{with}\,\,\tilde{h}_{n}&:&x&\longmapsto&\exp\Big{(}\frac{1}{n}\Big{(} \log m_{n}\Big{(}\norm{x}_{2}^{2}\Big{)}\Big{)}-\frac{1}{2}\log\Big{(}\norm{x} _{2}^{2}\Big{)}\Big{)}x\\ \text{with}\,\,\log m_{n}&:&x&\longmapsto&\begin{cases}\log 1\text{p}\left(-\exp\left(- \frac{x}{2}\right)\right)&\text{if}\,n=2\\ \left(\log\Phi\right)\left(\frac{\frac{1}{4}\log\operatorname{exppm1}\left(4 y^{\frac{1}{3}}\right)-n^{\frac{1}{3}}\left(1-\frac{2}{9n}\right)}{\sqrt{\frac{2}{9n^{ \frac{1}{3}}}}}\right)&\text{if}\,n\geq 3\end{cases}\\ \text{with}\,\,\log\Phi&:&x&\longmapsto&\log\_\operatorname{ndtr}(x)\end{array} \tag{34}\] And in a similar way, we obtain the implemented form of the reciprocal of the transformation (using the functions \(\Phi^{-1}\) and \(\left(\operatorname{logit}\circ\Phi\right)\) just defined): \[r_{\hat{\mathbf{B}}_{n}\rightarrow\mathbb{R}^{n}} : \hat{\mathbf{B}}_{n} \longrightarrow \mathbb{R}^{n}\] \[x \longmapsto \begin{cases}g_{n}^{-1}\Big{(}\tilde{h}_{n}^{-1}(x)\Big{)}& \text{if}\,x\neq 0\\ 0&\text{if}\,x=0\end{cases}\] with \[g_{n}^{-1} : x \longmapsto ((\text{logit}\circ\Phi)(x_{i}))_{i:0\leq i<n}\] with \[\tilde{h}_{n}^{-1} : x \longmapsto \sqrt{m_{n}^{-1}(\|x\|_{2}^{n})}\frac{x}{\|x\|_{2}}\] with \[m_{n}^{-1} : x \longmapsto \begin{cases}-2\log{(1-x)}&\text{if}\,n=2\\ \left(\frac{1}{4}\log\!1\!\text{pexp}\left(4\!\left(n^{\frac{1}{3}}\big{(}1- \frac{2}{9n}\big{)}+\Phi^{-1}(x)\sqrt{\frac{2}{g_{n}^{\frac{1}{3}}}}\right) 
\right)\right)^{3}&\text{if}\,n\geq 3\end{cases} \tag{35}\] #### Implementation example This implementation uses \(\log\!1\!\text{pexp}\) and \(\log\!\text{expm}1\) functions defined in implementation example of Section 2.2.1. ``` importnumpyasnp importsscipy.special defreals_to_ball(x): n=x.size g=np.sqrt(2)*scipy.special.erfinv(np.tanh(x/2)) normsq_g=(g**2).sum() normsq_g_rep_01=normsq_g+(normsq_g==0)*1.0 ifn==2: log_m=np.log(-np.expm1(-normsq_g_rep_01/2)) else: log_m=scipy.special.log_ndtr( ( 1 / 4*logexpm1(4*normsq_g_rep_01**(1/3)) - n**(1/3)*(1-2/(9*n)) ) / np.sqrt(2/(9*n**(1/3))) returnnp.exp(log_m/n-0.5*np.log(normsq_g_rep_01))*g defball_to_reals(x): n=x.size normsq_x = (x**2).sum() normsq_x_rep_01 = normsq_x + (normsq_x == 0) * 1.0 if n == 2: m_inv = -2 * np.log1p(-normsq_x_rep_01) else: phi_inv = np.sqrt(2) * scipy.special.erfinv(2*normsq_x_rep_01**(n/2)-1) m_inv = ( 0.25 * log1pexp( 4 * ( n ** (1 / 3) * (1 - 2 / (9 * n)) + phi_inv * np.sqrt(2 / (9 * n ** (1 / 3))) ) ) ) ) ** 3 h_inv = (m_inv**0.5 / normsq_x_rep_01**0.5) * x return 2 * np.arctanh(scipy.special.erf(h_inv / np.sqrt(2))) ### Parametrization of matrices #### 2.4.1 Diagonal matrices The set of diagonal matrices of \(\mathbb{R}^{n\times n}\) is defined by: \[\mathsf{D}^{n}=\left\{M\in\mathbb{R}^{n\times n}:\forall i,j,\;i=j\lor M_{ij}= 0\right\}\] The parametrization used will be: \[\begin{array}{rcl}r_{\mathsf{D}^{n}\rightarrow\mathbb{R}^{n}}&:&\mathsf{D}^ {n}&\longrightarrow&\mathbb{R}^{n}\\ &&M&\longmapsto&\operatorname{diag}\left(M\right)=\left(M_{ii}\right)_{i:0 \leq i<n}\\ \\ r_{\mathbb{R}^{n}\rightarrow\mathsf{D}^{n}}&:&\mathbb{R}^{n}&\longrightarrow& \mathsf{D}^{n}\\ &&x&\longmapsto&\operatorname{undiag}\left(x\right)\\ \\ &&=&\begin{bmatrix}x_{0}\\ &x_{1}\\ &&\ddots\\ &&&x_{n-1}\end{bmatrix}\end{array} \tag{36}\] These two operations \(\operatorname{diag}\) and \(\operatorname{undiag}\) are often present in a single polymorphic function in most calculation software under the name \(\operatorname{diag}\). It is therefore not useful to provide a reference implementation. #### 2.4.2 Symmetric matrices The set of symmetric matrices of \(\mathbb{R}^{n\times n}\) is defined by: \[\mathsf{S}^{n}=\left\{M\in\mathbb{R}^{n\times n}:M=M^{T}\right\}\] The parametrization used will be: \[\begin{array}{ccccc}r_{\mathsf{S}^{n}\to\mathbb{R}^{n(n+1)/2}}&:&\mathsf{S}^{n }&\longrightarrow&\mathbb{R}^{n(n+1)/2}\\ &M&\longmapsto&\text{concat}\left(\left(M_{k,:k+1}\right)_{k:0\leq k<n}\right) \\ &&&=\text{concat}\left(M_{0,:1},M_{1,:2},M_{2,:3},\ldots,M_{:,n-1}\right)\\ &&&=\left(M_{0,0},M_{1,0},M_{1,1},M_{2,0},\ldots M_{2,2},\ldots,M_{n-1,0}, \ldots M_{n-1,n-1},\right)\end{array} \tag{37}\] The reciprocal parametrization will be: \[\begin{array}{ccccc}r_{\mathbb{R}^{n(n+1)/2}\to\mathsf{S}^{n}}&:&\mathbb{R} ^{n(n+1)/2}&\longrightarrow&\mathsf{S}^{n}\\ &&x&\longmapsto&\left[\begin{array}{ccccc}x_{0}&&&&\text{(sym)}\\ x_{1}&x_{2}&&&\\ x_{3}&\cdots&x_{5}&&&\\ \vdots&&&\ddots&\\ x_{n(n-1)/2}&\cdots&\cdots&\cdots&x_{n(n+1)/2-1}\end{array}\right]\end{array} \tag{38}\] **Implementation example** import numpy as np def sym_matrix_to_reals(x): assert x.shape[0] == x.shape[1] n = x.shape[0] return x[np.tril_indices(n)] def reals_to_sym_matrix(x, scale=1.0): n = int((8 * x.size + 1) ** 0.5 / 2) assert ( x.size == n * (n + 1) // 2 ), f"Incorrect size. 
It does not exist n such as n*(n+1)/2==(x.size)" y = np.zeros((n, n)) indices = np.tril_indices(n) y[indices] = x y_transposed = y.T.copy() y_transposed[(np.arange(n),) * 2] = 0 y += y_transposed return y #### 2.4.3 Diagonal positive definite matrices The set of diagonal positive definite matrices of \(\mathbb{R}^{n\times n}\) is defined by: \[\mathsf{D}^{n}_{++}=\left\{M\in\mathbb{R}^{n\times n}:(\forall iM_{ii}>0) \wedge(\forall i,j,\ i=j\lor M_{ij}=0)\right\}\] The proposed parametrization will be the composition of the parametrization of \(\mathbb{R}^{*\;n}_{+}\) and the parametrization of \(\mathsf{D}^{n}\). For \(s\in\mathbb{R}^{*\;n}_{+}\) a vector of scaling parameters of the diagonal will be: \[r_{s;\mathsf{D}^{n}_{++}\rightarrow\mathbb{R}^{n}} =r_{s;\mathbb{R}^{*\;n}_{+}\rightarrow\mathbb{R}^{n}}\circ r_{ \mathsf{D}^{n}\rightarrow\mathbb{R}^{n}}\] \[r_{s;\mathbb{R}^{n}\rightarrow\mathsf{D}^{n}_{++}} =r_{\mathbb{R}^{n}\rightarrow\mathsf{D}^{n}}\circ r_{s;\mathbb{ R}^{n}\rightarrow\mathbb{R}^{*\;n}_{+}}\] that is: \[r_{s;\mathsf{D}^{n}_{++}\rightarrow\mathbb{R}^{n}} : \mathsf{D}^{n}_{++} \longrightarrow \mathbb{R}^{n}\] \[M \longmapsto r_{s;\mathbb{R}^{*\;n}_{+}\rightarrow\mathbb{R}^{n}}(\text{ diag}\left(M\right))=\left(r_{s_{i};\mathbb{R}^{*}_{+}\rightarrow\mathbb{R}}(M_{ii}) \right)_{i:0\leq i<n}\] \[r_{s;\mathbb{R}^{n}\rightarrow\mathsf{D}^{n}_{++}} : \mathbb{R}^{n} \longrightarrow \mathsf{D}^{n}_{++}\] \[x \longmapsto \text{undiag}\left(r_{s;\mathbb{R}^{n}\rightarrow\mathbb{R}^{ *\;n}_{+}}(x)\right)\] \[=\begin{bmatrix}r_{s_{0};\mathbb{R}\rightarrow\mathbb{R}^{*}_{+}} (x_{0})\\ r_{s_{1};\mathbb{R}\rightarrow\mathbb{R}^{*}_{+}}(x_{1})\\ &\ddots\\ &&r_{s_{n-1};\mathbb{R}\rightarrow\mathbb{R}^{*}_{+}}(x_{n-1})\end{bmatrix} \tag{39}\] #### Choice of the vector of scaling parameters of the diagonal We will often choose the scaling parameters of the diagonal as the expected order of magnitude of the values of the diagonal. Choosing parameters that are too small can lead to very large steps in iterative algorithms using the gradient and to very negative values in real space, implying very small magnitudes without the optimum being reached. The choice of scale too large leads the parametrization of the diagonal to behave like an expononential and can lead to unstable algorithms. #### Implementation example In this implementation the scaling parameter vector can be provided as a vector or as a scalar (and in this case is applied to the whole diagonal). Functions softplus and softplusinv defined in the implementation example in Section 2.2.1 are used. ``` importnumpyasnp defreals_to_diag_pd_matrix(x,scale=1.0): returnnp.diag(softplus(x,scale=scale)) defdiag_pd_matrix_to_reals(x,scale=1.0): returnsoftplusinv(np.diag(x),scale=scale) ``` #### 2.4.4 Symmetric positive definite matrices The set of symmetric positive definite matrices of \(\mathbb{R}^{n\times n}\) is defined by: \[\mathsf{S}^{n}_{++}=\left\{M\in\mathbb{R}^{n\times n}:M=M^{T}\wedge\forall v \in\mathbb{R}^{n},\,v^{T}Mv>0\right\}\] #### Construction of the parametrization We will construct the parametrization starting from \(\mathsf{S}^{n}_{++}\) to \(\mathbb{R}^{k}\) (with \(k=\frac{n(n+1)}{2}\)). Considering a matrix \(M\in\mathsf{S}^{n}_{++}\) which has a diagonal of order of magnitude \(s\in\mathbb{R}^{*\,n}_{+}\). By posing \(M^{\prime}=D_{s}^{-\frac{1}{2}}MD_{s}^{-\frac{1}{2}}\), with \(D_{s}=\mathrm{undiag}(s)\), we obtain a positive definite symmetric matrix which has a diagonal of order of magnitude \(1\). 
A symmetric positive definite matrix \(M\) has a unique decomposition of the form \(M^{\prime}=LL^{T}\) with * \(L\) a lower triangular matrix, * \(\mathrm{diag}(L)\in\mathbb{R}^{*\,n}_{+}\). This decomposition is the Cholesky factorization. The constraint of positivity of the diagonal is essential to obtain the uniqueness of the Cholesky factorization. Thus, the Cholesky factorization defines a bijection between \(\mathsf{S}^{n}_{++}\) and the space of lower triangular matrices with positive diagonal. Consider the line \(i\) of the matrix \(L\), since \(L\) is lower triangular, only the first part of the line \(i\) up to the diagonal is useful, it is \(l_{i}=L_{i,:i+1}\in\mathbb{R}^{i+1}\). Using the matrix product, we find that \(\|l_{i}\|^{2}=l_{i}^{T}l_{i}=M_{ii}^{\prime}\). So the order of magnitude of \(\|l_{i}\|^{2}\) is \(1\). From this, we deduce that the elements (in absolute value) of \(l_{i}\) have an order of magnitude of \(\frac{1}{\sqrt{i+1}}\). So we introduce: \[L^{\prime}=\mathrm{undiag}\left(\left(\sqrt{i+1}\right)_{i:0\leq i<n}\right)L\] Introducing \(l_{i}^{\prime}\) in the same way as \(l_{i}\) with \(l_{i}^{\prime}=L_{i,:i+1}^{\prime}\in\mathbb{R}^{i+1}\), we have \(l_{i}^{\prime}=\sqrt{i+1}l_{i}\). Thus the elements (in absolute value) of \(l_{i}^{\prime}\) have a magnitude of \(1\). So we obtain a matrix \(L^{\prime}\) with positive diagonal and all the elements having in absolute value an order of magnitude \(1\). It is therefore sufficient to use \(\mathrm{logexpm}1=r_{1;\mathbb{R}_{+}^{*}\rightarrow\mathbb{R}}\) for the diagonal, and the identity for the extra-diagonal terms. We obtain the element of \(\mathbb{R}^{n(n+1)/2}\): \[\mathrm{concat}\left(\mathrm{log1pexp}\left(\mathrm{diag}\left(L^{\prime} \right)\right),\mathrm{concat}\left(\left(L_{i,:i}^{\prime}\right)_{i:0\leq i< n}\right)\right)\] The parametrization defined in this way is thus: \[r_{s;\mathbb{S}_{++}^{n}\rightarrow\mathbb{R}^{n(n+1)/2}} : \mathbb{S}_{++}^{n} \longrightarrow \mathbb{R}^{n(n+1)/2}\] \[M \longmapsto g\bigg{(}\mathrm{undiag}\left(\left(\sqrt{i+1}\right)_{i:0\leq i <n}\right)\mathrm{cholesky}\left(D_{s}^{-\frac{1}{2}}MD_{s}^{-\frac{1}{2}} \right)\bigg{)}\] \[\mathrm{with\;g} : L \longmapsto \mathrm{concat}\left(\mathrm{logexpm}1\left(\mathrm{diag}\left(L \right)\right),\mathrm{concat}\left(\left(L_{i,:i}\right)_{i:0\leq i<n}\right)\right)\] \[\mathrm{with\;}\qquad\qquad D_{s} = \mathrm{undiag}(s) \tag{40}\] We deduce the reciprocal: \[r_{s;\mathbb{R}^{n(n+1)/2}\rightarrow\mathbb{S}_{++}^{n}} : \mathbb{R}^{n(n+1)/2} \rightarrow \mathbb{S}_{++}^{n}\] \[x \longmapsto D_{s}^{\frac{1}{2}}L(x)L(x)^{T}D_{s}^{\frac{1}{2}}\] \[\mathrm{with\;}L : x \longmapsto \mathrm{undiag}\left(\left(\frac{1}{\sqrt{i+1}}\right)_{i:0\leq i <n}\right)\!\!\left(\mathrm{diag}\left(\mathrm{log1pexp}\left(x_{:n}\right) \right)+g(x_{n:})\right)\] \[\mathrm{with\;}g : y \longmapsto \begin{bmatrix}0&&&&\\ y_{0}&0&&&\\ y_{1}&y_{2}&0&&&\\ y_{3}&\cdots&y_{5}&0&&&\\ \vdots&&&\ddots&\ddots&\\ y_{n(n-1)/2-n+1}&\cdots&\cdots&\cdots&y_{n(n-1)/2-1}&0\end{bmatrix}\] \[\mathrm{with\;}\qquad\qquad D_{s} = \mathrm{undiag}(s)\] ### Implementation details When implementing, it is useful not to write products by diagonal matrices as matrix products, but to write them as vector product operations with broadcasting to reduce the number of operations. 
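As a small illustration of this point (not part of the reference implementation), the two-sided rescaling by \(D_{s}^{\frac{1}{2}}\) can be written with broadcasting instead of explicit diagonal matrix products:

```
import numpy as np

rng = np.random.default_rng(0)
n = 4
L = np.tril(rng.normal(size=(n, n)))
s = np.array([1.0, 4.0, 0.25, 9.0])   # scales of the diagonal
sqrt_s = np.sqrt(s)

# with explicit diagonal matrices: two extra matrix products
M_matrix_products = np.diag(sqrt_s) @ (L @ L.T) @ np.diag(sqrt_s)

# with broadcasting: simple row and column scalings, same result
M_broadcast = (L @ L.T) * sqrt_s[:, None] * sqrt_s[None, :]

assert np.allclose(M_matrix_products, M_broadcast)
```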
### Implementation example ``` importnumpyasnp defspd_matrix_to_reals(x,scale=1.0): assertlen(x.shape)==2andx.shape[0]==x.shape[1] n=x.shape[-1] ifhasattr(scale, "shape"): assertlen(scale.shape)==0or( len(scale.shape)==1andscale.shape[0]==n ), "Nonbroacastables" ifhasattr(scale, "shape")andlen(scale.shape)==1: sqrt_scale=np.sqrt(scale) x_rescaled=x/sqrt_scale[:,None]/sqrt_scale[None,:] else: x_rescaled=x/scale y=np.linalg.cholesky(x_rescaled) y*=np.sqrt(np.arange(1,n+1))[:,None] diag_values=y[(np.arange(n),)*2] tril_values=y[np.tril_indices(n,-1)] returnnp.concatenate((logexpm1(diag_values),tril_values)) defreals_to_spd_matrix(x,scale=1.0): n=int((8*x.size+1)**0.5/2) assert( x.size==n*(n+1)//2 ),f"Incorrectsize.Itdoesnotexistnsuchasn*(n+1)/2=={x.size}" ifhasattr(scale, "shape"): assertlen(scale.shape)==0or( len(scale.shape)==1andscale.shape[0]==n ),"Nonbroacastables" y=np.zeros((n,n)) y[(np.arange(n),)*2]=log1pexp(x[:n]) y[np.tril_indices(n,-1)]=x[n:] y /= np.sqrt(np.arange(1, n + 1))[:, None] z_rescaled = y @ y.T if hasattr(scale, "shape") andlen(scale.shape) == 1: sqrt_scale = np.sqrt(scale) z = z_rescaled * sqrt_scale[:, None] * sqrt_scale[None, :] else: z = z_rescaled * scale return z #### 2.4.5 Correlation matrices A correlation matrix is a symmetric positive definite matrix with constant diagonal equal to 1. The set of correlation matrices of \(\mathbb{R}^{n\times n}\) is defined by: \[\mathsf{C}^{n}_{++}=\big{\{}M\in\mathbb{R}^{n\times n}:M=M^{T}\wedge(\forall i,\,M_{ii}=1)\wedge\big{(}\forall v\in\mathbb{R}^{n},\,v^{T}Mv>0\big{)}\big{\}}\] #### Construction of the parametrization We will construct the parametrization of \(\mathsf{C}^{n}_{++}\) to \(\mathbb{R}^{k}\) (with \(k=\frac{n(n-1)}{2}\)), starting with the process as the parametrization of \(\mathsf{S}^{n}_{++}\). Considering a matrix \(M\in\mathsf{C}^{n}_{++}\), it is a positive definite matrix, so it admits a unique decomposition of the form \(M^{\prime}=LL^{T}\) with * \(L\) a lower triangular matrix, * \(\mathrm{diag}(L)\in\mathbb{R}^{*\;n}_{+}\). This decomposition is the Cholesky factorization (the constraint of positivity of the diagonal is essential to obtain the uniqueness of the Cholesky factorization). In this case, the Cholesky factorization does not define a bijection between \(\mathsf{C}^{n}_{++}\) and the space of lower triangular matrices with positive diagonal, it is only injective. It is only possible to reach the lower triangular matrices of positive diagonal \(L^{\prime}\) such that \(L^{\prime}L^{\prime T}\) has a diagonal of \(1\). Consider the line \(i\) of a matrix \(L^{\prime}\) lower triangular of positive diagonal, only the first part of the line \(i\) up to the diagonal is useful, it is \(l^{\prime}_{i}=L^{\prime}_{i,i+1}\in\mathbb{R}^{i+1}\). Using the matrix product, we find that \(\left\|l^{\prime}_{i}\right\|^{2}={l^{\prime}_{i}}^{T}l^{\prime}_{i}=({L^{ \prime}L^{\prime}}^{T})_{ii}\). So the matrix \({L^{\prime}L^{\prime}}^{T}\) has a unit diagonal if and only if \(\forall i,\,\left\|l^{\prime}_{i}\right\|^{2}=1\). We thus find that the Cholesky factorization defines a bijection between the space of correlation matrices and the space of triangular matrices of positive diagonal \(L^{\prime}\) and whose rows are elements of the unit spheres (formally \(\forall i,\,l^{\prime}_{i}\in\mathbf{S}_{i}\)). 
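This characterization is easy to check numerically; the following sketch (illustrative only, assuming NumPy) verifies that the rows of the Cholesky factor of a correlation matrix have unit norm:

```
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
S = A @ A.T + 5 * np.eye(5)            # an arbitrary SPD matrix
d = np.sqrt(np.diag(S))
C = S / d[:, None] / d[None, :]        # the associated correlation matrix (unit diagonal)

L = np.linalg.cholesky(C)
print(np.allclose((L**2).sum(axis=1), 1.0))  # True: each row lies on a unit sphere
```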
Note that imposing the diagonal to be positive and \(\forall i,\,l^{\prime}_{i}\in\mathbf{S}_{i}\) is equivalent to imposing that \(\forall i,\,l^{\prime}i\in\mathbf{HS}_{i}\), where \(\mathbf{HS}_{i}\) is the half \(i\)-dimentional sphere definied in Section 2.3.3. We know \(\mathbf{HS}_{0}=\{1\}\), and for \(i>0\) we have a parametrization of \(\mathbf{HS}_{i}\) in \(\mathbb{R}^{i}\) so we obtain a parametrization of the triangular matrix respecting the two constraints. We obtain then a parametrization of the correlation matrices. \[\begin{array}{ccccc}{}^{r_{\mathsf{C}^{n}_{++}\rightarrow\mathbb{R}^{n(n-1)/2} }}&:&\mathsf{C}^{n}_{++}&\longrightarrow&\mathbb{R}^{n(n-1)/2}\\ &&M&\longmapsto&g(\text{cholesky}\left(M\right))\\ \text{with }g&:&L&\longmapsto&\text{concat}\left(\left(r_{\mathbf{HS}_{i} \rightarrow\mathbb{R}^{i}}(L_{i,:i+1})\right)_{i:1\leq i<n}\right)\end{array} \tag{41}\] And we obtain the inverse parametrization by: \[\begin{array}{ccccc}{}^{r_{\mathbb{R}^{n(n-1)/2}\rightarrow\mathsf{C}^{n}_{++ }}}&:&\mathbb{R}^{n(n-1)/2}&\longrightarrow&\mathsf{C}^{n}_{++}\\ &&x&\longmapsto&L(x)L(x)^{T}\\ &&&&\\ \text{with}&:&L&\longmapsto&\left[\begin{array}{c}1\\ \hline r_{\mathbb{R}^{1}\rightarrow\mathbf{HS}_{1}}(x_{0})\\ \hline r_{\mathbb{R}^{2}\rightarrow\mathbf{HS}_{2}}(x_{1:3})\\ \hline r_{\mathbb{R}^{3}\rightarrow\mathbf{HS}_{3}}(x_{3:6})\\ \hline\vdots\\ \hline r_{\mathbb{R}^{n-1}\rightarrow\mathbf{HS}_{n-1}}\left(x_{(n-1)(n-2)/2 :}\right)\\ \end{array}\right]\end{array} \tag{42}\] #### Implementation example **This implementation uses implementations of parametrization of half-sphere defined in Section** 2.3.3**. ``` importnumpyasnp defcorr_matrix_to_reals(x): assertlen(x.shape)==2andx.shape[0]==x.shape[1] n=x.shape[-1] assertn>1 y=np.linalg.cholesky(x) returnnp.concatenate( [half_sphere_to_reals(y[i,:i+1])foriinrange(1,n)], axis=-1, def reals_to_corr_matrix(x): n=int((8*x.size+1)**0.5/2+1) assertn*(n-1)//2==x.size y=np.zeros(x.shape[:-1]+(n,n)) y[0,0]=1.0 foriinrange(1,n): y[i,:i+1]= reals_to_half_sphere(x[(i*(i-1)//2):(i+1)*i//2]) [email protected] returnz ## 3 Python package parametrization cookbook For easy handling of all the parameterizations introduced in this cookbook, a Python package is provided. It is called parametrization_cookbook, and it can be installed from PyPI: ``` pipinstallparametrization_cookbook ``` The complete documentation is available online on the following page: [https://jbleger.gitlab.io/parametrization_cookbook](https://jbleger.gitlab.io/parametrization_cookbook) may not be used directly as it has no effects. Only sub-modules should be loaded, they are divided into two categories, high-level modules allowing to describe a parametrization and to compose them, and low-level modules allowing to manipulate the elementary functions introduced in this cookbook. Each module type comes with three implementations, one using only numpy and scipy(Harris et al., 2020; Virtanen et al., 2020), one using JAX(Bradbury et al., 2018), and one using PyTorch(Paszke et al., 2019). Of course, to use JAX, JAX must be installed, and to use PyTorch, PyTorch must be installed. It is usually necessary to load only one sub-module for the use of parametrizations. ### High-level modules: an easy way to define and use a parametrization Three high-level modules are provided: * parametrization_cookbook.numpy: for numpy and scipy implementation. This implementation does not support automatic differentiation. * parametrization_cookbook.jax: for JAX implementation. 
This implementation does support automatic differentiation. All computations are done with JAX primitives using LAX-backend or directly with LAX-backend. All functions obtained after the definition of a parametrization are pure, compiled with JIT, and usable in user-defined JIT-compiled functions. * parametrization_cookbook.torch: for PyTorch implementation. This implementation does support automatic differentiation. All computation are done with PyTorch primitives, using tensor with the same device than the provided device. Each module provides classes to describe parametrizations or to concatenate parametrizations. All classes are detailed in the following sub-sections. Each module exposes the same API. A first example, with the numpy implementation, to manipulate the parametrization of \(\mathbb{R}^{*}_{+}\): ``` importnumpyasnp importparametrization_cookbook.numpyaspc ``` parametrization=pc.RealPositive() ``` We can now use the parametrization from \(\mathbb{R}^{*}_{+}\) to \(\mathbb{R}\): ``` x=parametrization.params_to_reals1d(0.3) x ``` array([-1.05022561]) ``` And we can use the parametrization from \(\mathbb{R}\) to \(\mathbb{R}^{*}_{+}\): ``` parametrization.reals1d_to_params(x) ``` 0.30000000000000004 ``` #### 3.1.1 Common API of all parametrization classes Parametrization instances are non-mutable objects, to change the parametrization (_e.g._ to change the bounds or the dimension) a new parametrization instance must be defined. This behavior is not a restriction for common usages. Therefore, for a defined parametrization instance, bounds methods are pure functions, which is a very important property for use with JAX. Considering a parametrization of set \(E\), defined with the appropriate class, we obtain a parametrization instance with the following attributes: * method params_to_reals1d. This method is the bijective mapping \(E\to\mathbb{R}^{k}\), the value of \(k\) is automatically defined at the definition of the parametrization. This function takes as argument a value of \(E\), and returns a value of \(\mathbb{R}^{k}\). For JAX module, this bound method is pure, JIT-compiled, and usable in JIT-compiled user-defined functions. * method reals1d_to_params. This method is the bijective mapping \(\mathbb{R}^{k}\to E\), the value of \(k\) is automatically defined at the definition of the parametrization. This is the reciprocal function of the previous one. This function takes as argument a value of \(\mathbb{R}^{k}\), and returns a value of \(E\). For JAX module, this bound method is pure, JIT-compiled, and usable in JIT-compiled user-defined functions. * property size. This property gives the value of \(k\), the dimension of the \(\mathbb{R}^{k}\)-field used for the bijective mapping with \(E\). This value is computed at the definition of the parametrization. For JAX module, this property is pure and usable in JIT-compiled user-defined functions. #### 3.1.2 Parametrization of scalars We use these parametrizations when we are constrained using scalars, or when we are using vectors or matrices (or n-dimensional array) where each element is a constrained scalar. #### Classes The following classes are provided for the scalar parametrization: * Real: for parametrization of \(\mathbb{R}\). This parametrization is not useful directly, but can be used to rescale reals and to define parametrization of unconstrained vectors and matrices. * RealPositive: for parametrization of \(\mathbb{R}^{*}_{+}\). * RealNegative: for parametrization of \(\mathbb{R}^{*}_{-}\). 
* RealLowerBounded: for parametrization of \((a,+\infty)=\{x\in\mathbb{R}:x>a\}\) for a given \(a\). * RealUpperBounded: for parametrization of \((-\infty,a)=\{x\in\mathbb{R}:x<a\}\) for a given \(a\). * RealBounded01: for parametrization of \((0,1)\). * RealBounded: for parametrization of \((a,b)\) for given \(a\) and \(b\). Example, for the parametrization of \((0,12)\): ``` importjax.numpyasjnp importparametrization_cookbook.jaxaspc ``` ``` parametrization=pc.RealBounded(bound_lower=0,bound_upper=12) print(f"parametrization.size:{parametrization.size}") ``` ``` x_real=jnp.array([-1.2]) y=parametrization.reals1d_to_params(x_real) x_real_back=parametrization.params_to_reals1d(y) print(f"x_real:{x_real}") print(f"y:{y}") print(f"x_real_back: {x_real_back}") parametrization.size: 1 x_real: [-1.2] y: 2.777702569961548 x_real_back: [-1.1999999] #### 4.2.2 Shaped scalars to build vectors, matrices, n-dimensional arrays It is possible to handle vectors, matrices or n-dimensional arrays of scalars, and all classes supports the shape argument in the class initialization. For example, it is possible to build the parametrization from \((0,1)^{3\times 3}\) (the set of \(3\times 3\)-matrices where values are between 0 and 1): import jax.numpy as jnp import parametrization_cookbook.jax as pc parametrization = pc.RealBounded01(shape=(3,3)) x_real = jnp.linspace(-4, 4, parametrization.size) y = parametrization.reals1d_to_params(x_real) x_real_back = parametrization.params_to_reals1d(y) print(f"parametrization.size: {parametrization.size}") print(f"x_real:\n{x_real}") print(f"y:\n{y}") print(f"x_real_back:\n{x_real_back}") parametrization.size: 9 x_real: [-4. -3. -2. -1. 0. 1. 2. 3. 4.] y: [[0.01798621 0.04742587 0.11920292] [0.26894143 0.5 0.7310586 ] [0.880797 0.95257413 0.98201376]] x_real_back: [-4. -3. -2. -1. 0. 1. 1.9999995 3.0000002 3.9999983] #### 3.1.3 Parametrization of vectors We use these parametrizations when we are using constrained vector, or matrices where each row is a constrained vector, or n-dimensional array where each slice w.r.t. the last dimension is a constrained vector. #### Classes The following classes are provided for vector parametrization: * VectorSimplex: for parametrization of \(\hat{\mathcal{S}}_{n}=\left\{x\in\mathbb{R}_{+}^{n+1}:\sum_{i}x_{i}=1\right\}\), the unit \(n\)-simplex. A vector of \(n\)-simplex is a vector with \(n+1\) coordinates. * VectorSphere: for parametrization of \(\mathbf{S}_{n,r}=\left\{x\in\mathbb{R}^{n+1}:\sum_{i}x_{i}^{2}=r^{2}\right\}\), the \(n\)-sphere with a radius \(r\). By default the radius is 1. A vector of \(n\)-sphere is a vector with \(n+1\) coordinates. * VectorHalfSphere: for parametrization of \(\mathbf{HS}_{n,r}=\left\{x\in\mathbb{R}^{n+1}:x_{n}>0\wedge\sum_{i}x_{i}^{2}=r ^{2}\right\}\), the half \(n\)-sphere with a radius \(r\). By default the radius is 1. A vector of half \(n\)-sphere is a vector with \(n+1\) coordinates. * VectorBal1: for parametrization of \(\hat{\mathbf{B}}_{n,r}=\left\{x\in\mathbb{R}^{n}:x_{i}^{2}<r\right\}\), the \(n\)-ball with a radius \(r\). By default the radius is 1. A vector of \(n\)-ball is a vector with \(n\) coordinates. 
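As a quick complement to the simplex example below, here is a minimal sketch of the sphere parametrization. This snippet is not part of the original examples; it assumes that VectorSphere follows the same constructor convention (a dim argument) as VectorSimplex shown just after.

```
import jax.numpy as jnp

import parametrization_cookbook.jax as pc

# assumed API: VectorSphere(dim=n) parametrizes the unit n-sphere of R^{n+1}
parametrization = pc.VectorSphere(dim=2)

x_real = jnp.array([0.3, -1.0])
y = parametrization.reals1d_to_params(x_real)
print(y)              # a vector of R^3 ...
print(jnp.sum(y**2))  # ... with unit squared norm
```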
Example, for the parametrization of \(\mathcal{S}_{3}\): ``` importjax.numpyasjnp importparametrization_cookbook.jaxaspc ``` parametrization=pc.VectorSimplex(dim=3) print(f"parametrization.size:{parametrization.size}") ``` x_real=jnp.array([-0.5,0.5,1]) y=parametrization.reals1d_to_params(x_real) x_real_back=parametrization.params_to_reals1d(y) print(f"x_real:{x_real}") print(f"y:{y}") print(f"x_real_back:{x_real_back}") ``` parametrization.size:3 x_real:[-0.50.51.] y:[0.146172120.329198960.383534430.14109443] x_real_back:[-0.499999880.499999881.] #### Shaped vectors to build vectors, n-dimensional arrays It is possible to handle matrices where each row is a constrained vector or more generally \(n\)-dimensional arrays where each slice w.r.t. the last dimension is a constrained vector with the shape argument in class initialization. For example, it is possible to build the parametrization from \(\hat{\mathcal{S}}_{2}^{5}\) (set of matrices of size \(5\times 3\) where each row is in \(\hat{\mathcal{S}}_{2}\)): ``` importjax.numpyasjnp importparametrization_cookbook.jaxaspc ``` parametrization=pc.VectorSimplex(dim=2,shape=(5,)) x_real=jnp.linspace(-4,4,parametrization.size) y=parametrization.reals1d_to_params(x_real) x_real_back=parametrization.params_to_reals1d(y) print(f"parametrization.size:{parametrization.size}") print(f"x_real:\n{x_real}") print(f"y:\n{y}") print(f"x_real_back:\n{x_real_back}") ``` ``` parametrization.size:10 x_real: [-4. -3.1111112-2.222223-1.3333331-0.444444540.44444466 1.33333352.22222233.11111124.] y: [[0.009033910.042265950.9487002] [0.050143470.198148180.7517083] [0.219411980.4756260.30496198] [0.543263260.412080470.04465634] [0.79347820.202807250.00371455]] x_real_back: [-4.0000005-3.1111112-2.222226-1.333333-0.44444490.4444472 1.33333372.22222233.1111123.999995] ``` #### Parametrization of matrices We use these parametrizations when we are using constrained matrices or n-dimentional array where each slice w.r.t. the two last dimensions is a constrained vector. #### Classes The following classes are provided for matrix parametrization: * MatrixDiag: for diagonal matrices. * MatrixDiagPosDef: for diagonal positive definite matrices. * MatrixSym: for symmetric matrices. * MatrixSymDefPos: for symmetric definite positive matrices. * MatrixCorrelation: for correlation matrices (_i.e._ symmetric positive definite matrices with unit diagonal). Example, for the parametrization a symmetric positive definite \(3\times 3\) matrix: ``` importjax.numpyasjnp importparametrization_cookbook.jaxaspc ``` ``` parametrization=pc.MatrixSymPosDef(dim=3) print(f"parametrization.size:{parametrization.size}") ``` ``` x_real=jnp.array([-0.5,0.5,1,-1,0,1.5]) y=parametrization.reals1d_to_params(x_real) x_real_back=parametrization.params_to_reals1d(y) print(f"x_real:\n{x_real}") print(f"y:\n{y}") print(f"min_eigenvalue(y):{jnp.linalg.eigh(y)[0].min()}") print(f"x_real_back:\n{x_real_back}") ``` ``` parametrization.size:6 ``` ``` x_real: [-0.50.51.-1.0.1.5] y: [[0.22474898-0.335223050.0] [-0.335223050.0.97441290.5964979] [0.0.59649791.3248855]] ``` ``` min_eigenvalue(y):0.050460200756788254 x_real_back: [-0.499999880.49999820.9999976-1.0.1.5000001] ``` #### 4.2.2 Shaped matrices to build n-dimensional arrays It is possible to handle n-dimensional arrays where each slice w.r.t. the two last diments (each matrix with the form M[i0,..., ik, : ]) is a constrained matrix with the shape argument in class initialization. 
For example, to build a vector of size 4 of symmetric definite positive \(3\times 3\)-matrices:

```
import jax.numpy as jnp

import parametrization_cookbook.jax as pc

parametrization = pc.MatrixSymPosDef(dim=3, shape=4)

x_real = jnp.linspace(-3, 3, parametrization.size)
y = parametrization.reals1d_to_params(x_real)
x_real_back = parametrization.params_to_reals1d(y)
print(f"parametrization.size: {parametrization.size}")
print(f"yreal.shape: {y.shape}")
print(f"min_eigenvalue(y[0]): {jnp.linalg.eigh(y[0])[0].min()}")
print(f"min_eigenvalue(y[1]): {jnp.linalg.eigh(y[1])[0].min()}")
print(f"min_eigenvalue(y[2]): {jnp.linalg.eigh(y[2])[0].min()}")
print(f"min_eigenvalue(y[3]): {jnp.linalg.eigh(y[3])[0].min()}")
print(f"max(abs(x_real-x_real_back)): {jnp.abs(jnp.max(x_real-x_real_back))}")
```

```
parametrization.size: 24
yreal.shape: (4, 3, 3)
min_eigenvalue(y[0]): 3.703959361445186e-09
min_eigenvalue(y[1]): 0.0035347489174455404
min_eigenvalue(y[2]): 0.10191547125577927
min_eigenvalue(y[3]): 0.5219957828521729
max(abs(x_real-x_real_back)): 4.76837158203125e-07
```

#### 3.1.5 Parametrization of Cartesian product

When we have a parametrization of sets \(E_{k}\) for \(k:0\leq k<K\), defining a parametrization of \(E_{0}\times\dots\times E_{K-1}\) is an index-rewriting task. Two classes are provided to handle this:

* Tuple: which returns parameters as a Python tuple instance,
* NamedTuple: which returns parameters as a Python namedtuple instance.

These two classes have exactly the same goal; the user choice is driven by their preference for manipulating indexes or names for elementary parameters. For example, if we have two parameters \(\alpha\) and \(\beta\) with constraints \(\alpha\in(0,1)\) and \(\beta\in\mathbb{R}^{*}_{+}\), we can do:

* with Tuple

```
import jax.numpy as jnp

import parametrization_cookbook.jax as pc

parametrization = pc.Tuple(
    pc.RealBounded01(),
    pc.RealPositive(),
)
print(f"parametrization.size: {parametrization.size}")

x_real = jnp.array([-0.5, 0.5])
y = parametrization.reals1d_to_params(x_real)
print(f"y[0]: {y[0]}")
print(f"y[1]: {y[1]}")
```

```
parametrization.size: 2
y[0]: 0.3775406777858734
y[1]: 0.9740769863128662
```

* with NamedTuple, the usage is identical, except that the parametrization is defined with named arguments and the parameters are accessed by name instead of by index (see the example in Section 4.1.2).

#### 3.1.6 Definition of custom parametrization

For some specific cases, a user of the package may want to introduce a custom parametrization and use it like the other parametrizations (in particular with Tuple or NamedTuple). A user-defined parametrization from \(\mathbb{R}^{n}\) to \(E\) must inherit from the Param virtual class, and must define:

* attribute _size. This attribute, masked to the user, is the one used by the size property, and is used by Tuple and NamedTuple to build parametrizations of Cartesian products. This attribute must be a positive integer containing the value \(n\), the dimension of \(\mathbb{R}^{n}\). This attribute should be set in the __init__ method; if the size does not depend on arguments of the class construction, the attribute can be set as a class attribute.
* method reals1d_to_params. This method maps an argument from \(\mathbb{R}^{n}\) to \(E\). When using JAX, this method must be pure and must not use Python control flows depending on values (but Python control flows depending on shape are allowed).
* method params_to_reals1d. This method maps a value from \(E\) to \(\mathbb{R}^{n}\). When using JAX, this method must be pure and must not use Python control flows depending on values (but Python control flows depending on shape are allowed).
Furthermore, optionally:

* the user could use the method _check_reals1d_size in reals1d_to_params to check the shape of the provided input (and to reshape scalars to a 1-dimensional array of size 1).
* the user could check the shape of the input in params_to_reals1d.
* the user could define the _repr attribute in __init__. This attribute contains the representation of the object used when the object is converted to a string or when the object is displayed.
* if JAX is used, the user could JIT-compile the methods reals1d_to_params and params_to_reals1d. As the object must be non-mutable, the JIT-compilation of the methods is done with self as a static argument.

#### Examples

We will consider the parametrization from \(\mathbb{R}\) to \(\left(-\frac{\pi}{2},\frac{\pi}{2}\right)\) with the \(\arctan\) function. _Note: this example is for illustration purposes. In general, there is no good reason to prefer the parametrization with \(\arctan\) rather than the parametrization used before (Section 2.2.2) and used in the RealBounded class._

* with PyTorch

```
import torch

import parametrization_cookbook.torch as pc


class MyAngle(pc.Param):
    def __init__(self):
        self._size = 1
        self._repr = "MyAngle()"

    def reals1d_to_params(self, x):
        x = self._check_reals1d_size(x)
        y = torch.arctan(x)
        return y[0]

    def params_to_reals1d(self, x):
        assert x.shape == ()  # must be a scalar
        y = torch.tan(x)
        return y.ravel()  # convert to a 1-d vector
```

* with JAX (with JIT-compilation for reals1d_to_params and params_to_reals1d)

```
import functools

import jax
import jax.numpy as jnp

import parametrization_cookbook.jax as pc


class MyAngle(pc.Param):
    def __init__(self):
        self._size = 1
        self._repr = "MyAngle()"

    @functools.partial(jax.jit, static_argnums=0)
    def reals1d_to_params(self, x):
        x = self._check_reals1d_size(x)
        y = jnp.arctan(x)
        return y[0]

    @functools.partial(jax.jit, static_argnums=0)
    def params_to_reals1d(self, x):
        assert x.shape == ()  # must be a scalar
        y = jnp.tan(x)
        return y.ravel()  # convert to a 1-d vector
```

### 3.2 Low-level modules: access to elementary functions

The functions described in Section 2 are implemented in low-level modules. Three low-level modules are provided:

* parametrization_cookbook.functions.numpy: for numpy and scipy implementation. This implementation does not support automatic differentiation.
* parametrization_cookbook.functions.jax: for JAX implementation. This implementation does support automatic differentiation. All computations are done with JAX primitives using LAX-backend or directly with LAX-backend. All functions are pure, compiled with JIT, and usable in user-defined JIT-compiled functions.
* parametrization_cookbook.functions.torch: for PyTorch implementation. This implementation does support automatic differentiation. All computations are done with PyTorch primitives, using tensors on the same device as the provided tensors.

Each module provides the functions described in Section 2, in both directions \(\mathbb{R}\to E\) and \(E\to\mathbb{R}\). These functions are used by the high-level modules and are directly accessible to users. All functions support vectorization, see Section 3.2.4 below.

#### 3.2.1 Parametrization of scalars

The following functions are available:

* softplus, softplusinv, log1pexp, logexpm1: for parametrization of \(\mathbb{R}^{*}_{+}\), these functions are described in Section 2.2.1.
* expit, logit, arctanh, tanh: for parametrization of \((0,1)\) or \((-1,1)\), these functions are described in Section 2.2.2. These functions are not implemented in the package, the implementation of scipy, JAX or PyTorch is used.
For usage of scalar parametrization to build vector, matrices or n-dimensional arrays with constraints on elements, see Section 3.2.4 below. Example, with the parametrization of \(\mathbb{R}^{*}_{+}\) with PyTorch: ``` importtorch importparametrization_cookbook.functions.torchaspcf y=torch.tensor(2.4) x=pcf.softplusinv(y) y2=pcf.softplus(x) print(f"y:{y}") print(f"x:{x}") print(f"y2:{y2}") y: 2.4000000953674316 x: 2.3049001693725586 y2: 2.40000000953674316 #### 3.2.2 Parametrization of vectors The following functions are available: * reals_to_simplex, simplex_to_reals: for parametrization of the unit simplex \(\mathcal{S}_{n}\), these functions are described in Section 2.3.1. * reals_to_sphere, sphere_to_reals: for parametrization of the unit sphere \(\mathbf{S}_{n}\), these functions are described in Section 2.3.2. * reals_to_half_sphere, half_sphere_to_reals: for parametrization of the unit half-sphere \(\mathbf{HS}_{n}\), these functions are described in Section 2.3.3. * reals_to_ball1, ball_to_reals: for parametrization of the unit ball \(\mathbf{B}_{n}\), these functions are described in Section 2.3.4. For usage of vector parametrization to build matrices or n-dimensional arrays with constraints on the last dimension, see Section 3.2.4 below. Example, with the parametrization of \(\mathcal{S}_{2}\) with PyTorch: import torch import parametrization_cookbook.functions.torch as pcf y = torch.tensor([.3,.5,.2]) x = pcf.simplex_to_reals(y) y2 = pcf.reals_to_simplex(x) print(f"y: {y}") print(f"x: {x}") print(f"y2: {y2}") y: tensor([0.3000, 0.5000, 0.2000]) x: tensor([0.0400, 0.9163]) y2: tensor([0.3000, 0.5000, 0.2000]) #### 3.2.3 Parametrization of matrices The following functions are available: * reals_to_diag_matrix, diag_matrix_to_reals: for parametrization of diagonal matrices. These functions are described in Section 2.4.1. Without vectorization, these functions can be substituted by diag from numpy, JAX or PyTorch. * reals_to_sym_matrix, sym_matrix_to_reals: for parametrization of symmetric matrices. These function are described in Section 2.4.2. * reals_to_spd_matrix, spd_matrix_to_reals: for parametrization of symmetric positive definite matrices. These function are described in Section 2.4.4. For usage of matrix parametrizations to n-dimensional arrays with constraints on the two last dimensions, see Section 3.2.4 below. Example, with the parametrization of a symmetric definite positive \(3\times 3\) matrix with PyTorch: import torch import parametrization_cookbook.functions.torch as pcf y = torch.tensor([[3,1,1.5],[1,2.5,-1],[1.5,-1,2]]) assert (y==y.T).all() assert torch.linalg.eigh(y)[0].min()>0 x = pcf.spd_matrix_to_reals(y) y2 = pcf.reals_to_spd_matrix(x) print(f"y:\n(y)") print(f"x:\n(x)") print(f"y2:\n(y2)") y: tensor([[ 3.0000, 1.0000, 1.5000], [ 1.0000, 2.5000, -1.0000], [ 1.5000, -1.0000, 2.0000]]) x: tensor([ 1.5373, 1.9485, 0.1972, 0.8165, 1.5000, -1.7650]) y2: tensor([[ 3.0000, 1.0000, 1.5000], [ 1.0000, 2.5000, -1.0000], [ 1.5000, -1.0000, 2.0000]]) #### 3.2.4 Vectorization All functions in this module support vectorization. This allows computational performance for computing many parametrizations at the same time with arbitrary shape. #### Scalar functions For scalar functions listed in Section 3.2.1, applying the function on a n-dimensional array (vector, matrices, or higher dimension) results in the n-dimensional array of the function result. 
Example: importtorch importparametrization_cookbook.functions.torchaspcf x=torch.tensor([[-3,-2],[2,1]]) y=pcf.softplus(x) print(f"x.shape:{x.shape}") print(f"y.shape:{y.shape}") print(f"y[1,0]:{y[1,0]}") print(f"softplus(x[1,0]):{pcf.softplus(x[1,0])}") x.shape:torch.Size([2,2]) y.shape:torch.Size([2,2]) y[1,0]:2.1269280910491943 softplus(x[1,0]):2.1269280910491943 #### Vector functions For vectors functions listed in Section 3.2.2, applying the function on a n-dimensional array (matrices, or higher dimension) results in the n-dimensional array of the function result on last dimensions. _E.g._, for an input with shape (n,) if the output shape is (n+1,), then for an input with shape (a,b,c,n) the output shape will be (a,b,c,n+1). Example: importtorch importparametrization_cookbook.functions.torchaspcf x=torch.tensor([[-3,-2],[2,1]]) y=pcf.reals_to_simplex(x) print(f"x.shape:{x.shape}") print(f"y.shape:{y.shape}") print(f"y[1,:]:{y[1,:]}") print(f"reals_to_simplex(x[1,:]):{pcf.reals_to_simplex(x[1,:])}") x.shape:torch.Size([2,2]) y.shape:torch.Size([2,3]) y[1,:]:tensor([0.6547,0.2524,0.0929]) reals_to_simplex(x[1,:]): tensor([0.6547,0.2524,0.0929]) #### Matrices functions For matrices functions listed in Section 3.2.3, applying the function on a n-dimensional array results in the n-dimensional array of the function result on two last dimensions. _E.g._, for a input with shape (N,) if the output shape is (n,n) with \(N=\frac{n(n+1)}{2}\), then for a input with shape (a,b,c,N) the output shape will be (a,b,c,n,n). Example: ``` importtorchimportparametrization_cookbook.functions.torchaspcf x=torch.linspace(-1,1,66).reshape((11,6)) y=pcf.reals_to_spd_matrix(x) print(f"x.shape:{x.shape}") print(f"y.shape:{y.shape}") print(f"y[7,:,:]:\n{y[7,:,:]}") print(f"reals_to_spd_matrix(x[7,:]):\n{pcf.reals_to_spd_matrix(x[7,:])}") x.shape:torch.Size([11, 6]) y.shape:torch.Size([11, 3, 3]) y[7,:,:]; tensor([[0.7224, 0.2312, 0.2038], [0.2312, 0.4504, 0.2233], [0.2038, 0.2233, 0.3853]]) reals_to_spd_matrix(x[7,:]): tensor([[0.7224, 0.2312, 0.2038], [0.2312, 0.4504, 0.2233], [0.2038, 0.2233, 0.3853]]) ``` ## 4 Examples ### High-level modules: inference of parameters of a Multivariate Student distribution #### 4.1.1 Introduction With \(\mu\in\mathbb{R}^{p}\), \(\Sigma\) a symmetric definite positive matrix of size \(p\), and \(\nu\in\mathbb{R}^{*}_{+}\), the multivariate Student distribution with parameters \((\mu,\Sigma,\nu)\) is defined by the following probability distribution function: \[\mathbb{R}^{p} \longrightarrow \mathbb{R}_{+}\] \[x \longmapsto \frac{\Gamma\big{(}\frac{\nu+p}{2}\big{)}}{\Gamma\big{(}\frac{\nu }{2}\big{)}\nu^{\frac{p}{2}}\pi^{\frac{p}{2}}|\Sigma|^{\frac{1}{2}}}\big{(}1+ \frac{1}{\nu}(x-\mu)^{T}\Sigma^{-1}(x-\mu)\big{)}^{-\frac{\nu+p}{2}}\] With \(Y\sim\mathcal{N}(0,\Sigma)\), \(Z\sim\chi_{\nu}^{2}\), then \(X=\mu+\frac{Y}{\sqrt{Z/\nu}}\) follows a multivariate Student distribution with parameters \((\mu,\Sigma,\nu)\). The idea of this example is to introduce the inference with Maximum Likelihood Estimator (MLE) of the parameters, handling the constraints by parametrization with high level interface of the package, using automatic differentiation to compute derivatives. We introduce \(\theta\in\mathbb{R}^{k}\) the mapping of \((\mu,\Sigma,\nu)\) by our bijective parametrization. Using invariance property of the MLE, the mapping of MLE of \(\theta\) is equivalent to the MLE of \((\mu,\Sigma,\nu)\). 
In a second time, when a MLE is obtained, with sufficient regularity conditions (not detailed here), using asymptotic properties of MLE and Slutsky's lemma we have: \[\widehat{I}_{n,\widehat{\theta}}^{-\frac{1}{2}}\Big{(}\widehat{\theta}-\theta _{0}\Big{)}\underset{n\rightarrow+\infty}{\longrightarrow}\mathcal{N}(0,I)\] where: * \(\widehat{I}_{n,\widehat{\theta}}=-\left.\frac{\mathrm{d}^{2}\;\ell(\theta, \ldots)}{\mathrm{d}\;\theta^{2}}\right|_{\theta=\widehat{\theta}}\) is the estimated Fisher information matrix. * \(I\) is the identity matrix. * \(\ell\) is the log-likelihood of the whole sample. Now we can move this result to our original parameter space: \[\frac{\widehat{\nu}-\nu_{0}}{\sqrt{\delta_{\nu}^{T}I_{n,\widehat{\theta}}^{-1 }\delta_{\nu}}}\underset{n\rightarrow+\infty}{\longrightarrow}\mathcal{N}(0,1)\] with: * \(\delta_{\nu}=\left.\frac{\mathrm{d}\;\nu}{\mathrm{d}\;\theta}\right|_{\theta= \widehat{\theta}}\) Therefore we can obtain asymptotic confidence interval: \[\mathbb{P}\Big{(}\nu_{0}\in\Big{[}\widehat{\nu}\pm u_{1-\alpha/2}\sqrt{\delta _{\nu}^{T}I_{n,\widehat{\theta}}^{-1}\delta_{\nu}}\Big{]}\Big{)}\underset{ n\rightarrow+\infty}{\longrightarrow}1-\alpha\] Note that \(\delta_{\nu}\) and \(I_{n,\widehat{\theta}}\) will be computed with automatic differentiation. The same method is applicable with any parameter or function of parameter, _e.g._ we can have a confidence interval on \(|\Sigma|\): \[\mathbb{P}\Big{(}|\Sigma_{0}|\in\Big{[}\Big{|}\widehat{\Sigma}\Big{|}\pm u_{1- \alpha/2}\sqrt{\delta_{|\Sigma|}^{T}I_{n,\widehat{\theta}}^{-1}\delta_{|\Sigma| }}\Big{]}\Big{)}\underset{n\rightarrow+\infty}{\longrightarrow}1-\alpha\] with: * \(\delta_{|\Sigma|}=\frac{\mathrm{d}\,|\Sigma|}{\mathrm{d}\,\theta}\Big{|}_{ \theta=\widehat{\theta}}\) #### 4.1.2 With JAX **Simulating the data** First we generate simulated data to illustrate the method. ``` importnumpyasnp importscipy.stats n=1000 mu=np.arange(3) Sigma=np.array([[2,1,1],[1,2,1.5],[1,1.5,2]]) df=7 #weuseaseededrandomstateonlyforreproducibility random_state=np.random.RandomState(np.random.MT19937(np.random.SeedSequence(0))) X=scipy.stats.multivariate_t(loc=mu,shape=Sigma,df=df).rvs( size=n,random_state=random_state ) #converttoJAXarray importjax.numpyasjnp X=jnp.array(X) ``` **Definition of the parametrization** Our parameter space is the Cartesian product of \(\mathbb{R}^{3}\) (for \(\mu\)), the space of symmetric definite positive matrices of size 3 (for \(\Sigma\)) and \(\mathbb{R}^{*}_{+}\) (for the degree of freedom \(\nu\)). To handle these constraints we define a parametrization between this space and \(\mathbb{R}^{k}\) (the value of \(k\) will be automatically computed). ``` importparametrization_cookbook.jaxaspc ``` ``` parametrization=pc.NamedTuple( mu=pc.Real(shape=3), Sigma=pc.MatrixSymPosDef(dim=3), df=pc.RealPositive() ) We can retrieve the value of \(k\) with parametrization.size. 
```
parametrization.size
```

```
10
```

**Definition of the log-likelihood and gradients**

First we define the log-likelihood depending on our original parameters:

```
def original_loglikelihood(mu, Sigma, df, X):
    n, p = X.shape
    eigvals, eigvect = jnp.linalg.eigh(Sigma)
    U = eigvect * (eigvals**-0.5)
    logdet = jnp.log(eigvals).sum()
    return (
        jax.scipy.special.gammaln((df + p) / 2)
        - jax.scipy.special.gammaln(df / 2)
        - p / 2 * jnp.log(df * jnp.pi)
        - 1 / 2 * logdet
        - (
            (df + p)
            / 2
            * jnp.log1p((((X - mu) @ U) ** 2).sum(axis=1) / df)
        )
    ).sum()
```

And we define the log-likelihood of our parametrized model:

```
import jax
import jax.scipy.special


@jax.jit
def loglikelihood(theta, X):
    my_params = parametrization.reals1d_to_params(theta)
    return original_loglikelihood(my_params.mu, my_params.Sigma, my_params.df, X)
```

This function was JIT-compiled; as this function is run many times, this is very useful to reduce the computation time. We now define the gradient and hessian functions (with JIT-compilation):

```
grad_loglikelihood = jax.jit(jax.grad(loglikelihood))
hessian_loglikelihood = jax.jit(jax.jacfwd(jax.jacrev(loglikelihood)))
```

#### Optimization

We can use any optimization algorithm. We choose here a gradient descent (with step conditioning by the highest eigenvalue of the hessian) followed by the Newton-Raphson method. The gradient method is chosen for its robustness, and the second for its quick convergence starting from an initial point close to the optimum. This is only given for illustration purposes; in a real application case, using an optimization algorithm developed with JAX in the Python module jaxopt (Blondel et al., 2021) can be a better choice.

We choose here to initialize \(\theta\) randomly. We can also build a plausible value of \(\theta\) with parametrization.params_to_reals1d.

```
import itertools

# we use a seeded random state only for reproducibility
random_state = np.random.RandomState(np.random.MT19937(np.random.SeedSequence(1)))
theta = random_state.normal(size=parametrization.size)

current_likeli = loglikelihood(theta, X)
print(f"current_likli = {current_likeli}")

for it_grad in itertools.count():
    g = grad_loglikelihood(theta, X)
    H = hessian_loglikelihood(theta, X)
    sdp_eigenvalues = -jnp.linalg.eigh(H)[0]
    lr = 1 / sdp_eigenvalues.max()
    theta += lr * g
    current_likeli, old_likeli = loglikelihood(theta, X), current_likeli
    if current_likeli - old_likeli < 1e-2:
        break
print(f"it_grad: {it_grad}, current_likli: {current_likeli}")

for it_nr in itertools.count():
    g = grad_loglikelihood(theta, X)
    H = hessian_loglikelihood(theta, X)
    theta += -min(1, 0.1 * 2**it_nr) * jnp.linalg.solve(H, g)
    current_likeli, old_likeli = loglikelihood(theta, X), current_likeli
    if it_nr > 3 and current_likeli - old_likeli < 1e-6:
        break
print(f"it_nr: {it_nr}, current_likli: {current_likeli}")
print(f"theta: {theta}")
```

```
current_likli = -17553.708984375
it_grad: 481, current_likli: -5160.08642578125
it_nr: 6, current_likli: -5156.76318359375
theta: [0.01115316 1.0506299 2.0488813 1.1360171 1.6467183 1.3518243 1.0945351 1.2938223 1.5352948 7.492975 ]
```

#### Using the value

It is easy to retrieve the estimates \((\widehat{\mu},\widehat{\Sigma},\widehat{\nu})\):

```
my_params = parametrization.reals1d_to_params(theta)
my_params.mu
```

```
DeviceArray([0.01115316, 1.0506299, 2.0488813 ], dtype=float32)
```

```
my_params.Sigma
```

```
DeviceArray([[2.0007489, 1.09474,   1.0565993],
             [1.09474,   2.260526,  1.720708 ],
             [1.0565993, 1.720708,  2.1778986]], dtype=float32)
```

```
my_params.df
```

```
DeviceArray(7.493532, dtype=float32)
```

We can see that we recover good estimates of the simulated parameters.
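As a small additional check (this snippet is not in the original text), we can compare these estimates with the values mu, Sigma and df used for the simulation above:

```
# sanity check against the simulated parameters defined in the data-generation step
print(f"max abs error on mu:    {jnp.abs(my_params.mu - mu).max()}")
print(f"max abs error on Sigma: {jnp.abs(my_params.Sigma - Sigma).max()}")
print(f"abs error on df:        {jnp.abs(my_params.df - df)}")
```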
Building confidence interval The first step is to compute the inverse of the estimated Fisher information matrix \(\widehat{I}_{n,\widehat{\theta}}\): FIM_inv = jnp.linalg.inv(-H) And we can compute \(\delta_{\nu}\), then the confidence interval: delta_df=jax.grad( lambdatheta:parametrization.reals1d_to_params(theta).df )(theta) df_asymptotic_variance=delta_df@FIM_inv@delta_df df_confidence_interval=( parametrization.reals1d_to_params(theta).df +( jnp.array([-1,1]) *scipy.stats.norm.ppf(0.975) *jnp.sqrt(df_asymptotic_variance) ) ) print(df_confidence_interval) [5.8153699.171696] The simulated value was 7. For the confidence interval on \(|\Sigma|\), we have: delta_det=jax.grad( lambdatheta:jnp.linalg.det(parametrization.reals1d_to_params(theta).Sigma) )(theta) det_asymptotic_variance=delta_det@FIM_inv@delta_det det_confidence_interval=( jnp.linalg.det(parametrization.reals1d_to_params(theta).Sigma) +( jnp.array([-1,1]) *scipy.stats.norm.ppf(0.975) *np.sqrt(det_asymptotic_variance) ) ) print(det_confidence_interval) [2.13259273.4136271] The simulated value was \(2.5\). #### 4.1.3 With PyTorch **Simulating the data** First we generate simulated data to illustrate the method. importnumpyasnp importsscipy.stats n=1000 mu=np.arange(3) Sigma=np.array([[2,1,1],[1,2,1.5],[1,1.5,2]]) df=7 #weuseaseededrandomstatenolyforreproducibility random_state=np.random.RandomState(np.random.MT19937(np.random.SeedSequence(0))) X=scipy.stats.multivariate_t(loc=mu,shape=Sigma,df=df).rvs( size=n,random_state=random_state ) #converttotorch importtorch X=torch.tensor(X,dtype=torch.float32) #### Definition of the parametrization Our parameter space is the Cartesian product of \(\mathbb{R}^{3}\) (for \(\mu\)), the space of symmetric definite positive matrices of size 3 (for \(\Sigma\)) and \(\mathbb{R}^{*}_{+}\) (for the degree of freedom \(\nu\)). To handle these constraints we define a parametrization between this space and \(\mathbb{R}^{k}\) (the value of \(k\) will be automatically computed). importparametrization_cookbook.torchaspc parametrization=pc.NamedTuple( mu=pc.Real(shape=3), Sigma=pc.MatrixSymPosDef(dim=3), df=pc.RealPositive() ) We can retrieve the value of \(k\) with parametrization.size. parametrization.size #### Definition of the log-likelihood and gradients First we define the log-likelihood depending on our original parameters: deforiginal_loglikelihood(mu, Sigma, df, x): n, p = X.shape eigvals, eigvect = torch.linalg.eigh(Sigma) U = eigvect * (eigvals**-0.5) logdet = torch.log(eigvals).sum() return ( torch.special.gammaln((df + p) / 2) - torch.special.gammaln(df / 2) - p / 2 * torch.log(df * torch.pi) - 1 / 2 * logdet - ( (df + p) / 2 * torch.log1p((((X - mu) @ U) ** 2).sum(axis=1) / df) )).sum() ``` And we define the log-likelihood of our parametrized model: ``` defloglikelihood(theta, X): my_params = parametrization.reals1d_to_params(theta) returnoriginal_loglikelihood(my_params.mu, my_params.Sigma, my_params.df, X) ``` #### Optimization We can use any optimization algorithm. We choose here a ADAM gradient. We choose here to initialize randomly \(\theta\). We can also build a plausible value of \(\theta\) with parametrization.params_to_reals1d. 
``` importitertools #weuseaseededrandomstatenolyforreproducibility random_state=np.random.RandomState(np.random.MT19937(np.random.SeedSequence(1))) theta=random_state.normal(size=parametrization.size) theta=torch.tensor(theta,dtype=torch.float32,requires_grad=True) print(f"log-likelihoodbefore:{loglikelihood(theta, X)}") optimizer=torch.optim.Adam([theta], lr=1/n) last_losses=[] foritinitertools.count(): optimizer.zero_grad() loss =-loglikelihood(theta, X) new_loss = loss.detach() last_losses.append(new_loss) if len(last_losses)>5000: last_losses.pop(0) if last_losses[0]-last_losses[-1]<0: break loss.backward() optimizer.step() print(f"it:{it}") print(f"log-likelihoodafter:{loglikelihood(theta, X)}") print(f"theta:{theta}") log-likelihoodbefore: -17554.00390625 it: 23814 log-likelihoodafter: -5156.763671875 theta: tensor([0.0112, 1.0506, 2.0489, 1.1360, 1.6467, 1.3518, 1.0945, 1.2938, 1.535, 7.4930], requires_grad=True) #### Using the value It is easy to retrieve estimates \((\widehat{\mu},\widehat{\Sigma},\widehat{\nu})\): my_params = parametrization.reals1d_to_params(theta) my_params.mu tensor([0.0112, 1.0506, 2.0489], grad_fn=<AddBackward0>) my_params.Sigma tensor([[2.0007, 1.0947, 1.0566], [1.0947, 2.2605, 1.7207], [1.0566, 1.7207, 2.1779]], grad_fn=<MulBackward0>) my_params.df tensor(7.4935, grad_fn=<SelectBackward0>) We can see we recover good estimate of the simulated parameters. #### Building confidence interval The first step is to compute the inverse of the estimated Fisher information matrix \(\widehat{I}_{n,\widehat{\theta}}\): ``` FIM=-torch.autograd.functional.hessian( lambdatheta: loglikelihood(theta,X), theta, ) FIM_inv=torch.linalg.inv(FIM) ``` And we can compute \(\delta_{\nu}\), then the confidence interval: ``` theta.grad.zero_() est_df=parametrization.reals1d_to_params(theta).df est_df.backward() delta_df=theta.grad.detach() df_asymptotic_variance=delta_df@FIM_inv@delta_df df_confidence_interval=( est_df.detach() +( torch.tensor([-1,1]) * scipy.stats.norm.ppf(0.975) *torch.sqrt(df_asymptotic_variance) ) ) print(df_confidence_interval) ``` tensor([5.8149,9.1722]) ``` The simulated value was 7. For the confidence interval on \(|\Sigma|\), we have: ``` theta.grad.zero_() est_det=torch.linalg.det(parametrization.reals1d_to_params(theta).Sigma) est_det.backward() delta_det=theta.grad.detach() det_asymptotic_variance=delta_det@FIM_inv@delta_det det_confidence_interval = ( est_det.detach() + ( torch.tensor([-1, 1]) * scipy.stats.norm.ppf(0.975) * torch.sqrt(det_asymptotic_variance) ) ) print(det_confidence_interval) tensor([2.1326, 3.4136]) The simulated value was 2.5. ### Low-level modules: inference of parameters of a Gumbel distribution #### 4.2.1 Introduction With \(\mu\in\mathbb{R}^{p}\) and \(\beta\in\mathbb{R}^{*}_{+}\), the Gumbel distribution is defined with the probability distribution function: \[\mathbb{R} \longrightarrow \mathbb{R}_{+}\] \[x \longmapsto \frac{\exp\left(-\exp\left(-\frac{x-\mu}{\beta}\right)\right)\exp \left(-\frac{x-\mu}{\beta}\right)}{\beta}\] The idea of this example is to introduce the inference with Maximum Likelihood Estimator (MLE) of the parameters, handling the constraints by parametrization with low-level interface of the package, using automatic differentiation to compute derivatives. We introduce \(\theta\in\mathbb{R}^{2}\) the mapping of \((\mu,\beta)\) by our bijective parametrization. Using invariance property of the MLE, the mapping of MLE of \(\theta\) is equivalent to the MLE of \((\mu,\beta)\). 
In a second time, when the MLE is obtained, with sufficient regularity conditions (not detailed here), using asymptotic properties of the MLE and Slutsky's lemma we have:

\[\widehat{I}_{n,\widehat{\theta}}^{-\frac{1}{2}}\Big(\widehat{\theta}-\theta_{0}\Big)\underset{n\rightarrow+\infty}{\longrightarrow}\mathcal{N}(0,I)\]

where:

* \(\widehat{I}_{n,\widehat{\theta}}=-\left.\frac{\mathrm{d}^{2}\;\ell(\theta,\ldots)}{\mathrm{d}\;\theta^{2}}\right|_{\theta=\widehat{\theta}}\) is the estimated Fisher information matrix,
* \(I\) is the identity matrix,
* \(\ell\) is the log-likelihood of the whole sample.

Now we can move this result to our original parameter space:

\[\frac{\widehat{\beta}-\beta_{0}}{\sqrt{\delta_{\beta}^{T}I_{n,\widehat{\theta}}^{-1}\delta_{\beta}}}\underset{n\rightarrow+\infty}{\longrightarrow}\mathcal{N}(0,1)\]

with:

* \(\delta_{\beta}=\left.\frac{\mathrm{d}\;\beta}{\mathrm{d}\;\theta}\right|_{\theta=\widehat{\theta}}\)

Therefore we can obtain an asymptotic confidence interval:

\[\mathbb{P}\Big(\beta_{0}\in\Big[\widehat{\beta}\pm u_{1-\alpha/2}\sqrt{\delta_{\beta}^{T}I_{n,\widehat{\theta}}^{-1}\delta_{\beta}}\Big]\Big)\underset{n\rightarrow+\infty}{\longrightarrow}1-\alpha\]

Note that \(\delta_{\beta}\) and \(I_{n,\widehat{\theta}}\) will be computed with automatic differentiation.

#### 4.2.2 With JAX

**Simulating the data**

First we generate simulated data to illustrate the method.

```
import numpy as np
import scipy.stats

n = 1000
mu0 = 5
beta0 = 2

# we use a seeded random state only for reproducibility
random_state = np.random.RandomState(np.random.MT19937(np.random.SeedSequence(0)))
X = scipy.stats.gumbel_r(loc=mu0, scale=beta0).rvs(
    size=n, random_state=random_state
)

# convert to JAX array
import jax.numpy as jnp

X = jnp.array(X)
```

**Definition of the log-likelihood and gradients**

First we define the log-likelihood depending on our original parameters:

```
def original_loglikelihood(mu, beta, X):
    logz = -(X - mu) / beta
    return (-jnp.exp(logz) + logz - jnp.log(beta)).sum()
```

And we define the log-likelihood of our parametrized model by using functions from the parametrization_cookbook.functions.jax module:

```
import jax

import parametrization_cookbook.functions.jax as pcf


@jax.jit
def loglikelihood(theta, X):
    mu = theta[0]
    beta = pcf.softplus(theta[1])
    return original_loglikelihood(mu, beta, X)
```

This function was JIT-compiled; as this function is run many times, this is very useful to reduce the computation time. We now define the gradient and hessian functions (with JIT-compilation):

```
grad_loglikelihood = jax.jit(jax.grad(loglikelihood))
hessian_loglikelihood = jax.jit(jax.jacfwd(jax.jacrev(loglikelihood)))
```

#### Optimization

We can use any optimization algorithm. We choose here a gradient descent (with step conditioning by the highest eigenvalue of the hessian). This is only given for illustration purposes; in a real application case, using an optimization algorithm developed with JAX in the Python module jaxopt (Blondel et al., 2021) can be a better choice.

We choose here to initialize \(\theta\) randomly. We can also build a plausible value of \(\theta\) with the reciprocal functions.
```
import itertools

# we use a seeded random state only for reproducibility
random_state = np.random.RandomState(np.random.MT19937(np.random.SeedSequence(1)))
theta = random_state.normal(size=2)
theta = jnp.array(theta)

current_likeli = loglikelihood(theta, X)
print(f"Log-likelihood: {current_likeli}")

for it in itertools.count():
    g = grad_loglikelihood(theta, X)
    H = hessian_loglikelihood(theta, X)
    sdp_eigenvalues = -jnp.linalg.eigh(H)[0]
    lr = 1 / sdp_eigenvalues.max()
    theta += lr * g
    current_likeli, old_likeli = loglikelihood(theta, X), current_likeli
    if current_likeli - old_likeli < 1e-6:
        break
print(f"it: {it}, Log-likelihood: {current_likeli}")
print(f"theta: {theta}")
```

```
Log-likelihood: -14577.125
it: 13, Log-likelihood: -2253.14013671875
theta: [4.9774423 1.8211547]
```

#### Using the value

To retrieve the original parameters, we must use \(\widehat{\theta}\):

```
theta[0]  # this is the estimate of mu
```

```
DeviceArray(4.9774423, dtype=float32)
```

```
pcf.softplus(theta[1])  # this is the estimate of beta
```

```
DeviceArray(1.9711586, dtype=float32)
```

We can see that we recover good estimates of the simulated parameters.

#### Building confidence interval

The first step is to compute the inverse of the estimated Fisher information matrix \(\widehat{I}_{n,\widehat{\theta}}\):

```
FIM_inv = jnp.linalg.inv(-H)
```

And we can compute \(\delta_{\beta}\), then the confidence interval:

```
delta_beta = jax.grad(lambda theta: pcf.softplus(theta[1]))(theta)

beta_asymptotic_variance = delta_beta @ FIM_inv @ delta_beta
beta_confidence_interval = (
    pcf.softplus(theta[1])
    + (
        jnp.array([-1, 1])
        * scipy.stats.norm.ppf(0.975)
        * jnp.sqrt(beta_asymptotic_variance)
    )
)
print(beta_confidence_interval)
```

```
[1.8763951 2.065922 ]
```

The simulated value was 2.

#### 4.2.3 With PyTorch

**Simulating the data**

First we generate simulated data to illustrate the method.

```
import numpy as np
import scipy.stats

n = 1000
mu0 = 5
beta0 = 2

# we use a seeded random state only for reproducibility
random_state = np.random.RandomState(np.random.MT19937(np.random.SeedSequence(0)))
X = scipy.stats.gumbel_r(loc=mu0, scale=beta0).rvs(
    size=n, random_state=random_state
)

# convert to PyTorch tensor
import torch

X = torch.tensor(X)
```

**Definition of the log-likelihood and gradients**

First we define the log-likelihood depending on our original parameters:

```
def original_loglikelihood(mu, beta, X):
    logz = -(X - mu) / beta
    return (-torch.exp(logz) + logz - torch.log(beta)).sum()
```

And we define the log-likelihood of our parametrized model by using functions from the parametrization_cookbook.functions.torch module:

```
import parametrization_cookbook.functions.torch as pcf


def loglikelihood(theta, X):
    mu = theta[0]
    beta = pcf.softplus(theta[1])
    return original_loglikelihood(mu, beta, X)
```

#### Optimization

We can use any optimization algorithm. We choose here the Adam gradient method. We choose here to initialize \(\theta\) randomly. We can also build a plausible value of \(\theta\) with the reciprocal functions.
``` importitertools #weuseaseededrandomstatenonlyforreproducibility random_state=np.random.RandomState(np.random.MT19937(np.random.SeedSequence(1))) theta=random_state.normal(size=2) theta=torch.tensor(theta,dtype=torch.float32,requires_grad=True) current_likelihood=loglikelihood(theta,X) print(f"log-likelihoodbefore:{loglikelihood(theta,X)}") optimizer=torch.optim.Adam([theta],lr=1/n) last_losses=[] foritinitertools.count(): optimizer.zero_grad() loss=-loglikelihood(theta,X) new_loss=loss.detach() last_losses.append(new_loss) iflen(last_losses)>5000: last_losses.pop(0) iflast_losses[0]-last_losses[-1]<0: break loss.backward() ``` optimizer.step() print(f"it:{it}") print(f"log-likelihoodafter:{loglikelihood(theta,X)}") print(f"theta:{theta}") log-likelihoodbefore:-14577.124868295035 it: 15350 log-likelihoodafter:-2253.1400991199 theta: tensor([4.9778,1.8214],requires_grad=True) #### Using the value To retrieve the initial parameter, we must use \(\widehat{\theta}\): theta[0]#thisisestimatedofmu tensor(4.9778,grad_fn=<SelectBackward0>) pcf.softplus(theta[1])#thisisestimatedofbeta tensor(1.9714,grad_fn=<MulBackward0>) We can see we recover good estimate of the simulated parameters. #### Building confidence interval The first step is to compute the inverse of the estimated Fisher information matrix \(\widehat{I}_{n,\widehat{\theta}}\) ``` FIM=-torch.autograd.functional.hessian( lambdatheta:loglikelihood(theta,X), theta, ) FIM_inv=torch.linalg.inv(FIM) ``` And we can compute \(\delta_{\beta}\), then the confidence interval: theta.grad.zero_() beta=pcf.softplus(theta[1]) beta.backward() delta_beta = theta.grad.detach() beta_asymptotic_variance = delta_beta @ FIM_inv @ delta_beta beta_confidence_interval = ( beta.detach() + ( torch.tensor([-1, 1]) * scipy.stats.norm.ppf(0.975) * torch.sqrt(beta_asymptotic_variance) ) ) print(beta_confidence_interval) tensor([1.8766, 2.0662]) The simulated value was 2. ## Acknowledgements The authors are very grateful to Matthias Bussonnier, Thibaud Le Graverend and Charlotte Baey for their feedbacks, their corrections of the cookbook and for testing the package. Nevertheless, any remaining errors remain the sole responsibility of the authors.
2307.14530
Optimal Estimation in Mixed-Membership Stochastic Block Models
Community detection is one of the most critical problems in modern network science. Its applications can be found in various fields, from protein modeling to social network analysis. Recently, many papers appeared studying the problem of overlapping community detection, where each node of a network may belong to several communities. In this work, we consider Mixed-Membership Stochastic Block Model (MMSB) first proposed by Airoldi et al. (2008). MMSB provides quite a general setting for modeling overlapping community structure in graphs. The central question of this paper is to reconstruct relations between communities given an observed network. We compare different approaches and establish the minimax lower bound on the estimation error. Then, we propose a new estimator that matches this lower bound. Theoretical results are proved under fairly general conditions on the considered model. Finally, we illustrate the theory in a series of experiments.
Fedor Noskov, Maxim Panov
2023-07-26T22:27:08Z
http://arxiv.org/abs/2307.14530v1
# Optimal Estimation in Mixed-Membership Stochastic Block Models

###### Abstract

Community detection is one of the most critical problems in modern network science. Its applications can be found in various fields, from protein modeling to social network analysis. Recently, many papers appeared studying the problem of overlapping community detection, where each node of a network may belong to several communities. In this work, we consider the Mixed-Membership Stochastic Block Model (MMSB) first proposed by Airoldi et al. (2009). MMSB provides quite a general setting for modeling overlapping community structure in graphs. The central question of this paper is to reconstruct relations between communities given an observed network. We compare different approaches and establish the minimax lower bound on the estimation error. Then, we propose a new estimator that matches this lower bound. Theoretical results are proved under fairly general conditions on the considered model. Finally, we illustrate the theory in a series of experiments.

Keywords: minimax bounds, mixed-membership stochastic block model, spectral estimators

## 1 Introduction

Over the past ten years, network analysis has gained significant importance as a research field, driven by its numerous applications in various disciplines, including social sciences (Jin et al., 2023), computer sciences (Bedru et al., 2020), genomics (Li et al., 2018), ecology (Geary et al., 2020), and many others. As a result, a growing body of literature has been dedicated to fitting observed networks with parametric or non-parametric models of random graphs (Borgs and Chayes, 2017; Goldenberg et al., 2010). In this work, we are focusing on studying some particular parametric graph models, while it is worth mentioning _graphons_ (Lovasz, 2012) as the most common non-parametric model.

The simplest parametric model in network analysis is the Erdos-Renyi model (Erdos and Renyi, 1960), which assumes that edges in a network are generated independently with a fixed probability \(p\), the single parameter of the model. The stochastic block model (SBM; Holland et al. 1983) is a more flexible parametric model that allows for communities or groups within a network. In this model, the network nodes are partitioned into \(K\) communities, and the probability \(p_{ij}\) of an edge between nodes \(i\) and \(j\) depends only on what communities these nodes belong to. The mixed-membership stochastic block model (MMSB; Airoldi et al. 2009) is a stochastic block model generalization, allowing nodes to belong to multiple communities with varying degrees of membership. This model is characterized by a set of community membership vectors, representing the probability of a node belonging to each community. The MMSB model is the focus of research in the present paper.

In the MMSB model, for each node \(i\), we assume that there exists a vector \(\mathbf{\theta}_{i}\in[0,1]^{K}\) drawn from the \((K-1)\)-dimensional simplex that determines the community membership probabilities for the given node. Then, a symmetric matrix \(\mathbf{B}\in[0,1]^{K\times K}\) determines the relations inside and between communities. According to the model, the probability of obtaining the edge between nodes \(i\) and \(j\) is \(\mathbf{\theta}_{i}^{\mathrm{T}}\mathbf{B}\mathbf{\theta}_{j}\). Importantly, in the considered model, we allow for self-loops.
More precisely, let us observe the adjacency matrix of the undirected unweighted graph \(\mathbf{A}\in\{0,1\}^{n\times n}\). Under MMSB model \(\mathbf{A}_{ij}=Bern(\mathbf{P}_{ij})\) for \(1\leqslant i\leqslant j\leqslant n\), where \(\mathbf{P}_{ij}=\mathbf{\theta}_{i}^{\mathrm{T}}\mathbf{B}\mathbf{\theta}_{j}=\rho\, \mathbf{\theta}_{i}^{\mathrm{T}}\bar{\mathbf{B}}\mathbf{\theta}_{j}\). Here we denote \(\mathbf{B}=\rho\bar{\mathbf{B}}\) with \(\bar{\mathbf{B}}\in[0,1]^{K\times K}\) being a matrix with the maximum value equal to \(1\) and \(\rho\in(0,1]\) being the sparsity parameter that is crucial for the properties of this model. Stacking vectors \(\mathbf{\theta}_{i}\) into matrix \(\mathbf{\Theta}\), \(\mathbf{\Theta}_{i}=\mathbf{\theta}_{i}^{\mathrm{T}}\), we get the following formula for the matrix of edge probabilities \(\mathbf{P}\): \[\mathbf{P}=\mathbf{\Theta}\mathbf{B}\mathbf{\Theta}^{\mathrm{T}}=\rho\,\mathbf{\Theta} \bar{\mathbf{B}}\mathbf{\Theta}^{\mathrm{T}}.\] In this work, we aim to propose the minimax-optimal parameter estimation algorithm for the mixed-membership stochastic block model. For any estimators \(\widehat{\mathbf{\theta}}_{i}\) and \(\widehat{\mathbf{B}}\), Jin and Ke (2017) and Marshakov (2018) established lower bounds on the mean squared risk for vectors \(\mathbf{\theta}_{i}\), \(i\in[n]\), and matrix \(\mathbf{B}\). While the optimal estimators for \(\mathbf{\theta}_{i}\) were constructed (Panov et al., 2017; Jin et al., 2023), there is a polynomial gap between the lower bound and known theoretical guarantees for estimators of \(\mathbf{B}\). Related works.A large body of literature exists on parameter estimation in various parametric graph models. The most well-studied is the Stochastic Block Model, but methods for different graph models can share the same ideas. The maximum likelihood estimator is consistent for both SBM and MMSB, but it is intractable in practice (Celisse et al., 2012; Huang et al., 2020). Several variational algorithms were proposed to overcome this issue; see the original paper of Airoldi et al. (2009), the survey of Lee and Wilkinson (2019) and references therein. In the case of MMSB, the most common prior on vectors \(\mathbf{\theta}_{i}\), \(i\in[n]\), is Dirichlet distribution on a \((K-1)\)-dimensional simplex with unknown parameter \(\mathbf{\alpha}\). Unfortunately, a finite sample analysis of convergence rates for variational inference is hard to establish. In the case of SBM, it is known that the maximizer of the evidence lower bound over a variational family is optimal (Gaucher and Klopp, 2021). Still, there are no theoretical guarantees that the corresponding EM algorithm converges to it. Other algorithms do not require any specified distribution of membership vectors \(\mathbf{\theta}_{i}\). For example, spectral algorithms work well under the general assumption of _identifiability_ of communities (Mao et al., 2017). In the case of SBM, it is proved that they achieve optimal estimation bounds, see the paper by Yun and Proutiere (2015) and references therein. These results motivated several authors to develop spectral approaches for MMSB. For example, almost identical and simultaneously proposed algorithms SPOC (Panov et al., 2017), SPACL (Mao et al., 2021) and Mixed-SCORE (Jin et al., 2023) optimally reconstruct \(\mathbf{\theta}_{i}\) under the mean-squared error risk (Jin et al., 2023). 
For the matrix \(\mathbf{B}\), their proposed estimators \(\widehat{\mathbf{B}}\) achieves the following error rate: \[\min_{\mathbf{\Pi}\in\mathbb{S}_{K}}\|\widehat{\mathbf{B}}-\mathbf{\Pi}\mathbf{B} \mathbf{\Pi}^{\mathrm{T}}\|_{\mathrm{F}}\lesssim C(K)\sqrt{\frac{\rho\log n}{n}} \tag{1}\] with high probability, where \(C(K)\) is some constant depending on \(K\). Here \(\mathbb{S}_{K}\) stands for the set of \(K\times K\) permutation matrices, and \(\|\cdot\|_{\mathrm{F}}\) denotes the Frobenius norm. The algorithm by Anandkumar et al. (2013), which uses the tensor-based approach, provides the same rate. But the latter has high computational costs and assumes that \(\boldsymbol{\theta}_{i}\) is drawn from the Dirichlet distribution. Finally, Mao et al. (2017) obtained the estimator for diagonal \(\mathbf{B}\) with convergence rate \[\min_{\mathbf{\Pi}\in\mathbb{S}_{K}}\|\widehat{\mathbf{B}}-\mathbf{\Pi} \mathbf{B}\mathbf{\Pi}^{\mathrm{T}}\|_{\mathrm{F}}\lesssim\frac{K^{3}\log n}{ \sqrt{n}}\] with high probability. However, their algorithm can be modified for arbitrary \(\mathbf{B}\). None of the algorithms above match the known lower bound over \(n\) on the estimator of \(\mathbf{B}\) in the Frobenius norm. In his MSc thesis (Marshakov, 2018), E. Marshakov showed that \[\inf_{\bar{\mathbf{B}}}\sup_{\mathbf{B}\in\bar{\mathcal{B}}_{\rho}}\mathbb{P} \left(\|\widehat{\mathbf{B}}-\mathbf{B}\|_{\mathrm{F}}\geqslant C_{\mathbf{B} }\frac{\sqrt{\rho}}{n}\right)>0.15, \tag{2}\] where \[\bar{\mathcal{B}}_{\rho}=\left\{\rho\bar{\mathbf{B}}\mid\bar{\mathbf{B}}\in[0,1]^{K\times K},\;\max_{kk^{\prime}}\bar{\mathbf{B}}_{kk^{\prime}}=1,\forall k \neq k^{\prime}\;\bar{\mathbf{B}}_{kk}>\bar{\mathbf{B}}_{k^{\prime}k}\right\}, \tag{3}\] the infimum is taken over all possible estimators, and \(C_{\mathbf{B}}\) is some constant depending on \(K\). In what follows, we slightly modify his result to establish the same lower bound for the following loss function: \[\mathcal{L}(\widehat{\mathbf{B}},\mathbf{B})=\min_{\mathbf{\Pi}\in\mathbb{S} _{K}}\|\widehat{\mathbf{B}}-\mathbf{\Pi}\mathbf{B}\mathbf{\Pi}^{\mathrm{T}}\| _{\mathrm{F}}.\] It is worth mentioning models that also introduce overlapping communities but in a distinct way from MMSB and estimators for them. One example is OCCAM (Zhang et al., 2020) which is similar to MMSB but uses \(l_{2}\)-normalization for membership vectors. Another example is the Stochastic Block Model with Overlapping Communities (Kaufmann et al., 2018; Peixoto, 2015). Contributions.We aim to propose the estimator \(\widehat{\mathbf{B}}\) that it is computationally tractable and achieves the following error bound: \[\min_{\mathbf{\Pi}\in\mathbb{S}_{K}}\|\widehat{\mathbf{B}}-\mathbf{\Pi} \mathbf{B}\mathbf{\Pi}^{\mathrm{T}}\|_{\mathrm{F}}\leqslant C(K)\frac{\sqrt{ \rho}}{n} \tag{4}\] with high probability. This paper focuses on optimal estimation up to dependence on \(K\), while optimal dependence on \(K\) remains an interesting open problem. We need to impose some conditions to establish the required upper bound. We want these conditions to be non-restrictive and, ideally, satisfied in practice. The question of the optimality of proposed estimates achieving the rate (4) is central to this research. In what follows, we give a positive answer to this question under a fairly general set of conditions. The rest of the paper is organized as follows. We introduce a new _SPOC++_ algorithm in Section 2. 
Then, in Section 3, we establish the convergence rate for the proposed algorithm and show its optimality. Finally, in Section 4, we conduct numerical experiments that illustrate our theoretical results. Section 5 concludes the study with a discussion of the results and highlights the directions for future work. All proofs of ancillary lemmas can be found in Appendix. ## 2 Beyond successive projections for parameter estimation in MMSB ### SPOC algorithm Various estimators of \(\mathbf{B}\) and \(\mathbf{\Theta}\) were proposed in previous works (Mao et al., 2017; Panov et al., 2017; Jin et al., 2023). In this work, we will focus on the _Successive Projections Overlapping Clustering (SPOC)_ algorithm (Panov et al., 2017) that we present in Algorithm 2. However, we should note that any "vertex hunting" method (Jin et al., 2023) can be used instead of a successive projections algorithm as a base method for our approach. The main idea of SPOC is as follows. Consider a \(K\)-eigenvalue decomposition of \(\mathbf{P}=\mathbf{U}\mathbf{L}\mathbf{U}^{\mathrm{T}}\). Then, there exists a full-rank matrix \(\mathbf{F}\) such that \(\mathbf{U}=\mathbf{\Theta}\mathbf{F}\) and \(\mathbf{B}=\mathbf{F}\mathbf{L}\mathbf{F}^{\mathrm{T}}\). The proof of this statement can be found, for example, in (Panov et al., 2017). Hence, if we build an estimator of \(\mathbf{F}\) and \(\mathbf{L}\), we immediately get the estimator of \(\mathbf{B}\). Besides, since \(\mathbf{U}=\mathbf{\Theta}\mathbf{F}\), rows of \(\mathbf{U}\) lie in a simplex. The vertices of this simplex are rows of matrix \(\mathbf{F}\). Consequently, we may estimate \(\mathbf{U}\) by some estimator \(\widehat{\mathbf{U}}\) and find vertices of the simplex using rows of \(\widehat{\mathbf{U}}\). The most natural way to estimate \(\mathbf{U}\) and \(\mathbf{L}\) is to use a \(K\)-eigenvalue decomposition of the adjacency matrix \(\mathbf{A}\simeq\widehat{\mathbf{U}}\widehat{\mathbf{L}}\widehat{\mathbf{U}} ^{\mathrm{T}}\), where columns of \(\widehat{\mathbf{U}}\) are first \(K\) eigenvectors of \(\mathbf{A}\) and Figure 1: First and second components of rows of matrices \(\mathbf{U}\), \(\widehat{\mathbf{U}}\) in the case of \(K\) being equal to 3. ``` 0: Matrix \(\mathbf{V}\in\mathbb{R}^{n\times K}\) and integer \(r\leqslant K\) 0: Set of indices \(J\subset[n]\) 1: Set \(\mathbf{S}^{0}=\mathbf{V}\), \(J_{0}=\varnothing\) 2:for\(t=1\ldots r\)do 3: Find \(j_{t}=\arg\min_{i\in[n]}\|\mathbf{S}_{i}^{t-1}\|\) 4: Project rows of \(\mathbf{S}^{t-1}\) on the plane orthogonal to \(\mathbf{S}_{j_{t}}^{t-1}\): \[\mathbf{S}^{t}=\mathbf{S}^{t-1}\left(\mathbf{I}_{K}-\frac{\mathbf{S}_{j_{t}}^{t -1}(\mathbf{S}_{j_{t}}^{t-1})^{\mathrm{T}}}{\|\mathbf{S}_{j_{t}}^{t-1}\|_{2}^{2 }}\right).\] 5: Add \(j_{t}\) to the set \(J\): \(J_{t}=J_{t-1}\cup\{j_{t}\}\). 6:endfor 7:return\(J_{t}\) ``` **Algorithm 1** SPA (Mizutani, 2016) ``` 0: Adjacency matrix \(\mathbf{A}\), number of communities \(K\). 
0: Estimators \(\widehat{\mathbf{G}}\), \(\widehat{\mathbf{B}}\) 1: Get the rank-\(K\) eigenvalue decomposition \(\mathbf{A}\simeq\widehat{\mathbf{U}}\widehat{\mathbf{L}}\widehat{\mathbf{U}}^{ \mathrm{T}}\) 2: Run SPA algorithm with input \((\widehat{\mathbf{U}},K)\), which outputs the set of indices \(J\) of cardinality \(K\) 3:\(\widehat{\mathbf{F}}=\widehat{\mathbf{U}}[J_{i};]\) 4:\(\widehat{\mathbf{B}}=\widehat{\mathbf{F}}\widehat{\mathbf{L}}\widehat{ \mathbf{F}}^{\mathrm{T}}\) 5:\(\widehat{\mathbf{G}}=\widehat{\mathbf{U}}\widehat{\mathbf{F}}^{-1}\) ``` **Algorithm 2** SPOC \(\widehat{\mathbf{L}}\) is the diagonal matrix of eigenvalues. The rows of matrix \(\widehat{\mathbf{U}}\) lie in a perturbed version of the simplex corresponding to matrix \(\mathbf{U}\), see illustration on Figure 1. To find vertices of the perturbed simplex, we run _Successive Projections Algorithm (SPA)_, see Algorithm 1. The resulting SPOC algorithm is given in Algorithm 2. However, the SPOC-based estimator \(\widehat{\mathbf{B}}\) does not allow for obtaining the optimal rate of estimation (4), only achieving the suboptimal one (1). The nature of the problem is in the SPA algorithm whose error is driven by the properties of rows of matrix \(\widehat{\mathbf{U}}\) that might be too noisy. In what follows, we will provide a noise reduction procedure for it. ### Denoising via averaging The most common denoising tool is averaging because it decreases the variance of i.i.d. variables by \(\sqrt{N}\) where \(N\) is a sample size. In this work, our key idea is to reduce the error rate of the estimation of the matrix \(\mathbf{F}\) by \(\sqrt{n}\) times through averaging \(\Theta(n)\) rows of \(\widehat{\mathbf{U}}\). The key contribution of this work is in establishing the procedure for finding the rows similar to the rows of \(\mathbf{F}\) and dealing with their weak dependence on each other. We call the \(i\)-th node "pure" if the corresponding row \(\mathbf{\Theta}_{i}\) of the matrix \(\mathbf{\Theta}\) consists only of zeros except for one particular entry, equal to 1. Thus, for the pure node \(\mathbf{U}_{i}=\mathbf{F}_{k}\) for some \(k\in[K]\). If we find many pure nodes and average corresponding rows of \(\widehat{\mathbf{U}}\), we can get a better estimator of rows of \(\mathbf{F}\) and, consequently, matrix \(\mathbf{B}\). To find pure nodes, we employ the following strategy. In the first step, we run the SPA algorithm and obtain one vertex per community. Below, we prove under some conditions that SPA chooses "almost" pure nodes with high probability. In the second step, we detect the nodes which are "similar" to the ones selected by SPA and use the resulting pure nodes set for averaging. The complete averaging procedure is given in Algorithm 3, while we discuss its particular steps below. The choice of similarity measure for detection on similar nodes is crucial for our approach. Fan et al. (2022) provide a statistical test for equality of node membership vectors \(\mathbf{\Theta}_{i}\) and \(\mathbf{\Theta}_{j}\) based on the statistic \(T_{ij}\). 
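To make the two listings above concrete, the following is a minimal numpy sketch of SPA (Algorithm 1) and SPOC (Algorithm 2). It is only illustrative: function names are ours, the usual maximum-norm selection rule for successive projections is used, and no post-processing (e.g., clipping of \(\widehat{\mathbf{B}}\) to \([0,1]\)) is applied.

```python
import numpy as np

def spa(V, r):
    # Successive Projections Algorithm: pick r rows of V that approximate
    # the vertices of the simplex spanned by its rows.
    S = np.array(V, dtype=float)
    J = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(S, axis=1)))  # row of largest norm
        J.append(j)
        s = S[j]
        # project every row onto the orthogonal complement of the chosen row
        S = S - np.outer(S @ s, s) / (s @ s)
    return J

def spoc(A, K):
    # SPOC: rank-K eigendecomposition of A, then SPA on the eigenvectors.
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:K]      # K leading eigenvalues by magnitude
    L_hat, U_hat = np.diag(vals[idx]), vecs[:, idx]
    J = spa(U_hat, K)
    F_hat = U_hat[J, :]
    B_hat = F_hat @ L_hat @ F_hat.T          # estimator of B
    G_hat = U_hat @ np.linalg.inv(F_hat)     # estimator of the memberships
    return G_hat, B_hat
```

The quality of \(\widehat{\mathbf{B}}\) produced this way is limited by how noisy the selected rows of \(\widehat{\mathbf{U}}\) are, which is precisely the issue addressed by the averaging procedure below.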
This statistic is closely connected to the displace matrix \[\mathbf{W}=\mathbf{A}-\mathbf{P}\] and covariance matrix \(\mathbf{\Sigma}(i,j)\) of the vector \((\mathbf{W}_{i}-\mathbf{W}_{j})\mathbf{UL}^{-1}\): \[\mathbf{\Sigma}(i,j)=\mathbb{E}\big{[}\mathbf{L}^{-1}\mathbf{U}^{\mathrm{T}}( \mathbf{W}_{i}-\mathbf{W}_{j})^{\mathrm{T}}(\mathbf{W}_{i}-\mathbf{W}_{j}) \mathbf{UL}^{-1}\big{]}.\] Thus, the test statistic \(T_{ij}\) is given by \[T_{ij}=(\widehat{\mathbf{U}}_{i}-\widehat{\mathbf{U}}_{j})\mathbf{\Sigma}(i,j)^{- 1}(\widehat{\mathbf{U}}_{i}-\widehat{\mathbf{U}}_{j})^{\mathrm{T}}.\] However, we do not observe the matrix \(\mathbf{\Sigma}(i,j)\). Instead, we use its plug-in estimator \(\widehat{\mathbf{\Sigma}}(i,j)\) which is described below in Algorithm 3, see equation (5). Thus, the resulting test statistic is given by \[\widehat{T}_{ij}=(\widehat{\mathbf{U}}_{i}-\widehat{\mathbf{U}}_{j})\widehat {\mathbf{\Sigma}}(i,j)^{-1}(\widehat{\mathbf{U}}_{i}-\widehat{\mathbf{U}}_{j})^{ \mathrm{T}}. \tag{10}\] Fan et al. (2022) prove that under some conditions \(T_{ij}\) and \(\widehat{T}_{ij}\) both converge to non-central chi-squared distribution with \(K\) degrees of freedom and center \[\bar{T}_{ij}=(\mathbf{U}_{i}-\mathbf{U}_{j})\mathbf{\Sigma}(i,j)^{-1}(\mathbf{U}_ {i}-\mathbf{U}_{j})^{\mathrm{T}}. \tag{11}\] Thus, \(\widehat{T}_{ij}\) can be considered as a measure of closeness for two nodes. For each node \(i\) we can define its neighborhood \(\mathcal{I}_{i}\) as all nodes \(j\) such that \(\widehat{T}_{ij}\) is less than some threshold \(t_{n}\): \(\mathcal{I}_{i}=\{j\in[n]\mid\widehat{T}_{ij}<t_{n}\}\). To evaluate \(\bar{T}_{ij}\), one needs to invert the matrix \(\mathbf{\Sigma}(i,j)\). However, matrix \(\mathbf{\Sigma}(i,j)\) can be degenerate in the general case. Nevertheless, one can specify some conditions on matrix \(\mathbf{B}\) to ensure it is well-conditioned. To illustrate it, let us consider the following proposition. **Proposition 1**.: _Let Conditions 1-4, defined below, hold. Assume additionally that entries of the matrix \(\mathbf{B}\) are bounded away from 0 and 1. Then there exist constants \(C_{1},C_{2}\) such that for large enough \(n\) it holds_ \[\frac{C_{1}}{n^{2}\rho}\leqslant\lambda_{\min}(\mathbf{\Sigma}(i,j))\leqslant \lambda_{\max}(\mathbf{\Sigma}(i,j))\leqslant\frac{C_{2}}{n^{2}\rho}\] _for any nodes \(i\) and \(j\)._ However, the condition on the entries of the community matrix above might be too strong, while we only need concentration bounds on \(\widehat{T}_{ij}\). To not limit ourselves to matrices \(\mathbf{B}\) with no zero entries, we consider a regularized version of \(\widehat{T}_{ij}\): \[\widehat{T}_{ij}^{a}=(\widehat{\mathbf{U}}_{i}-\widehat{\mathbf{U}}_{j}) \left(\widehat{\mathbf{\Sigma}}(i,j)+a\mathbf{I}\right)^{-1}(\widehat{\mathbf{U}} _{i}-\widehat{\mathbf{U}}_{j})^{\mathrm{T}}\] for some \(a>0\). When \(a=\Theta(n^{-2}\rho^{-1})\), the statistic \(\widehat{T}^{a}_{ij}\) concentrates around \[\bar{T}^{a}_{ij}=\left(\mathbf{U}_{i}-\mathbf{U}_{j}\right)(\mathbf{\Sigma}(i,j) +a\mathbf{I})^{-1}\,(\mathbf{U}_{i}-\mathbf{U}_{j})^{\mathrm{T}}.\] Practically, if \(\widehat{\mathbf{\Sigma}}(i,j)\) is well-conditioned, one can use the statistic \(\widehat{T}_{ij}\) without any regularization. In other words, all of our results still hold if \(a=0\) and \(\lambda_{\min}\big{(}\mathbf{\Sigma}(i,j)\big{)}\geqslant Cn^{-2}\rho^{-1}\) for all \(i,j\). 
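As a small illustration, the sketch below evaluates the regularized statistic \(\widehat{T}^{a}_{ij}\) and the induced neighbourhood \(\mathcal{I}_{i}\). The plug-in covariance \(\widehat{\mathbf{\Sigma}}(i,j)\) is assumed to be supplied by the caller (its construction is part of Algorithm 3 and is not reproduced here), and all names are ours.

```python
import numpy as np

def T_stat(U_hat, Sigma_hat, i, j, a=0.0):
    # (U_i - U_j)(Sigma(i,j) + a I)^{-1}(U_i - U_j)^T ; a = 0 is admissible
    # whenever Sigma(i,j) is well-conditioned.
    d = U_hat[i] - U_hat[j]
    M = Sigma_hat + a * np.eye(Sigma_hat.shape[0])
    return float(d @ np.linalg.solve(M, d))

def neighbourhood(U_hat, Sigma_of, i, t_n, a=0.0):
    # I_i = { j : T^a_{ij} < t_n }, with Sigma_of(i, j) returning Sigma-hat(i, j)
    n = U_hat.shape[0]
    return [j for j in range(n) if T_stat(U_hat, Sigma_of(i, j), i, j, a) < t_n]
```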
But to not impose additional assumptions on either matrix \(\mathbf{B}\) or \(\mathbf{\Theta}\), in what follows we will use \(\widehat{T}^{a}_{ij}\) with \(a=\Theta(n^{-2}\rho^{-1})\). ### Estimation of eigenvalues and eigenvectors It turns out that the eigenvalues \(\widehat{\mathbf{L}}\) and eigenvectors \(\widehat{\mathbf{U}}\) of \(\mathbf{A}\) are not optimal estimators of \(\mathbf{L},\mathbf{U}\) respectively. The asymptotic expansion of \(\mathbf{U}\) described in Lemma 6 suggests a new estimator \(\widehat{\mathbf{U}}\) that suppresses some high-order terms in the expansion. For the exact formula, see equation (7) in Algorithm 3. Similarly, a better estimator \(\tilde{\mathbf{L}}\) of eigenvalues exists; see equation (12) in Algorithm 4. Proposed estimators admit better asymptotic properties than \(\widehat{\mathbf{L}}\) and \(\widehat{\mathbf{U}}\), see Lemma 9 and 16 below. In particular, it allows us to achieve the convergence rate (4) instead of \(1/n\). ### Estimation of \(K\) In the previous sections, we assumed that the number of communities \(K\) is known. However, in practical scenarios, this assumption often does not hold. This section presents an approach to estimating the number of communities. The idea is to find the efficient rank of the matrix \(\mathbf{A}\). Due to Weyl's inequality \(|\lambda_{j}(\mathbf{A})-\lambda_{j}(\mathbf{P})|\leqslant\|\mathbf{A}- \mathbf{P}\|\). Efficiently bounding the norm \(\|\mathbf{A}-\mathbf{P}\|\), we obtain that it much less than \(2\max_{i\in[n]}\sqrt{\sum_{t=1}^{n}\mathbf{A}_{it}\log^{2}n}\). However, in its turn, \(2\max_{i\in[n]}\sqrt{\sum_{t=1}^{n}\mathbf{A}_{it}\log^{2}n}\ll\lambda_{K}( \mathbf{P})\). Thus, we suggest the following estimator: \[\widehat{K}=\max\left\{j\mid\lambda_{j}(\mathbf{A})\geqslant 2\max_{i\in[n]} \sqrt{\sum\nolimits_{t=1}^{n}\mathbf{A}_{it}\log^{2}n}\right\}.\] In what follows, we prove that it coincides with \(K\) with high probability if \(n\) is large enough; see Section B.4 of Appendix for details. ### Resulting SPOC++ algorithm Combining ideas from previous sections, we split our algorithm into two procedures: Averaging Procedure (Algorithm 3) and the resulting SPOC++ method (Algorithm 4). However, the critical question remains: how to select the threshold \(t_{n}\)? In our theoretical analysis (see Theorem 2 below), we demonstrate that by setting \(t_{n}\) to be logarithmic in \(n\), SPOC++ can recover the matrix \(\mathbf{B}\) with a high probability and up to the desired error level. However, for practical purposes, we recommend defining the threshold just considering the distribution of the statistics \(\widehat{T}^{a}_{ikj}\) for different \(j\), where \(i_{k}\) is an index chosen by Algorithm 1; see Section 4.1 for details. ## 3 Provable guarantees ### Sketch of the proof of consistency We will need several conditions to be satisfied to obtain optimal convergence rates. The most important one is to have many nodes placed near the vertices of the simplex. We will give the exact conditions and statements below, but first, discuss the key steps that allow us to achieve the result. They are listed below. **Step 1. Asymptotics of \(\widehat{\mathbf{U}}_{ik}\).** First, using results of (Fan et al., 2020), we obtain the asymptotic expansion of \(\widehat{\mathbf{U}}_{ik}\). 
We show that up to a residual term of order \(\sqrt{\frac{\log n}{n^{3}\rho}}\) we have \[\widehat{\mathbf{U}}_{ik}\approx\mathbf{U}_{ik}+\frac{\mathbf{e}_{i}^{\rm T} \mathbf{W}\mathbf{u}_{k}}{t_{k}}+\frac{\mathbf{e}_{i}^{\rm T}\mathbf{W}^{2} \mathbf{u}_{k}}{t_{k}^{2}}-\frac{3}{2}\cdot\mathbf{U}_{ik}\frac{\mathbf{u}_{k }^{\rm T}\mathbb{E}\mathbf{W}^{2}\mathbf{u}_{k}}{t_{k}^{2}}+\frac{1}{t_{k}^{2} }\sum_{k^{\prime}\in[K]\setminus\{k\}}\frac{\lambda_{k^{\prime}}\mathbf{U}_{ ik^{\prime}}}{\lambda_{k^{\prime}}-t_{k}}\cdot\mathbf{u}_{k^{\prime}}^{\rm T} \mathbb{E}\mathbf{W}^{2}\mathbf{u}_{k},\] where \(t_{k}\approx\lambda_{k}(\mathbf{P})\). Matrices \(\mathbb{E}\mathbf{W}^{2}\) and \(\mathbf{W}^{2}\) can be efficiently estimated by diagonal matrix \(\mathbf{D}=\text{diag}\left(\sum_{t=1}^{n}\mathbf{A}_{it}\right)_{i=1}^{n}\), see also equation (6) in Algorithm 3. Thus, we proceed with plug-in estimation of the second-order terms and obtain the estimator \(\tilde{\mathbf{U}}\) defined in (7). Most importantly, the term linear in \(\mathbf{W}\) can be suppressed using averaging. **Step 2. Approximating the set of pure nodes.** We show that the difference \(|\widehat{T}^{a}_{ij}-\bar{T}^{a}_{ij}|\) can be efficiently bounded by sum of two terms: one depends on the difference \(\|\boldsymbol{\Theta}_{i}-\boldsymbol{\Theta}_{j}\|_{2}\) and the other is at most logarithmic. If \(i_{k}\) is an index chosen by SPA and \(j\in\mathcal{P}_{k}\), then \(\bar{T}^{a}_{ikj}\) is small. Thus, logarithmic threshold \(t_{n}\) will ensure that for all \(j\in\mathcal{P}_{k}\) we have \(\widehat{T}^{a}_{ikj}\leqslant t_{n}\). Next, Condition 5 implies that there are a few non-pure nodes in the set \(\{j\mid\widehat{T}^{a}_{ikj}\leqslant t_{n}\}\). **Step 3. Averaging.** Finally, we show that redundant terms in the asymptotic expansion of \(\tilde{\mathbf{U}}_{i}-\mathbf{U}_{i}\) vanish after averaging, and it delivers an appropriate estimator of the simplex vertices. After that, we can obtain a good estimator of the matrix \(\mathbf{B}\). ### Main result In order to perform theoretical analysis, we state some conditions. Most of these conditions are not restrictive, and below we discuss their limitations, if any. **Condition 1**.: _Singular values of the matrix \(\bar{\mathbf{B}}\) are bounded away from 0._ The full rank condition is essential as, otherwise, one loses the identifiability of communities (Mao et al., 2017). **Condition 2**.: _There is some constant \(c\) such that \(0\leqslant c<1/3\) and \(\rho>n^{-c}\)._ Parameter \(\rho\) is responsible for the sparsity of the resulting graph. The most general results on statistical properties of random graphs require \(\rho n\to\infty\) as \(n\to\infty\)(Tang et al., 2022). In this work, we require a stronger condition to achieve the relatively strong statements we aim at. We think this condition can be relaxed though it would most likely need a proof technique substantially different from ours. Next, we demand the technical condition for the probability matrix \(\mathbf{P}\). **Condition 3**.: _There exists some constant \(c_{0}>0\) such that_ \[\min\left\{\frac{|\lambda_{i}(\mathbf{P})|}{|\lambda_{j}(\mathbf{P})|}\mid 1 \leqslant i<j\leqslant K,\lambda_{i}(\mathbf{P})\neq\lambda_{j}(\mathbf{P}) \right\}\geqslant 1+c_{0}.\] _In addition, we have_ \[\max_{j}\sum_{i=1}^{n}\mathbf{P}_{ij}(1-\mathbf{P}_{ij})\to\infty \tag{13}\] _as \(n\) tends to \(\infty\)._ This condition is required because of the method to obtain asymptotics of eigenvectors of \(\mathbf{A}\). 
The idea is to apply the Cauchy residue theorem to the resolvent. Let \(\widehat{\mathbf{u}}_{k}\) be the \(k\)-th eigenvector of \(\mathbf{A}\) and \(\mathbf{u}_{k}\) be the \(k\)-th eigenvector of \(\mathbf{P}\). Let \(\mathcal{C}_{k}\) be a contour in the complex plane that contains both \(\lambda_{k}(\mathbf{P})\) and \(\lambda_{k}(\mathbf{A})\). If no other eigenvalues are contained in \(\mathcal{C}_{k}\) then \[\oint_{\mathcal{C}_{k}}\frac{\mathbf{x}^{\mathrm{T}}\widehat{ \mathbf{u}}_{k}\widehat{\mathbf{u}}_{k}^{\mathrm{T}}\mathbf{y}}{\lambda_{k}( \mathbf{A})-z}dz=\oint_{\mathcal{C}_{k}}\mathbf{x}^{\mathrm{T}}(\mathbf{A}-z \mathbf{I})^{-1}\mathbf{y}dz=\oint_{\mathcal{C}_{k}}\mathbf{x}^{\mathrm{T}} \left(\sum_{k=1}^{K}\lambda_{k}(\mathbf{P})\mathbf{u}_{k}\mathbf{u}_{k}^{ \mathrm{T}}+\mathbf{W}-z\mathbf{I}\right)^{-1}\mathbf{y}dz\] for any vectors \(\mathbf{x},\mathbf{y}\). The leftmost side is simplified by calculating the residue at \(\lambda_{k}(\mathbf{A})\), and the rightmost side is analyzed via Sherman-Morrison-Woodbury formula. For the example of obtained asymptotics, see Lemma 6. The second part of Condition 3 can be omitted if \(\rho<1\) or there exist \(k,k^{\prime}\in[K]\) such that \(\mathbf{B}_{kk^{\prime}}\) is bounded away from 0 and 1, since (13) is granted by Conditions 1-2 and 4 in this case. However, we decided not to impose additional assumptions and left this condition as proposed by Fan et al. (2022). Next, we call the \(i\)-th node in our graph _pure_ if \(\mathbf{\Theta}_{i}\) has 1 in some position and 0 in others. We also denote this non-zero position by \(\mathrm{cl}(i)\) and the set of pure nodes by \(\mathcal{P}\). Moreover, we define \(\mathcal{P}_{k}=\{i\in\mathcal{P}\mid\mathrm{cl}(i)=k\}\). Thus, \(\mathcal{P}_{k}\) is a set of nodes completely belonging to the \(k\)-th community. It leads us to the following conditions. **Condition 4**.: _Cardinality of \(\mathcal{P}_{k}\) has an asymptotic order of \(n\) for all \(k=1,\ldots,K\). More formally,_ \[|\mathcal{P}_{k}|=\Theta(n).\] **Condition 5**.: _Fix \(\eta\geqslant 1\). For any community index \(k\), \(\delta>0\) and \(n>n_{0}(\delta)\) there exists \(C_{\delta}\) such that_ \[\sum_{j\not\in\mathcal{P}_{k}}\mathrm{I}\left\{\|\boldsymbol{\Theta}_{j}- \mathbf{e}_{k}\|_{2}\leqslant\delta\sqrt{\frac{\log n}{n\rho}}\right\} \leqslant C_{\delta}\log^{\eta}n, \tag{14}\] _where \(\mathbf{e}_{k}\) is the \(k\)-th standard basis vector in \(\mathbb{R}^{K}\)._ Condition 4 is essential as it requires that all the communities have asymptotically significant mass. As discussed in Section 2.2, we employ row averaging on the eigenmatrix \(\widehat{\mathbf{U}}\) to mitigate noise, specifically focusing on rows corresponding to pure nodes. This averaging process effectively reduces noise by a factor of \(\sqrt{n}\). While this condition is not commonly encountered in the context of MMSB, it covers an important intermediate case bridging the gap between the Stochastic Block Model and the Mixed-Membership Stochastic Block Model. If this condition is not satisfied, we suppose it is possible to obtain a higher minimax lower bound than the one provided by Theorem 3. Condition 5 can be naturally fulfilled if non-pure \(\boldsymbol{\Theta}_{j}\) are sampled from the Dirichlet distribution. Indeed, the number of \(\boldsymbol{\Theta}_{j}\) in a ball of radius \(\sqrt{\frac{\log n}{n\rho}}\) is proportional to \(n\cdot\left[\frac{\log n}{n\rho}\right]^{\frac{K-1}{2}}\). 
For example, if \(\rho=\Theta(1)\) and \(K\geqslant 3\), then we have \[\sum_{j\not\in\mathcal{P}_{k}}\mathrm{I}\left\{\|\boldsymbol{\Theta}_{j}- \mathbf{e}_{k}\|_{2}\leqslant\delta\sqrt{\frac{\log n}{n\rho}}\right\}\sim C_{ \delta}n\cdot\left[\frac{\log n}{n\rho}\right]^{\frac{K-1}{2}}\lesssim C_{ \delta}\log^{(K-1)/2}n\] with high probability. In this case, one can take \(\eta=(K-1)/2\). One may prove the above by bounding the sum of Bernoulli random variables on the left-hand side using the Bernstein inequality. These conditions allow us to state the main result of this work. **Theorem 2**.: _Suppose that \(a=\Theta(n^{-2}\rho^{-1})\). Under Conditions 1-5, for each positive \(\varepsilon\) there are constants \(C_{t},C_{\mathbf{B}}\) depending on \(\varepsilon,K\) such that if we apply Algorithm 4 with_ \[t_{n}=C_{t}\log n, \tag{15}\] _then there is \(n_{0}\) such that for all \(n>n_{0}\) the following inequality holds:_ \[\mathbb{P}\left(\min_{\boldsymbol{\Pi}\in\mathcal{S}_{K}}\|\widehat{\mathbf{ B}}-\boldsymbol{\Pi}\mathbf{B}\boldsymbol{\Pi}^{\mathrm{T}}\|_{\mathrm{F}} \geqslant\frac{C_{\mathbf{B}}\sqrt{\rho}\log^{\eta}n}{n}\right)\leqslant n^{- \varepsilon}.\] The theorem demands \(a=\Theta(n^{-2}\rho^{-1})\), but the sparsity parameter \(\rho\) is not observed in practice. We suppose that the most convenient choice is \(a=0\), see discussion in Section 2.2. However, if one need to construct a quantity of order \(n^{-2}\rho^{-1}\), one can choose \(\left(n\lambda_{1}(\mathbf{A})\right)^{-1}\), see Lemma 25. ### Proof of Theorem 2 Assume that \(K\) is known. Given \(\varepsilon\), choose \(t_{n}=C(\varepsilon)\log n\) such that the event \[\|\widehat{\mathbf{F}}-\mathbf{F}\mathbf{\Pi}_{\mathbf{F}}\|_{ \mathrm{F}}\leqslant\frac{C_{\mathbf{F}}\log^{\eta}n}{n\sqrt{n\rho}} \tag{16}\] has probability at least \(1-n^{-\varepsilon}/3\) for some constant \(C_{\mathbf{F}}\) and permutation matrix \(\mathbf{\Pi}_{\mathbf{F}}\). Such \(t_{n}\) exists due to Lemma 9. WLOG, we assume that the minimum in (16) is attained when \(\mathbf{\Pi}_{\mathbf{F}}=\mathbf{I}\), since changing order of communities does not change the model. Meanwhile, due to Lemma 16: \[\tilde{\mathbf{L}}_{kk}=\mathbf{L}_{kk}+O_{\ell}(\sqrt{\rho\log n }).\] Thus, we have \[\max_{k}|\tilde{\mathbf{L}}_{kk}-\mathbf{L}_{kk}|\leqslant C_{ \mathbf{L}}\sqrt{\rho\log n}\] with probability \(1-n^{-\varepsilon}/3\). Hence, we obtain \[\|\mathbf{B}-\widehat{\mathbf{B}}\|_{\mathrm{F}}\leqslant\| \mathbf{F}-\widehat{\mathbf{F}}\|\|\mathbf{L}\|\|\mathbf{F}\|_{\mathrm{F}}+\| \widehat{\mathbf{F}}\|\|\mathbf{L}-\tilde{\mathbf{L}}\|\|\mathbf{F}\|_{ \mathrm{F}}+\|\widehat{\mathbf{F}}\|\|\tilde{\mathbf{L}}\|\|\mathbf{F}- \widehat{\mathbf{F}}\|_{\mathrm{F}}=O\left(\frac{\sqrt{\rho}}{n}\cdot\log^{ \eta}n\right),\] where we use \(\|\mathbf{F}\|_{\mathrm{F}}=O(n^{-1/2})\) and \(\|\mathbf{L}\|=O(n\rho)\) from Lemmas 24 and 25. Before we supposed that \(K\) is known. Now consider the case when it does not hold. Due to Lemma 10, we have \(\widehat{K}=K\) with probability \(1-n^{-\varepsilon}/3\) for large enough \(n\). It implies that the bound (17) also holds for the estimator based on \(\widehat{K}\) with probability \(1-n^{-\varepsilon}\). ### Lower bound In this section, we show that Theorem 2 is optimal under Conditions 1-5. Precisely, we state the following. 
**Theorem 3**.: _If \(n>n_{0}\) for some \(n_{0}\) that may depend on \(K\), then there exist a matrix \(\mathbf{\Theta}_{0}\) and a set of symmetric matrices \(\mathcal{B}\subset[0,1]^{K\times K}\) such that_ 1. _for each matrix_ \(\bar{\mathbf{B}}\in\mathcal{B}\)_, its singular values are at least_ \(1/8\)_,_ 2. _for each_ \(k\in[K-1]\) _and_ \(\bar{\mathbf{B}}\in\mathcal{B}\)_, we have_ \(\sigma_{k}(\mathbf{P})/\sigma_{k+1}(\mathbf{P})>1+c_{0}\)_, where_ \(c_{0}=0.2\) _and_ \(\mathbf{P}=\mathbf{\Theta}_{0}\bar{\mathbf{B}}\mathbf{\Theta}_{0}^{\mathrm{T}}\)_, and, additionally,_ \[\max_{j}\sum_{i=1}^{n}\mathbf{P}_{ij}(1-\mathbf{P}_{ij})\geqslant\frac{n\rho} {16},\] 3. _each set_ \(|\mathcal{P}_{k}|\)_,_ \(k\in[K]\)_, has cardinality at least_ \(\left(1-2^{-(K+5)}\right)2^{-K}n\)_,_ 4. _for each_ \(k\)_, we have_ \[\sum_{j\not\in\mathcal{P}_{k}}\mathrm{I}\left\{\|\mathbf{\Theta}_ {j}-\mathbf{e}_{k}\|_{2}\leqslant\delta\sqrt{\frac{\log n}{n\rho}}\right\}=0,\] _provided_ \(n\rho>\delta^{2}\left(1-K^{-2}\right)\log n\)_,_ _and_ \[\inf_{\bar{\mathbf{B}}}\sup_{\bar{\mathbf{B}}\in\mathcal{B}}\mathbb{P}\left(\min_{ \mathbf{\Pi}\in\mathbb{S}_{K}}\|\rho\bar{\mathbf{B}}-\mathbf{\Pi}\bar{\mathbf{B }}\mathbf{\Pi}^{\mathrm{T}}\|_{\mathrm{F}}\geq\frac{K^{2}\sqrt{\rho}}{1152\cdot n }\right)\geq\frac{1}{4e^{3.2}}\] _for any \(\rho>n^{-1/3}\)._ The proof is similar to that of (2). The main difference is that we consider the minimum over the set of permutation matrices. One can see that Condition 1 is satisfied by property (i), Condition 2 is satisfied since we guarantee the conclusion of Theorem 3 for any \(\rho>n^{-1/3}\), Condition 3 is satisfied by property (ii), Condition 4 is satisfied by property (iii), and Condition 5 is satisfied by property (iv). Thus, the estimator defined by Algorithm 4 is indeed optimal up to the dependence on \(K\). ## 4 Numerical experiments ### How to choose an appropriate threshold? In the considered experiments, we fix \(K\) equal to \(3\) and assume that \(\mathbf{B}\) is well-conditioned. Empirically we show that well-conditioning is vital to achieving a high probability of choosing pure nodes with SPA (see Figure 2). The crucial question in practice for the SPOC++ algorithm is the choice of the threshold. Theoretically, we have established that \(t=C\log n\) gives the right threshold to achieve good estimation quality. In practice, there is a simple way to choose the appropriate threshold for nodes \(i_{1},\ldots,i_{K}\) chosen by SPA. For each \(i_{k}\), it is necessary to plot distribution of \(\widehat{T}_{i_{k}j}\) over \(j\). Thus, if the averaging procedure improves the results of SPOC, then there is a corresponding plateau on the plot (see Figure 3). Besides, our experiments show that for small \(K\), \(t_{n}=2\log n\) is good enough if nodes are generated to satisfy Conditions 4 and 5. This choice corresponds well to the theory developed in this paper. ### Illustration of theoretical results We run two experiments to illustrate our theoretical studies. First, we check the dependence of the estimation error on the number of vertices \(n\). Second, we study how the sparsity parameter \(\rho\) influences the error. For the first experiment, we provide the following experimental setup. 
The number of clusters is chosen equal to \(3\), and for each \(n\in\{500,1000,1500,\ldots,5000\}\) we generate a matrix \(\boldsymbol{\Theta}\), where the fractions of pure nodes are \(\frac{|\mathcal{P}_{k}|}{n}=0.09\) and the other (non-pure) node community memberships are distributed in the simplex according to \(Dirichlet(1,1,1)\). Then we calculate the matrix \(\mathbf{P}\) with \(\rho=1\). Besides, for each \(n\) (and, consequently, matrix \(\mathbf{P}\)) we generate the graph \(\mathbf{A}\) \(40\) times and compute the error \(\min_{\mathbf{\Pi}}\|\widehat{\mathbf{B}}-\mathbf{\Pi}\mathbf{B}\mathbf{\Pi}^{\mathrm{T}}\|_{\mathrm{F}}\), where the minimum is taken over all permutation matrices. Hence, for each \(n\), we obtain \(40\) different errors, and, finally, we compute their mean and their quantiles for confidence intervals. The threshold is equal to \(2\log n\). We plot the error curves in logarithmic coordinates to estimate the convergence rate. The results are presented in Figure 4, left. It is easy to see that the observed error rate is a bit faster than the predicted one. The slope of the mean error is \(-1.21\pm 0.03\). However, it does not contradict the theory since the provided lower bound holds for some matrix \(\mathbf{B}\) that may not occur in the experiment.

Figure 2: Varying \(t_{n}\), we draw curves \(\widehat{\mathbf{F}}_{k}(t_{n}),k\in[K]\), projected on the first two coordinates, where \(\widehat{\mathbf{F}}_{k}\) is defined in Algorithm 3. The intensity of a color corresponds to the value of \(t_{n}\). A curve is red if SPA chooses a pure node; otherwise, the curve is blue. We consider four different matrices \(\mathbf{B}\), each with a different condition number. For each matrix \(\mathbf{B}\), we construct one matrix \(\mathbf{P}\), and for this matrix \(\mathbf{P}\), we generate 100 matrices \(\mathbf{A}\). We choose \(n=1000\) and \(|\mathcal{P}_{k}|/n=0.07\), \(k\in[K]\). Non-pure membership vectors \(\boldsymbol{\theta}_{i}\) were sampled from \(Dirichlet(1,1,1)\).

Figure 3: The distribution of \(\widehat{T}_{i_{1}j}\) over \(j\), where \(i_{1}\) is the first choice of SPA. Here \(P=\frac{|\mathcal{P}_{k}|}{n}\), which is equal for every \(k\) in our particular case. It is marked on the plot by a vertical line. Different blue curves correspond to different \(n\).

Figure 4: Convergence rate of SPOC++. See the description of the setup in Section 4.2. On the left subfigure, we draw a red line with slope equal to \(-1\) to illustrate that the predicted rate of convergence is at most as fast as the observed one. On the right subfigure, we draw a red line with slope equal to \(-1/2\) to illustrate the same. In both cases, we choose the intercept to minimize the mean squared distance to the observed errors.

We fix \(n=5000\) for the second experiment and generate some matrix \(\mathbf{P}\) as before. Then, we generate 40 symmetric matrices \(\mathbf{E}^{(1)},\ldots,\mathbf{E}^{(40)}\in[0,1]^{n\times n}\). Entries of each matrix \(\mathbf{E}^{(p)}\) are uniformly distributed random variables with support \([0,1]\). Given the sparsity parameter \(\rho\) and a matrix \(\mathbf{E}^{(p)}\), we generate a matrix \(\mathbf{A}\) as follows: \[\mathbf{A}_{ij}=\mathrm{I}\big{\{}\mathbf{E}^{(p)}_{ij}<\rho\cdot\mathbf{P}_{ij}\big{\}}.\] We apply our algorithm to \(\mathbf{A}\) and compute the error of \(\widehat{\mathbf{B}}\). We study our algorithm for \(20\) different values of \(\rho\). The results are presented in Figure 4, right.
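For concreteness, the sampling-and-evaluation loop shared by both experiments can be sketched as follows. This is only a sketch under explicit assumptions: the estimator is passed as a callable (Algorithm 4 in our experiments, but any baseline fits), `B_true` stands for the ground-truth community matrix being recovered, self-loops are excluded by convention, the alignment step enumerates all \(K!\) permutations (feasible only for small \(K\)), and all function names are ours.

```python
import itertools
import numpy as np

def sample_adjacency(P, rho, rng):
    # A_ij = 1{ E_ij < rho * P_ij } with a symmetric uniform matrix E
    n = P.shape[0]
    E = rng.uniform(size=(n, n))
    E = np.triu(E, 1) + np.triu(E, 1).T        # symmetric, zero diagonal
    A = (E < rho * P).astype(float)
    np.fill_diagonal(A, 0.0)                   # no self-loops in this sketch
    return A

def aligned_error(B_hat, B_true):
    # min over K x K permutation matrices Pi of ||B_hat - Pi B_true Pi^T||_F
    K = B_true.shape[0]
    best = np.inf
    for p in itertools.permutations(range(K)):
        Pi = np.eye(K)[list(p)]
        best = min(best, np.linalg.norm(B_hat - Pi @ B_true @ Pi.T, "fro"))
    return best

def run_trials(P, B_true, rho, estimator, n_trials=40, seed=0):
    rng = np.random.default_rng(seed)
    errs = [aligned_error(estimator(sample_adjacency(P, rho, rng)), B_true)
            for _ in range(n_trials)]
    return np.mean(errs), np.quantile(errs, [0.05, 0.95])
```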
Visually, the observed rate of convergence is a bit faster than the predicted. We calculate the slope of the mean error which turns out to be \(0.56\pm 0.03\). ### Comparison with other algorithms We compare the performance of our algorithm with Algorithm 2 and GeoMNF (Mao et al., 2017), as other distribution-free algorithms are similar to Algorithm 2. We set the number of communities to \(3\). As in Section 4.2, we generate a well-conditioned matrix \(\bar{\mathbf{B}}\), then, for each \(n\in\{500,1000,\ldots,8500\}\), we choose \(\rho=1\) and generate a matrix \(\mathbf{P}\). As previously, for each community, the number of pure nodes was equal to \(0.09\cdot n\), and membership vectors of non-pure nodes were sampled from the \(Dirichlet(1,1,1)\) distribution. Given a matrix of connection probabilities \(\mathbf{P}\), we generate \(40\) different matrices \(\mathbf{A}\), and for each of them, we compute the error of reconstruction of \(\mathbf{B}\) and \(\boldsymbol{\Theta}\), defined as follows: \[\mathcal{L}_{\mathbf{B}}(\mathbf{B},\widehat{\mathbf{B}})=\min_{\boldsymbol{ \Pi}\in\mathbb{S}_{K}}\|\widehat{\mathbf{B}}-\boldsymbol{\Pi}^{\mathrm{T}} \mathbf{B}\boldsymbol{\Pi}\|_{\mathrm{F}},\quad\mathcal{L}_{\boldsymbol{ \Theta}}(\boldsymbol{\Theta},\widehat{\boldsymbol{\Theta}})=\min_{\boldsymbol{ \Pi}\in\mathbb{S}_{K}}\frac{\|\widehat{\boldsymbol{\Theta}}-\boldsymbol{ \Theta}\boldsymbol{\Pi}\|_{\mathrm{F}}}{\|\boldsymbol{\Theta}\|_{\mathrm{F}}}.\] The results are presented in Figure 5. As one can see, our algorithm significantly outperforms Algorithm 2 and GeoMNF. Figure 5: Error of reconstruction of \(\mathbf{B}\) and \(\boldsymbol{\Theta}\) for different algorithms. See setup in Section 4.3. ## 5 Discussion The most important assumption we made is Condition 4 since it allows us to reduce the error rate significantly. The lower bound was also obtained under this condition. We conjecture that if Condition 4 does not hold, then the lower bound from Theorem 3 is no longer optimal. We think it can be obtained by constructing families \(\mathcal{B}\) of community matrices and \(\mathcal{T}\) of membership matrices such that \(|\mathcal{P}_{k}|=1\), \(k\in[K]\), for each \(\boldsymbol{\Theta}\in\mathcal{T}\). However, this analysis is out of the scope of our paper. Let us note that Condition 4 covers the important intermediate case between Stochastic Block Model and Mixed-Membership Stochastic Block Model with almost no pure nodes. Thus, it seems pretty natural and can be satisfied in practice.
2306.14619
Verification of Neural Network Control Systems using Symbolic Zonotopes and Polynotopes
Verification and safety assessment of neural network controlled systems (NNCSs) is an emerging challenge. To provide guarantees, verification tools must efficiently capture the interplay between the neural network and the physical system within the control loop. In this paper, a compositional approach focused on inclusion preserving long term symbolic dependency modeling is proposed for the analysis of NNCSs. First of all, the matrix structure of symbolic zonotopes is exploited to efficiently abstract the input/output mapping of the loop elements through (inclusion preserving) affine symbolic expressions, thus maintaining linear dependencies between interacting blocks. Then, two further extensions are studied. Firstly, symbolic polynotopes are used to abstract the loop elements behaviour by means of polynomial symbolic expressions and dependencies. Secondly, an original input partitioning algorithm takes advantage of symbol preservation to assess the sensitivity of the computed approximation to some input directions. The approach is evaluated via different numerical examples and benchmarks. A good trade-off between low conservatism and computational efficiency is obtained.
Carlos Trapiello, Christophe Combastel, Ali Zolghadri
2023-06-26T11:52:14Z
http://arxiv.org/abs/2306.14619v1
# Verification of Neural Network Control Systems using Symbolic Zonotopes and Polynotopes ###### Abstract Verification and safety assessment of neural network controlled systems (NNCSs) is an emerging challenge. To provide guarantees, verification tools must efficiently capture the interplay between the neural network and the physical system within the control loop. In this paper, a compositional approach focused on inclusion preserving long term symbolic dependency modeling is proposed for the analysis of NNCSs. First of all, the matrix structure of symbolic zonotopes is exploited to efficiently abstract the input/output mapping of the loop elements through (inclusion preserving) affine symbolic expressions, thus maintaining linear dependencies between interacting blocks. Then, two further extensions are studied. Firstly, symbolic polynotopes are used to abstract the loop elements behaviour by means of polynomial symbolic expressions and dependencies. Secondly, an original input partitioning algorithm takes advantage of symbol preservation to assess the sensitivity of the computed approximation to some input directions. The approach is evaluated via different numerical examples and benchmarks. A good trade-off between low conservatism and computational efficiency is obtained. Reachability, neural networks, verification, symbolic zonotopes, polynotopes, nonlinear dynamics. ## I Introduction The proliferation of data and access to ever-increasing computational power have fueled a renewed interest in deep neural-networks (NNs). These networks have shown a significant ability to address classification/estimation/control tasks that can hardly be formalized and designed from knowledge-based models. However, despite their impressive ability for solving complex problems, it is well known that NNs can be vulnerable to small perturbations or adversarial attacks [1, 2]. This lack of robustness (or fragility) represents a major barrier for their application to safety-critical system where safety assurances are of primary importance. For example, in Guidance, Navigation and Control of flight systems, one must ensure that some output/state trajectories remain inside a flight envelope when some inputs explore a given region. The above issues have fostered a large amount of works that analyze the sensitivity to local disturbances of NNs in isolation (open-loop), as well as the satisfaction of pre-/post- safety conditions [3]. Nevertheless, as reported in [4], reasoning about the safety verification of neural-network control systems (NNCSs), where the NN is used as a feedback controller, still remains a key challenge that requires tractable methods capable of efficiently integrating the heterogeneous components that make up the control loop. This paper focuses on the reachability analysis of NNCSs which, in turn, allows for formal reasoning about the satisfaction of safety properties (reachability of a target set, or the avoidance of non-secure sets of states). A key challenge in NNCSs reachability analysis is to successfully retain the system-controller interplay by preserving (at each time instant) the dependencies between relevant variables. 
This, in fact, discourages a direct application of off-the-shelf verification tools which, although able to compute accurate output bounds for elements in isolation, return coarse approximations when iteratively concatenated for the analysis of closed-loop systems since most of (if not all) the I/O dependencies are quickly broken/lost during the computations [5, 6, 7]. Furthermore, effective NNCSs verification tools must be able to assess the system state during (relatively) large time intervals. The above issues motivate the development of computationally efficient analysis methods capable of capturing the interaction between the control loop elements while granting a good scalability both in the system dimensions and in the time horizon length. Another relevant factor that should be taken into account is the size of the initial state set under study. Mainly, the performance of open- and closed-loop verification techniques that are based on (locally) abstracting the system non-linearities, deteriorates considerably for large initial sets. A common approach to address this issue, particularly in NNCSs verification problems where the number of dimensions is relatively small, is to recast the initial reachability problem into simpler subproblems that analyze a subset of the initial conditions [5, 7, 8, 9, 10, 11, 12, 13]. Nonetheless, the design of efficient and scalable partitioning strategies, specially in closed-loop verification schemes, remains also an open problem. Related workPreserving dependencies for NNCS verification has spurred on some recent studies. In [14], the authors abstract the I/O mapping of a ReLU NN controller using a polynomial expression (plus an error interval). The polynomial rule is obtained by regression of I/O samples, whereas a sound error term is derived from solving a mixed-integer program (MIP). In a similar fashion, [15] uses Bernstein polynomials to abstract the NN controller. A theoretical and a sampled-based method is proposed to compute the error term based on the Lipschitz constant of the NN. Although both approaches preserve the system-controller interplay, they are computationally expensive, scaling poorly with the number of NN inputs while requiring to be iteratively repeated for each output. In [9] a NN with differentiable activation functions is transformed into an equivalent hybrid system built upon Taylor models that retain dependencies. However, this approach is not applicable to ReLU functions and the number of states (resp. modes) of the hybrid automaton scales with the number of neurons (resp. layers). Other approaches preserve system-controller dependencies by formulating the reachable set computation as an optimization problem. The work [16] proposes a semidefinite program for reachability analysis based on the abstraction of the NN non-linearities using quadratic constraints [17]. [11] relies on the tool CROWN [18], and preserves system-controller interaction by solving LP-programs. However, dependencies are broken from one sample to the next. In [10], the closed-loop is firstly abstracted as a conjunction of piecewise-linear functions, and then analyzed using ReLU NNs verification tools like [19], [20]. On the other hand, other works address the reachability problem by chaining different verification tools. In [5], the authors combine a polytopic abstraction of the dynamical system with the tool Sherlock [21], that is used to bound the NN controller outputs. 
The tool NNV [6], integrates the non-linear dynamics reachability tool CORA [22] with a star sets abstraction of ReLU NN controllers [23]. Besides, [7] combines validated simulation to soundly approximate the dynamical system with common tools for NN output bounding like DeepPoly [24]. In all the above works, dependencies are broken in the switch between the different tools. This latter issue is somehow palliated in [8], where the authors use second order zonotopes (i.e. zonotopes with generators matrix size \(n\times 2n\)) as an interface between system and NN controller analysis tools. Although capable to retain first order dependencies in the system-to-controller (and controller-to-system) set transformations, dependencies in the I/O mapping of the NN controller are broken. Focusing on partitioning strategies, in [25] the gradient of a ReLU NN (open-loop) is used to decide the next input direction to be bisected, whereas in [26] a uniform grid of the initial set is employed. Other works propose a simulation-based splitting strategy. In [12], the bisection is guided by comparing the interval bound of Monte-Carlo samples with a guaranteed Interval Bound Propagation [27] of the initial subsets. Working in a similar fashion, [13] proposes a simulation-guided framework that unifies standard NN output bounding tools. The decision on the bisection order is based on the distance to the simulation samples enclosure. A closed-loop implementation of the latter algorithm is reported in [11]. ContributionsThis paper takes a new and original direction based on symbolic zonotopes (s-zonotopes) as a generic tool for the closed-loop verification of discrete-time NN controlled systems. The generators (matrix) representation of s-zonotopes enables to efficiently abstract the input-output mappings of the NN controller and non-linear physical system through (inclusion preserving) affine symbolic expressions. The evolution of the closed-loop system can then be bounded in a propagation-based fashion that benefits from the efficient computation of basic operations granted by s-zonotopes, while preserving system-controller linear dependencies. Besides, the computational complexity of the verification tool can be fixed by limiting (reducing) the number of independent symbols. Simulations show the good performance/computational efficiency trade-off granted by this approach. Furthermore, two extensions are proposed. On the one hand, the use of polynomial symbolic expressions to abstract the input-output mapping of the loop elements is explored. In particular, symbolic polynotope (s-polynotope) structures [28] are used to enclose the NN activation functions graph via the non-convex sets that arise from the polynomial map of interval symbols. Polynomial abstractions enable to reduce the conservatism induced by linear relaxations, at the price of increasing the computation needs. On the other hand, the symbols preservation throughout the control loop is exploited to develop a smart partitioning strategy of the initial conditions set. The proposed algorithm reasons upon the influence of the input symbols in the output set in order to select which dimension to bisect next, and upon the influence of the (independent) error symbols to assess the quality of each over-approximation. StructureThe paper is organized as follows. Section II is devoted to some useful preliminaries. Section III introduces the problem statement. Then, Section IV analyzes the closed-loop verification using s-zonotopes. 
In Section V the use of s-polynotopes is investigated, whereas the input partitioning algorithm is detailed in Section VI. Section VII presents simulation results. Finally, some concluding remarks are provided in Section VIII. NotationThe following notations are used along this work. \(\mathbb{R}^{n}\), \(\mathbb{R}^{m\times n}\) and \(\mathbb{N}\) denote the \(n\) dimension Euclidean space, the \(m\times n\) dimensional Euclidean space and the set of non-negative integers, respectively. The notation \(v_{i}\) stands for the \(i\)-th element of vector \(v\) and \(M_{[i,\cdot]}\) (\(M_{[\cdot,j]}\)) for the \(i\)-th row (\(j\)-th column) of matrix \(M\). The 1-norm of the (row) vector \(v\) is \(\|v\|_{1}=|v|\textbf{1}\), with \(|.|\) the elementwise absolute value, and \(\mathbf{1}\) a column vector of ones of appropriate size. \(diag(v)\) returns a square diagonal matrix with the elements of vector \(v\) in the main diagonal, whereas \(card(\cdot)\) gives the cardinal. ## II Symbolic dependencies in set computations This section provides preliminary concepts which will be used in the following sections. Throughout this article, \(s\) refers to an indexed family of distinct symbolic variables of type unit interval, that is, \(\forall i\in\mathbb{N}\), the symbol \(s_{i}\) (uniquely identified by the integer \(i\)) refers to a scalar real variable the value of which is only known to belong to the unit interval \(\mathcal{D}(s_{i})=[-1,+1]\subset\mathbb{R}\). Also, \(\mathcal{D}(s)=[-1,+1]^{card(s)}\). In other words, the a priori unknown value \(\iota s_{i}\) taken by the symbolic variable \(s_{i}\) satisfies \(\iota s_{i}\in\mathcal{D}(s_{i})\). The generic notation \(\iota\) which reads as "interpretation/valuation of" helps disambiguate between symbols (syntax) and values (semantics) [28]. Note that, in general, several interpretations may coexist. Set-valued interpretations take sets as values. In the following, consistently with the definition domain \(\mathcal{D}(s_{i})\) related to \(s_{i}\), the set-valued interpretation of each symbolic variable \(s_{i}\) will be \(s_{i|_{\iota}}=[-1,+1]\). In addition, the integer-valued vector \(I\) is used to uniquely identify a set of symbols, for example, vector \(I=[1,\,5,\,3]\) identifies the symbols \(s_{1}\), \(s_{5}\) and \(s_{3}\). For brevity of notation, \(s_{I}\) denotes the column vector \([s_{i}]_{i\in I}\). **Definition 1** (s-zonotope [28]).: _A symbolic zonotope \(\mathcal{X}_{|s}\) is an affine symbolic function that can be written in the form \(c+Rs_{I}\) where vector \(c\) and matrix \(R\) do not depend on the symbolic variables in \(s_{I}\). Notation: \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}=c+Rs_{I}\)._ **Definition 2** (e-zonotope [28]).: _The e-zonotope \(\mathcal{X}_{|s}\) related to the s-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}=c+Rs_{I}\) is the set-valued interpretation of \(\mathcal{X}_{|s}\) as \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}=\{c+R\sigma|\sigma\in\mathcal{D}(s_ {I})\}\)._ A basic example is : given \(i\in\mathbb{N}\), \(\langle 0,1,i\rangle_{|s}=s_{i}\) (symbolic expression corresponding to the \(i\)-th symbol in \(s\)) and \(\langle 0,1,i\rangle_{|s}=\mathcal{D}(s_{i})=[-1,+1]\) (set-valued interpretation of \(s_{i}\)). More generally, s-zonotopes and their interpretation as e-zonotopes make it possible to explicitly perform operations either at symbolic/syntactic level (\(._{|s}\)) or at semantic level (\(._{|s}\)). 
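From a programming perspective, Definitions 1 and 2 suggest a very small data structure: a center, a generator matrix, and the identifiers of the symbols attached to its columns. The sketch below is only illustrative (the MLC machinery of [30] and all operator overloading are omitted; class and method names are ours).

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SZonotope:
    c: np.ndarray   # center, shape (n,)
    R: np.ndarray   # generator matrix, shape (n, p)
    I: list         # identifiers of the p unit-interval symbols (columns of R)

    def evaluate(self, sigma):
        # point reached for a particular valuation sigma in D(s_I) = [-1, 1]^p
        return self.c + self.R @ np.asarray(sigma)

    def interval_hull(self):
        # componentwise bounds of the e-zonotope (set-valued interpretation)
        radius = np.abs(self.R).sum(axis=1)
        return self.c - radius, self.c + radius
```

Carrying the identifier vector \(I\) alongside \(R\) is what later allows symbols shared by several s-zonotopes to be aligned column by column instead of being treated as independent.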
**Remark 1**.: _In this work, all symbols being of type unit interval, \(c\) being a real vector and \(R\) a real matrix, \(\mathcal{X}_{|s}\) is a classical zonotope \(\langle c,R\rangle\) with center \(c\) and generator matrix \(R\) (for extensions to other symbol types, see [28]). Note that \(\langle 0,1,i\rangle_{|s}-\langle 0,1,j\rangle_{|s}=s_{i}-s_{j}=0\) for \(i=j\), whereas \(\langle 0,1,i\rangle_{|t}-\langle 0,1,j\rangle_{|t}=[-2,+2]\) for all \((i,j)\), that is, even for \(i=j\). Operating at the symbolic/syntactic level thus permits more accurate set evaluations by preserving trace of symbolic dependencies. This is a key point to prevent from pessimistic outer approximations induced by the so-called dependency problem [29] affecting natural interval arithmetic and other classical set-based operations only considering the semantic level._ From a computational point of view, an s-zonotope is defined by storing the triplet \((c,R,I)\). Due to their affine structure, a key aspect is to efficiently trace the identifier \(i\in I\) of the symbol that multiplies each column of the matrix \(R\). To that end, Matrices with Labeled Columns (MLCs), constitute a data structure featuring columnwise sparsity: It is defined by the pair \((R,I)\) that allows for efficiently recasting standard operations involving s-zonotopes as set-operations on the identifiers vector (\(I\)) and column-wise operations in the projection matrices (\(R\)). For how to translate operations such as sum or linear image onto a computational platform using MLCs the interested reader can refer to [30]. Due to their relevance in further developments, the following operations involving s-zonotopes are briefly recalled. **Lemma 1** (common symbols [30]).: _Any two s-zonotopes \(\mathcal{X}_{|s}=\langle c_{x},R,I\rangle_{|s}\) and \(\mathcal{Y}_{|s}=\langle c_{y},G,J\rangle_{|s}\), can be rewritten using a common set of symbols \(s_{K}\) as \(\mathcal{X}_{|s}=\langle c_{x},\tilde{R},K\rangle_{|s}\) and \(\mathcal{Y}_{|s}=\langle c_{y},\tilde{G},K\rangle_{|s}\), with_ \[\begin{split}\tilde{R}&=\begin{bmatrix}R_{1},&R_{2},&0 \end{bmatrix},\quad\tilde{G}=\begin{bmatrix}G_{1},&0,&G_{2}\end{bmatrix},\\ K&=\begin{bmatrix}I\cap J;&I\setminus J;&J\setminus I\end{bmatrix}.\end{split} \tag{1}\] Matrices \((R_{1},G_{1})\) in Lemma 1 may be empty matrices if \(I\cap J\) is empty (similarly for \(R_{2}\) and \(I\setminus J\) or \(G_{2}\) and \(J\setminus I\)). **Definition 3** (basic operations [30]).: _Given two s-zonotopes \(\mathcal{X}_{|s}\) and \(\mathcal{Y}_{|s}\) with a common set of symbols \(s_{K}\) as in Lemma 1, then their sum and vertical concatenation are the s-zonotopes_ \[\mathcal{X}_{|s}+\mathcal{Y}_{|s} =\left\langle c_{x}+c_{y},[R_{1}+G_{1},\,R_{2},\,G_{2}],K\right\rangle _{|s}, \tag{2}\] \[\left[\mathcal{X}_{|s};\mathcal{Y}_{|s}\right] =\left\langle\begin{bmatrix}c_{x}\\ c_{y}\end{bmatrix},\begin{bmatrix}R_{1}&R_{2}&0\\ G_{1}&0&G_{2}\end{bmatrix},K\right\rangle_{|s}. \tag{3}\] **Definition 4** (inclusion [28]).: _The s-zonotope \(\mathcal{Y}_{|s}\) is said to include the s-zonotope \(\mathcal{X}_{|s}\), if the set-valued interpretation of \(\mathcal{Y}_{|s}\) includes the set-valued interpretation of \(\mathcal{X}_{|s}\). 
In other words, the expression \(\mathcal{X}_{|s}\subset\mathcal{Y}_{|s}\) interprets as \(\mathcal{X}_{|s}\subset\mathcal{Y}_{|s}\)._ Definition 4 serves the way for rewriting rules (at symbolic level) that may be either inclusion preserving or inclusion neutral or none of both at set-based evaluation (semantic) level. A more formal treatment of this topic can be found in the definition 27 (rewriting rules and inclusion) in [28], where a definition of inclusion functions is also given in definition 2. **Definition 5** (reduction [28]).: _The reduction operator \(\downarrow_{q}\) transforms an s-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}\) into a new s-zonotope \(\tilde{\mathcal{X}}_{|s}=\downarrow_{q}\mathcal{X}_{|s}=\langle c,G,J\rangle_{|s}\), such that \(\tilde{\mathcal{X}}_{|s}\) includes \(\mathcal{X}_{|s}\) while depending on at most \(q\) symbols, i.e. \(card(J)\leq q\)._ Reduction is thus an inclusion preserving transform. In Definition 5, \(I\cap J\neq\emptyset\) is not mandatory but often useful to prevent from a further propagation of conservative approximations, while controlling the complexity of \(\tilde{\mathcal{X}}_{|s}\) through the maximum number \(q\) of its symbols/generators. In this context, preserving the more significant symbols/dependencies is often beneficial: as in [30], if \(p>q\) a common practice is to replace the \(p-q+1\) less important symbols by a new independent one while guaranteeing the inclusion \(\mathcal{X}_{|s}\subseteq\tilde{\mathcal{X}}_{|s}\). Besides, note that new symbols introduced to characterize independent behaviors must be uniquely identified. Wherever needed, the generation of a vector of \(n\) new unique symbols identifiers is denoted as \(!(n)\). The generation of a pre-specified number of identifiers can be attained by implementing, for example, the Unique Symbols Provider (USP) service introduced in [30]. ## III Problem statement ### _System description_ Consider the interconnection of a discrete-time non-linear dynamic model (4) and a neural network. The physical system is modeled as: \[x(t+1)=f(x(t),u(t),w(t)), \tag{4}\] where \(x(t)\in\mathbb{R}^{n_{x}}\) and \(u(t)\in\mathbb{R}^{n_{u}}\) respectively refer to the state and the control input at time step \(t\in\mathbb{N}\). For all \(t\geq 0\), vector \(w(t)\) accounts for modeling errors and process disturbances and satisfies \(w(t)\in\mathcal{W}=[-1,\,+1]^{n_{u}}\). The system (4) is controlled by a state-feedback controller \(g(x(t)):\mathbb{R}^{n_{x}}\mapsto\mathbb{R}^{n_{u}}\) parameterized by an \(l\)-layer feed-forward fully connected neural network. The map \(x\mapsto g(x)\) is described by the following recursive equations \[\begin{split}& x^{(0)}=x,\\ & x^{(k+1)}=\phi^{(k)}(W^{(k)}x^{(k)}+b^{(k)}),\quad k=0,...,l-1, \\ & g(x)=W^{(l)}x^{(l)}+b^{(l)},\end{split} \tag{5}\] where \(x^{(k)}\in\mathbb{R}^{n_{k}}\) are the outputs (post-activation) of the \(k\)-th layer. The weight matrix \(W^{(k)}\in\mathbb{R}^{n_{k+1}\times n_{k}}\) and bias \(b^{(k)}\in\mathbb{R}^{n_{k+1}}\) define the affine mapping \(z^{(k)}=W^{(k)}x^{(k)}+b^{(k)}\) for the \((k+1)\)-th layer. Besides, the vector-valued function \(\phi^{(k)}:\mathbb{R}^{n_{k+1}}\rightarrow\mathbb{R}^{n_{k+1}}\) is applied element-wise to the pre-activation vector \(z^{(k)}\), that is, \(\phi^{(k)}(z^{(k)})=[\varphi(z_{1}^{(k)}),\cdots,\varphi(z_{n_{k+1}}^{(k)})]^{T}\), where \(\varphi:\mathbb{R}\rightarrow\mathbb{R}\) is the (scalar) activation function. 
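For reference, the controller map (5) amounts to the following forward pass (a minimal sketch; the activation \(\varphi\) is passed as a parameter and the names are ours).

```python
import numpy as np

def nn_controller(x, weights, biases, phi):
    # (5): x^(0) = x ; x^(k+1) = phi(W^(k) x^(k) + b^(k)) ; affine output layer
    xk = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):   # hidden layers k = 0, ..., l-1
        xk = phi(W @ xk + b)
    return weights[-1] @ xk + biases[-1]          # g(x) = W^(l) x^(l) + b^(l)
```

For instance, `phi = lambda z: np.maximum(0.0, z)` instantiates the ReLU case discussed next.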
Common activation choices are: ReLU \(\varphi(z)=max(0,z)\); sigmoid \(\varphi(z)=\frac{1}{1+e^{-z}}\); and tanh \(\varphi(z)=tanh(z)\). The closed-loop system with dynamics (4) and a previously trained neural-network control policy (5), is governed by \[x(t+1)=f_{g}(x(t),w(t))=f\big{(}x(t),g(x(t)),w(t)\big{)}. \tag{6}\] Accordingly, given an initial set \(\mathcal{X}_{0}\subset\mathbb{R}^{n_{x}}\), the forward reachable set of (6) at time step \(t\) is denoted as \(\mathcal{X}(t)\). For \(t\geq 1\), this set is defined as: \[\mathcal{X}(t)=\big{\{}x(t)\,| \exists(x(0),w(0:t-1))\in\mathcal{X}_{0}\times\mathcal{W}\times...\times\mathcal{W}, \tag{7}\] \[\forall\tau\in[0,t-1],x(\tau+1)=f_{g}(x(\tau),w(\tau))\big{\}}.\] ### _Finite-time reach-avoid (RA) verification problem_ Given a goal set \(\mathcal{G}\subset\mathbb{R}^{n_{x}}\) a sequence of avoid sets \(\mathcal{A}(t)\subset\mathbb{R}^{n_{x}}\) and a finite time horizon \(N\in\mathbb{N}^{+}\), it is desired to test whether \[\mathcal{X}(N)\subseteq\mathcal{G} \tag{8}\] \[\mathcal{X}(t)\cap\mathcal{A}(t)=\emptyset,\quad\forall t=0,...,N-1\] holds true for the closed loop system (6). In general, the exact evaluation of (8) for a NNCSS is a computationally intractable problem. Thus, the problem is resorted to iteratively compute a tractable over-approximation of the reachable set \(\mathcal{X}(t)\subseteq\mathcal{\bar{X}}(t)\), to test (8) using \(\mathcal{\bar{X}}(t)\) instead. Because of the over-approximation, the proposed verification setting only provides one-sided guarantees, that is, if \(\mathcal{\bar{X}}(t)\) satisfies (8) then it can be guaranteed that (7) will satisfy the RA property, but no sound conclusion about the safety of (7) can be made if the over-approximation \(\mathcal{\bar{X}}(t)\) violates (8). Therefore, the computation of tight over-approximations is of paramount importance, so that a maximum number of truly satisfied specifications can be computationally proven as such. ## IV Closed-loop verification using s-zonotopes This section presents the methodology for computing a sound over-approximation of the closed-loop system that preserves system-controller linear dependencies. The computation takes advantage of s-zonotopes described in the previous section. The abstraction of the control loop components using affine symbolic expressions is presented below. ### _Initial set_ It is assumed that the initial set can be described by the set-valued interpretation of an s-zonotope \(\mathcal{X}_{|s}(0)=\langle c_{0},R_{0},I_{0}\rangle_{|s}\), where \(c_{0}\in\mathbb{R}^{n_{x}}\) and \(R_{0}\in\mathbb{R}^{n_{x}\times n_{0}}\) and \(I_{0}=!(n_{0})\) is a set of \(n_{0}\) unique identifiers for the interval valued symbols \(s_{l_{0}}\). In other words, it is assumed that \(\mathcal{X}_{0}=\mathcal{X}_{|t}(0)\). Note that, any arbitrary zonotopic set \(\{c+R\xi\,\|\,\|\xi\|_{\infty}\leq 1\}\) can be abstracted as an s-zonotope by characterizing the independent behaviour of the generators through new interval type symbols. ### _NN controller affine abstraction_ For the sake of simplicity of notations, the temporal notation is dropped here. Given a state bounding s-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}\) and a NN controller (5), the idea is to abstract the NN behavior through an affine symbolic expression of the form \[\mathcal{U}_{|s}=\langle C_{u},[G,\,H],[I;\,J]\rangle_{|s}=c_{u}+Gs_{I}+Hs_{J}, \tag{9}\] such that, it guarantees the local enclosure of the network outputs, i.e. 
\(g(\mathcal{X}_{|t})\subseteq\mathcal{U}_{|t}\). Note that, expression (9) captures the linear dependencies of the state symbols (identified by \(I\)), plus the addition of new error symbols (identified by \(J\)) that are introduced to guarantee the soundness of the method. The computation of vector \(c_{u}\), matrices \(G,H\), and the identifiers vector \(J\) is discussed below. The focus is on generating a dependencies-preserving inclusion for an arbitrary layer of NN (5), since a sound enclosure for the whole network follows by induction due its sequential nature. For simplicity, the layer superscript is removed below and the superscript \({}^{+}\) is used to denote the next layer. **Affine mapping** Given the \(s\)-zonotope \(\mathcal{X}_{|s}=\langle c,R,I\rangle_{|s}\), the affine mapping \(\mathcal{Z}_{|s}=W\mathcal{X}_{|s}+b\) in the layers of (5) yields a (pre-activation) s-zonotope of the form \[\mathcal{Z}_{|s}=\langle\check{c},\check{R},I\rangle_{|s}, \tag{10}\] \[\check{c}=Wc+b,\quad\check{R}=WR.\] **Activation functions** Activation functions \(\varphi(\cdot)\) in (5) are applied element-wise to the pre-activation vector. Hence, the projection of \(\mathcal{Z}_{|s}\) onto the \(i\)-th neuron, yields the s-zonotope \[\mathcal{Z}_{|s}=\langle\check{c}_{i},\check{R}_{[i,:]},I\rangle_{|s}. \tag{11}\] Notice that, any point belonging to set-valued interpretation \(\mathcal{Z}_{i|t}\) of (11) is confined within an interval \([l_{i},\,u_{i}]\), where, since \(\mathcal{Z}_{i|t}\) is a one dimensional zonotopic set, it follows that \(\mathcal{Z}_{i|t}=[l_{i},\,u_{i}]\) with the lower and upper bounds \[l_{i}=\check{c}_{i}-\|\check{R}_{[i,:]}\|_{1},\quad u_{i}=\check{c}_{i}+\| \check{R}_{[i,:]}\|_{1}. \tag{12}\] Therefore, the soundness of the method can be certified by guaranteeing the inclusion (see Definition 4) of the graph of the activation function in the range \([l_{i},\,u_{i}]\). To that end, the activation function \(\varphi(\cdot)\) is abstracted through an affine symbolic function of the form \[\mathcal{X}_{i|s}^{+}=\alpha_{i}\mathcal{Z}_{i|s}+\beta_{i}+\gamma_{i}s_{j}, \tag{13}\] where \(s_{j}\) represents a new independent symbol (identified through \(j=!(1)\)) that must be introduced to guarantee the full coverage of the activation function graph on the considered range, that is, in order to satisfy the condition \[\begin{bmatrix}\mathcal{Z}_{i|t}\\ \varphi(\mathcal{Z}_{i|t})\end{bmatrix}\subseteq\begin{bmatrix}\mathcal{Z}_{i|t} \\ \alpha_{i}\mathcal{Z}_{i|t}+\beta_{i}+\gamma_{i}\mathcal{D}(s_{j})\end{bmatrix}. \tag{14}\] The \(i\)-th neuron post-activation s-zonotope \(\mathcal{X}_{i|s}^{+}\) in (13) not only guarantees that its set-valued interpretation encloses the neuron output, but it preserves the linear influence of the symbols \(s_{I}\) in the output set. This later point plays a fundamental role since it allows to retain the interplay between the inputs of the neurons at the same layer. Coherently, the layer post-activation s-zonotope can be computed by vertically concatenating in a recursive fashion the different \(\mathcal{X}_{i|s}^{+}\) after rewriting them using the same set of symbols \[\mathcal{X}_{|s}^{+}=[\,...\,[\,[\mathcal{X}_{1|s}^{+};\mathcal{X}_{2|s}^{+}]; \mathcal{X}_{3|s}^{+}]\,...\,;\mathcal{X}_{n_{k}|s}^{+}]. 
\tag{15}\] **Proposition 1** (NN s-zonotope).: _Given the s-zonotope \(\mathcal{X}_{|s}^{(0)}=\langle c^{(0)},R^{(0)},I^{(0)}\rangle_{|s}\), and let \(\alpha^{(k)},\beta^{(k)},\gamma^{(k)}\in\mathbb{R}^{n_{k+1}}\) be some parameter vectors that guarantee the inclusion of the \(n_{k+1}\) activation functions in the \(k\)-th layer, then the enclosure of the NN output set \(g(\mathcal{X}_{|\iota}^{(0)})\subseteq\mathcal{U}_{|\iota}=\langle c_{u},[G,\,H],[I;\,J]\rangle_{|\iota}\) is guaranteed for the s-zonotope in (9) with parameters_ \[c^{(k+1)}=diag(\alpha^{(k)})(W^{(k)}c^{(k)}+b^{(k)})+\beta^{(k)}, \tag{16a}\] \[\tilde{H}^{(k+1)}=\big{[}diag(\alpha^{(k)})W^{(k)}\tilde{H}^{(k)},\quad diag(\gamma^{(k)})\big{]}\,,\] (16b) \[\tilde{G}^{(k+1)}=diag(\alpha^{(k)})W^{(k)}\tilde{G}^{(k)},\quad k=1,...,l-1,\] (16c) \[c_{u}=W^{(l)}c^{(l)}+b^{(l)},\] (16d) \[H=W^{(l)}\tilde{H}^{(l)},\] (16e) \[G=W^{(l)}\tilde{G}^{(l)},\] (16f) \[J=[!(n_{1});\,...;!(n_{l})], \tag{16g}\] _where \(\tilde{G}^{(1)}=diag(\alpha^{(0)})W^{(0)}R^{(0)}\) and \(\tilde{H}^{(1)}=diag(\gamma^{(0)})\)._ Proof.: Expressions (16a)-(16f) result from the recursive application of Lemma 1 and the vertical concatenation of the post-activation s-zonotopes (13) for the \(n_{k+1}\) neurons of the \(k\)-th layer. Besides, (16g) reflects the symbols identifier update for the noise terms introduced at the neurons of each layer. Regarding the output inclusion, starting with an initial set \(\mathcal{X}_{|\iota}^{(0)}\), by induction, given the pre-activation s-zonotope \(\mathcal{X}_{|s}^{(k)}\), the operations at the \(k\)-th layer are: affine mapping; linear abstraction (inclusion preserving for an appropriate triplet \((\alpha_{i}^{(k)},\beta_{i}^{(k)},\gamma_{i}^{(k)})\)); and vertical concatenation. Thus, the composition of inclusion functions being an inclusion function, the proof follows. For each neuron, the triplet of parameters \((\alpha,\beta,\gamma)\) must be appropriately designed to satisfy (14), while minimizing the conservatism induced by using an affine relaxation. In this regard, a relevant heuristic consists in minimizing the magnitude of the error symbol introduced to guarantee the activation function graph enclosure, i.e. to minimize \(|\gamma|\). Due to the independent behaviour of the error symbol, this can be reformulated as minimizing the area of the enclosing parallelogram [31]. **Lemma 2**.: _Given the bounds \([l,\,u]\) in (12) with \(l<u\), the triplet of parameters \((\alpha^{*},\beta^{*},\gamma^{*})\) that minimizes \(|\gamma|\) while guaranteeing the satisfaction of (14) is:_ * _ReLU function_ \(\varphi(x)=max(0,x)\)__ \[\alpha^{*}=\frac{\varphi(u)-\varphi(l)}{u-l},\quad\beta^{*}=\gamma^{*}=\frac{\varphi(l)-\alpha^{*}\cdot l}{2}.\] (17) * _S-shaped functions_ * _Sigmoid_ \(\varphi(x)=\frac{1}{1+e^{-x}}\) _with_ \(\varphi^{\prime}(x)=\varphi(x)(1-\varphi(x))\)__ * _tanh_ \(\varphi(x)=tanh(x)\) _with_ \(\varphi^{\prime}(x)=1-\varphi(x)^{2}\)__ \[\alpha^{*}=\min(\varphi^{\prime}(l),\varphi^{\prime}(u)),\] \[\beta^{*}=\frac{\varphi(u)+\varphi(l)-\alpha^{*}\cdot(u+l)}{2},\] (18) \[\gamma^{*}=\frac{\varphi(u)-\varphi(l)-\alpha^{*}\cdot(u-l)}{2}.\] **Remark 2**.: _The proposed NN abstraction method shares a similar structure with the zonotope abstraction based on affine arithmetic presented in [31] for the (open-loop) NN output bounding.
However, here, the explicitly computed affine symbolic expression (9) will further play a key role in closed-loop verification, and an efficient computation of the projection matrices exploiting the generators (matrix) structure of s-zonotopes, is also used._ ### _Dynamical system affine abstraction_ Similar to the NN controller dynamics (5), the function (4) that describes the state evolution at time \((t+1)\) can be abstracted by means of an (inclusion preserving) affine mapping. The resulting s-zonotope will depend on the symbols that define the state at time \(t\), plus some extra symbols that account for: I) NN controller non-linearities; II) abstract system non-linearities; III) the uncertainty sources. For the computation of a state bounding s-zonotope, it is assumed that the function \(f(\cdot)\) in (4) results from the composition of elementary functions and operators for which an affine symbolic expression (s-zonotope) can be computed. Note that this is not much restrictive since (linear) operations such as linear image, sum or vertical concatenation are closed (i.e. they return s-zonotopes) under affine mappings. Besides, any univariate locally continuous differentiable function can be abstracted through an affine mapping. **Lemma 3**.: _Let \(h:[l,\,u]\rightarrow\mathbb{R}\) be a class \(\mathcal{C}^{1}\) function on a given interval \([\,l,\,u]\subset\mathbb{R}\). Then, the function \(\tilde{h}(x,\epsilon)=\alpha x+\beta+\gamma\epsilon\) satisfies that \(\forall x\in[\,l,\,u],\,\,\exists\epsilon\in[-1,\,+1],h(x)=\tilde{h}(x,\epsilon)\) for the triplet of parameters:_ \[\alpha=\frac{h(u)-h(l)}{u-l},\quad\beta=\frac{h(\underline{x})+h( \bar{x})-\alpha(\underline{x}+\bar{x})}{2},\] \[\gamma=\frac{h(\bar{x})-h(\underline{x})+\alpha(\underline{x}- \bar{x})}{2},\] _where, defining \(\xi(x)=h(x)-\alpha x\), then_ \[\bar{x}=\operatorname*{arg\,max}_{x\in\{\delta_{1},...,\delta_{n},u\}}\xi(x), \quad\,\underline{x}=\operatorname*{arg\,min}_{x\in\{\delta_{1},...,\delta_{n}, u\}}\xi(x),\] _with \(\delta_{1},...,\delta_{n}\) the stationary-points of \(\xi(\cdot)\) in \([l,\,u]\)._ Proof.: See Appendix A. Lemma 3 provides a method to propagate (inclusion preserving) s-zonotopes through univariate non-linearities. Besides, the approach in Lemma 3 returns an optimal, in the sense of minimizing the magnitude \(|\gamma|\) of the error symbol, set of parameters for convex/concave differentiable functions [28]. On the other hand, the interaction between multiple variables can be handled through the sum operation (2) or by over-approximating the product of two s-zonotopes. **Lemma 4**.: _Given two 1-D s-zonotopes \(\mathcal{X}_{|s}=\langle c_{x},r^{T},K\rangle_{|s}\) and \(\mathcal{Y}\rangle_{|s}=\langle c_{y},g^{T},K\rangle_{|s}\) with a common set of symbols \(s_{K}\) (with \(n=card(s_{K})\)), then the product \(\mathcal{X}_{|s}\times\mathcal{Y}_{|s}\) is included by the s-zonotope \(\mathcal{L}_{|s}=\langle c_{l},[l^{T},\,m],[K;\,j]\rangle_{|s}\) with_ \[\begin{split}& c_{l}=c_{x}c_{y}+\frac{1}{2}\sum_{i=1}^{n}r_{i}g_{i}, \qquad l=c_{x}g+c_{y}r,\\ & m=\frac{1}{2}\sum_{i=1}^{n}|r_{i}g_{i}|+\sum_{i=1}^{n}\sum_{l> i}^{n}|r_{i}g_{l}+r_{l}g_{i}|,\end{split} \tag{19}\] _and \(j=!(1)\)._ **Example 1**.: _Consider the system \(x^{+}=\sin(x)-u+0.1w\), with an initial set \(\mathcal{X}_{0}=[0,\,1]\) described by \(\mathcal{X}_{|s,0}=0.5+0.5s_{1}\). This system is controlled by a NN with 1 layer of 2 neurons that, for \(\mathcal{X}_{0}\), is abstracted as \(\mathcal{U}_{|s}=0.1+0.2s_{1}-0.1s_{2}+0s_{3}\). 
The non-linear function \(h(x)=\sin(x)\) is abstracted, for \(\mathcal{X}_{0}\), as \(\hat{\mathcal{X}}_{|s}=0.45+0.42s_{1}+0.03s_{4}\). Besides, the independent behaviour of the disturbances is captured by \(\mathcal{W}_{|s}=s_{5}\). Accordingly, a dependency preserving over-approximation of the successor state is given by \(\mathcal{X}_{|s}^{+}=0.35+0.22s_{1}+0.1s_{2}+0.03s_{4}+0.1s_{5}\). The a priori knowledge on the number of error symbols introduced at each abstraction (e.g. \(\mathcal{U}_{|s}\) introduces up to two error symbols, one per neuron) allows to directly store the generators matrix of each s-zonotope by taking into account the common set of symbols, thus providing an efficient computation of the required operations as shown below:_

\[\begin{array}{crrrrrrl} & c & s_{1} & s_{2} & s_{3} & s_{4} & s_{5} & \\ & 0.45 & 0.42 & 0 & 0 & 0.03 & 0 & \;\leftrightarrow\;\hat{\mathcal{X}}_{|s}\\ - & 0.1 & 0.2 & -0.1 & 0 & 0 & 0 & \;\leftrightarrow\;\mathcal{U}_{|s}\\ + & 0 & 0 & 0 & 0 & 0 & 0.1 & \;\leftrightarrow\;0.1\cdot\mathcal{W}_{|s}\\ = & 0.35 & 0.22 & 0.1 & 0 & 0.03 & 0.1 & \;\leftrightarrow\;\mathcal{X}_{|s}^{+}\end{array}\]

```
Require: \(\mathcal{X}_{0}\), NN parameters \((\boldsymbol{W},\boldsymbol{b})\), \(\mathcal{G}\), \(\mathcal{A}(i)\), \(N\), \(q\)
Ensure: isRAok, \(t_{err}\), \(\mathcal{X}_{|s}(j)\), \(j=0,...,min(t_{err},N)\)
1: Initialize: generate \(\mathcal{X}_{|s}(0)\); set \(t_{err}\leftarrow\infty\)
2: for \(i=0\) to \(N-1\) do
3:    if \(\mathcal{X}_{|\iota}(i)\cap\mathcal{A}(i)\neq\emptyset\) then
4:       \(t_{err}\gets i\)
5:       break all
6:    else
7:       \(\mathcal{U}_{|s}(i)\gets controller(\mathcal{X}_{|s}(i),\boldsymbol{W},\boldsymbol{b})\)
8:       \(\bar{\mathcal{X}}_{|s}(i+1)\gets system(f(\cdot),\mathcal{X}_{|s}(i),\mathcal{U}_{|s}(i),\mathcal{W}_{|s}(i))\)
9:       \(\mathcal{X}_{|s}(i+1)\gets reduce(\bar{\mathcal{X}}_{|s}(i+1),q)\)
10:   end if
11: end for
12: isRAok \(\leftarrow(t_{err}=\infty)\wedge(\mathcal{X}_{|\iota}(N)\subseteq\mathcal{G})\)
```
**Algorithm 1** Finite-time RA verification using s-zonotopes

## V Closed-loop verification using s-polynotopes

In order to capture non-convex input-output behaviours, the affine symbolic expressions used so far can be generalized to polynomial ones. Symbolic polynotopes (s-polynotopes) extend s-zonotopes by allowing polynomial dependencies on the symbols, while still gathering their relevant information (generators, symbol identifiers and order of the monomials) into matrices. **Definition 6** (s-polynotope [28]).: _A symbolic polynotope \(\mathcal{P}_{|s}\) is a polynomial function that can be written in the form \(c+Rs_{I}^{E}\) where vector \(c\) and matrices \(R\) and \(E\) do not depend on the symbolic variables in \(s_{I}\). Notation: \(\mathcal{P}_{|s}=\langle c,R,I,E\rangle_{|s}=c+Rs_{I}^{E}\)._ Definition 6 uses the exponential matrix notation (as in Definitions 23-25 in [28]), where the usually sparse matrix \(E\) accounts for the exponents of the symbols involved in each monomial. As an example, \(s_{I}=[s_{1},s_{2}]^{T}\) and \(E=[1\,0\,3;0\,2\,4]\), yields \(s_{I}^{E}=[s_{1},s_{2}^{2},s_{1}^{3}s_{2}^{4}]^{T}\). Similar to s-zonotopes, a distinction is made between an s-polynotope as defined in Definition 6 and its set-valued interpretation defined as the (possibly non-convex) set \(\mathcal{P}_{|u}=\{c+R\sigma^{E}\,|\,\sigma\in\mathcal{D}(s_{I})\}\). Symbolic polynotopes obviously extend s-zonotopes (obtained from \(E=\mathcal{I}\), i.e.
with identity as exponent matrix) and are closed under the extension of basic operations already defined for s-zonotopes like linear image, sum or concatenation. The reader is referred to [28] for further details on how to define and operate on s-polynotopes. ### _NN controller polynomial abstraction_ The abstraction of the I/O map of a NN controller of the form (5) using s-polynotopes is presented below. In particular, given a state bounding s-polynotope \(\mathcal{X}_{|s}=\langle c,R,I,E\rangle_{|s}\) (note that any initial s-zonotope in Section IV-A can be directly transformed into an equivalent s-polynotope) the idea is to compute a polynomial symbolic map of the form \[\mathcal{U}_{|s}=\langle c_{u},G,Q,E_{u}\rangle=c_{u}+Gs_{Q}^{E_{u}}, \tag{20}\] such that, the enclosure of the network outputs is guaranteed, i.e. \(g(\mathcal{X}_{|s})\subseteq\mathcal{U}_{|t}\). The vector of identifiers in (20) has the structure \(Q=[I,\,J]\), thus involving the symbols in the state bounding s-polynotope (identified by \(I\)) as well as error symbols (identified by \(J\)). Notice that the exponent matrix \(E_{u}\) may also capture cross terms involving symbols with identifiers in both \(I\) and \(J\). Similar to section IV-B, the computation of (20) can be obtained from a forward propagation of \(\mathcal{X}_{|s}\) through (in this case) a polynomial relaxation of the activation functions. The pre-activation s-polynotope \(\mathcal{Z}_{|s}=W\mathcal{X}_{|s}+b\) and its projection onto the \(i\)-th neuron are given by \[\begin{split}\mathcal{Z}_{|s}&=\langle\bar{c},\tilde {R},I,E\rangle_{|s},\qquad\mathcal{Z}_{|s}=\langle\bar{c}_{i},\tilde{R}_{[i, ]},I,E\rangle_{|s},\\ \dot{c}&=Wc+b,\qquad\qquad\qquad\tilde{R}=WR.\end{split} \tag{21}\] Bounding the set-valued interpretation of \(\mathcal{Z}_{|s}\) within the interval \([l_{i},\,u_{i}]\), then the polynomial structure of s-polynotopes enables to obtain a sound over-approximation of the NN output by locally covering the activation function graph through an \(n\)-order polynomial expression of the form \[\mathcal{X}_{i|s}^{+}=\sum_{m=1}^{n}\alpha_{i,m}(\mathcal{Z}_{i|s})^{m}+\beta _{i}+\gamma_{i}s_{j}, \tag{22}\] where \(s_{j}\) represents a new independent symbol (identified through \(j=!(1)\)) introduced to guarantee the enclosure of the activation function graph in the range \([l_{i},\,u_{i}]\). Therefore, since \(\mathcal{X}_{i|s}^{+}\) results in an s-polynotope arising from the polynomial mapping of s-polynotopes, the layer post-activation s-polynotope \(\mathcal{X}_{s}^{+}\) is computed by vertically concatenating the neuron post-activation s-zonotopes. Polynomial over linear abstractions of the activation function not only allow to reduce the conservatism introduced by the error symbols, but also enable the to compute input-output symbolic relationships that better fit the activation behaviour. **Example 2**.: _Suppose that the projection onto a ReLU neuron is given by the s-polynotope \(\mathcal{Z}_{|s}=0.5-0.5s_{1}+s_{1}s_{2}\), whose set-valued interpretation is bounded/included in the interval \([l,\,u]=[-1,\,2]\). Then, the ReLU function can be locally abstracted over this range using an \(n=2\)-order polynomial of the form \(\mathcal{X}_{|s}^{+}=\alpha_{2}(\mathcal{Z}_{|s})^{2}+\alpha_{1}\mathcal{Z}_{| s}+\beta+\gamma s_{3}\), with \((\alpha_{2},\,\,\alpha_{1})=(0.25,\,0.5),\,\,\beta=\gamma=0.125\). 
This, in turn, generates the post-activation s-polynotope_ \[\begin{split}\mathcal{X}_{|s}^{+}&=0.25(0.5-0.5s_{ 1}+s_{1}s_{2})^{2}+0.5(0.5-0.5s_{1}+s_{1}s_{2})\\ &\quad+0.125+0.125s_{3}\\ &=0.4375-0.375s_{1}+0.0625s_{1}^{2}+0.125s_{3}+0.75s_{1}s_{2}\\ &\quad-0.25s_{1}^{2}s_{2}+0.25s_{1}^{2}s_{2}^{2}.\end{split}\] _Figure 1 depicts the non-convex local enclosure generated by the set-valued interpretation of \([\mathcal{Z}_{|s};\mathcal{X}_{|s}^{+}]\)._ Example 2 evidences the complexity/accuracy trade-off inherent to using \(n\)-order polynomial abstractions: a high \(n\) grants an accurate representation of the activation functions; whereas, on the other hand, it increases the computational complexity due to the increased number of monomials. A reduction strategy is thus used to manage the representation complexity. It can consist in either truncating the maximum degree of the polynomial approximation (20), or limiting the maximum number of monomials involved. To address this latter issue, an approach consists in (independently) assessing the monomials relevance based on the \(2\)-norm of the generators (matrix columns) [32], and use natural interval extension [33] to bound the list of selected monomials through a reduced number of independent symbols. The ability of the final s-polynotope (20) to generate a sound over-approximation of the network outputs is guaranteed by selecting (for each neuron) a triplet \((\alpha,\beta,\gamma)\), where Fig. 1: 2-nd order polynomial enclosure of ReLu function (Example 2) \(\alpha=[\alpha_{n},...,\alpha_{1}]^{T}\) is an \(n\)-dimensional vector, that ensures the (local) coverage of the activation function. In the case of the commonly used ReLU activation functions, as shown in Example 2 their convex nature allows describe them more accurately than with the sole affine dependencies by using \(2\)-nd order polynomial expressions. The reduction in the magnitude related to the error symbol is especially significant in those situations where an affine approximation yields a rough description of ReLU function, i.e. for \(|l|\approx u\) (if \(l<0<u\)). **Proposition 2**.: _Given the interval \([l,\,u]\) with \(l<0<u\) and a ReLU activation function \(\varphi(x)=max(x,0)\). The set of parameters_ \(\alpha_{2}=\frac{1}{2u},\ \alpha_{1}=1-\alpha_{2}u,\ \beta=\gamma=\frac{\alpha_{2}u^{2}}{8}\) _if_ \(|l|\leq u\leq 2|l|\)__ \(\alpha_{2}=\frac{\gamma}{2},\ \alpha_{1}=-\alpha_{2}l,\ \beta=\gamma=\frac{ \alpha_{2}l^{2}}{8}\) _if_ \(u<|l|\leq 2u\)__ _guarantees that \(\eta(x,\epsilon)=\alpha_{2}x^{2}+\alpha_{1}x+\beta+\gamma\epsilon\) satisfies that \(\forall x\in[l,\,u],\exists\epsilon[-1,\,1],\ \varphi(x)=\eta(x,\epsilon)\) with \(|\gamma|\leq\frac{3}{8}|\gamma_{aff}^{*}|\), where \(\gamma_{aff}^{*}\) is the error symbol introduced by the affine abstraction in Lemma 2._ Proof.: See Appendix B. ### _Finite-time RA using s-polynotopes_ The main aspects in addressing the RA verification problem using s-polynotopes are discussed below. In general, the same steps presented in Algorithm 1 can be used while adapting the NN controller and dynamical system abstraction to an s-polynotope formulation. In this case, the computation of the s-polynotope \(\mathcal{U}_{|s}\) in Step 7 of Algorithm 1 has already been presented in section V-B. Regarding the abstraction of the nonlinear function \(f(\cdot)\) in Step 8, since s-polynotopes constitute an extension of s-zonotopes, \(f(\cdot)\) can always be abstracted using (at least) the affine dependency preserving method in Lemma 3. 
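To make the second-order ReLU relaxation of Proposition 2 and the numbers of Example 2 concrete, the following minimal sketch can be helpful; it is not the authors' implementation, and the dictionary-of-exponent-tuples encoding is only a stand-in for the exponent-matrix representation of Definition 6. It re-derives the parameters on \([l,\,u]=[-1,\,2]\), expands the post-activation s-polynotope, and checks the enclosure by sampling.

```python
import numpy as np

# s-polynotope sketch: monomials keyed by exponent tuples over (s1, s2, s3)
Z = {(0, 0, 0): 0.5, (1, 0, 0): -0.5, (1, 1, 0): 1.0}      # Z = 0.5 - 0.5*s1 + s1*s2

def p_add(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0.0) + c
    return out

def p_scale(p, a):
    return {e: a * c for e, c in p.items()}

def p_mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0.0) + c1 * c2
    return out

def p_eval(p, s):
    return sum(c * np.prod([si ** ei for si, ei in zip(s, e)]) for e, c in p.items())

# Proposition 2 parameters on [l, u] = [-1, 2] (case |l| <= u <= 2|l|)
l, u = -1.0, 2.0
a2 = 1.0 / (2.0 * u); a1 = 1.0 - a2 * u; beta = gamma = a2 * u ** 2 / 8.0
print(a2, a1, beta, gamma)                                  # 0.25 0.5 0.125 0.125

# post-activation s-polynotope: a2*Z^2 + a1*Z + beta + gamma*s3
X_plus = p_add(p_add(p_scale(p_mul(Z, Z), a2), p_scale(Z, a1)),
               {(0, 0, 0): beta, (0, 0, 1): gamma})
print(X_plus)   # monomials of Example 2: 0.4375 - 0.375*s1 + 0.0625*s1^2 + ...

# soundness check: for every (s1, s2) the ReLU output must lie within +-gamma of
# the polynomial evaluated with the fresh error symbol s3 set to zero
rng = np.random.default_rng(0)
for s1, s2 in rng.uniform(-1.0, 1.0, size=(20000, 2)):
    z = p_eval(Z, (s1, s2, 0.0))
    assert abs(max(z, 0.0) - p_eval(X_plus, (s1, s2, 0.0))) <= gamma + 1e-12
```

Running this sketch prints the parameters \((0.25,\,0.5,\,0.125,\,0.125)\) and the seven monomials listed in Example 2, and the sampling loop never violates the \(\pm\gamma\) band around the quadratic center.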
Note that, s-polynotopes also enable the description of (multivariate) polynomial equations without the need to over-approximate them, at least in all intermediate symbolic compositions and up to some tunable computation load. It must be taken into account that several operations in a s-polynotope formulation of Algorithm 1 such as bounding the projection of an s-polynotope onto a neuron, intersection/inclusion of an s-polynotope with an avoid/reach set or the reduction operator (in Step 9), in turn require the computation of interval bounds from (multivariate) interval polynomial expressions. If computationally affordable, the range bounds computed by a (simple and fast) interval extension may be refined either by iteratively bisecting the variables domain, or resorting to numerically reliable optimization-based methods. ## VI Input partitioning strategy In general, the conservatism induced by abstraction-based verification tools strongly depends on the size of the initial set. It is thus extremely useful to assess the regions of the initial/input space for which meaningful (i.e. not too coarse) over-approximations of the closed-loop system evolution can be obtained. To that end, this section presents an algorithm to split the initial set of a NNCSs verification problem in a smartly guided way. More precisely, the proposed splitting strategy relies on and benefits from the dependency modeling and tracing used in Section IV. In particular, the algorithm assesses the sensitiveness of the initial/input directions on the satisfaction of a safety property by an s-zonotopic over-approximation through the analysis of the relative influence of the initial symbols. The principle of the algorithm is introduced in Section VI-A. Then, some relevant notions are detailed in section VI-B, whereas the algorithm pseudo-code for a RA problem implementation is reported in Section VI-C. Finally, some further discussion on different settings is presented. ### _Splitting principle_ The main idea of the proposed algorithm is to keep a linear increase in the number of subsets by only splitting at each iteration the sole initial/input set symbol that has greater influence on the satisfaction of the safety property \(\mathcal{S}\) to be verified. To that end, notice that given any initial s-zonotope \(\mathcal{X}_{|s}(0)=\langle c_{0},R_{0},I\rangle_{|s}\) as defined in Section IV-A, then the successive computation of forward reachable sets returns over-approximating s-zonotopes structured as \[\mathcal{X}_{|s}(t)=c_{f}+R_{f}s_{I}+G_{f}s_{J}, \tag{23}\] where the matrix \(R_{f}\) (resp. \(G_{f}\)) reflects the impact of the initial (resp. error) symbols identified by \(I\) (resp. \(J\)) on the computed over-approximation at time \(t\). Typically, testing \(\mathcal{S}(\mathcal{X}_{|s}(t))\) boils down to a metric/size evaluation on the set-valued interpretation of \(\mathcal{X}_{|s}(t)\) (e.g. to check threshold preassigning). Hence, due to the linearity of (23), the influence of each input symbol \(s_{i}\)\((i\in I)\) can be assessed using a metric that gauges the generator (column of \(R_{f}\)) size that is related to \(s_{i}\), whereas the accuracy of an s-zonotope approximation to evaluate \(\mathcal{S}\) can be determined by measuring the zonotope \(\langle 0,G_{f}\rangle\) spanned by the error symbols. 
Therefore, at each iteration of the algorithm, an input s-zonotope \(\mathcal{X}_{|s}(0)\), such that the corresponding output/final s-zonotope does not satisfy \(\mathcal{S}\), is split into two new input s-zonotopes that are later evaluated on the satisfaction of \(\mathcal{S}\). The algorithm may run until the satisfaction of the safety property, or until the accuracy of the method (gauged through \(\langle 0,G_{f}\rangle\)) is below a certain threshold.

### _Relevant notions_

Some relevant notions for the s-zonotope based partitioning algorithm are discussed below.

#### VI-B1 Accuracy assessment

Considering the evaluation of a safety property for an s-zonotope of the form (23), the accuracy of the over-approximation can be assessed by gauging the zonotope \(\langle 0,G_{f}\rangle\) spanned by a (set-valued) interpretation of the error symbols. In particular, further implementations make use of the zonotope \(F\)-radius [34], that is, the Frobenius norm of the generators matrix \(\|G_{f}\|_{F}\), to reason upon the quality of the affine approximation.

#### VI-B2 Input symbols relative influence

The sensitivity of an input symbol \(s_{i}\) \((i\in I)\) is computed based on the F-radius ratios of the I/O zonotopes spanned by \(s_{i}\), that is, through the ratio \(\|R_{f}^{[i]}\|_{2}/\|R_{0}^{[i]}\|_{2}\), where \(R_{0}^{[i]}\) (and \(R_{f}^{[i]}\)) denote the columns of \(R_{0}\) (and \(R_{f}\)) that multiply the symbol \(s_{i}\). This relation is used to quantify how a variation on \(s_{i}\) at the input s-zonotope \(\mathcal{X}_{|s}(0)\) affects the output s-zonotope.

#### VI-B3 Symbol bisection

Bisecting a unit interval symbol \(s_{i}\ (i\in I)\) is done by rewriting it as \(s_{i}\to 0.5+0.5s_{j}\) and \(s_{i}\to-0.5+0.5s_{k}\), where \(j=!(1)\) and \(k=!(1)\), thus generating two new s-zonotopes.

#### VI-B4 Polyhedral RA sets

Checking the empty intersection and/or the inclusion of a state bounding set with/within a polyhedron in half-space representation of the type \(\{h_{i}^{T}x\leq r_{i},\ i=1,...,m\}\) can be done by evaluating the infimum/supremum of the projections of the bounding set onto the directions \(h_{i}\in\mathbb{R}^{n_{x}}\)[35]. For a state bounding s-zonotope of the form (23), the supremum of the dot product with \(h\) is computed as \[\sup_{x\in\mathcal{X}_{|s}(t)}h^{T}x=h^{T}c_{f}+\|h^{T}R_{f}\|_{1}+\|h^{T}G_{f}\|_{1}. \tag{24}\]

### _Algorithm implementation for finite-time RA_

Algorithm 2 reflects the pseudo-code of the proposed partitioning strategy to check the satisfaction of a RA problem over a time horizon \(N\). Algorithm 2 uses square brackets to label the different s-zonotopes that arise after splitting. As an example, \(\mathcal{X}_{|s}[0]\) reads as the initial s-zonotope (i.e. \(\mathcal{X}_{|s}[0]=\mathcal{X}_{|s}^{0}\)), which is split into a second \(\mathcal{X}_{|s}[1]\) and a third \(\mathcal{X}_{|s}[2]\) s-zonotope (with \(\mathcal{X}_{|s}[1]\cup\mathcal{X}_{|s}[2]=\mathcal{X}_{|s}[0]\)). Besides, \(L\) denotes a set of integer labels/indices (for the above example \(L=\{0,1,2\}\)), and \(\mathcal{X}_{|s}[L]\) is a shorthand for the set of s-zonotopes \(\{\mathcal{X}_{|s}[l]\ |\ l\in L\}\). At each iteration, the routine reach runs a slightly modified version of Algorithm 1 that, in this case, returns the last time instant (and the corresponding s-zonotope) for which the RA problem is not satisfied. These times-to-last-error are managed by vector \(T\).
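Before turning to the listing of Algorithm 2, the notions of Section VI-B can be summarized in a few lines of code; the sketch below uses plain NumPy arrays for the generator matrices and hypothetical helper names, so it is only an illustration of the quantities involved, not the paper's tooling.

```python
import numpy as np

def f_radius(G):
    """VI-B1 accuracy proxy: Frobenius norm of the error-symbol generator matrix G_f."""
    return np.linalg.norm(G, "fro")

def influence(R0, Rf):
    """VI-B2: per-symbol ratio of output vs. input generator-column 2-norms."""
    return np.linalg.norm(Rf, axis=0) / np.linalg.norm(R0, axis=0)

def bisect_symbol(c, R, i):
    """VI-B3: rewrite s_i as +-0.5 + 0.5*s_new, yielding two child s-zonotopes."""
    children = []
    for shift in (0.5, -0.5):
        c_new = c + shift * R[:, i]
        R_new = R.copy()
        R_new[:, i] = 0.5 * R[:, i]
        children.append((c_new, R_new))
    return children

def support(h, c, R, G):
    """VI-B4 / eq. (24): supremum of h^T x over the s-zonotope <c, [R, G]>."""
    return float(h @ c + np.abs(h @ R).sum() + np.abs(h @ G).sum())

# toy usage with made-up matrices: split the initial symbol with the largest influence
c0, R0 = np.zeros(2), np.eye(2)                  # initial set: unit box
Rf = np.array([[2.0, 0.1], [0.5, 0.2]])          # initial-symbol generators at time t
Gf = 0.05 * np.ones((2, 3))                      # error-symbol generators at time t
i_star = int(np.argmax(influence(R0, Rf)))       # first symbol dominates here
left, right = bisect_symbol(c0, R0, i_star)
inside = support(np.array([1.0, 0.0]), c0, R0, np.zeros((2, 0))) <= 1.5
print(i_star, f_radius(Gf), inside)
```

In this toy example the first initial symbol has the larger influence ratio and would therefore be bisected first, while the support-function value confirms that the initial box lies inside the half-space \(h^{T}x\leq 1.5\).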
The algorithm iteratively selects the label of the initial s-zonotope that yields the largest time-to-last-error (Step 14). The use of this backward management of the information, which prioritizes splitting until the RA constraints are satisfied at time \(t\), then at time \(t-1\), etc., will be further discussed in the next paragraph. Once the \(l\)-th (with \(l\in L\)) s-zonotope has been selected, sym-select returns the initial symbol identifier \(i\in I\) that has the greater relative influence over the violated property. The symbol \(s_{i}\) of the \(l\)-th set is split by the routine sym-split that returns two new initial subsets (Step 6). The times-to-last-error for the new s-zonotopes are computed and the set \(L\) and vector \(T\) updated (Steps 7-11). In particular, Algorithm 2 runs either until the RA problem is satisfied for the whole set \(L\), or until a maximum number \(n_{max}\) of splits is reached.

```
Require: same as Algorithm 1, \(n_{max}\)
Ensure: isRAok, set of s-zonotopes \(\mathcal{X}_{|s}[L]\)
1: Initialize: \(l=n=0\); \(L=\{l\}\); \(\mathcal{X}_{|s}[0]=\mathcal{X}_{0}\)
2: \((t,\mathcal{X}_{|s}(t)[0])\leftarrow\texttt{reach}(\mathcal{X}_{|s}[0],N)\)
3: \(T\gets append(t)\)
4: while \((\max(T)>0)\ \wedge\ (n/2<n_{max})\) do
5:    \(i\leftarrow\texttt{sym-select}(\mathcal{X}_{|s}[l],\mathcal{X}_{|s}(T(l))[l])\)
6:    \((\mathcal{X}_{|s}[n+1],\mathcal{X}_{|s}[n+2])\leftarrow\texttt{sym-split}(\mathcal{X}_{|s}[l],i)\)
7:    for \(j=1\) to \(2\) do
8:       \((t,\mathcal{X}_{|s}(t)[n+j])\leftarrow\texttt{reach}(\mathcal{X}_{|s}[n+j],\max(T))\)
9:       \(L\gets L\cup\{n+j\}\)
10:      \(T\gets append(t)\)
11:   end for
12:   \(L\gets L\setminus\{l\}\)
13:   \(T(l)\gets 0\)
14:   \(l\leftarrow\arg\max(T(L))\)
15:   \(n\gets n+2\)
16: end while
17: isRAok = (\(\max(T)==0\))
```
**Algorithm 2** Input partitioning for finite-time RA

Algorithm 2 manages the information in a backward fashion, that is, it selects the s-zonotope with the highest time-to-last-error. This usually gives better results than working in a forward fashion (that is, selecting the s-zonotope with the lowest time-to-first-error), since it avoids getting stuck by exhaustively splitting up to the satisfaction of a constraint at time \(t\), which, then, may have a small impact on the constraint satisfaction at \(t+1\). On this subject, the algorithm can be straightforwardly adapted to handle the forward case by directly using Algorithm 1 (instead of reach), using \(T(l)\gets N+1\) (instead of \(T(l)\gets 0\)) in Step 13 and selecting the minimum (instead of the maximum) of vector \(T\). Besides, the reduction operation used in Algorithm 1 must not truncate the initial symbols even if their relevance decreases with time, so that the input-output mapping of the symbols identified by \(I\) is preserved.

### _Other possible settings and applications_

Other choices for the proposed input partitioning strategy are as follows:

* The strategy in Algorithm 2 can be adapted to handle open-loop verification problems (e.g. elements like the NN in isolation). In this case, the reach routine will only compute the output s-zonotope for the isolated element for a number of forward steps \(N=1\).
* The maximum number of splits stopping criterion in Algorithm 2 can be modified/complemented with a tolerance on the accuracy assessment (see Section VI-B). In other words, if the accuracy tolerance is fulfilled and a property is still violated, then the algorithm should stop to prevent further splitting, and the safety property is considered as unsatisfied up to the accuracy tolerance.
* Another interesting application is to modify the s-zonotope split decision rule (Step 14 of Algorithm 2) to focus the split in those regions for which the accuracy of using an affine abstraction is low (i.e. high \(\|G_{f}\|_{F}\)). This tends to return a set of initial s-zonotopes such that each locally provides an accurate (affine) abstraction of the system behavior. ## VII Simulations ### _Benchmark description_ The numerical simulations consist in the discrete-time version of some of the verification problems proposed in the ARCH-COMP 2021 [4]. Five dynamical systems are assessed, namely: single pendulum (**S**), TORA (**T**), unicycle car (**C**), adaptive cruise control (**ACC**) and double pendulum (**D**). The above systems have been discretized using the forward Euler method with sampling period \(\Delta T\), and they are controlled by a NN controller with control period \(\Delta h\). The NN controllers are the ones provided in [4] to control the continuous-time version of the models. To address this issue, the dynamical models have been analyzed under sampling times \(\Delta T(\leq\Delta h)\) chosen sufficiently small for the discretization to have negligible impact in the model responses. Under this context, the same safety constraints and initial conditions than the ones proposed in [4] have been re-used to setup the reported simulations. Note that, as discussed in Remark 3, the use of symbolic approaches supports the variation of \(\Delta T\) (for some \(\Delta h\)) without inducing conservatism due to a loss of dependencies between repeated control inputs. A detailed description of the systems dynamics can be found in [4], whereas the main parameters that define each safety verification problem are shown in Table I. ### _Benchmark results using \(s\)-zonotopes_ All the results reported below were obtained on a standard laptop with Intel Core [email protected]\(\times\)4 processor and 16GB RAM running Windows 10. Table II shows the set of initial states, the safety constraints (with their time horizon), as well as the time required by an s-zonotope implementation to verify each problem. The reduction order is \(q=200\) in all the experiments. Some particularities are discussed below: * _Single pendulum_ (**S**): in a discrete-time setting, the constraint \(x_{1}\in[0,\,1]\) is guaranteed to be satisfied for problem **S1** (with \(\Delta T=0.05\)s) for the time interval \(t\in[0.55,\,1]\) (that is, for samples \(\{11,...,20\}\)), whereas in **S2** (with \(\Delta T=0.001\)s) the constraint satisfaction is guaranteed for the time interval \(t\in[0.516,\,1]\). * _TORA_ (**T**): in **T1**, the closed-loop system is not stable for the discrete-time model obtained for \(\Delta T=1\)s. In this case, an unambiguous constraint violation is achieved at \(t=3\) in \(0.036\)s. On the other hand, the closed-loop model obtained in **T2** and **T3** is stable, and the s-zonotope method verifies the satisfaction of the safety constraint in both problems without resorting to split the input set. * _Unicycle car_ (**C**): the model under study considers the addition of an unknown-but-bounded disturbance \(w\in 10^{-4}[-1,\,+1]\) affecting the fourth state. The safety properties are verified for both **C1** and **C2**. In particular, Figure 2 shows the envelope computed for **C2** in the time interval \(t\in[0,\,10]\) and how the outer-approximation lies within the goal set at \(t=10\). 
* _Adaptative cruise control_ (**ACC**): both problems **ACC1** and **ACC2** are verified for the given time horizon. * _Double pendulum_ (**D**): the set of constraints in problems **D1-2** are violated by the closed-loop system. An unambiguous constraint violation is achieved for **D1** at \(t=0.25\) and for **D2** at \(t=0.278\). On the other hand, the problem **D3** cannot be verified from a simple affine abstraction: the accumulated error indeed increases in the reachability analysis of **D3**, not allowing to guarantee the constraints satisfaction or their unambiguous violation, and thus motivating further extensions. The results presented above show how, despite their low computational complexity, s-zonotopes yield a high performance in NNCSs verification, being able to verify almost all the benchmark problems without splitting the input set. It is also remarkable the scalability of this approach. As an example, for problem **T3** with \(\Delta T=0.001\)s, \(\Delta h=1\)s and time horizon \(t\in[0,20]\), the proposed tool only requires of \(1.515\)s to compute and assess \(N=20\mathrm{s}/\Delta T=20.000\) forward iterations. ### _Use of s-polynotopes_ The capability of s-polynotopes to capture the non-convex map of NNs is illustrated below. To that end, the set of randomly generated neural networks used in [17] are analyzed. All the NNs consists of 2 inputs, 2 outputs and they differ on the number of hidden layers and neurons per layer. The Fig. 2: Problem **C2**: framed zonotopes represent the computed bounds at each \(\Delta h=.2\)s; blurred lines represent the bounds update at \(\Delta T=.001\)s. first four NNs present \(l=\{1,2,3,4\}\) hidden layers, each having \(n_{k}=100\) ReLU neurons per layer. The examined NN input set is \(\mathcal{X}_{0}=[0.9,\,1.1]\times[0.9,\,1.1]\). Figure 3 shows the set-valued interpretation of the output bounding s-polynotopes obtained by abstracting the activity functions of active neurons with second order polynomials i.e. with \(n=2\) in (22). The computation times are \(\{0.178,0.240,2.021,3.329\}s\) for the NNs with \(l=1\) to \(l=4\) hidden layers, respectively. Similarly, another set of NNs with \(l=\{7,8,9,10\}\) hidden layers and \(n_{k}=10\) ReLU neurons per layer is evaluated for the same input set. Figure 4 represents the interpretation of the resulting s-polynotopes that are computed in \(\{0.2389,0.108,0.786,0.155\}\)s, respectively. Those examples (Figure 3 and Figure 4) taken from [17] illustrate the ability of s-polynotopes composition to accurately generate inclusion preserving polynomial I/O mappings of NNs. As a byproduct, an efficient implicit description of possibly non convex output sets is obtained. ### _Partitioning strategy_ Firstly, in order to show the performance of the partitioning algorithm, it will be applied to the open-loop robotic arm example used in [12, 13]. Particularly, the non-linear dynamics of a 2 DOF robot arm are modeled by a \((2,5,2)\) NN with \(tanh\) activations. The considered set of joint angles are extended to \((\theta_{1},\theta_{2})\in[\frac{\pi}{3},\frac{4\pi}{3}]^{2}\). An implementation of Algorithm 2 adapted to analyze the NN in isolation is executed in order to iteratively minimize the \(F\)-radius1 of the zonotope spanned by the error symbols (\(\|G_{f}\|_{F}\)) for a fixed number of \(n_{max}=400\) splits. The computation time of the algorithm is \(0.097\)s. 
Figure 4(b) shows the resulting pattern of \(401\) input subsets, whereas Figure 4(a) represents the corresponding s-zonotope interpretations obtained in the output space altogether with an exhaustive evaluation of the NN (blue dots). This latter figure shows how Algorithm 2 achieves an accurate description of the non-convex output set by focusing the splitting effort in those regions of the input space for which an affine abstraction granted by s-zonotopes is not accurate enough. Footnote 1: The \(F\)-radius of a zonotope is the Frobenius norm of its generator matrix (see Definition 3 in [34]). Furthermore, considering the initial set \((\theta_{1},\theta_{2})\in[\frac{\pi}{3},\frac{2\pi}{3}]^{2}\), Algorithm 2 is set to split up to the satisfaction of the safety constraint \(y_{1}\leq d\) (where \(y_{1}\) denotes the first output). Table III reflects the number of splits and the time required by Algorithm 2 to satisfy the above safety constraint for different values of \(d\). Besides, Table III also shows, for a fixed number of splits, the number of existing possible combinations of set selections and symbols bisections, as well as how many among them are able to satisfy the property. As an example, for \(d=1.2\), Algorithm 2 requires \(8\) splits. For the same Fig. 4: NNs with 10 neurons per layer and \(l=\{7,8,9,10\}\) hidden layers. Set-valued interpretation of the over-approximating s-polynotope (red set); exhaustive evaluation of the NNs (blue dots). Fig. 3: NNs with 100 neurons per layer and \(l=\{1,2,3,4\}\) hidden layers. Set-valued interpretation of the over-approximating s-polynotope (red set); exhaustive evaluation of the NNs (blue dots). problem, there are no possible combinations of less than 7 splits for which the property can be proven; there exist 18 out of \(5.491\cdot 10^{4}\) possibilities that satisfy it with 7 splits (\(0.032\%\)); and 336 out of \(3.66\cdot 10^{5}\) possibilities that satisfy it with 8 splits (\(0.0918\%\)). Regarding the closed-loop examination of Algorithm 2, this is applied to assess the satisfaction of problem **D3**, which cannot be satisfied by a simple s-zonotope abstraction. To that end, Algorithm 2 is set to split up to the satisfaction of the safety constraints in Table II. The algorithm requires a total of 19 splits (i.e. 20 subsets) computed in \(5.12\)s. Figure 6 shows the time evolution of the interval enclosure of the resulting 20 reachable sets (light blue background), altogether with 50 random simulations of the closed-loop system (blue dots). ## VIII Conclusions A compositional approach focused on inclusion preserving long term symbolic dependency modeling is introduced in this work for the analysis of NNCSs, where such long term is to be understood both in time iterations (regarding the controlled system dynamics) and in layer iterations (regarding the sole NNs). This results in a generic method that has been developped in several ways. Firstly, the matrix structure of s-zonotopes enables to compute (fast and simple) affine symbolic mappings to abstract the I/O mapping of the control loop components. Two further extensions are also proposed: the use of s-polynotopes to compute inclusion preserving polynomial mappings capable of accurately describing the non-convex map of NNs, and an input partitioning algorithm that benefits from the ability granted by s-zonotopes to preserve linear dependencies between the loop elements. 
Simulations show the comparative efficiency of the proposals and support the prevalence of dependency preserving methods for closed-loop analysis over the use of accurate, but dependency breaking, output bounding verification tools. Future works should address the integration with the analysis of continuous-time dynamical systems, as well as the study of optimized affine/polynomial abstractions for achieving better performance in verifying specific safety properties. ## Acknowledgments The authors would like to thank Prof. Vicenc Puig (UPC, Barcelona, Spain), who initially prompted this collaboration, for his support during the research stay of the first author at the University of Bordeaux. This work was in part supported by the Margarita Salas grant from the Spanish Ministry of Universities funded by the European Union NextGenerationEU. Fig. 5: Robot arm example: input partitioning for \((\theta_{1},\theta_{2})\in[\frac{\pi}{2},\frac{4\pi}{3}]^{2}\); yellow-green colors characterize the corresponding I/O set pairs; exhaustive evaluation of the NN (blue dots). Fig. 6: Input partitioning problem **D3**: interval enclosure of the resulting 20 reachable sets (light blue); random simulations (blue dots). ## Appendix A Proof of Lemma 3 Define \(\alpha=\frac{h(u)-h(l)}{u-l}\) and form \(\xi(x)=h(x)-\alpha x\), which is continuous in \([l,\,u]\), differentiable in \((l,\,u)\) and satisfies \(\xi(l)=\xi(u)\). Then, the maximum \(\xi(\bar{x})\) (resp. minimum \(\xi(\underline{x})\)) of \(\xi(x)\) on \([l,\,u]\) must be attained, for \(\bar{x}\) (resp. \(\underline{x}\)), at the boundary points \(\{l,u\}\) or at the stationary points denoted \(\{\delta_{1},...,\delta_{n}\}\), that is, the solutions of \(\xi^{\prime}(x)=0\), i.e. \(h^{\prime}(x)=\alpha\). Given \(\xi(\bar{x})\) and \(\xi(\underline{x})\), then for all \(x\in[l,\,u]\), \(\xi(\underline{x})\leq\xi(x)\leq\xi(\bar{x})\implies\underline{y}(x)\leq h(x)\leq\bar{y}(x)\), with \(\underline{y}(x)=\alpha(x-\underline{x})+h(\underline{x})\) and \(\bar{y}(x)=\alpha(x-\bar{x})+h(\bar{x})\). Thus, it follows that \(\forall x\in[l,\,u],\exists\epsilon\in[-1,+1]\) such that \[h(x)=\frac{\bar{y}(x)+\underline{y}(x)}{2}+\frac{\bar{y}(x)-\underline{y}(x)}{2}\epsilon\] or equivalently \(h(x)=\tilde{h}(x,\epsilon)=\alpha x+\beta+\gamma\epsilon\) with \[\beta=\frac{\bar{y}(x)+\underline{y}(x)}{2}-\alpha x=\frac{h(\underline{x})+h(\bar{x})-\alpha(\underline{x}+\bar{x})}{2},\] \[\gamma=\frac{\bar{y}(x)-\underline{y}(x)}{2}=\frac{h(\bar{x})-h(\underline{x})+\alpha(\underline{x}-\bar{x})}{2}.\] ## Appendix B Proof of Proposition 2 _Inclusion preservation_: the sign criterion \(\gamma\geq 0\) is chosen below. Given \(\bar{y}(x)=\alpha_{2}x^{2}+\alpha_{1}x+\beta+\gamma\) and \(\underline{y}(x)=\alpha_{2}x^{2}+\alpha_{1}x+\beta-\gamma\), the parameters \((\alpha_{2},\alpha_{1},\beta,\gamma)\) ensure local coverage of \(\varphi(x)\) if \(\underline{y}(x)\leq\varphi(x)\leq\bar{y}(x),\forall x\in[l,\,u]\). Consider firstly the scenario \(|l|\leq u\leq 2|l|\). In this case, \(\alpha_{2}=\frac{1}{2u}>0\) and thus \(\underline{y}(x),\bar{y}(x)\) are strictly convex. On the one hand, \(\beta=\gamma\) and \(\alpha_{1}=1-\alpha_{2}u\) impose that \(\underline{y}(0)=0=\varphi(0)\) and \(\underline{y}(u)=u=\varphi(u)\), whereas \(\underline{y}(l)=\frac{1}{2}(\frac{l^{2}}{u}+l)\leq 0=\varphi(l)\) for \(u\geq|l|\) and \(l<0\).
Therefore, from \(\underline{y}(l)\leq\varphi(l)\), \(\underline{y}(0)=\varphi(0)\), \(\underline{y}(u)=\varphi(u)\) and the convexity of \(\underline{y}(x)\) wrt \(x\), it follows that \(\underline{y}(x)\leq\varphi(x),\forall x\in[l,\,u]\). On the other hand, from \(\alpha_{1}=1-\alpha_{2}u\) and \(\beta=\gamma=\frac{\alpha_{2}u^{2}}{8}\), then \(\bar{y}(x)\) is tangent to the positive region of \(\varphi(x)\) in \(\hat{x}=\frac{u}{2}\) (that is, \(\bar{y}(\hat{x})=\varphi(\hat{x})=\hat{x}\) and \(\bar{y}^{\prime}(\hat{x})=\varphi^{\prime}(\hat{x})=1\)), and thus, since \(\bar{y}(x)\) is convex, it follows that \(\bar{y}(x)\geq\varphi(x),\forall x\geq 0\). Additionally, for \(\alpha_{2}=\frac{1}{2u}\) the (global) minimum of \(\bar{y}(x)\) is \(\bar{y}(x^{*})=0\) for \(x^{*}=\frac{-u}{2}\) (that is, \(\bar{y}^{\prime}(x^{*})=0\)), and thus \(\bar{y}(x)\geq\varphi(x),\forall x\leq 0\). A similar reasoning can be used to show the inclusion preservation for the scenario \(u<|l|\leq 2u\). _Conservatism reduction:_ For the scenario \(|l|\leq u\leq 2|l|\), the parameter \(\gamma\) has the value \(\gamma=\frac{\alpha_{2}u^{2}}{8}=\frac{u}{16}\). On the other hand, for a ReLU function \(\varphi(x)=max(0,x)\) the triplet for an affine abstraction in Lemma 2 yields \(\gamma^{*}_{aff}=\frac{u|l|}{2(u+|l|)}\). Therefore, for \(u\leq 2|l|\) the following inequality is obtained \[\gamma^{*}_{aff}=\frac{u|l|}{2(u+|l|)}\geq\frac{u|l|}{2(2|l|+|l|)}=\frac{u}{6} >\frac{u}{16}=\gamma\] and thus \(\gamma\leq\frac{3}{8}\gamma^{*}_{aff}\sim|\gamma|\leq\frac{3}{8}|\gamma^{*}_{aff}|\) (since \(\gamma,\gamma^{*}_{aff}>0\)). A similar reasoning can be used to prove the case \(u<|l|\leq 2u\).
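As a closing numerical illustration of the affine machinery, the sketch below reproduces the worked numbers of Example 1; it represents a one-dimensional s-zonotope as a center plus a dictionary of symbol generators, and replaces the exact stationary-point computation of Lemma 3 (Appendix A) by a grid search, so it should be read as a check under these simplifications rather than as the authors' code.

```python
import numpy as np

# 1-D s-zonotope sketch: (center, {symbol id: generator coefficient})

def add(a, b):
    c = a[0] + b[0]
    g = dict(a[1])
    for k, v in b[1].items():
        g[k] = g.get(k, 0.0) + v
    return (c, g)

def scale(a, factor):
    return (factor * a[0], {k: factor * v for k, v in a[1].items()})

def bounds(a):
    r = sum(abs(v) for v in a[1].values())
    return a[0] - r, a[0] + r

def abstract_univariate(h, a, new_sym, grid=200001):
    """Lemma 3 sketch: inclusion-preserving affine abstraction of h over the range of
    the 1-D s-zonotope `a`; stationary points are located here by grid search."""
    l, u = bounds(a)
    alpha = (h(u) - h(l)) / (u - l)
    xs = np.linspace(l, u, grid)
    xi = h(xs) - alpha * xs
    x_hi, x_lo = xs[np.argmax(xi)], xs[np.argmin(xi)]
    beta = (h(x_lo) + h(x_hi) - alpha * (x_lo + x_hi)) / 2
    gamma = (h(x_hi) - h(x_lo) + alpha * (x_lo - x_hi)) / 2
    out = scale(a, alpha)
    out = (out[0] + beta, out[1])
    out[1][new_sym] = gamma          # fresh error symbol
    return out

# Example 1: x+ = sin(x) - u + 0.1*w on X0 = [0, 1]
x0 = (0.5, {"s1": 0.5})                      # X0 described by 0.5 + 0.5*s1
u_ctrl = (0.1, {"s1": 0.2, "s2": -0.1})      # NN abstraction as given in Example 1
w = (0.0, {"s5": 1.0})                       # disturbance symbol

sin_x = abstract_univariate(np.sin, x0, "s4")     # ~ 0.45 + 0.42*s1 + 0.03*s4
x_next = add(add(sin_x, scale(u_ctrl, -1.0)), scale(w, 0.1))
print(round(x_next[0], 2), {k: round(v, 2) for k, v in x_next[1].items()})
# -> center 0.35 and generators {s1: 0.22, s2: 0.1, s4: 0.03, s5: 0.1}
```

Up to rounding, the printed result matches the successor-state s-zonotope \(0.35+0.22s_{1}+0.1s_{2}+0.03s_{4}+0.1s_{5}\) reported in Example 1, and it keeps the linear dependency on \(s_{1}\) that a dependency-breaking composition would lose.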
2305.15057
Linear-Time Modeling of Linguistic Structure: An Order-Theoretic Perspective
Tasks that model the relation between pairs of tokens in a string are a vital part of understanding natural language. Such tasks, in general, require exhaustive pair-wise comparisons of tokens, thus having a quadratic runtime complexity in the length of the string. We show that these exhaustive comparisons can be avoided, and, moreover, the complexity of such tasks can be reduced to linear by casting the relation between tokens as a partial order over the string. Our method predicts real numbers for each token in a string in parallel and sorts the tokens accordingly, resulting in total orders of the tokens in the string. Each total order implies a set of arcs oriented from smaller to greater tokens, sorted by their predicted numbers. The intersection of total orders results in a partial order over the set of tokens in the string, which is then decoded into a directed graph representing the desired linguistic structure. Our experiments on dependency parsing and coreference resolution show that our method achieves state-of-the-art or comparable performance. Moreover, the linear complexity and parallelism of our method double the speed of graph-based coreference resolution models, and bring a 10-times speed-up over graph-based dependency parsers.
Tianyu Liu, Afra Amini, Mrinmaya Sachan, Ryan Cotterell
2023-05-24T11:47:35Z
http://arxiv.org/abs/2305.15057v2
# Learning the String Partial Order ###### Abstract We show that most structured prediction problems in natural language processing can be solved in linear time and space by considering them as partial orderings of the tokens in the input string. Our method computes real numbers for each token in an input string and sorts the tokens accordingly, resulting in as few as 2 total orders of the tokens in the string. Each total order possesses a set of edges oriented from smaller to greater tokens. The intersection of total orders results in a partial order over the set of input tokens, which is then decoded into a directed graph representing the desired structure. Experiments show that our method achieves \(95.4\) LAS and \(96.9\) UAS by using an intersection of 2 total orders, \(95.7\) LAS and \(97.1\) UAS with 4 on the English Penn Treebank dependency parsing benchmark. Our method is also the first linear-complexity coreference resolution model and achieves \(79.2\) F1 on the English OntoNotes benchmark, which is comparable with state of the art. ## 1 Introduction Structured prediction spans a wide range of NLP tasks, including but not limited to constituency and dependency parsing, semantic parsing, and coreference resolution. Unlike conventional classification tasks where the output space is typically a set of a constant number of labels, structured prediction involves modeling complex interactions between the tokens in an input sequence, and outputting structured objects from a label space whose cardinality is exponential in the input length. The mathematical foundation on which we build our models for structured prediction is a substantial but often neglected _inductive bias_. It confines the class of models to be utilized to solve the task and sets a limit on the efficiency of the models. Many machine learning models for structured prediction [3, 2004, 2003, _inter alia_] are developed upon probabilistic graphical models. They rely heavily on extracting local features for sub-structures [2011, SS2.2], such as textual spans or arcs between tokens, to decide their suitability in some global structure, such as a phrase structure or a dependency graph, often introducing a prohibitively large memory footprint. During inference, computationally expensive algorithms are adopted, e.g., the CKY algorithm [16, 17, 18], which finds the binary constituency parse with the best score; and the Chu-Liu/Edmonds algorithm [3, 2], which finds the maximum spanning tree. These algorithms often take considerable time and space for storing the features for sub-structures and running chart-based dynamic programming. Such a paradigm for structured prediction is often undesirable, especially with recent advancements in pretrained language models, due to its high computational cost and inability to execute in parallel. To fundamentally improve the efficiency of structured prediction, we abandon the probabilistic graphical formalization and propose an order-theoretic perspective to understand the structures in natural language processing. Rather than considering structures as graphs, we view them as _partial orderings_ of the tokens in the input strings. The directed arcs between tokens are cast as a transitive order relation \(\prec\), while the other token pairs with no arcs in between are considered incomparable in the partial ordering. Since the set of real numbers \(\mathbb{R}\) is naturally ordered by the less than relation \(<\), we use real numbers as the proxy for determining the partial ordering. 
We predict real numbers for each token, and compare the \(<\) (less than) relation between the real numbers to decide whether two tokens can be connected by \(\prec\). We further provide an efficiency guarantee based on the well-established results from order theory that partial orders satisfying particular conditions can be represented as the intersection of as few as 2 total orders. Importantly, most structures in natural language, including trees, alignments, and set partitions, satisfy these conditions. Fig. 1 gives an illustrative example of dependency parsing, in which the structure being modeled is a tree. A great advantage of this order-theoretic perspective of structures is that it allows us to consider a structure _as a whole_ rather than a simple collection of sub-structures. The real numbers predicted for each token not only represent the structural information of the token itself, but also contribute to the construction of the global structure. For instance, if large real-number values are assigned to the token \(\mathbf{w}_{i}\), it will be more likely for other tokens to point arcs at \(\mathbf{w}_{i}\). It eliminates the need to compute local features for sub-structures. Our method _only_ utilizes token-level computation, which guarantees its linear time and space complexity. Experimental results show that our method achieves \(95.4\) LAS and \(96.9\) UAS by using an intersection of only 2 total orders, \(95.7\) LAS and \(97.1\) UAS with only 4 on the English Penn Treebank dependency parsing benchmark. Our method also achieves \(79.2\) F1 on the English OntoNotes coreference resolution benchmark, which is on par with state of the art. ## 2 Efforts in Linear-Complexity Structured Prediction A recent line of work studies the linearization of structures by framing them as tagging Kitaev and Klein (2020); Amini and Cotterell (2022); Amini et al. (2023). The general pipeline for tagging consists of first linearizing the structured object into a sequence of tags, and then aligning the tags to the input sequence. The benefit of framing structured prediction as tagging is that any pretrained model can then be finetuned to classify words and predict their tags, either in parallel or autoregressively. During inference, the predicted tag sequence, or the predicted probability distribution over tags, needs to be converted back to a structured object. Previous work provides efficient algorithms with linear time and space complexity for both linearization and inference. Although taggers show competitive performance to state-of-the-art models in both constituency and projective dependency parsing, some serious theoretic limitations lie in them. First, the underlying structures of natural language, e.g., syntax, semantics, and discourse, are often _not_ aligned with the surface form of a sequence due to the existence of syntactic movement Chomsky (1955). In other words, it is often impossible to factorize a global structure into token-level tags. For instance, Amini et al.'s (2023) method cannot tackle non-projective dependency trees. More importantly, the usage of discrete tags from finite tag spaces severely limits the expressiveness of these works. The maximum number of structures that a discrete tag sequence can represent is at most \(\mathcal{O}\big{(}|\mathcal{T}|^{N}\big{)}\), where \(\mathcal{T}\) is a finite tag space and \(N\) is the length of the tag sequence. Such spaces do not have the capacity to model many structures of interest in NLP. 
For instance, the space of non-projective dependency trees of \(N\) tokens has a cardinality of \(\mathcal{O}\big{(}N^{N-2}\big{)}\)Cayley (1889). As \(N\) increases, it is obvious that \(\mathcal{O}\big{(}|\mathcal{T}|^{N}\big{)}<\mathcal{O}\big{(}N^{N-2}\big{)}\) Figure 1: An illustration of our method. To solve a structured prediction problem, represented as a directed graph in Fig. 0(a), we first convert it into a token-split structure (see §3.3) in Fig. 0(b), which is a partial ordering. Then, two real numbers are predicted for each vertex in Fig. 0(b). The positions of vertices in Fig. 0(c) on the number lines indicate the real numbers the vertices are mapped to. The vertices are sorted twice accordingly, resulting in a realizer of 2 total orderings, each possessing a set of candidate edges. The exact set of desired edges in the original structure can be restored from the intersection of \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) (see §3.6). An adequately large output sequence space is the prerequisite for devising a generic way to cast complex structure prediction problems into tagging or sequence modeling in general. Confining the length of the target sequence in a tractable scale of \(\mathcal{O}(N)\), the only way to enable the modeling of more complex structures is to adapt the tag space \(\mathcal{T}\). Empirically, if \(\mathcal{T}\) is large enough, it will still be possible to model arbitrarily complex structures as sequence modeling. In our work, we choose \(\mathcal{T}\) to be \(\mathbb{R}\), the set of real numbers, which is uncountably infinite with a cardinality of \(\aleph_{1}\)(Jech, 2007, SS4). Besides, from a practical perspective, real numbers can be efficiently expressed as floating-point numbers by machine learning models. Meanwhile, these sequences of real numbers should be directly interpretable. In other words, they should _not_ be considered as feature vectors or hidden representations of the input string. They are _dense tags_ in the sense that there exists a _deterministic_ and _non-parameterized_ that maps the real number sequences to structures. In SS3 and SS4, we focus on finding such mapping by exploring the order-theoretic properties of the structures being modeled and the set of real numbers \(\mathbb{R}\). Intuitively speaking, we view two vertices connected by an arc as real numbers that satisfy the less-than relation \(<\). We first draw the connections between structures in NLP and partially ordered sets. Then, we show that partially ordered sets can further be factorized into intersections of total orders, as illustrated in Fig. 1. We show, in the end, that for certain structures, the intersection of 2 total orders is sufficient, implying that only a constant number of real numbers need to be predicted for each token, thereby concluding our effort to find a general linear-complexity method for structured prediction. ## 3 An Order-Theoretic View of Structured Prediction ### Overview Previous structured prediction methods often follow a straightforward design pattern--extracting features of the desired sub-structure, e.g., spans in phrase structure grammars and arcs between words in dependency structures, then solving a decision problem of whether the sub-structure exists based on the extracted features (Weiss and Taskar, 2010). Many state-of-the-art structured prediction models, for instance, the biaffine dependency parser (Dozat and Manning, 2017), follow this design pattern by computing kernelized inner product between every pair of words. 
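As a toy illustration of the idea sketched above (and only of the idea: the quadratic pair enumeration below is used for readability, whereas the decoding developed later avoids it), two predicted real numbers per token define two total orders, and an arc is kept only when both orders agree; the scores are made-up values.

```python
import numpy as np

def arcs_from_realizer(scores):
    """Toy decoding: `scores` is a K x N array of real numbers (K total orders over
    N tokens).  An arc i -> j is kept iff token i precedes token j in every order,
    i.e. the pair lies in the intersection of the K total orders."""
    K, N = scores.shape
    return [(i, j) for i in range(N) for j in range(N)
            if i != j and bool(np.all(scores[:, i] < scores[:, j]))]

# made-up scores for a 4-token string with K = 2 total orders
scores = np.array([[0.1, 0.7, 0.3, 0.9],
                   [0.4, 0.8, 0.2, 0.6]])
print(arcs_from_realizer(scores))      # [(0, 1), (0, 3), (2, 1), (2, 3)]
```

With these hypothetical scores, the intersection of the two total orders keeps only the arcs on which both orderings agree, which is exactly the partial ordering that the following section formalizes.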
Depending on the structures being modeled, the time and space consumption of computing the features for sub-structures ranges from \(\mathcal{O}(N)\) in tagging to \(\mathcal{O}\big{(}N^{4}\big{)}\) in coreference resolution (Lee et al., 2017). Therefore, exact inference under this design pattern is often prohibitive due to the high computational cost. However, such a design pattern often introduces _redundant_ computation, especially for structured prediction in NLP, where the structures of interest tend to be sparse and subject to global constraints.1 For instance, in dependency grammars, each word has one and only one head word for which it serves as a dependent (Kubler et al., 2009), and a unique root exists in every sentence. While in context-free constituency parsing, a parse tree in Chomsky normal form is always binary. When computing local features for sub-structures, such global constraints are largely ignored. Footnote 1: In other words, these characteristics are the _inductive bias_ in structured prediction. In addition to the \(\mathcal{O}\big{(}N^{2}\big{)}\) number of features computed, the computational cost of the algorithms (Kirchhoff, 1947; Tutte, 1948; Kasami, 1966; Younger, 1967; Chu and Liu, 1965; Edmonds et al., 1967, _inter alia_) to construct optimal global structure from sub-structures or computing the sum of the scores of all structures is also prohibitive as \(N\) increases. Such algorithms often take over \(\mathcal{O}\big{(}N^{2}\big{)}\) runtime and space to run chart-based dynamic programs or store adjacency matrices. Meanwhile, they are very hard to be implemented in parallel due to the dependencies across sub-structures. In this section, we describe an order-theoretic treatment for structured prediction. Specifically, we view structures as partially ordered sets--sets that are equipped with a **transitive** relation \(\prec\) but not all elements are comparable. Tokens connected by directed arcs are viewed as elements satisfying the relation \(\prec\), By doing so, the set of real numbers \(\mathbb{R}\) can be used as a proxy for modeling the relations between the tokens in an input string. We predict real numbers for each token, and compare the \(<\) (less than) relation between the real numbers to decide whether there is a relation between tokens, thus eliminating the need to compute token pair representations explicitly. Beyond representing the tokens' structural information, the real numbers predicted for each token also contribute to the global structure. For instance, if large real-number values are assigned to the token \(\mathbf{w}_{i}\), it will be more likely for arcs to point at and less likely to start from \(\mathbf{w}_{i}\). As we will show later, such order-theoretic dynamics bring a substantial improvement in efficiency over the probabilistic graphical model formalization of structured prediction, in which the local features computed for sub-structures hardly contribute to identifying the global structure. ### Structures as Directed Graphs Let \(\Sigma\) be an alphabet, a finite set of natural language tokens, and let \(\mathbf{w}=\mathbf{w}_{1}\mathbf{w}_{2}\cdots\mathbf{w}_{N}\in\Sigma^{*}\) be a string, i.e., an ordered list of tokens. Empirically speaking, structured prediction is the task of assigning a structure, such as a tag sequence or a parse tree, to a given input string \(\mathbf{w}\). 
Its major goal is to model the **homogeneous relation2** on the set of tokens \(\{\mathbf{w}_{1},\mathbf{w}_{2},\cdots,\mathbf{w}_{N}\}\) in the input string. The output space is defined by the input itself, in contrast to the external label spaces in other tasks such as classification or language generation. To handle the heterogeneity of structures, we give a formal definition using the notations from graph theory. Footnote 2: A homogeneous relation on a set \(\mathcal{X}\) is a binary relation between two elements in \(\mathcal{X}\). **Definition 1** (Structure).: _A **structure** over the string \(\mathbf{w}=\mathbf{w}_{1}\mathbf{w}_{2}\cdots\mathbf{w}_{N}\) is a labeled directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\), where \(\mathcal{V}=\{\mathbf{w}_{1},\mathbf{w}_{2},\cdots,\mathbf{w}_{N}\}\) is the unordered set of input tokens in \(\mathbf{w}\), \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\times R\) is the set of labeled arcs, and \(R\) is a finite set of relation labels. Cycles and self-loops are allowed. Token-level labels are attached to self-loop arcs._ Throughout the rest of this paper, Def. 1 will be the most generic characterization of structures over natural language. Typical structured prediction problems can be naturally subsumed under it. We give three examples of structured prediction tasks formalized by Def. 1. **Example 1** (Sequence tagging).: _A sequence tagging structure can be written as \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\), where \(\mathcal{E}=\{(\mathbf{x},\mathbf{x},r)\mid\mathbf{x}\in\mathcal{V}\}\), \(R\) is the set of token-level tags (e.g., part-of-speech tags)._ **Example 2** (Dependency parsing; Kubler et al., 2009).: _A dependency structure can be written as \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\), where \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\times R\), and \(R\) is the set of dependency relation types. If \((\mathbf{x},\mathbf{y},r)\in\mathcal{E}\) then \(\forall r^{\prime}\neq r,(\mathbf{x},\mathbf{y},r^{\prime})\notin\mathcal{E}\)._ **Example 3** (Coreference).: _A coreference structure can be written as \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\), where \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\times R\), and \(R=\{r,r^{\prime}\}\)._ \((\mathbf{x},\mathbf{y},r)\in\mathcal{E}\) _if and only if the textual span \(\mathbf{w}_{\mathbf{x}:\mathbf{y}}\) is a mention of an entity. \((\mathbf{x}_{1},\mathbf{x}_{2},r^{\prime})\in\mathcal{E}\land(\mathbf{y}_{1},\mathbf{y}_{2},r^{\prime})\in\mathcal{E}\) if and only if the textual spans \(\mathbf{w}_{\mathbf{x}_{1}:\mathbf{y}_{1}}\) and \(\mathbf{w}_{\mathbf{x}_{2}:\mathbf{y}_{2}}\) corefer to each other._ ### From Directed Graphs to Posets The global properties and sparsity of structures have been under-explored in previous structured prediction work. The goal of our order-theoretic treatment is to re-introduce them by organizing structures with the notion of order. **Definition 2** (Order; Hausdorff, 1914).: _An order over a set \(\mathcal{V}\) is a homogeneous relation \(\prec\) such that the following hold for all \(\mathbf{x},\mathbf{y},\mathbf{z}\in\mathcal{V}\):_ 1. _irreflexivity_: \(\mathbf{x}\not\prec\mathbf{x}\), 2. _asymmetry_: \(\mathbf{x}\prec\mathbf{y}\implies\mathbf{y}\not\prec\mathbf{x}\), 3. 
_transitivity__:_ \(\mathbf{x}\!\prec\!\mathbf{y}\wedge\mathbf{y}\!\prec\!\mathbf{z}\implies\mathbf{x}\!\prec\!\mathbf{z}\)_._ The structural sparsity of natural language is exhibited in that each token in a string only interacts with very few other tokens. In a particular structure, such as a dependency graph or coreference graph, most pairs of tokens have no relations. Such sparsity, from an order-theoretic point of view, can be characterized by incomparability in a partially ordered set (Hausdorff, 1914). **Definition 3** (Partially ordered structure).: _A **partially ordered structure**\(\mathcal{P}=(\mathcal{V},\mathcal{E},\prec)\) is a directed graph such that the homogeneous relation \(\prec\) is an order._ We write \((\mathbf{x},\mathbf{y},\prec)\in\mathcal{E}\) as a equivalent expression of \(\mathbf{x}\!\prec\!\mathbf{y}\). And use the set-theoretic abbreviations **poset** (partially ordered set) and **toset** (totally ordered set) in the context of structured prediction hereafter. Removing Undesired Transitivity and Enabling Cycles.Partial order is transitive. I.e., \(\mathbf{x}\!\prec\!\mathbf{y}\land\mathbf{y}\!\prec\!\mathbf{z}\) implies \(\mathbf{x}\!\prec\!\mathbf{z}\), which, however, does not hold in the structures characterized by the directed graph formalization in Def. 1. Also, for the generality of the notations, we do not require the graph to be acyclic in Def. 1. Posets cannot represent cyclic structures due to the transitivity requirement. We introduce the token-split structure, which enables cycles and removes redundant edges introduced by transitivity in posets. An example is displayed in Fig. 0(b). **Definition 4** (Token-split structure).: _A token-split structure given a structure \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\) and a relation \(r\in R\) is a structure \(\mathcal{G}_{r}=\left(\overline{\mathcal{V}},\mathcal{E}_{r},\{r\}\right)\) such that_ 1. \(\overline{\mathcal{V}}\stackrel{{\text{\tiny def}}}{{=}}\mathcal{V }\cup\mathcal{V}^{\prime}\)_,_ 2. \(\mathcal{V}\cap\mathcal{V}^{\prime}=\varnothing\) _,_ 3. _there exists a bijective mapping_ \(\mathbf{f}\colon\mathcal{V}\to\mathcal{V}^{\prime}\)_,_ 4. \(\mathcal{E}_{r}=\{(\mathbf{x},\mathbf{f}(\mathbf{y}),r)\mid(\mathbf{x},\mathbf{y},r)\in \mathcal{E}\}\,.\)__ In other words, a token-split structure maps all the edges from the original structure, _including self-loops_, into a bipartite graph in which the edges are oriented from \(\mathcal{V}\) to \(\mathcal{V}^{\prime}\). Now, we show that token-split structures given by Def. 4 are posets. **Theorem 1**.: _Token-split structures are posets._ Proof.: We show that a token-split structure \(\mathcal{G}_{r}=\left(\overline{\mathcal{V}},\mathcal{E}_{r},\{r\}\right)\) satisfies the 3 conditions in Def. 2. 1. irreflexivity: By 4.4, for all \(\mathbf{x}\in\overline{\mathcal{V}}\), \(\mathbf{x}\not\prec\mathbf{x}\). 2. asymmetry: Suppose that \(\exists\mathbf{x},\mathbf{y},\mathbf{x}\neq\mathbf{y}\), s.t. \(\mathbf{x}\prec\mathbf{y}\land\mathbf{y}\prec\mathbf{x}\). By 4.2 and 4.4, \(\mathbf{x},\mathbf{y}\in\mathcal{V}\cap\mathcal{V}^{\prime}=\varnothing\). Thus, \(\mathbf{x}\prec\mathbf{y}\implies\mathbf{y}\not\prec\mathbf{x}\). 3. transitivity: \(\mathbf{x}\prec\mathbf{y}\land\mathbf{y}\prec\mathbf{z}\) cannot hold by 4.4. 
Since \(\mathbf{x}\prec\mathbf{y}\) implies \(\mathbf{x}\in\mathcal{V}\land\mathbf{y}\in\mathcal{V}^{\prime}\), while \(\mathbf{y}\prec\mathbf{z}\) implies \(\mathbf{y}\in\mathcal{V}\land\mathbf{z}\in\mathcal{V}^{\prime}\), a contradiction occurs due to \(\mathbf{y}\in\mathcal{V}\cap\mathcal{V}^{\prime}=\varnothing\) by 4.2. \(\mathbf{x}\prec\mathbf{y}\land\mathbf{y}\prec\mathbf{z}\implies\mathbf{x}\prec\mathbf{z}\) holds since the antecedent is always false. Thus, token-split structures are partially ordered. Hereafter, we write any token-split structure \(\mathcal{G}_{r}=\left(\overline{\mathcal{V}},\mathcal{E}_{r},\prec_{\mathrm{r}}\right)\) using the order notation \(\prec_{\mathrm{r}}\) since \(\mathcal{G}_{r}\) is a poset, where \(\prec_{\mathrm{r}}\) is induced by \(r\in R\) from the original structure. Each relation \(r\in R\) induces one corresponding order \(\prec_{\mathrm{r}}\). Thus, a structure \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\) can be converted to \(|R|\) independent token-split structures \(\mathcal{G}_{r}=\left(\overline{\mathcal{V}},\mathcal{E}_{r},\prec_{\mathrm{r}}\right),\forall r\in R\). To restore the arcs in the original structure \(\mathcal{G}=(\mathcal{V},\mathcal{E},R)\) from its corresponding token-split structure \(\mathcal{G}_{r}\), the following equation holds \[\mathcal{E}=\bigcup_{r\in R}\{(\mathbf{x},\mathbf{f}^{-1}(\mathbf{y}),r)\mid(\mathbf{x},\mathbf{y},\prec_{\mathrm{r}})\in\mathcal{E}_{r}\}.\] ### Embedding Posets into Tosets An important theoretical foundation of our method is that partial orders can be represented by an intersection of total orders, which enables us to use real numbers as a proxy to determine the partial ordering of tokens. A total order is defined as follows. **Definition 5** (Totally ordered structure).: _A totally ordered structure (toset) \(\mathcal{T}=(\mathcal{V},\mathcal{E},\prec)\) is a partially ordered structure that satisfies \(\forall\mathbf{x},\mathbf{y}\in\mathcal{V},(\mathbf{x},\mathbf{y},\prec)\in\mathcal{E}\lor(\mathbf{y},\mathbf{x},\prec)\in\mathcal{E}\)._ **Theorem 2** (Szpilrajn's extension theorem; Szpilrajn, 1930).: _Every partially ordered structure is contained in a totally ordered structure. I.e., for a partially ordered structure \(\mathcal{G}=(\mathcal{V},\mathcal{E},\prec)\), there exists a totally ordered structure \(\mathcal{T}=(\mathcal{V},\mathcal{E}^{\prime},\prec)\), s.t. \(\mathcal{E}\subseteq\mathcal{E}^{\prime}\)._ Theorem 2 ensures that every poset can be embedded in some toset, in the sense that the toset _contains_ all the relation triples in the poset. ### Posets as Intersections of Tosets Posets can be represented as intersections of a collection of tosets. Such a collection of tosets is called a **realizer**. **Definition 6** (Realizer).: _A realizer \(\mathcal{K}(\mathcal{P})=\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{k}\}\), where \(\mathcal{T}_{i}=(\mathcal{V},\mathcal{E}_{i},\prec)\;\;\forall i\in[k]\), of a partially ordered structure \(\mathcal{P}=(\mathcal{V},\mathcal{E},\prec)\) is a set of totally ordered structures whose intersection constructs the partially ordered structure. Formally, we have \(\mathcal{E}=\bigcap_{i\in[k]}\mathcal{E}_{i}\)._ **Theorem 3** (Existence of realizers; Dushnik and Miller, 1941, Thm. 2.32).: _There exists a realizer \(\mathcal{K}(\mathcal{P})\) for every partially ordered structure \(\mathcal{P}=(\mathcal{V},\mathcal{E},\prec)\)._ The complexity of a poset can be characterized by its order dimension. 
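Before turning to order dimension, the constructions introduced so far can be made concrete with a small sketch. The following toy example is our own illustration (all vertex names and numbers are hypothetical, not from the paper): it builds the token-split structure of Def. 4 for a tiny single-relation graph and checks that a hand-picked realizer of 2 total orders, each encoded by real numbers, recovers exactly the original edges by intersection (Def. 6).

```python
# Toy illustration of Def. 4 (token-split) and Def. 6 (realizer).
# All names and numbers below are illustrative assumptions, not from the paper.

# A tiny structure G = (V, E, R) with a single relation label "dep".
V = ["w1", "w2", "w3"]
E = {("w2", "w1", "dep"), ("w2", "w3", "dep")}   # w2 heads w1 and w3

# Token-split structure (Def. 4): duplicate every vertex and redirect arcs
# so that every edge points from an original vertex to a primed copy.
f = {v: v + "'" for v in V}                      # bijection V -> V'
E_r = {(x, f[y]) for (x, y, r) in E}             # {(w2, w1'), (w2, w3')}

# A realizer with k = 2 total orders, each given by real numbers:
# x precedes y in order i  iff  score_i[x] < score_i[y].
score_1 = {"w1": 5.0, "w2": 0.0, "w3": 0.5, "w1'": 1.0, "w2'": -1.0, "w3'": 2.0}
score_2 = {"w1": 0.5, "w2": 0.0, "w3": 5.0, "w1'": 1.0, "w2'": 3.0, "w3'": 2.0}

def edges_of(score):
    """Candidate arcs licensed by one total order (original -> primed only)."""
    return {(x, y) for x in V for y in f.values() if score[x] < score[y]}

# The intersection of the two total orders recovers exactly the token-split edges.
recovered = edges_of(score_1) & edges_of(score_2)
assert recovered == E_r, recovered

# Undo the token split to get back the original arcs.
f_inv = {v2: v1 for v1, v2 in f.items()}
original = {(x, f_inv[y], "dep") for (x, y) in recovered}
assert original == E
print(original)
```

Each total order on its own over-generates candidate edges; only their intersection pins down the sparse target structure, which is exactly the property the order dimension quantifies.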
**Definition 7** (Order dimension; Dushnik and Miller, 1941).: _Let \(\mathcal{P}=(\mathcal{V},\mathcal{E},\prec)\) be a partially ordered structure. The **order dimension**\(D(\mathcal{P})\) of a partially ordered structure is the minimal number of tosets required to represent \(\mathcal{P}\) as an intersection._ ### Efficiency Guarantees In this section, we introduce the theorems of order dimension, which guarantee the efficiency of an order-theoretic treatment for structured prediction. These efficiency guarantees come from a series of works on order and lattice theory (Dushnik and Miller, 1941; Hiraguchi, 1955; Birkhoff, 1967; Kelly and Rival, 1975; Platt, 1976). It is important to note that _not_ all posets can be represented as an intersection of a constant number of tosets. In fact, testing whether the order dimension of a partial order \(\mathcal{P}\) is at most \(k,\forall k\geq 3\) is NP-complete (Yannakakis, 1982). However, it is possible for us to show that most of the desired structures in natural language processing [21]--including trees, equivalence classes (i.e., set partitioning), and alignment (i.e., bipartite matching)--can be represented as the intersection of 2 tosets. From an order-theoretic point of view, a poset is 2-dimensional if and only if its **complete lattice embedding** has a **planar Hasse diagram**. I.e., it can be drawn on a plane without any crossing edges. We refer interested readers to MacNeille (1937) and Birkhoff (1967, Chapter V), which discuss the construction of complete lattice embeddings. **Theorem 4** (Baker et al., 1972).: _Suppose \(\mathcal{P}=(\mathcal{V},\mathcal{E},\prec)\) is a poset. Then the following are equivalent:_ 1. \(D(\mathcal{P})\leq 2\)_._ 2. _The complete lattice embedding of_ \(\mathcal{P}\) _has a planar Hasse diagram._ Although it is difficult in practice to compute the complete lattice embedding for a poset [19], Theorem 4 can still provide an empirical characterization of the class of structures that can be efficiently represented. According to the Euler characteristic, the average degree of a vertex in a planar graph cannot exceed \(6\) [21, §6.1.23]. I.e., the posets that can be represented as an intersection of \(2\) tosets should be sparse enough. Fortunately, this is often the case in natural language. Such a phenomenon is closely related to what is termed **valency** by Tesniere (1959, Part 1, Book D). The number of actants (i.e., arguments) needed to implement the function of a word is a property of the word itself--a constant that does not change with the context. In practice, the valency of a word is often a very small constant. We give a more practical efficiency guarantee based on a method to construct large posets from smaller posets. **Definition 8** (Series-parallel posets; Valdes et al., 1979).: _Series-parallel posets are defined as follows:_ 1. _A poset having a single vertex and no edges is series-parallel._ 2. _If posets_ \(\mathcal{P}_{1}=(\mathcal{V}_{1},\mathcal{E}_{1},\prec)\) _and_ \(\mathcal{P}_{2}=(\mathcal{V}_{2},\mathcal{E}_{2},\prec)\) _are series-parallel, so is either of the posets constructed by the following operations:_ 1. _Parallel composition:_ \(\mathcal{P}_{\mathfrak{p}}=(\mathcal{V}_{1}\cup\mathcal{V}_{2},\mathcal{E}_{1}\cup\mathcal{E}_{2},\prec)\)_._ 2. 
_Series composition:_ \(\mathcal{P}_{\mathfrak{s}}=(\mathcal{V}_{1}\cup\mathcal{V}_{2},\mathcal{E}_{1}\cup\mathcal{E}_{2}\cup(\mathcal{N}_{1}\times\mathcal{M}_{2}),\prec)\)_,_ _where_ \(\mathcal{N}_{1}\) _is the set of sources_3 _of_ \(\mathcal{P}_{1}\) _and_ \(\mathcal{M}_{2}\) _the set of sinks of_ \(\mathcal{P}_{2}\)_._ Footnote 3: Sources and sinks refer to the vertices without incoming arcs and without outgoing arcs, respectively. **Theorem 5** (Series-parallel posets are 2-dimensional; Valdes et al., 1979).: _The dimension of series-parallel posets is at most 2._ Theorem 5 provides the guarantee that most structures in natural language processing can be represented as the intersection of 2 tosets. Since most structures of interest in NLP, such as trees and forests (and thereby alignments and set partitionings), can be subsumed under series-parallel posets, they have an order dimension of at most 2. The following propositions show that trees and forests are series-parallel, and thus are 2-dimensional. **Proposition 1** (Trees are 2-dimensional; Lawler, 1978).: _Directed tree posets are series-parallel. Therefore their order dimension is at most 2._ **Proposition 2** (Forests are 2-dimensional).: _Forests are parallel compositions of trees. Thus they are series-parallel, and their order dimension is at most 2._ ## 4 Parameterizing Posets with Neural Networks In this section, we describe the core technical contribution of our work by showing how to model posets with neural networks and pretrained language models. We propose an end-to-end method that implements the order-theoretic treatment for modeling structures. Specifically, we define a parameterized realizer of Def. 6 and an objective function for training the realizer to model the token-split structures. ### Neuralized Total Order **Definition 9** (Functional realizer).: _A functional realizer of a poset \(\mathcal{P}=(\mathcal{V},\mathcal{E},\prec)\) is a set of mappings \(\boldsymbol{\mathcal{F}_{\theta}}=\{f_{1},f_{2},\cdots,f_{k}\}\), where \(\boldsymbol{\theta}\) is the set of learnable parameters, and \(k\in\mathbb{N}\) is a hyper-parameter of the realizer. \(f_{i}\colon\mathcal{V}\to\mathbb{R}\) maps each vertex in the input structure to a real number._ _We overload \(\boldsymbol{\mathcal{F}_{\theta}}\) with a mapping \(\boldsymbol{\mathcal{F}_{\theta}}\colon\mathcal{V}\to\mathbb{R}^{k}\), defined as \(\boldsymbol{\mathcal{F}_{\theta}}(\boldsymbol{x})=\big[f_{1}(\boldsymbol{x}),\cdots,f_{k}(\boldsymbol{x})\big]^{\top}\). For modeling structures in NLP, we choose \(k\geq 2\)._ The set of real numbers \(\mathbb{R}\) is totally ordered, in which the order is given by the \(<\) (less than) relation. Each \(f_{i}\in\mathbf{\mathcal{F}_{\theta}}\) induces a total order \(\mathcal{T}_{i}=(\mathcal{V},\{(\mathbf{x},\mathbf{y},\prec)\mid\forall\mathbf{x},\mathbf{y}\in\mathcal{V},f_{i}(\mathbf{x})<f_{i}(\mathbf{y})\},\prec)\). The functional realizer assigns \(k\) total orderings \(\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{k}\}\) to the input string. During decoding, \(\mathbf{x}\prec\mathbf{y}\) holds in \(\mathcal{P}\) if and only if \(\mathbf{x}\prec\mathbf{y}\) holds in \(\mathcal{T}_{i},\forall i\in[k]\). Implementing Def. 9 with neural networks is straightforward. To obtain \(f_{i}(\mathbf{x})\) and \(f_{i}(\mathbf{x}^{\prime})\), where \(\mathbf{x},\mathbf{x}^{\prime}\) are two vertices introduced by the token-split formulation (Def. 
4) corresponding to the same token \(\mathbf{w}_{\mathbf{x}}\) in the input, we apply two independent linear projections on the contextualized representation of \(\mathbf{w}_{\mathbf{x}}\)4 given by a pretrained model parameterized by \(\mathbf{\theta}\). In total, we predict \(2k\) real numbers for each input token. The extra memory consumption in addition to the pretrained model is \(\mathcal{O}(2kN)\). Footnote 4: If a word consists of more than one subword due to tokenization, we apply the projection to the representation of the last subword. ### Neuralized Intersection To enable end-to-end training, we need a differentiable computation of the intersection of tosets. In a poset \(\mathcal{P}\), given its functional realizer \(\mathbf{\mathcal{F}_{\theta}}=\{f_{1},f_{2},\cdots,f_{k}\}\), \((\mathbf{x},\mathbf{y},\prec)\in\mathcal{E}\) if and only if \[\bigwedge_{i\in[k]}f_{i}(\mathbf{x})<f_{i}(\mathbf{y}) \tag{1}\] \[\Longleftrightarrow\max_{i\in[k]}\{f_{i}(\mathbf{x})-f_{i}(\mathbf{y})\}<0 \tag{2}\] We define the following function as a smooth approximation to Eq. (2). **Definition 10** (Approximate maximum).: _The approximate max function \(\operatorname{approxmax}(\cdot\,;\tau)\colon\mathbb{R}^{k}\to\mathbb{R}\) with temperature hyperparameter \(\tau\) is a convex and differentiable approximation to the \(\max\{\cdot\}\) function, defined as_ \[\operatorname{approxmax}(\mathbf{h};\tau)=\log\left(\sum_{i\in[k]}\exp\!\left(\frac{\mathbf{h}_{i}}{\tau}\right)\right)\] _where \([k]=\{j\in\mathbb{N}\mid j<k\}\)._ _As \(\tau\to 0^{+}\), \(\operatorname{approxmax}(\mathbf{x};\tau)\to\max\{\mathbf{x}\}\).5 We choose \(\tau=1\) by default._ Footnote 5: \(\tau\to 0^{+}\) denotes the limit from above. **Definition 11** (Pairwise function).: _A pairwise function \(F_{\mathbf{\theta}}\colon\mathcal{V}\times\mathcal{V}\to\mathbb{R}\) is defined as_ \[F_{\mathbf{\theta}}(\mathbf{x},\mathbf{y})=\operatorname{approxmax}(\mathbf{\mathcal{F}_{\mathbf{\theta}}}(\mathbf{x})-\mathbf{\mathcal{F}_{\mathbf{\theta}}}(\mathbf{y})). \tag{3}\] Def. 11 provides an efficient way to model token-pair interactions. Empirically, the smaller \(F_{\mathbf{\theta}}(\mathbf{x},\mathbf{y})\) is, the more likely an arc from \(\mathbf{x}\) to \(\mathbf{y}\) exists. In contrast to the self-attention (Vaswani et al., 2017), biaffine (Dozat and Manning, 2017), or feedforward modules (Lee et al., 2017), Def. 11 does not require any computation of pairwise features. ### Objective Function We define the training objective of our model, whose goal is to encourage the model to make decisions that comply with the order constraints enforced by the structures, described by Eq. (2). Given the token-split formalization \(\mathcal{P}=(\mathcal{V}\cup\mathcal{V}^{\prime},\mathcal{E},\prec)\) of a structure, the training objective is to minimize the following negative log-likelihood: \[\mathcal{L}(\mathbf{\theta})=-\bigg(\sum_{\mathbf{x}\prec\mathbf{y}}\log\sigma(-F_{\mathbf{\theta}}(\mathbf{x},\mathbf{y}))+\sum_{\mathbf{x}\not\prec\mathbf{y}}\log\sigma(F_{\mathbf{\theta}}(\mathbf{x},\mathbf{y}))\bigg) \tag{4}\] ## 5 Experiments We report the experimental results on two representative structured prediction problems in NLP, namely dependency parsing and coreference resolution. The graph-theoretic definitions of these tasks are given in examples 2 and 3. We formulate them as posets following the procedure described in §3, and apply the neural method described in §4 to model the posets. 
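As a concrete reference point for §4, the following is a minimal sketch, written by us under stated assumptions rather than taken from the authors' released code, of the functional realizer (Def. 9), the pairwise function (Def. 11), and the objective of Eq. (4) in PyTorch. The pretrained encoder is stubbed out with a plain embedding layer, and all layer names, dimensions, and the toy arcs are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FunctionalRealizer(nn.Module):
    """Minimal sketch of Defs. 9-11 and Eq. (4); hyper-parameters are illustrative."""

    def __init__(self, hidden_dim: int, k: int = 2, tau: float = 1.0):
        super().__init__()
        # Two independent projections per token: one for the original vertex copy
        # and one for its token-split (primed) copy, i.e. 2k numbers per token.
        self.head = nn.Linear(hidden_dim, k)   # scores for vertices in V
        self.dep = nn.Linear(hidden_dim, k)    # scores for vertices in V'
        self.tau = tau

    def pairwise(self, h):                      # h: (N, hidden_dim) token encodings
        fx = self.head(h)                       # (N, k), realizer values for x in V
        fy = self.dep(h)                        # (N, k), realizer values for y in V'
        diff = fx[:, None, :] - fy[None, :, :]  # (N, N, k), f_i(x) - f_i(y)
        # approxmax of Def. 10: log-sum-exp over the k orders with temperature tau.
        return torch.logsumexp(diff / self.tau, dim=-1)   # (N, N) scores F(x, y)

    def loss(self, h, arcs):                    # arcs: (N, N) 0/1 matrix, 1 iff x -> y
        score = self.pairwise(h)
        # Eq. (4): push f_i(x) < f_i(y) for all i exactly when an arc x -> y exists.
        pos = F.logsigmoid(-score) * arcs
        neg = F.logsigmoid(score) * (1.0 - arcs)
        return -(pos + neg).sum()

# Tiny usage example with a stub "encoder" (an embedding layer).
torch.manual_seed(0)
N, D = 5, 32
encoder = nn.Embedding(100, D)
realizer = FunctionalRealizer(D, k=2)
tokens = torch.randint(0, 100, (N,))
arcs = torch.zeros(N, N); arcs[1, 0] = 1.0; arcs[1, 2] = 1.0   # hypothetical tree arcs
loss = realizer.loss(encoder(tokens), arcs)
loss.backward()
print(float(loss))
```

Note that the score matrix is obtained from per-token vectors by broadcasting a difference, so no pairwise feature vectors are ever materialized, which is the efficiency argument of Def. 11.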
Our models for dependency parsing and coreference resolution follow the same design pattern. A functional realizer (Def. 9) implemented as linear projection is applied to the contextualized representation given by the pretrained language model. ### Dependency Parsing Modeling.The dependency arcs are modeled using the method proposed in this paper. For the dependency type labels, we apply a token-level linear classifier on the contextualized representation. During decoding, if a vertex \(\mathbf{y}\) has multiple incoming arcs, we keep the one that minimizes \(\max_{i\in[k]}\{f_{i}(\mathbf{x})-f_{i}(\mathbf{y})\}\). I.e., the arc starting from \(\operatorname{argmin}_{\mathbf{x}\in\mathcal{V}}\big{(}\max_{i\in[k]}\{f_{i}(\mathbf{x })-f_{i}(\mathbf{y})\}\big{)}\) is kept. Datasets.We conduct experiments on the English Penn Treebank (PTB; Marcus et al., 1993) and the Chinese Penn Treebank (CTB; Xue et al., 2005). We follow previous work Kiperwasser and Goldberg (2016); Dozat and Manning (2017) to derive the dependency annotations from the treebank annotations using the Stanford Dependency converter v3.3.0 De Marneffe and Manning (2008). During evaluation, punctuations are omitted. The dataset splits are consistent with previous work. For PTB, we follow the standard split of Marcus et al. (1993), resulting in 39,832 sentences for training, 1,700 for development, and 2,416 for testing. For CTB, we follow the split of Zhang and Clark (2008), resulting in 16,091 sentences for training, 803 for development, and 1,910 for testing. Accuracy.We train our model using XLNet-large-cased6Yang et al. (2019) for PTB and bertbase-chinese7 for CTB as the pretrained encoders. The results are reported in Tab. 1. Footnote 6: [https://huggingface.co/xlnet-large-cased](https://huggingface.co/xlnet-large-cased) Footnote 7: [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese) On PTB, our model gets nearly the same UAS and LAS performance with Yang and Tu's (2022)\(\mathcal{O}\big{(}N^{3}\big{)}\) method. The accuracy of our model is lower but still decent on CTB, compared to other much more computationally intensive state-of-the-art models. ### Coreference Resolution Modeling.Our model operates in a two-stage manner to accommodate the two relations in example 3. First, it extracts a list of entity mentions using the partial order induced by \(r\). Then, it models the partial ordering induced by \(r^{\prime}\) over the extracted mentions. The overall complexity of our coreference resolution model is still \(\mathcal{O}(N)\), since the number of mentions is \(\mathcal{O}(N)\)Liu et al. (2022). Dataset.We experiment on the CoNLL-2012 English shared task dataset OntoNotes; Pradhan et al. (2012). The OntoNotes dataset contains 2,802 documents for training, 343 for validation, and 348 for testing. Accuracy.Our model uses Longformer-large-cased8Beltagy et al. (2020) as the pretrained encoder. The experimental results are displayed in Tab. 2. Similar to the results on dependency parsing, an intersection of \(2\) total orders can already achieve reasonably high performance on coreference resolution. This provides empirical support for our assertion that most structures in NLP can be represented as the intersection of 2 total orders. Footnote 8: [https://huggingface.co/allenai/longformer-large-4096](https://huggingface.co/allenai/longformer-large-4096) When \(k=4\), the performance is comparable to Kirstain et al. 
(2021), which uses the same pretrained encoder with ours and requires \(\mathcal{O}\big{(}N^{2}\big{)}\) bi-affine product computation for token-pair scores. ## 6 Related Work ### Structured Prediction Structured prediction consists of a wide range of tasks that have been extensively studied in natural language processing. The tasks often involve the modeling of interrelated variables or outputs with structural constraints. Some representative structured prediction problems are sequence tagging Church (1988), dependency parsing Kubler et al. (2009), and coreference resolution Stede (2012). \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{PTB} & \multicolumn{2}{c}{CTB} \\ \cline{2-5} Model & UAS & LAS & UAS & LAS \\ \hline Zhou and Zhao (2019)\({}^{*}\) & 97.0 & 95.4 & 91.2 & 89.2 \\ Mrini et al. (2020)\({}^{\star}\) & 97.4 & 96.3 & 94.6 & 89.3 \\ \hline Chen and Manning (2014) & 91.8 & 89.6 & 83.9 & 82.4 \\ Dozat and Manning (2017) & 95.7 & 94.1 & 89.3 & 88.2 \\ Yang and Tu (2022)\({}^{\#}\) & 97.4 & 95.8 & 93.3 & 92.3 \\ \hline \(\mathsf{Ours}\) (\(k=2\)) & 96.9 & 95.4 & 86.4 & 84.3 \\ \(\mathsf{Ours}\) (\(k=4\)) & 97.1 & 95.7 & 90.5 & 88.1 \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental results on PTB and CTB. \({}^{*}\) indicates usage of extra constituency annotation. \({}^{\#}\) is our re-implementation using the same pretrained encoder as ours. \(k\) is the dimension of the realizer used in our method. \begin{table} \begin{tabular}{c c c c} \hline \hline & Avg. P & Avg. R & Avg. F1 \\ \hline Lee et al. (2017) & 69.9 & 64.7 & 67.2 \\ Lee et al. (2018) & 73.9 & 72.0 & 73.0 \\ Fei et al. (2019) & 77.9 & 70.2 & 73.8 \\ Kantor and Globerson & 76.1 & 77.1 & 76.6 \\ Joshi et al. (2019) & 78.4 & 75.4 & 76.9 \\ Joshi et al. (2020) & 80.1 & 78.9 & 79.6 \\ Xu and Choi (2020) & 80.3 & 79.5 & 79.9 \\ Kirstain et al. (2021) & 81.2 & 79.4 & 80.3 \\ \hline \hline \(\mathsf{Ours}\) (\(k=2\)) & 75.8 & 73.7 & 74.7 \\ \(\mathsf{Ours}\) (\(k=4\)) & 79.3 & 79.0 & 79.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Experimental results on the OntoNotes English coreference resolution benchmark. \(k\) is the dimension of the realizer used in our method. Typically, structured prediction can be formulated as learning and inference of probabilistic graphical models (Smith, 2011, SS2.2). The key idea is to represent the probability distribution over the output space using a graph, in which each vertex corresponds to a random variable, and each edge corresponds to dependence relations between random variables. ### Graph-based Parsing Graph-based parsers, or arc-factored parsers, construct graphs by scoring all possible arcs between each pair of words. They then use maximum spanning tree (MST) finding algorithms, such as Kruskal's algorithm (Kruskal, 1956) and Tarjan's algorithm (Tarjan, 1977), at inference time to build the valid dependency trees with maximum score in \(\mathcal{O}\big{(}N^{2}\big{)}\)(McDonald et al., 2005). Kiperwasser and Goldberg (2016) present a neural graph-based parser that uses the same kind of attention mechanism as (Bahdanau et al., 2015) for machine translation. While offering exact inference algorithms, graph-based parsing is neither fast nor parallelizable. ### Tagging-based Parsing Inspired by Bangalore and Joshi's (1999)'s seminal work on _supertagging_, a recent line of work aims to utilize pretrained models and parse dependency trees by inferring tags for each word in the input sequence. Li et al. 
(2018); Kiperwasser and Ballesteros (2018) predict the relative position of the dependent with respect to its parent as the tag. They then use beam tree constraints (Lee et al., 2016) to infer valid dependency trees. Strzyz et al. (2019) provides a framework for analyzing similar tagging schemes. Gomez-Rodriguez et al. (2018) infers a chunk of actions in a transition-based system for each word in the sequence. Although these works have demonstrated potential in this area, none achieved state-of-the-art results compared to custom architectures developed for graph- or transition-based parsing. Additionally, the output space, or size of the tag set, is unrestricted, which limits the efficiency of this approach. ### Ordinal Regression Ordinal regression is a family of problems that involve ranking a set of objects. Unlike classification, the label spaces in ordinal regression exhibit some natural ordering in their elements (McCullagh, 1980). For instance, in information retrieval, a ranking model sorts a set of documents typically according to the document's relevance to the query. Practically, ordinal regression can be tackled as either regression or classification, by treating the ranks as real values or the assignment to a particular rank value as a classification (Shawe-Taylor and Cristianini, 2004). ## 7 Conclusion In this paper, we propose an order-theoretic perspective of structured prediction in natural language processing. We show that most structured prediction problems can be done in linear time and space by considering them as partial orderings of the tokens in the input string. We demonstrate the effectiveness of our method with empirical experiments on dependency parsing and coreference resolution. Future work will explore the probabilistic formalization and constrained decoding of string partial order.
2306.17310
European Shock-Tube for High-Enthalpy Research Combustion Driver Qualification
The ESTHER shock tube is a new state-of-the-art facility at Instituto Superior Técnico designed to support future ESA planetary exploration missions. Its driver is a high-pressure combustion chamber using a mixture of He:H2:O2 ignited by a high-power Nd:YAG laser. Both hydrogen as an energy vector and laser ignition are promising techniques with applications in high-pressure combustion. The influence of gas mixture and laser parameters, namely the air:fuel ratio, filling pressure, inert gas dilution and ignition mode, on the combustion and thus on shock tube performance was extensively studied. A second, low-velocity driver mixture with nitrogen in place of helium as a diluent was also studied and experimental shots were performed. Our results show that the filling pressure and helium dilution are the dominant parameters for peak pressure, acoustic oscillation and combustion velocity. The gas mixture peak pressure and acoustic wave amplitude increase with increased filling pressure, yet increased filling pressure lowers the combustion velocity. Helium in the mixture has a dilution effect, lowering the overall effectiveness of combustion: higher dilution factors lower the combustion compression ratio, acoustic wave amplitude and flame velocity. The influence of the air:fuel equivalence ratio was as expected, with faster flames and higher peak pressures near the stoichiometric region. Nitrogen-diluted shots have drastically lower compression ratios and flame velocities than the helium ones, while their acoustic perturbation was stronger. "Tulip" flames and deflagration-to-detonation transition phenomena were identified in some of the experiments.
Ricardo Grosso Ferreira, Mário Lino da Silva
2023-06-29T21:38:04Z
http://arxiv.org/abs/2306.17310v1
# European Shock-Tube for High-Enthalpy Research Combustion Driver Qualification ###### Abstract The ESTHER shock tube is a new state-of-the-art facility at Instituto Superior Técnico designed to support future ESA planetary exploration missions. Its driver is a high-pressure combustion chamber using a mixture of He:H\({}_{2}\):O\({}_{2}\) ignited by a high-power Nd:YAG laser. Both hydrogen as an energy vector and laser ignition are promising techniques with applications in high-pressure combustion. The influence of gas mixture and laser parameters, namely the air:fuel ratio, filling pressure, inert gas dilution and ignition mode, on the combustion and thus on shock tube performance was extensively studied. A second, low-velocity driver mixture with nitrogen in place of helium as a diluent was also studied and experimental shots were performed. Our results show that the filling pressure and helium dilution are the dominant parameters for peak pressure, acoustic oscillation and combustion velocity. The gas mixture peak pressure and acoustic wave amplitude increase with increased filling pressure, yet increased filling pressure lowers the combustion velocity. Helium in the mixture has a dilution effect, lowering the overall effectiveness of combustion: higher dilution factors lower the combustion compression ratio, acoustic wave amplitude and flame velocity. The influence of the air:fuel equivalence ratio was as expected, with faster flames and higher peak pressures near the stoichiometric region. Nitrogen-diluted shots have drastically lower compression ratios and flame velocities than the helium ones, while their acoustic perturbation was stronger. "Tulip" flames and deflagration-to-detonation transition phenomena were identified in some of the experiments. ## 1 Introduction The European Shock-Tube for High-Enthalpy Research (ESTHER) is a new state-of-the-art facility at Instituto Superior Técnico designed to support the next European Space Agency (ESA) planetary exploration missions [21, 22]. A shock-tube is comprised of a high-pressure driver and a low-pressure driven section, separated by a breakable wall, the diaphragm. The driver pressure increases until it ruptures the diaphragm, creating a high-speed shock-wave. This may be achieved using multiple techniques [37], namely a high-pressure combustion driver. Hydrogen arises as a natural solution for a high-pressure combustion shock-tube driver due to being lighter and not producing soot when compared to hydrocarbons. Another advantage of hydrogen in relation to hydrocarbons is that its combustion temperature and peak pressure are higher than those of most hydrocarbons, further improving the driver performance. Due to the high filling pressures (10-100 bar), laser ignition provides significant advantages over electric spark plugs. The most significant one is that increasing the filling pressure increases the minimum voltage required by spark plugs, yet decreases the minimum laser pulse energy, as observed in [43, 34, 33, 14, 13]. To accomplish good repeatability, the combustion should be stable and in the subsonic flame propagation mode (deflagration) instead of the supersonic one (detonation). Strong acoustic oscillations are also undesirable, thus a small scale model of the chamber was first tested in [41], yielding an operational mixture of He:H\({}_{2}\):O\({}_{2}\) 8:2:1.2. A qualification campaign was carried out to ensure the driver predictability, repeatability and reliability. 
The campaign mapped the limits of deflagration and detonation, the acoustic oscillations, and the peak pressure as a function of the initial conditions (pressure \(p_{0}\), dilution \(X_{He}\) and air:fuel ratio \(\lambda\)) and of the laser focusing lens. Fig. 1 depicts an ESTHER CAD overview, with the combustion chamber driver on the left and the driven section on the right-hand side. This manuscript is divided into the following parts: Section II covers the state-of-the-art in high-pressure hydrogen combustion; Section III describes the experimental setup; Section IV presents and discusses the results; and Section V highlights the main conclusions and next steps of this work. ## 2 State-of-the-Art The air:fuel equivalence ratio \(\lambda\)1 is defined by the ratio of oxidizer to fuel quantities, normalized by the stoichiometric coefficients. In our experiment, the air:fuel ratio is defined as Footnote 1: \(\phi=\lambda^{-1}\), \(\phi\) is the fuel:air equivalence ratio \[\lambda=\frac{n_{O2}}{n_{H2}}\bigg/\left(\frac{n_{O2}}{n_{H2}}\right)_{stoichiometric}\quad, \tag{1}\] where \(n_{O2}\) and \(n_{H2}\) are the number of moles of oxygen and hydrogen, respectively. A "rich" mixture is defined by having excess fuel (\(\lambda<1\)), and a "lean" mixture by having excess air/oxidizer (\(\lambda>1\)) in relation to the stoichiometric conditions (\(\lambda=1\)). Maximum peak pressure is typically found at stoichiometric conditions \(\lambda\sim 1\). Reference [45] measured the compression ratio and velocity of hydrogen-air-nitrogen mixtures, reporting that the peak compression ratio is found at \(\lambda=1\) for filling pressures of 1.5 bar. Nonetheless, maximum flame velocity occurs in rich mixtures at \(\lambda=0.71\). Hydrogen's low molecular weight and high thermal diffusivity explain why hydrogen-rich mixtures will burn faster than stoichiometric ones [18]. References [46, 4] observed similar results. The dilution ratio \(X_{He/N2}\) by an inert/non-reactive gas can be expressed as the ratio of non-reactive gas moles to the total gas. In our experiments, these gases can be either helium or nitrogen, thus Eq. 2 follows, \[X_{He/N2}=\frac{n_{He/N2}}{n_{total}}\quad. \tag{2}\] Compression ratio and flame temperature are not significantly affected by changes in gas filling pressure [45]. A small increase in compression ratio may be observed and explained by faster flames, which reduce the wall heat losses, thus ensuring a higher peak pressure/temperature. In combustible mixtures with low to zero dilution by an "inert"/non-reacting gas, faster flames are achieved at increased pressures [35, 4, 5, 17, 16, 46, 39]. Nonetheless, this trend is reversed in mixtures with high dilution (\(>50\%\)) factors. Simulations of hydrogen-oxygen-steam mixtures [17] show that the laminar flame velocity has a non-monotonic dependence on the filling pressure. Similar results are found in [5]. Mixtures with dilution above 40% show a laminar flame velocity nearly inversely proportional to the filling pressure. In both works the authors justify this behaviour by a change in the kinetic scheme. The work of [40] shows a non-monotonous pressure dependence of detonation limits for high-pressure hydrogen-oxygen and hydrogen-air mixtures. As stated before, pulsed lasers are effective in igniting high-pressure combustible mixtures; a review of their advantages can be found in [38]. 
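As a quick numerical illustration of Eqs. (1) and (2) (a sketch we add here; the helper names are ours), the nominal He:H\({}_{2}\):O\({}_{2}\) 8:2:1.2 driver mixture mentioned in the introduction works out to \(\lambda=1.2\) and \(X_{He}\approx 71\%\), consistent with the shot conditions reported below.

```python
# Minimal sketch of Eqs. (1) and (2); function names are our own.
# Stoichiometric hydrogen-oxygen combustion: 2 H2 + O2 -> 2 H2O,
# so (n_O2 / n_H2)_stoichiometric = 0.5.
STOICH_O2_H2 = 0.5

def air_fuel_ratio(n_o2: float, n_h2: float) -> float:
    """Air:fuel equivalence ratio lambda, Eq. (1)."""
    return (n_o2 / n_h2) / STOICH_O2_H2

def dilution(n_inert: float, n_h2: float, n_o2: float) -> float:
    """Dilution ratio X of the inert gas (He or N2), Eq. (2)."""
    return n_inert / (n_inert + n_h2 + n_o2)

# Nominal ESTHER driver mixture He:H2:O2 = 8:2:1.2 (molar proportions).
n_he, n_h2, n_o2 = 8.0, 2.0, 1.2
print(air_fuel_ratio(n_o2, n_h2))   # 1.2  -> slightly lean mixture
print(dilution(n_he, n_h2, n_o2))   # 0.714... -> ~71% helium dilution
```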
Multiple works on laser ignition of high-pressure mixtures have been performed: [48, 29, 36, 43, 6] for hydrogen-air, [42] for hydrogen-oxygen and [35, 14, 32, 31, 15, 23, 34, 30, 1] for methane-air. Detonation may be initiated directly at ignition or through a deflagration that transitions to a detonation. The detonation region presents very high temperatures and pressures, which lead to exponentially high reaction rates, since these are proportional to the exponential of temperature. A detonation starts by the deposition of a great amount of energy in a short time, such as with a focused laser. In the deflagration-to-detonation transition (DDT), the flame speed is key to the transition. A "tulip" flame is usually present in DDT, and when the flame front inversion occurs, detonation may start in a preheated zone ahead of the flame front. Reference [20] illustrates this phenomenon in shadow photography, as the pressure waves coalesce ahead of the flame front, creating a region of extremely high pressure. They concluded that the positive feedback loop of pressure and temperature feeds the fast detonation reactions: as temperature/pressure increase, so does the velocity of the chemical reactions, which in turn releases more heat and further increases the temperature. A detonation is only self-sustained if the transverse dimension of the tube is larger than the detonation cell, otherwise the detonation will decay to a deflagration. Reference [19] used shadowgraph to image detonation cells in hydrogen-oxygen mixtures, evidencing a fish-scale pattern. Mixtures with smaller cells are more sensitive and unstable, and detonate more easily. The influence of \(\lambda\) on the detonation cell size was investigated by [19] for several gas mixtures. Detonation cells have a minimum for \(\lambda\sim 1\), meaning stoichiometric mixtures are easier to detonate and sustain a detonation. Increasing filling pressure decreases the size of detonation cells [12, 3]. In a DDT, an initial laminar subsonic flame accelerates and at some point transitions to a detonation. This phenomenon is stochastic, thus the transition point and time are not always the same for the same initial conditions [20]. Experiments on hydrogen-oxygen DDT in tubes were done in [47, 28, 27]. Currently, the DDT phenomenon is understood to happen in the following manner: 1) initial exponential flame acceleration and shockwave creation; 2) the flame creates compression waves ahead of its front, which grow the shock; 3) a heated zone of unreacted gas is created right in front of the flame, and a chain of reactions starts as the flame front reaches it. A positive feedback loop is formed by the rapid heat release of the chemical reactions in the high pressure/temperature zone, leading to the transition to a detonation. Schlieren photography of stoichiometric hydrogen-oxygen observing the 3 DDT stages previously presented can be found in [20]. A "tulip" flame was formed during each DDT event, but not all "tulip" flames transition to a detonation. A "tulip" flame is a phenomenon where a premixed flame in a tube suddenly changes its flame front from a finger-like to a tulip-like shape. In it, the flame front center region slows, thus being "overtaken" by the outer edges near the tube wall. This was photographed in [9]. The "tulip" formation can be attributed to different causes, such as flow viscosity [44, 24], flow-pressure wave interaction [26], hydrodynamic instability [11, 7], vortices or Taylor instability [25, 26]. Figure 1: ESTHER CAD overview. 
The propagation speed suddenly decreases when the flame front transitions to a "tulip" flame; faster flames create a more pronounced "tulip" due to the larger acceleration/deceleration [49]. The flame front inversion may be observed in the pressure signal as the point where a second slope appears [8]. ## 3 Experimental Setup The ESTHER combustion chamber driver operates using a premixed mixture of He:H\({}_{2}\):O\({}_{2}\) at filling pressures up to 100 bar. Alternatively, He could be replaced by N\({}_{2}\). Fig. 2 depicts the experimental setup. The combustion chamber is a cylinder with a total length of 1600 mm, an inner diameter of 200 mm and a wall thickness of 200 mm. Ignition occurs inside a channel with a length of 144 mm and an inner diameter of 20 mm, using a remote-controlled high power Nd:YAG laser and an optical setup of mirrors and lenses. The system is divided into 3 parts: the combustion chamber and its associated optics (1-6), the laser heads (a and b), and the beam conditioning system (c-i). The red alignment laser and the Nd:YAG laser were first aligned, so that the latter could be turned off as a safety measure and the alignment made with the former. The beam from the high power Nd:YAG laser (Quantel Brilliant, 1064 nm, 180 mJ, 5 ns) is reflected by two 45\({}^{\circ}\) mirrors (high power CVI mirrors) to adjust its height and azimuthal deviation, passes through a half-wave plate, a beam splitter cube and a focusing lens, then enters the chamber. The laser beam power sent to the chamber can be adjusted using the half-wave plate and the polarizer cube. The combustion is characterized by its pressure signal, measured by a piezoelectric pressure transducer (Kistler 6215) connected to a charge amplifier (Kistler 5015). The piezoelectric sensor is positioned in the third port of the driver and protected with a heat shield system (Kistler 6567), as recommended by the manufacturer (see fig. 2). The signal is recorded into a digital storage oscilloscope (Tektronix MDO4104B-3) and an FPGA-based RedPitaya acquisition board at 125 MSamples/s and variable decimation rate. The gas filling system was designed to fill the chamber with an arbitrary mixture of He:H\({}_{2}\):O\({}_{2}\) up to 100 bar of pressure, identified by number 5 in fig. 2. It uses standard 200 bar gas bottles. The option to use N\({}_{2}\) instead of He was added to the system. The gas filling system is controlled and monitored by an industrial standard Siemens Programmable Logic Controller (PLC) system (S7-1200 CPU Module 1215C), associated with an EPICS/CS-Studio higher-level control and monitoring layer, operated remotely from a separate control room. The combustion chamber filling pressure is monitored by a pressure transducer, an Ashcroft A4-S-A-F09-42-D0-2000psi-G-XK8-X6B series A4 model, capable of measuring absolute pressures from atmospheric up to 130 bar. ## 4 Results and Discussion ### Phenomenological description A total of 93 shots were performed, 63 with helium acting as diluent and 30 with nitrogen as diluent gas. Direct observation with Schlieren or shadowgraph photography of the combustion process could not be made because of the large pressures and the thick walls of the vessel. Ignition is initiated by a highly reactive plasma kernel [38, 2, 23] near the focal point of the lens (around 40 mm from the sapphire window in our experiment). Fig. 3 illustrates the pressure signal of shot S017, with a transition to a "tulip" flame. The laser firing is identified by the flash lamp at \(t_{0}\). 
Initially, the flame front expands spherically from the ignition channel exit until it reaches the chamber wall, from \(t_{1}\) to \(t_{2}\). As the flame front expands its area, the rate of reaction increases, thus accelerating the gas consumption and pressure rise [20]. Then the flame front moves forward at about constant velocity until the gas is totally consumed at \(t_{3}\). An acoustic wave is formed when the flame front hits the chamber outer wall at \(t_{2}\), observed as the oscillation in the pressure signal [49, 50]. A transition to a "tulip" flame can be observed when a second slope is formed at \(t_{2}^{\prime}\). The values of flame velocity are computed from the pressure signal. The average flame velocity is computed by taking the distance from the exit of the ignition channel to the piezoelectric sensor (1346 mm), and the time between the initial rise \(t_{1}\) and the peak pressure \(t_{3}\). Faster flames create stronger acoustic waves [49]. A spectral analysis of the pressure signal can be done by performing a fast Fourier transform of the pressure signal. A time-dependent fast Fourier transform (spectrogram) was computed for shot S084 (N\({}_{2}\):H\({}_{2}\):O\({}_{2}\), X\({}_{\text{N2}}\)=72%, \(\lambda\)=1.1, \(p_{0}=10\) bar) and depicted in fig. 4. Combustion starts slightly before 200 ms, noted by an increase in acoustic wave strength. The post-combustion gas cools down, lowering the sound speed and the acoustic wave frequency. This is observable by the peak frequency shifting to lower values over time. The spectrogram depicts multiple oscillation frequencies, the most crucial ones being in the 0 to 600 Hz region. The highest frequencies decay shortly after being created. ### _Stoichiometry and dilution ratios_ #### 4.2.1 He/N2 Dilution Four major parameters were analysed during the campaign: the compression ratio/peak pressure, flame velocity, acoustic wave amplitude and combustion mode (deflagration/detonation). Figure 4: Spectrogram (time-dependent fast Fourier transform) of shot S084 (N\({}_{2}\):H\({}_{2}\):O\({}_{2}\), X\({}_{\text{N2}}\)=72%, \(\lambda\)=1.1, \(p_{0}=10\) bar) pressure signal. A zoom on the region up to 1500 Hz is depicted. The region of 150 Hz has a peak corresponding to the acoustic wave pressure oscillation. Figure 3: S017 pressure signal, \(p_{0}=30\) bar, \(\lambda=1.2\), \(X_{\text{He}}=71\%\). Characteristic time \(t_{2}^{\prime}\) is the time when the flame front inversion occurs and a transition to a "tulip flame" takes place. Figure 2: ESTHER Combustion Driver Setup. 1-Combustion Chamber; 2-Diaphragm/Blank; 3-Gas input port; 4-Gas output port; 5-Kistler gauge; 6-Sapphire Optical Window. a-High-power Nd:YAG laser 1064 nm 5 ns 200 mJ; b-Red diode laser (for alignment); c,d,e-45\({}^{\text{o}}\) mirror; f-Half-wave plate; g-Beam splitter cube; h-Beam dump; i-Bi-convex focusing lens 100 mm. Fig. 5 illustrates the compression ratio against helium dilution and O\({}_{2}\):H\({}_{2}\) equivalence ratio in color scheme. 
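The two signal-processing steps described above, the average flame velocity from the characteristic times and the time-dependent FFT of the pressure trace, can be sketched as follows. This is our own illustrative snippet: only the 1346 mm sensor distance is taken from the text, and all other values (sampling rate, window length, example times) are assumptions.

```python
import numpy as np
from scipy import signal

# Illustrative post-processing sketch; only the 1346 mm sensor distance
# comes from the text, everything else is assumed.
SENSOR_DISTANCE_M = 1.346     # exit of ignition channel to piezoelectric sensor

def average_flame_velocity(t1: float, t3: float) -> float:
    """Average flame velocity between the initial pressure rise t1 and peak pressure t3 [s]."""
    return SENSOR_DISTANCE_M / (t3 - t1)

def pressure_spectrogram(pressure: np.ndarray, sample_rate: float):
    """Time-dependent FFT of a (decimated) pressure trace, e.g. to locate the ~150 Hz acoustic mode."""
    return signal.spectrogram(pressure, fs=sample_rate, nperseg=4096)

# Example with made-up numbers: a 10 ms burn gives ~135 m/s, i.e. a subsonic deflagration.
print(average_flame_velocity(t1=0.200, t3=0.210))
f, t, Sxx = pressure_spectrogram(np.random.randn(125_000), sample_rate=125_000.0)
```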
The works of [45] agree with our results, where higher dilutions by inert gas lowers the compression ratio and flame velocity. Shots with helium dilutions below 65% may be yielding detonations, as their average flame velocity is in excess of 200 m/s, the sound speed of the gas inside the chamber. A comparison of shots S034 and S035, \(p_{0}=50\) bar, \(\lambda=1.40\) and helium dilutions of 64 and 67% respectively, pressure signal is shown in fig. 7. The peak pressure and flame velocity of S035 are lower, due to the higher helium percentage. The amplitude of the acoustic wave is also lower for S035. Ref [49] relate the acoustic wave amplitude to the flame front velocity, the author stated that larger diameter ducts acoustic waves were stronger due to the higher achieved flame velocities. Nonetheless, the acoustic wave is also dependent on filling pressure and O\({}_{2}\):H\({}_{2}\) equivalence ratio. rich then the heat release will decay, thus will the temperature, compression ratio and flame velocity. Fig. 10 shows the pressure signal of shots S013 and S016, \(p_{0}=20\) bar, X\({}_{\text{He}}\)=65%, \(\lambda\)=0.95 and 0.90, respectively. The peak pressure is similar in both cases, however, S016 has a faster combustion than S013. Thus, S016 creates stronger acoustic waves, in concordance with [50]. A comparison of three 30 bar shots with helium dilution of 70% is depicted in fig. 11. Both lean mixtures have a double slope and take lower to complete combustion. Lastly, fig. 12 compares two lean mixture, \(\lambda\)=1.27 and 1.37, shots at 70 bar and 71% helium dilution. The mixture closer to stoichiometric has higher peak pressure and faster combustion, as well, as a much stronger acoustic wave. All the results agree with the expected U-curve profile for both compression ratio and flame velocity. \(p_{0}=60\) bar however had a single slope combustion behaviour. Increased pressure would also create stronger acoustic waves and more detonation prone mixtures. This effect is observable in fig. 17 which compares two shots with similar chemical composition (\(\lambda\)=1.40, \(X_{\text{He}}\)=64%) at different pressures. The higher pressure signal transits to a detonation at t\(\sim\)8 ms. The higher filling pressure decrease the size of detonation cells, thus making the mixture more detonation prone [12]. The mean acoustic wave amplitude values for different shots is depicted in fig. 18. Higher filling pressures have increased acoustic waves, a steep rise is observable from 60 to 70 bar filling pressure. ### Influence of laser focusing The laser ignition system operate in two modes: focused beam via a 100 mm bi-convex focusing lens and unfocused beam by removing the aforementioned lens. Using the 100 mm lens the ignition point would occur inside the ignition channel. Without the lens the ignition point of origin could not be exactly determined. For filling pressures below 30 bar ignition required the lens to be mounted; in the range 30 to 60 bar ignition without lens could occur, yet could take multiple laser pulse to initiate; above 60 bar a single pulse would lead to ignition. A comparison of the effects of focusing lens is shown in fig. 19. It compares pressure signal of shots S046 and S047, \(p_{0}=80\) bar, \(\lambda=1.39\) and \(X_{\text{He}}=70\%\), unfocused and focused, respectively. The focused ignition reduces both the acoustic oscillation and the peak pressure values, as well as the average flame velocity. 
An hypothesis for this is that unfocused ignition may create multiple ignition points along the beam path. From there, multiple flame fronts propagate simultaneously, which creates more instabilities therefore accelerating the gas burning rate. Ignition might also be taking place along a line, thus reducing the characteristic burn length from the chamber length to its diameter. In any case, the faster combustion reduces wall heat losses, thus increasing the peak temperature/pressure of the gas. As stated before, faster flames are also responsible for stronger acoustic oscillations, as observable in fig. 19. Further research is required to understand these differences, and the inclusion of fast imaging diagnostics appears unavoidable for it. The explanation for how unfocused ignition is achieved for high-pressure combustible mixtures is yet not fully understood. Refer Figure 14: Compression ratio comparison for the “typical” [X:H\({}_{2}\):O\({}_{2}\)] 8:2:1.2 shot at different filling pressures. Figure 12: Influence of O\({}_{2}\):H\({}_{2}\) equivalence ratio on pressure signal comparison of shots at \(p_{0}\)=70 bar, \(X_{\text{He}}\) =71%. Figure 13: Pressure signal comparison for shots at \(p_{0}=\)20 bar, \(X_{\text{He}}=\)55% and lens focal distance of 100 mm. A detonation is observable for S054 (\(\lambda\)=0.77) but not for S056 (\(\lambda\)=1.34). Figure 15: Average flame velocity comparison for the “typical” [He:H2:O2] 8:2:1.2 shot at different filling pressures. ences [48, 33, 13, 10] show that higher filling pressures lower the minimum pulse energy. The electron cascade most likely starts with ionization of microparticles impurities such as dust or soot. ## 5 Final Remarks We have performed experiments on high-pressure He:H\({}_{2}\):O\({}_{2}\) and N\({}_{2}\):H\({}_{2}\):O\({}_{2}\) evaluating the compression ratio, acoustic wave formation, flame velocity and transition to detonation. The parameters in study were filling pressure [10-100] bar, oxygen-hydrogen equivalence ratio [0.45-2.04], inert gas dilutions of [54-79]% and laser ignition mounted/unmounted 100 mm bi-convex lens. The effects can be summarized as: Increased dilution: * Lower compression ratio * Slower combustion * Weaker acoustic waves * Less detonation prone mixtures When replacing helium for nitrogen as dilutant gas, the compression ratio and flame velocity further decrease. However, the acoustic waves amplitude is larger. Mixtures with O\({}_{2}\):H\({}_{2}\) equivalence ratio closer to the stoichiometric value were faster and with higher peak pressure. Slightly rich mixtures are faster and with higher compression ratios. These are also more prone to detonation and with stronger acoustic waves. Increased filling pressure: * Similar compression ratio * Slower combustion * Stronger acoustic waves * More detonation prone mixtures The removal of focusing lens creates stronger acoustic waves and faster combutions. However, laser ignition without the lens can happen for filling pressure above 30 bar. A direct imaging of the combustion process is important step to acquired information on the flame front shape and transition to tulip flame. Alongside it, to understand how the ignition occurs along the laser beam path in the un-focused mode. Operationally-speaking, stable, repeatable deflagrations have been reached for filling pressures in the 10-100 bar range, allowing successful operation of the driver up to 650 bar post-shot pressures, with a high degree of cleanliness. 
Figure 16: Comparison [He:H\({}_{2}\):O\({}_{2}\)] 8:2:1.2 shots at different filling pressures Figure 17: Pressure signal comparison for S034 (\(p_{0}\)=50 bar) and S076 (\(p_{0}\)=20 bar), \(\lambda\)=1.40, \(X_{\text{He}}\)=64%. Figure 18: Influence of filling pressure on the mean amplitude for acoustic waves. Comparison of 8:2:1.2 [He:H\({}_{2}\):O\({}_{2}\)] mixtures
2304.11118
BoDiffusion: Diffusing Sparse Observations for Full-Body Human Motion Synthesis
Mixed reality applications require tracking the user's full-body motion to enable an immersive experience. However, typical head-mounted devices can only track head and hand movements, leading to a limited reconstruction of full-body motion due to variability in lower body configurations. We propose BoDiffusion -- a generative diffusion model for motion synthesis to tackle this under-constrained reconstruction problem. We present a time and space conditioning scheme that allows BoDiffusion to leverage sparse tracking inputs while generating smooth and realistic full-body motion sequences. To the best of our knowledge, this is the first approach that uses the reverse diffusion process to model full-body tracking as a conditional sequence generation task. We conduct experiments on the large-scale motion-capture dataset AMASS and show that our approach outperforms the state-of-the-art approaches by a significant margin in terms of full-body motion realism and joint reconstruction error.
Angela Castillo, Maria Escobar, Guillaume Jeanneret, Albert Pumarola, Pablo Arbeláez, Ali Thabet, Artsiom Sanakoyeu
2023-04-21T16:39:05Z
http://arxiv.org/abs/2304.11118v1
# BoDiffusion: Diffusing Sparse Observations for Full-Body Human Motion Synthesis ###### Abstract Mixed reality applications require tracking the user's full-body motion to enable an immersive experience. However, typical head-mounted devices can only track head and hand movements, leading to a limited reconstruction of full-body motion due to variability in lower body configurations. We propose **BoDiffusion** - a generative diffusion model for motion synthesis to tackle this under-constrained reconstruction problem. We present a time and space conditioning scheme that allows BoDiffusion to leverage sparse tracking inputs while generating smooth and realistic full-body motion sequences. To the best of our knowledge, this is the first approach that uses the reverse diffusion process to model full-body tracking as a conditional sequence generation task. We conduct experiments on the large-scale motion-capture dataset AMASS and show that our approach outperforms the state-of-the-art approaches by a significant margin in terms of full-body motion realism and joint reconstruction error. + Footnote †: * Equal contributions. ## 1 Introduction Full-body motion capture enables natural interactions between real and virtual worlds for immersive mixed-reality experiences [17, 38, 51]. Typical mixed-reality setups use a Head-Mounted Display (HMD) that captures visual streams with limited visibility of body parts and tracks the global location and orientation of the head and hands. Adding more wearable sensors [14, 16, 18] is expensive and less comfortable to use. Therefore, in this work, we tackle the challenge of enabling high-fidelity full-body motion tracking when only sparse tracking signals for the head and hands are available, as shown in Fig. 1. Existing motion reconstruction approaches for 3-point input (head and hands) struggle to model the large variety of possible lower-body motions and fail to produce smooth full-body movements because of their limited predictive nature [15]. A recent attempt [2] to address this problem uses a generative approach based on normalizing flows [41], but it falls short of incorporating temporal motion information and generates poses for every frame individually, thus resulting in unrealistic synthesized motions. Another approach [6] that integrates motion history information using a Variational Autoencoder (VAE) [21] takes limited advantage of the temporal history because VAEs often suffer from "posterior collapse" [8, 20]. Thus, there is a need for a scalable generative approach that can effectively model temporal dependencies between poses to address these limitations. Recently, diffusion-based generative models [11, 46] have emerged as a potent approach for generating data across various domains such as images [42], audio [60], video [12], and language [9]. Compared to Generative Adversarial Networks (GANs), diffusion-based models have been shown to capture a much broader range of the target distribution [30]. They offer several advantages, including excellent log-likelihoods and high-quality samples, and employ a solid, stationary training objective that scales effortlessly with training compute [30]. Figure 1: **BoDiffusion.** Head and wrist IMUs are the standard motion-capture sensors in current virtual-reality devices. BoDiffusion leverages the power of Transformer-based conditional Diffusion Models to synthesize fluid and accurate full-body motion from such sparse signals.
To leverage the powerful diffusion model framework, we propose **BoDiffusion** (**B**ody **Diffusion**), a new generative model for human motion synthesis. BoDiffusion directly learns the conditional data distribution of human motions, models temporal dependencies between poses, and generates full _motion sequences_, in contrast to previous methods that operate solely on static poses [2, 55]. Moreover, BoDiffusion does not suffer from the limitation of methods that require a known pelvis location and rotation during inference [2, 6, 55], and generates high-fidelity body motions relying solely on the head and hands tracking information. Our main contributions can be summarized as follows. We propose BoDiffusion - the first diffusion-based generative model for full-body motion synthesis conditioned on the sparse tracking inputs obtained from HMDs. To build our diffusion model, we adopt a Transformer-based backbone [34], which has proven more efficient for image synthesis than the frequently used UNet backbone [5, 39, 42], and it is more naturally suited for modeling sequential motion data. To enable conditional motion synthesis in BoDiffusion, we introduce a novel time and space conditioning scheme, where global positions and rotations of tracked joints encode the control signal. Our extensive experiments on AMASS [27] demonstrate that the proposed BoDiffusion synthesizes smoother and more realistic full-body pose sequences from sparse signals, outperforming the previous state-of-the-art methods (see Fig. 2 and 4). Find our full project on bcv-uniandes.github.io/bodiffusion-wp/. ## 2 Related Work Pose Estimation from Sparse Observations.Full-body pose estimation methods generally rely on inputs from body-attached sensors. Much prior work relies on 6 Inertial Measurement Units (IMUs) to predict a complete pose [14, 56, 57]. In [14], the authors train a bi-directional LSTM to predict body joints of a SMPL [25] model, given 6 IMU inputs (head, 2 arms, pelvis, and 2 legs). However, there is a high incentive to reduce the number of body-attached IMUs because depending on many body inputs creates friction in motion capture. LoBSTr [55] reduces this gap by working with 4 inputs (head, 2 arms, and pelvis). It takes past tracking signals of these body joints as input for a GRU network that predicts lower-body pose at the current frame. Furthermore, it estimates the upper body with an Inverse Kinematics (IK) solver. The methods in [2, 6] also require 4 joints as input since they leverage the pose of the pelvis to normalize the input data during training and inference. Figure 2: **Predicting Dense Full-Body Poses from Sparse Data.** Comparison of BoDiffusion and AvatarPose [15] against the ground truth. Color gradient in the avatars indicates an absolute positional error, with a higher error corresponding to higher red intensity. BoDiffusion synthesizes substantially more accurate and plausible full-body poses, particularly in the lower body where no IMU data are captured. In Mixed Reality (MR), obtaining user input from a headset and a pair of controllers is common. The authors of [15, 54] highlight the importance of a sensor-light approach and further reduce the amount of inputs to 3, a number that aligns well with scenarios in MR environments. AvatarPoser [15] combines a Transformer architecture and traditional IK to estimate full-body pose from HMD and controller poses. Similar to [15], our method uses only 3 inputs but provides much better lower-body prediction thanks to our diffusion model. 
Choutas _et al_. [4] propose an iterative neural optimizer for 3D body fitting from sparse HMD signals. However, they optimize poses frame-by-frame and do not consider motions. QuestSim [54] proposes to learn a policy network to predict joint torques and reconstruct full body pose using a physics simulator. Nevertheless, this approach is challenging to apply in a real-world scenario, especially when motion involves interaction with objects (_e.g._, sitting on a chair). In such a case, one needs to simulate both the human body and all the objects, which have to be pre-scanned in advance and added to the simulation. In contrast, our approach is data-driven and does not require a costly physics simulation or object scanning. Human Motion Synthesis & Pose Priors. A large body of work aims at generating accurate human motion given no past information [1, 23, 36, 37, 58]. Methods like TEMOS [37] and OhMG [23] combine a VAE [21] and a Transformer network to generate human motion given text prompts. Recently, FLAG [2] argues against the reliability of using VAEs for body estimation and proposes to solve these disadvantages with a flow-based generative model. VPoser [32] learns a pose prior using a VAE, and HuMoR [40] further improves it by learning a conditional prior using a previous pose. Recent work [31] proposes a more generic approach that learns a pose prior and approximates an IK solver using a neural network. Another line of work tackles motion synthesis using control signals provided by an artist or from game-pad input [10, 13, 24, 50, 35]. However, in contrast to our method, such approaches either focus on locomotion and rely on the known future root trajectory of the character or are limited to a predefined set of actions [35]. Denoising Diffusion Probabilistic Models (DDPMs) [11, 30] are a class of likelihood-based generative models inspired by Langevin dynamics [22] which map between a prior distribution and a target distribution using a gradual denoising process. Specifically, generation starts from a noise tensor and is iteratively denoised for a fixed number of steps until a clean data sample is reached. Recently, Ho _et al_. [11] have shown that DDPMs are equivalent to the score-based generative models [48, 49]. Currently, DDPMs are showing impressive results in tasks like image generation and manipulation [5, 7, 29, 39, 42] due to their impressive ability to fit the training distribution at large scale and their stable training objective. Moreover, concurrent to this work, Diffusion Models have also been used to synthesize human motion from text inputs [52, 19, 59]. Figure 3: **Overview of BoDiffusion.** BoDiffusion is a diffusion process synthesizing full-body motion using sparse tracking signals as conditioning. **Top:** At each denoising step, the model takes as input \(2W\) tokens, which correspond to local joint rotations with \(t\) steps of noise (\(\mathbf{x}_{t}=x_{t}^{1:W}\)) and sparse tracking signals of the head and hands (\(s^{1:W}\)) as conditioning. We concatenate the \(x_{t}^{i}\) tokens with the conditioning tokens \(s^{i}\) along the spatial axis to preserve the time information and ensure coherence between the conditioning signal and the synthesized motion. After that, we pass it through the Transformer backbone of \(N\) DiT blocks [34]. **Bottom:** During inference, we start from random Gaussian noise \(\mathbf{x}_{T}\) and perform \(T\) denoising steps until we reach a clean output motion \(\mathbf{x}_{0}\).
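The gradual denoising process just described can be made concrete in a few lines. The following NumPy sketch is illustrative only and is not the implementation used in this paper; `eps_model(x, t)` stands for an arbitrary trained noise predictor, and the linear beta schedule is an assumption.

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation).
# `eps_model(x, t)` is a placeholder for any trained noise predictor;
# the linear beta schedule below is an assumption.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Forward process (noising): x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def p_sample_loop(eps_model, shape, rng=np.random.default_rng(0)):
    """Reverse process: start from Gaussian noise and denoise for T steps."""
    x = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = eps_model(x, t)                       # predicted noise
        mean = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise            # sigma_t^2 = beta_t for simplicity
    return x
```

Training such a model amounts to sampling a timestep, noising a clean sequence with `q_sample`, and regressing `eps_model` onto the injected noise, which is the simple objective recalled in Sec. 3.1 below.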
UNet [44] architecture has been de-facto the main backbone for image synthesis with Diffusion Models [5, 42, 39] up until a recent work [34] that suggested a new class of DDPMs for image synthesis with Transformer-based backbones. Transformers are inherently more suitable than convolutional networks for modeling heterogeneous sequential data, such as motion, and we capitalize on this advantage in our work. In particular, we employ a Transformer-based Diffusion Model, based on the DiT backbone [34], to construct an architecture for conditional full-body pose estimation from 3 IMU tracking inputs. ## 3 BoDiffusion In this section, we present our BoDiffusion model. We start with the DDPMs background in Sect. 3.1. Next, we define the problem statement and our probabilistic framework in Sect. 3.2. Then, in Sect. 3.3, we give an overview of the proposed BoDiffusion model for conditional full-body motion synthesis from sparse tracking signals, followed by the details of our model design. Please refer to Fig. 3 for an illustration of the entire pipeline of our method. ### Diffusion Process We briefly summarize DDPMs [11] inner workings and formulate our conditional full-body motion synthesis task using the generative framework. Let \(x_{0}^{1:W}=\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\) be our real motion data distribution, where \(W\) is the length of the sequence motion. The forward diffusion process \(q\) produces latent representations \(\mathbf{x}_{1},\ldots,\mathbf{x}_{T}\) by adding Gaussian noise at each timestep \(t\) with variances \(\beta_{t}\in(0,1)\). Hence, the data distribution is defined as follows: \[q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x} _{t-1}) \tag{1}\] \[q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_ {t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}), \tag{2}\] where \(\mathbf{I}\) is the identity matrix. Due to the properties of Gaussian distributions, Ho _et al_. [11] showed that we can directly calculate \(\mathbf{x}_{t}\) from \(\mathbf{x}_{0}\) by sampling: \[\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon, \tag{3}\] where \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha_{t}}=\prod_{i=1}^{T}\alpha_{i}\), and \(\epsilon\sim\mathcal{N}(0,\mathbf{I})\). On the contrary, the reverse diffusion process \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is the process of iterative denoising through steps \(t=T,\ldots,1\). Ideally, we would like to perform this process in order to convert Gaussian noise \(\mathbf{x}_{T}\sim\mathcal{N}(0,\mathbf{I})\) back to the data distribution and generate real data points \(\mathbf{x}_{0}\). However, \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is intractable because it needs to use the entire data distribution. Therefore, we approximate it with a neural network \(p_{\theta}\) with parameters \(\theta\): \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mu_ {\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)). \tag{4}\] We train to optimize the negative log-likelihood using the Variational Lower Bound (VLB) [11]: \[-\log p_{\theta}(\mathbf{x}_{0})\leq-\log p_{\theta}(\mathbf{x}_ {0})+\\ +D_{\text{KL}}(q(\mathbf{x}_{1:T}|\mathbf{x}_{0})\|p_{\theta}( \mathbf{x}_{1:T}|\mathbf{x}_{0}))=\mathcal{L}_{\text{vlb}}. 
\tag{5}\] Following [11], we parameterize \(\mu_{\theta}(\mathbf{x}_{t},t)\) like this: \[\mu_{\theta}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{ t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t) \right). \tag{6}\] After a couple simplifications, [11] ignores the weighting terms to rewrite \(\mathcal{L}_{\text{simple}}\) as follows: \[\mathcal{L}_{\text{simple}}=E_{\mathbf{x}_{0}\sim q(\mathbf{x}_{0}),t\sim U [1,T]}||\epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t)||_{2}^{2}. \tag{7}\] Ho _et al_. [11] observed that optimizing \(\mathcal{L}_{\text{simple}}\) works better in practice than optimizing full VLB \(\mathcal{L}_{\text{vlb}}\). During training, we follow Eq. 7, where we sample \(\mathbf{x}_{0}\) from the data distribution, the timestep as \(t\sim\mathcal{U}\{1,T\}\), and compute \(\mathbf{x}_{t}\) using Eq. 3. Intuitively, we learn \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) by training neural network to predict the noise \(\epsilon\) that was used to compute the \(\mathbf{x}_{t}\) with Eq. 3. However, simple loss \(\mathcal{L}_{\text{simple}}\) assumes that we have a predefined variance \(\Sigma(\mathbf{x}_{t},t)=\beta_{t}\). Instead, we follow [30] and optimize the variance \(\Sigma_{\theta}(\mathbf{x}_{t},t)=e^{v\log\beta_{t}+(1-v)\log\beta_{t}\frac{ 1-\alpha_{t-1}}{1-\alpha_{1}}}\), where \(v\) is a learnable scalar. Hereby, we use a combined objective: \[\mathcal{L}=\mathcal{L}_{\text{simple}}+\lambda_{\text{vlb}}\mathcal{L}_{ \text{vlb}}. \tag{8}\] ### Conditional Full-Body Motion Synthesis **Problem Definition.** Human motion can be characterized by a sequence of body poses \(x^{i}\) ordered in time. We define a _pose_ as a set of body joints arranged in the kinematic tree of the SMPL [25] model. Joint states are described by their local rotations relative to their parent joints, with the pelvis serving as the root joint and its rotation being defined with the global coordinate frame. We utilize the 6D representation of rotations [61] to ensure favorable continuity properties, making \(x^{i}\in\mathbb{R}^{22\times 6}\). The global translation of the pelvis is not modeled explicitly, as it can be calculated from the tracked head position by following the kinematic chain [15]. We consider a typical mixed reality system with HMD and two hand controllers that provides _3-point_ tracking information of head and hands in the form of their global positions \(p^{i}\) and rotations \(r^{i}\). Furthermore, we additionally compute the linear and angular velocities \(v^{i},\omega^{i}\) of the head and wrists, making \(s^{i}=\{p^{i},r^{i},v^{i},\omega^{i}\}\in\mathbb{R}^{3\times(3+6+3+6)}\) to make the input signal more informative and robust [15]. The target task is to synthesize full-body human motion \(x^{1:W}=\{x^{i}\}_{i=1}^{W}\) using the limited tracking signals \(s^{1:W}=\{s^{i}\}_{i=1}^{W}\) as input. Probabilistic Framework.We formally define our conditional full-body motion synthesis task by using the formulation of Diffusion Models outlined in Sect. 3.1. Let \(\mathbf{x}_{t}=x_{t}^{1:W},\mathbf{s}=s^{1:W}\) for brevity. 
We want to learn a conditional distribution of the full-body human motion sequences \(\mathbf{x}_{0}\) defined as follows: \[p_{\theta}(\mathbf{x}_{0}|\mathbf{s})=\int p_{\theta}(\mathbf{x}_{0:T}| \mathbf{s})d\mathbf{x}_{1:T}, \tag{9}\] \[p_{\theta}(\mathbf{x}_{0:T}|\mathbf{s})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{ \theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{s}), \tag{10}\] where \(p(\mathbf{x}_{T})\sim\mathcal{N}(0,\mathbf{I})\) is a Gaussian noise. In this case, we train a neural network \(\theta\) to predict the mean \(\mu_{\theta}(\mathbf{x}_{t},t,\mathbf{s})\) and the variance \(\Sigma_{\theta}(\mathbf{x}_{t},t,\mathbf{s})\), similar to Eq. 4, but conditioned on sparse tracking signals \(\mathbf{s}\). Thus, the simple loss from Eq. 7 then becomes: \[\mathcal{L}_{\text{simple}}=E_{\mathbf{x}_{0}\sim q(\mathbf{x}_{0}),t\sim U\{ 1,T\}}||\epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{s})||_{2}^{2}. \tag{11}\] Local Rotation Loss is Equivalent to the \(\mathcal{L}_{\text{simple}}\).In Human Motion Synthesis, it is widespread [55, 2, 6, 15] to use the local rotation loss that minimizes the difference between the local joint rotations of the estimated poses and the ground truth. Because of this standard practice, one can hypothesize whether learning \(\epsilon_{\theta}\) (from Eq. 7) is helpful for synthetic motion sequences. However, we found that optimizing \(\epsilon_{\theta}\) is equivalent to directly minimizing the local rotation error. **Lemma 1**.: _Let \(\mathcal{L}(x,x^{\prime})=||x-x^{\prime}||^{2}\) be the local rotation error loss between a motion sequence \(x\) and \(x^{\prime}\) be an estimate of \(x\). Then, optimizing the \(\mathcal{L}_{\text{simple}}\) loss is equivalent to optimizing \(\mathcal{L}\)._ We provide the proof of Lemma 1 in the Supplementary Material. ### BoDiffusion Architecture We draw inspiration from the diffusion models for image synthesis to design a model for learning the conditional distribution \(p_{\theta}(x_{0}^{1:W}|s^{1:W})\) of the full-body motion sequences (cf. Eq. 9). Specifically, we choose to leverage the novel Transformer backbone DiT [34] to build the BoDiffusion model because (i) it was shown to be superior for image synthesis task [34] compared to the frequently used UNet backbone [5, 39, 42], and (ii) it is more naturally suited for modeling heterogeneous motion data. Below, we provide a detailed description of our architecture and introduce a method that ensures the conditional generation of motion coherent with the provided sparse tracking signal \(s^{1:W}\). In order to leverage the Transformer's ability to handle long-term dependencies while maintaining temporal consistency, we format the input \(x_{t}^{1:W}\), which represents joint rotations over time, as a time-sequence tensor and split it along the time dimension into tokens. We treat each pose \(x_{t}^{i}\) as an individual token and combine the feature and joint dimensions into a \(d\)-dimensional vector, where \(d=22\times 6\) is the number of joints multiplied by the number of features. This strategy allows us to take advantage of the temporal information and efficiently process the motion sequence. We implement our BoDiffusion model by extending the DiT architecture of Peebles _et al_. [34] with our novel conditioning scheme. The DiT backbone architecture consists of a stack of encoder transformer layers that use Adaptive Layer Normalization (AdaLN). 
The AdaLN layers produce the scale and shift parameters from the timestep embedding vector to perform the normalization depending on the timestep \(t\). Peebles _et al_. [34] input the class labels along with the time embedding to the AdaLN layers to perform class-conditioned image synthesis. However, we empirically demonstrate (see Sect. 4.2) that using the conditioning tracking signal \(\mathbf{s}\) along with the time embedding \(t\) in the AdaLN layers harms the performance of our BoDiffusion model because in this case, we disregard the time information. Therefore, we propose a novel conditioning method that retains the temporal information and allows conditional synthesis coherent with the provided sparse tracking signal. Figure 4: **Error Comparison.** Comparison of BoDiffusion and AvatarPoser [15] with color coding as previously explained. Motions generated by BoDiffusion exhibit greater similarity to the ground truth and display fewer foot skating artifacts, as highlighted in the red circles. Specifically, the leg in contact with the ground should not slide, and BoDiffusion produces motion sequences that adhere more closely to this requirement. **Conditioning on tracking signal.** We use the 3-point tracking information of head and hands from HMDs to compute an enriched input conditioning \(s^{1:W}\). This conditioning \(s^{1:W}\) has the shape \(W\times d_{s}\), where \(d_{s}=18\cdot 3\) is the number of features (18) per joint multiplied by the number of tracked joints (3). We treat it as a sequence of individual tokens \(s_{i}\) and apply a linear transformation (_conditioning embedding_ layer in Fig. 3) to each of them, thus increasing the dimensionality of the tokens from \(d_{s}\) to \(d_{emb}=18\cdot 22\). We observe that such higher-dimensional embedding enforces the model to pay more attention to the conditioning signal. Next, we concatenate the input sequence tokens \(x^{i}_{t}\) with the transformed conditioning tokens and input the result to the transformer backbone. By preserving the temporal structure of the tracking signal, we enable the model to efficiently learn the conditional distribution of motion where each pose in the synthesized sequence leverages the corresponding sparse tracking signal \(s^{i}\). ## 4 Experiments **Datasets.** We use the AMASS [27] dataset for training and evaluating our models. AMASS is a large-scale dataset that merges 15 optical-marker-based MoCap datasets into a common framework with SMPL [25] model parameters. For our first set of experiments, we use the CMU [3], BML-rub [53], and HDM05 [28] subsets for training and testing. We follow the same splits of AvatarPoser [15] to achieve a fair comparison. For our second set of experiments, we evaluate the Transitions [27] and HumanEVA [45] subsets of AMASS and train on the remaining datasets following the protocol described in [2]. **Evaluation Metrics.** We report four different types of metrics to evaluate our performance comprehensively. First, we report the velocity-related metrics Mean Per Joint Velocity Error [cm/s] (MPJVE), and Jitter error [\(\mathrm{km}/\mathrm{s}^{3}\)] [57] that measure the temporal coherence and the smoothness of the generated sequences. Second, we report the position-related metrics Mean Per Joint Position Error [cm] (MPJPE), Hand Position Error [cm] (Hand PE), Upper Body Position Error [cm] (Upper PE), and Lower Body Position Error (Lower PE). The third set is rotation-related metrics, including the Mean Per Joint Rotation Error [\(\mathrm{deg}\)] (MPJRE). 
Finally, we devise a metric based on Foot Contact (FC) to measure if the predicted body has a realistic movement of the feet. To calculate this metric for every pair of instances in a sequence, we determine if there is contact between the four joints of the feet and the ground by calculating the velocity of the joints and checking whether it is under a pre-defined threshold or not, following [52]. Afterward, we calculate the accuracy between the predicted and the ground-truth FC. Since the ratio of foot contact vs. foot in the air is meager, we calculate a balanced accuracy (FCAcc). **Implementation Details.** Similar to [15], we set window size \(W=41\). Our Transformer backbone consists of \(12\) DiT blocks [34]. Before feeding to the backbone, the input tokens are projected to the hidden dimension \(emb=384\), as shown in Fig. 3. Finally, we project the output of the last DiT block back to the human body pose space of shape \(41\times 6\cdot 22\), representing the 6D rotations for 22 body joints. During training, we use \(\lambda_{\text{vlb}}=1.0\), and define \(t\) to vary between \([1,T]\), where \(T=1000\) corresponds to a pure Gaussian distribution. At inference, we start from \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Method & Jitter & MPJVE & MPJPE & Hand PE & Upper PE & Lower PE & MPJRE & FCAcc \(\uparrow\) \\ \hline Final IK* & - & 59.24 & 18.09 & - & - & - & 16.77 & - \\ LoBERT* & - & 44.97 & 9.02 & - & - & - & 10.69 & - \\ VAE-HMD* & - & 37.99 & 6.83 & - & - & - & 4.11 & - \\ AvatarPoser [15] & 1.53 & 28.23 & 4.20 & 2.34 & 1.88 & 8.06 & 3.08 & 79.60 \\ AvatarPoser-Large [15] & 1.17 & 23.98 & 3.71 & 2.20 & 1.68 & 7.09 & **2.70** & 82.30 \\ BoDiffusion (Ours) & **0.49** & **14.39** & **3.63** & **1.32** & **1.53** & **7.07** & **2.70** & **87.28** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison with State-of-the-art Methods for Full-Body Human Pose Estimation.** Results on a subset of the AMASS dataset (CMU, BMLrub, and HDM05) for Jitter [\(\mathrm{km}/\mathrm{s}^{3}\)], MPJVE [\(\mathrm{cm}/\mathrm{s}\)], MPJPE [cm], Hand PE [cm], Upper PE [cm], Lower PE [cm], MPJRE [deg], and FCAcc [%] (balanced foot contact accuracy) metrics. AvatarPoser is retrained with 3 and 10 (Large) Transformer layers. The star (*) denotes the results reported in [15]. Figure 5: **Full-Sequence Generation.** Example prediction of BoDiffusion compared against the ground-truth sequence. Our method can generate realistic motions faithful to the ground truth. Color gradient represents time flow, whereas lighter colors denote the past. pure Gaussian noise, and we use DDIM sampling [47] with \(50\) steps. We set the variance \(\Sigma_{\theta}\) of the reverse noise to zero. This configuration turns the model into a deterministic mapping from Gaussian noise to motions, allowing it to do much fewer denoising steps without degrading the quality of synthesized motions. We use AdamW optimizer [26] with a learning rate of \(1e-4\), batch size of \(256\), without weight decay. Our model has 22M parameters and is trained for 1.5 days on four NVIDIA Quadro RTX 8000. More implementation details are in the Supplementary Material (Sec. A.1). Our approach has no limitations concerning the length of the generated sequences. We can synthesize motions of arbitrary length by applying BoDiffusion in an autoregressive manner using a sliding window over the input data. We refer the reader to the Supplementary Material for more explanation of our inference-time protocol (see Sec. A.2). 
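The balanced foot-contact accuracy (FCAcc) introduced in the Evaluation Metrics paragraph above can be computed as in the following illustrative NumPy sketch; the displacement threshold and the use of per-frame displacement as a velocity proxy are assumptions, not values taken from the paper.

```python
import numpy as np

def foot_contact_labels(foot_pos, thresh=0.01):
    """Per-frame contact labels for the four foot joints.
    foot_pos: (frames, 4, 3) joint positions. A joint counts as 'in contact'
    when its per-frame displacement (a velocity proxy) is below `thresh`;
    the threshold value here is an assumption."""
    disp = np.linalg.norm(np.diff(foot_pos, axis=0), axis=-1)   # (frames-1, 4)
    return disp < thresh

def balanced_fc_accuracy(pred_pos, gt_pos, thresh=0.01):
    """Balanced accuracy between predicted and ground-truth contact labels,
    i.e. the mean of the per-class recalls, so the rarer class is not
    drowned out by the more frequent one."""
    pred = foot_contact_labels(pred_pos, thresh).ravel()
    gt = foot_contact_labels(gt_pos, thresh).ravel()
    recalls = [(pred[gt == c] == c).mean() for c in (True, False) if (gt == c).any()]
    return float(np.mean(recalls))
```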
### Results We compare BoDiffusion with AvatarPoser [15] and FLAG [2] following their experimental setups. For AvatarPoser in Table 1, we use the official source code to retain the standard version with 3 Transformer layers. Furthermore, to ensure a fair comparison with BoDiffusion, we train a scaled-up version of AvatarPoser (AvatarPoser-Large) with 10 layers, 8 attention heads, and an embedding dimension of 384. Since the other state-of-the-art methods do not provide public source codes, we compare them against the results reported in each of the previous papers. Table 1 shows that BoDiffusion outperforms the state-of-the-art approaches in all metrics on the test subset of the AMASS dataset (CMU, BMLrub, and HDM05). Since we enforce the temporal consistency in BoDiffusion by leveraging the novel conditioning scheme and learning to generate sequences of poses instead of individual poses, our method generates smoother and more accurate motions. This is demonstrated by our quantitative results in Tab. 1. We observe a significant improvement in the quality of generated motions by leveraging the BoDiffusion model. Thus, we are able to decrease the MPJVE by a margin of 9.59 cm\(/\)s and the Jitter error by 0.68 km\(/\)s\({}^{3}\), compared to AvatarPoser-Large. Fig. 4 shows that motions generated by BoDiffusion exhibit more significant similarity to the ground truth across all the sequence frames and display fewer foot-skating artifacts compared to AvatarPoser, which struggles to maintain coherence throughout the sequence and severely suffers from foot skating. Furthermore, we empirically demonstrate that our method successfully learns a manifold of plausible human poses while maintaining temporal coherence. In practice, we are given the global position of the hands and head as the conditioning; thus, it is expected to have a lower error on these joints, while the conditioning does not uniquely define the configuration of legs and should be synthesized. However, Fig. 2, 4, 5 show that BoDiffusion produces plausible poses not only for the upper body but for the lower body as well, in contrast to the state-of-the-art Transformer-based AvatarPoser method. Fig. 2 qualitatively shows the improvement of our method in positional errors. In particular, our method predicts lower body configurations that resemble the ground truth more than AvatarPoser. These results support the effectiveness of our conditioning scheme for guiding the generation towards realistic movements that are in close proximity to the ground-truth sequences. Furthermore, our method achieves a better performance in the Foot Contact Accuracy metric (FCAcc), as shown in Table 1 and the feet movements in Fig. 5. Thus, the iterative nature of the DDPMs, along with our spatio-temporal conditioning scheme, allows us to generate sequences with high fidelity even at the feet, which are the furthest from the input sparse tracking signals. Table 1 shows the performance of a larger version of AvatarPoser-Large compared to ours. In particular, we demonstrate that enlarging this model increases its motion capture capacity to the point where it reaches more competitive results. By definition, this experiment also demonstrates that using more complex methods leads to better performance. However, BoDiffusion depicts a better trade-off between the performance and computational complexity than state-of-the-art methods. 
Since BoDiffusion can take \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Method & Jitter & MPJVE & MPJPE & Hand PE & Upper PE & Lower PE & MPJRE & FCAcc \(\uparrow\) \\ \hline VAE-HMD (3p + pelvis)* & - & - & 7.45 & - & 3.75 & - & - & - \\ VPoser-HMD (3p + pelvis)* & - & - & 6.74 & - & 1.69 & - & - & - \\ HuMot-HMD (3p + pelvis)* & - & - & 5.50 & - & 1.52 & - & - & - \\ ProHMR-HMD (3p + pelvis)* & - & - & 5.22 & - & 1.64 & - & - & - \\ FLAG [2] (3p + pelvis)* & - & - & **4.96** & - & **1.29** & - & - & - \\ \hline AvatarPoser [15] (3p) & 1.11 & 34.42 & 6.32 & 3.03 & 2.56 & 12.60 & 4.64 & 71.46 \\ BoDiffusion (Ours) (3p) & **0.35** & **21.37** & 5.78 & **1.94** & 2.27 & **11.55** & **4.53** & **82.04** \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparison against Generative-based Models.** Results reported on the held-out Transitions [27] and HumanEVA [45] subset of AMASS, following the protocol of FLAG [2], for Jitter [km\(/\)s\({}^{3}\)], MPJVE [cm\(/\)s], MPJPE [cm], Hand PE [cm], Upper PE [cm], MPJRE [deg], and FCAcc [%] metrics. We report the results after retraining AvatarPoser, and report the same results as in [2] for methods with a star (*). advantage of DiT, our approach will further improve in the measure that foundation models reach better results. Table 2 shows the quantitative comparison between BoDiffusion and other generative-based state-of-the-art approaches for the Transitions [27] and HumanEVA [45] subsets of AMASS. AvatarPoser is included for reference. On the one hand, even though we only train with three sparse inputs, we have competitive results regarding an overall positional error (MPJPE) and upper body positional error (Upper PE) with the methods that also use the pelvis information. Our DDPM-based method outperforms the VAE-based approaches VAE-HMD and VPoser-HMD and has comparable results with the conditional flow-based models ProHMR-HMD and FLAG. On the other hand, our BoDiffusion has a better performance than AvatarPoser in all the metrics, with a significant improvement in the velocity-related metrics MPJVE and Jitter. Please refer to the Supplementary for additional qualitative results. ### Ablation Experiments We conduct ablation experiments to assess the effect of the different components of our method on the smoothness and temporal consistency of the generated sequences. In Table 3, we report the experiments corresponding to the conditioning scheme, the stochastic component at inference, and the relevance of temporal context. Firstly, we compare the effect of using different conditioning schemes. Our method receives the conditioning by concatenating the input tokens (Token input cond). Thus, the conditioning keeps time-dependent information, allowing us smoother predictions, as the low Jitter and MPJVE values show. In contrast, applying the condition through the timestep embedding (Timestep cond) results in a compression towards a time-agnostic vector embedding. Table 3 shows that using this time-agnostic embedding solely as conditioning results in detrimental performance for the method. Furthermore, using both the token input and the timestep conditioning still results in less smooth sequences and is less consistent than using only the token input conditioning scheme. 
Secondly, we implemente a purely stochastic inference scheme (w/ stochasticity), finding out that, even when the rotational and positional errors decrease slightly, having extra control over the randomness is beneficial, especially for the smoothness of the sequences, as shown by the decrease in MPJVE and Jitter. Thirdly, we evaluate the importance of having temporal consistency by using a sliding window of size one during training (Window size W=1) and randomly sorting the sequence at inference time (Unordered sequence). As expected, the MPJVE and Jitter errors increase significantly, and all the other metrics also increase by some proportion. Therefore, these experiments confirm the relevance of enforcing temporal consistency. Table 4 presents the impact of different architectural choices on the performance of our proposed model. First, to validate the effectiveness of using DiT as our backbone (BoDiffusion-Transformer), we compare it against UNet (BoDiffusion-UNet), which has traditionally been used as a backbone for diffusion models [5, 42]. Table 4 indicates that the Transformer outperforms UNet in all the metrics, even when diffusion processes are not involved. Additionally, when incorporating our diffusion framework on top of both backbones, significant improvements are observed in the temporal consistency and quality of the generated sequences. It is important to note that while replacing the DiT backbone with UNet leads to a slight decrease (\(0.2^{\circ}\)) in rotation error, it is accompanied by a significant increase in Jitter and Velocity errors. Thus, these ablation experiments demonstrate the complementarity of using a transformer-based backbone in a diffusion framework, resulting in smoother and more accurate predictions. Table 5 shows the ablation experiment using different sampling steps for DDIM at inference time. Increasing the \begin{table} \begin{tabular}{l c c c c} Method & Jitter & MPJVE & MPJPE & MPJRE \\ \hline UNet w/o diffusion & 1.44 & 33.35 & 4.36 & 2.81 \\ Transformer w/o diffusion & 1.27 & 27.62 & 3.92 & 2.60 \\ \hline BoDiffusion-UNet & 1.24 & 20.65 & **3.63** & **2.48** \\ BoDiffusion-Transformer (Ours) & **0.49** & **14.39** & **3.63** & 2.70 \\ \hline \end{tabular} \end{table} Table 4: **Architecture Ablations.** We evaluate the relevance of using DiT as our backbone. We also assess the effectiveness of the denoising power of our DDPM by comparing it against the backbones without diffusion. \begin{table} \begin{tabular}{l c c c c} DIDM steps & Jitter & MPJVE & MPJPE & MPJRE \\ \hline 10 & 0.56 & 16.16 & 3.89 & 2.84 \\ 20 & 0.52 & 15.05 & 3.72 & 2.75 \\ 30 & 0.51 & 14.75 & 3.66 & 2.73 \\ 40 & 0.49 & 14.55 & 3.64 & 2.71 \\ 50 & 0.49 & 14.39 & 3.63 & 2.70 \\ 100 & 0.48 & 14.12 & 3.44 & 2.59 \\ \hline \end{tabular} \end{table} Table 5: **Ablation of inference sampling steps.** During inference, we use DDIM sampling with 50 steps. Note that the performance improves when there are more sampling steps. \begin{table} \begin{tabular}{l c c c c} Method & Jitter & MPJVE & MPJPE & MPJRE \\ \hline BoDiffusion (Token input cond) & **0.49** & **14.39** & 3.63 & 2.70 \\ Timesetep cond & 1.38 & 52.78 & 7.19 & 4.00 \\ Token input + Timesetep cond & 0.59 & 16.22 & 3.60 & **2.60** \\ with stochasticity & 0.53 & 15.37 & **3.53** & 2.67 \\ \hline Window size W=1 & 19.71 & 174.9 & 4.77 & 3.13 \\ Shuffled sequences & 108.42 & 935.69 & 17.13 & 7.10 \\ \hline \end{tabular} \end{table} Table 3: **Design Ablations. 
Up:** We ablate our training scheme by varying the conditioning approach. At inference, we demonstrate that controlling the stochasticity smoothens our predictions. **Down:** We assess the importance of including temporal context. sampling steps improves the performance of our method, proving the importance of the iterative nature of DDPMs. However, more steps require more computational capacity. Thus, we select 50 DDIM steps for an appropriate trade-off between performance and complexity. ## 5 Conclusion In this work, we present BoDiffusion, a Diffusion model for conditional motion synthesis inspired by effective architectures from the image synthesis field. Our model leverages the stochastic nature of DDPMs to produce realistic avatars based on sparse tracking signals of the hands and head. BoDiffusion uses a novel spatio-temporal conditioning scheme and enables motion synthesis with significantly reduced jittering artifacts, especially on lower bodies. Our results outperform state-of-the-art methods on traditional metrics, and we propose a new evaluation metric to fully demonstrate BoDiffusion's capabilities. ## Acknowledgements Research reported in this publication was supported by the Agence Nationale pour la Recherche (ANR) under award number ANR-19-CHIA-0017.
2307.08119
Exponential mixing for heterochaos baker maps and the Dyck system
We investigate mixing properties of piecewise affine non-Markovian maps acting on $[0,1]^2$ or $[0,1]^3$ and preserving the Lebesgue measure, which are natural generalizations of the {\it heterochaos baker maps} introduced in [Y. Saiki, H. Takahasi, J. A. Yorke. Nonlinearity 34 (2021) 5744-5761]. These maps are skew products over uniformly expanding or hyperbolic bases, and the fiber direction is a center in which both contracting and expanding behaviors coexist. We prove that these maps are mixing of all orders. For maps with a mostly expanding or contracting center, we establish exponential mixing for H\"older continuous functions. Using this result, for the Dyck system originating in the theory of formal languages, we establish exponential mixing with respect to its two coexisting ergodic measures of maximal entropy.
Hiroki Takahasi
2023-07-16T18:07:47Z
http://arxiv.org/abs/2307.08119v1
# Exponential mixing for ###### Abstract. We investigate mixing properties of piecewise affine non-Markovian maps acting on \([0,1]^{2}\) or \([0,1]^{3}\) and preserving the Lebesgue measure, which are natural generalizations of the _heterochaos baker maps_ introduced in [Y. Saiki, H. Takahasi, J. A. Yorke. Nonlinearity **34** (2021) 5744-5761]. These maps are skew products over uniformly expanding or hyperbolic bases, and the fiber direction is a center in which both contracting and expanding behaviors coexist. We prove that these maps are mixing of all orders. For maps with a mostly expanding or contracting center, we establish exponential mixing for Holder continuous functions. Using this result, for the Dyck system originating in the theory of formal languages, we establish exponential mixing with respect to its two coexisting ergodic measures of maximal entropy. Key words and phrases:piecewise affine map; mixing of all orders; decay of correlations; the Dyck shift. 2020 Mathematics Subject Classification: Primary 37A25, 37A40; Secondary 37A55 ## 1. introduction The baker map (see Figure 1) \[(x,y)\in[0,1)^{2}\mapsto\begin{cases}\Big{(}2x,\dfrac{y}{2}\Big{)}&\text{on } \left[0,\dfrac{1}{2}\right)\times[0,1),\\ \Big{(}2x-1,\dfrac{y+1}{2}\Big{)}&\text{on }\left[\dfrac{1}{2},1\right) \times[0,1),\end{cases}\] is one of the simplest uniformly hyperbolic dynamical systems. The name 'baker' is used since the action of the map is reminiscent of the kneading dough [14]. The baker map preserves the Lebesgue measure on \([0,1)^{2}\), and mixing properties with respect to this measure are well-known: It is \(K\) and hence mixing of all orders [26]; has exponential decay of correlations for Holder continuous functions [7, 27]. Is there any analogue of the baker map that provides a hands-on, elementary understanding of complicated phenomena in non-hyperbolic systems? In response to this question, two piecewise affine maps on \([0,1]^{2}\) or \([0,1]^{3}\) were introduced in [28], called heterochaos baker maps. The name 'heterochaos' comes from their distinctive property that periodic points with different unstable dimensions coexist densely in an invariant set [28, Theorem 1.1], a phenomenon aka _the unstable dimension variability_[12, 19], which was recognized earlier [1, 6, 22, 30] as a \(C^{1}\) robust phenomenon for diffeomorphisms of closed manifolds of dimension at least three. The analysis of the heterochaos baker maps is also motivated by the theory of symbolic dynamics, which is closely related to automata theory and formal language theory. There is a universal formal language due to W. Dyck. The Dyck system [2, 20] is the symbolic dynamics generated by that language. Interestingly, the heterochaos baker maps are natural geometric models of the Dyck system [32, Theorem 1.1]. This connection has opened the door for a complementary analysis of the heterochaos baker maps and the Dyck system, which consists in _verifying certain dynamical properties of the one side, with the aid of the other._ For a prototypical result on the heterochaos baker maps with the aid of the Dyck system, see [32, Theorem 1.2]. In this paper we proceed to a complementary analysis of mixing properties of the heterochaos baker maps and the Dyck system. We will prove exponential mixing with respect to the Lebesgue measure for the heterochaos baker maps, and with the aid of this, prove exponential mixing for the Dyck system with respect to the two coexisting ergodic measures of maximal entropy. 
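As a point of reference for the constructions that follow, the baker map recalled at the beginning of this introduction can be written in two lines; the sketch below is for illustration only.

```python
def baker(x, y):
    """One step of the baker map on [0,1)^2: the x-direction is expanded by
    factor 2, the y-direction contracted by factor 1/2, and the two halves
    of the square are stacked on top of each other."""
    if x < 0.5:
        return 2.0 * x, 0.5 * y
    return 2.0 * x - 1.0, 0.5 * (y + 1.0)
```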
Below we introduce these two dynamical systems, and state main results. ### The heterochaos baker maps Let \(m\geq 2\) be an integer. For \(a\in(0,\frac{1}{m})\) define \(\tau_{a}\colon[0,1]\to[0,1]\) by \[\tau_{a}(x)=\begin{cases}\frac{x-(i-1)a}{a}&\text{on }[(i-1)a,ia),\ i\in\{1, \ldots,m\},\\ \frac{x-ma}{1-ma}&\text{on }[ma,1].\end{cases} \tag{1.1}\] Consider the set of \(2m\) symbols \[D=\{\alpha_{1},\ldots,\alpha_{m}\}\cup\{\beta_{1},\ldots,\beta_{m}\}, \tag{1.2}\] and define \[\Omega^{+}_{\alpha_{i}}=[(i-1)a,ia)\times[0,1]\ \text{for }i\in\{1,\ldots,m\},\] and \[\Omega^{+}_{\beta_{i}}=\begin{cases}[ma,1]\times\left[\frac{i-1}{m},\frac{i}{ m}\right)&\text{for }i\in\{1,\ldots,m-1\},\\ \left[ma,1\right]\times\left[\frac{i-1}{m},1\right]&\text{for }i=m.\end{cases}\] Figure 1. The baker map: The \(x\)-direction is expanding by factor \(2\) and the \(y\)-direction is contracting by factor \(1/2\). The sets \(\Omega^{+}_{\alpha_{i}}\), \(\Omega^{+}_{\beta_{i}}\), \(i\in\{1,\ldots,m\}\) are pairwise disjoint and their union equals \([0,1]^{2}\). Define a map \(f_{a}\colon[0,1]^{2}\to[0,1]^{2}\) by \[f_{a}(x,y)=\begin{cases}\left(\tau_{a}(x),\frac{y}{m}+\frac{i-1}{m}\right)& \text{ on }\Omega^{+}_{\alpha_{i}},\ i\in\{1,\ldots,m\},\\ (\tau_{a}(x),my-i+m+1)&\text{ on }\Omega^{+}_{\beta_{i}},\ i\in\{1,\ldots,m\}. \end{cases} \tag{1.3}\] See Figure 2 for \(m=2\). Next, put \(\Omega_{\alpha_{i}}=\Omega^{+}_{\alpha_{i}}\times[0,1]\) and \(\Omega_{\beta_{i}}=\Omega^{+}_{\beta_{i}}\times[0,1]\) for \(i\in\{1,\ldots,m\}\). For \(a,b\in(0,\frac{1}{m})\) define a map \(f_{a,b}\colon[0,1]^{3}\to[0,1]^{3}\) by \[f_{a,b}(x,y,z)=\begin{cases}(f_{a}(x,y),(1-mb)z)&\text{ on }\Omega_{\alpha_{i}},\ i \in\{1,\ldots,m\},\\ (f_{a}(x,y),bz+1+b(i-m-1))&\text{ on }\Omega_{\beta_{i}},\ i\in\{1,\ldots,m\}. \end{cases} \tag{1.4}\] See Figure 3. In other words, \(f_{a}\) is the projection of \(f_{a,b}\) to the \(xy\)-plane. Note that \(f_{a,b}\) is one-to-one except on the points in the boundaries of \(\Omega_{\alpha_{i}}\), \(\Omega_{\beta_{i}}\) where it may be discontinuous and at most three-to-one. We call \(f_{a}\) and \(f_{a,b}\)_heterochaos baker maps_1. Footnote 1: The baker map is defined on \([0,1)^{2}\), whereas \(f_{a}\) is defined on \([0,1]^{2}\). This minor discrepancy has stemmed from the intention of the authors of [28] to visualize the ‘heteroclinic set’ in [28]. The heterochaos baker maps introduced in [28] were precisely \(f_{\frac{1}{3}}\) and \(f_{\frac{1}{3},\frac{1}{6}}\) with \(m=2\). Symbolic dynamics for \(f_{a}\), \(f_{a,b}\) and their measures of maximal entropy were investigated in [32]. Saiki et al. [29] considered some variants of \(f_{a,b}\) in the context of fractal geometry and homoclinic bifurcations of diffeomorphisms. Homburg and Kalle [17] considered iterated function systems intimately related to \(f_{a,b}\). For any \(a\in(0,\frac{1}{m})\), \(f_{a}\) preserves the Lebesgue measure on \([0,1]^{2}\). The map \(f_{a,b}\) preserves the Lebesgue measure on \([0,1]^{3}\) if and only if \(a+b=\frac{1}{m}\). Put \[g_{a}=f_{a,\frac{1}{m}-a}.\] We will drop \(a\) from notation and write \(f\), \(g\) and so on when the context is clear. ### The Dyck system Krieger [21] introduced a certain class of subshifts having some algebraic property, called property A subshifts. It is a fundamental shift space in this class. 
The Dyck system is a subshift on the alphabet \(D\) in (1.2) consisting of \(m\) brackets, \(\alpha_{i}\) left and \(\beta_{i}\) right in pair, whose admissible words are sequences of legally aligned brackets. To be more precise, let \(D^{*}\) denote the set of finite words in \(D\). Consider the monoid with zero, with \(2m\) generators in \(D\) and the unit element \(1\) with relations \[\begin{split}\alpha_{i}\cdot\beta_{j}&=\delta_{i,j },\ 0\cdot 0=0\ \text{and}\\ \gamma\cdot 1&=1\cdot\gamma=\gamma,\ \gamma\cdot 0=0 \cdot\gamma=0\ \text{for}\ \gamma\in D^{*}\cup\{1\},\end{split} \tag{1.5}\] where \(\delta_{i,j}\) denotes Kronecker's delta. For \(n\geq 1\) and \(\gamma_{1}\gamma_{2}\cdots\gamma_{n}\in D^{*}\) let \[\operatorname{red}(\gamma_{1}\cdots\gamma_{n})=\prod_{i=1}^{n}\gamma_{i}.\] The one and two-sided Dyck shifts on \(2m\) symbols are respectively defined by \[\begin{split}\Sigma_{D}^{+}&=\{\omega\in D^{ \mathbb{Z}_{+}}\colon\operatorname{red}(\omega_{i}\cdots\omega_{j})\neq 0\ \text{for}\ i,j\in\mathbb{Z}_{+}\ \text{with}\ i<j\},\\ \Sigma_{D}&=\{\omega\in D^{\mathbb{Z}}\colon \operatorname{red}(\omega_{i}\cdots\omega_{j})\neq 0\ \text{for}\ i,j\in\mathbb{Z}\ \text{with}\ i<j\},\end{split}\] where \(\mathbb{Z}_{+}\) denotes the set of non-negative integers. Let \(\sigma_{+}\) and \(\sigma\) denote the left shifts acting on \(\Sigma_{D}^{+}\) and \(\Sigma_{D}\) respectively. As a counterexample to the conjecture of Weiss [34], Krieger [20] showed that the Dyck system has exactly two ergodic measures of maximal entropy, and they are fully supported and Bernoulli. Meyerovitch [25] proved the existence of tail invariant measures for the Dyck system. To our knowledge, there is no result on further statistical properties of the measures of maximal entropy for the Dyck system. ### Statements of the results Under the iteration of \(f_{a}\), the \(x\)-direction is expanding by factor \(\frac{1}{a}\) or \(\frac{1}{1-ma}\), while the \(y\)-direction is a center: contracting by factor \(\frac{1}{m}\) on \(\Omega_{\alpha}^{+}\) and expanding by factor \(m\) on \(\Omega_{\beta}^{+}\), where \[\Omega_{\alpha}^{+}=\bigcup_{i=1}^{m}\Omega_{\alpha_{i}}^{+}\ \text{ and }\ \Omega_{\beta}^{+}=\bigcup_{i=1}^{m}\Omega_{\beta_{i}}^{+}.\] The local stability in the \(y\)-direction along each orbit is determined by the asymptotic time average of the function \[\phi^{c}(p)=\begin{cases}-\log m&\text{on }\Omega^{+}_{\alpha},\\ \log m&\text{on }\Omega^{+}_{\beta},\end{cases} \tag{1.6}\] called _the central Jacobian_. Write \(S_{n}\phi^{c}\) for the sum \(\sum_{k=0}^{n-1}\phi^{c}\circ f_{a}^{k}\). Since \(f_{a}\) is a skew product map over \(\tau_{a}\) and \(\phi^{c}\) is constant on sets of the form \(\{x\}\times[0,1]\), \(x\in[0,1]\), the ergodicity of \(\tau_{a}\) with respect to the Lebesgue measure on \([0,1]\) implies that \[\lim_{n\to\infty}\frac{1}{n}S_{n}\phi^{c}(p)=(1-2ma)\log m\ \text{ for Lebesgue a.e. }p\in[0,1]^{2}. \tag{1.7}\] We classify \(f_{a}\), \(g_{a}\) into three types according to the sign of this limit value: 1. \(a\in(0,\frac{1}{2m})\) (mostly expanding center), 2. \(a\in(\frac{1}{2m},\frac{1}{m})\) (mostly contracting center), 3. \(a=\frac{1}{2m}\) (mostly neutral center). In [28], \(f_{\frac{1}{3}}\) and \(g_{\frac{1}{3}}\) with \(m=2\) were shown to be weak mixing with respect to the Lebesgue measure. 
The proof there can be slightly modified to show the weak mixingsness of \(f_{a}\) and \(g_{a}\) with respect to the Lebesgue measure for any \(a\in(0,\frac{1}{m})\setminus\{\frac{1}{2m}\}\). Our first result considerably strengthens this. Let \((X,\mathscr{B},\nu)\) be a probability space, \(T\colon X\to X\) be a measurable map preserving \(\nu\) and let \(k\geq 2\) be an integer. We say \((T,\nu)\) is \(k\)-mixing if for all \(A_{0},A_{1}\ldots,A_{k-1}\in\mathscr{B}\), \[\lim_{n_{1},\ldots,n_{k-1}\to\infty}\nu(A_{0}\cap T^{-n_{1}}(A_{1})\cap\cdots \cap T^{-n_{1}-n_{2}-\cdots-n_{k-1}}(A_{k-1}))=\prod_{j=0}^{k-1}\nu(A_{j}).\] 2-mixing is usually called mixing or strong mixing. Clearly \((k+1)\)-mixing implies \(k\)-mixing, but the converse is unknown. We say \((T,\nu)\) is _mixing of all orders_ if it is \(k\)-mixing for any \(k\geq 2\). Let \(\operatorname{Leb}\) denote the Lebesgue measure on \([0,1]^{2}\) or \([0,1]^{3}\). **Theorem A**.: _For any \(a\in(0,\frac{1}{m})\), \((f_{a},\operatorname{Leb})\) and \((g_{a},\operatorname{Leb})\) are mixing of all orders._ In fact, we will show that \((f_{a},\operatorname{Leb})\) is exact and the restriction of \((g_{a},\operatorname{Leb})\) to a set of full Lebesgue measure has \(K\)-property (Proposition 3.2). These properties imply mixing of all orders [26, Section 2.6]. Concerning rates of 2-mixing we obtain the following result. All functions appearing in this paper are real-valued. For a pair \(\varphi\), \(\psi\) of functions in \(L^{2}(\nu)\), define their correlations by \[\operatorname{Cor}_{n}(T;\varphi,\psi;\nu)=\left|\int\varphi(\psi\circ T^{n} )\mathrm{d}\nu-\int\varphi\mathrm{d}\nu\int\psi\mathrm{d}\nu\right|\ \text{for }n\geq 1.\] For a metric space \(X\) and \(\eta\in(0,1]\) let \(\mathscr{H}_{\eta}(X)\) denote the set of Holder continuous functions on \(X\) with a Holder exponent \(\eta\). We say \((T,\nu)\) is _exponentially mixing_ if for each \(\eta\in(0,1]\) there exists \(\lambda=\lambda(\eta)\in(0,1)\) such that for any pair \(\varphi\), \(\psi\) of functions in \(\mathscr{H}_{\eta}(X)\) there exists \(C=C(\varphi,\psi)>0\) such that \(\operatorname{Cor}_{n}(T;\varphi,\psi;\nu)\leq C\lambda^{n}\) for all \(n\geq 1\). **Theorem B**.: _For any \(a\in(0,\frac{1}{m})\setminus\{\frac{1}{2m}\}\), \((f_{a},\operatorname{Leb})\) and \((g_{a},\operatorname{Leb})\) are exponentially mixing._ As we recall in Section 2.9, the heterochaos baker maps are geometric models of the Dyck system [32, Theorem 1.1]: Following the orbits of \(f_{a}\) and \(f_{a,b}\) over the partitions \(\{\Omega_{\gamma}^{+}\}_{\gamma\in D}\) and \(\{\Omega_{\gamma}\}_{\gamma\in D}\), one obtains \(\Sigma_{D}^{+}\) and \(\Sigma_{D}\) respectively. We exploit this connection to establish exponential mixing for the Dyck system. For the one-sided (resp. two-sided) Dyck shift, there exist exactly two shift invariant ergodic measures of maximal entropy [20], which we denote by \(\nu_{\alpha}^{+}\) and \(\nu_{\beta}^{+}\) (resp. \(\nu_{\alpha}\) and \(\nu_{\beta}\)), leaving the details to Section 5.1. 
As usual, the metrics \(d\) on the shift spaces are the Hamming metrics: For distinct points \(\omega\), \(\omega^{\prime}\in\Sigma_{D}^{+}\), \[d(\omega,\omega^{\prime})=\exp(-\min\{i\geq 0\colon\omega_{i}\neq\omega_{i}^{ \prime}\}),\] and for distinct points \(\omega\), \(\omega^{\prime}\in\Sigma_{D}\), \[d(\omega,\omega^{\prime})=\exp(-\min\{i\geq 0\colon\omega_{i}\neq\omega_{i}^{ \prime}\text{ or }\omega_{-i}\neq\omega_{-i}^{\prime}\}).\] **Theorem C**.: _All \((\sigma_{+},\nu_{\alpha}^{+})\), \((\sigma_{+},\nu_{\beta}^{+})\), \((\sigma,\nu_{\alpha})\), \((\sigma,\nu_{\beta})\) are exponentially mixing._ ### Outline of proofs and the structure of the paper Uniformly hyperbolic systems have Markov partitions, and their statistical properties are well-understood on a symbolic level [7, 27, 31]. The difficulty in analyzing statistical properties of the heterochaos baker maps consists in the lack of a Markov partition, which is precisely due to the dynamics in the \(y\)-direction. To overcome this difficulty, we combine inducing and large deviations. A main tool in our construction is a Markov diagram, originally introduced by Hofbauer [15, 16] and developed by Keller [18], Buzzi [5] and so on. In Section 2, for \(f_{a}\), \(a\in(0,\frac{1}{m})\) we define a stopping time \(R\colon[0,1]^{2}\to\mathbb{Z}_{+}\cup\{\infty\}\), and construct a uniformly expanding induced Markov map \(f^{R}\) with infinitely many branches. We show that \(\{R=\infty\}\) is a null set for \(a\in(0,\frac{1}{2m}]\) (Proposition 2.6), and the Lebesgue measure of the tail \(\{R>n\}\) decays exponentially in \(n\) for \(a\in(0,\frac{1}{2m})\) (Proposition 2.9). Since the heterochaos baker maps are piecewise affine, all these construction and estimates are purely combinatorial and distortion estimates are not needed. In Section 3 we show that both \((f_{a},\operatorname{Leb})\) and \((g_{a},\operatorname{Leb})\) are mixing of all orders for all \(a\in(0,\frac{1}{2m}]\). Using the 'invariance' of correlations under the replacement of \(a\) by \(\frac{1}{m}-a\) (Proposition 3.1), we complete the proof of Theorem A. In Section 4 we construct _towers_, and apply Young's result [35] to deduce the exponential mixing for the heterochaos baker maps. In order to estimate error bounds, we perform a large deviation argument for the map \(\tau_{a}\). Using the invariance of correlations again completes the proof of Theorem B. A proof of Theorem C in Section 5 relies on a surprising connection between one of the two ergodic measures of maximal entropy for the Dyck system and the Lebesgue measure (Proposition 5.7). We emphasize that Theorem C is not an immediate consequence of Theoerem B, since the coding maps (see Section 2.9 for the definition) from the heterochaos baker maps into the Dyck system do not preserve Holder continuous functions. Our idea is to transfer the towers for the heterochaos baker maps to the Dyck shift spaces, and apply Young's result [35] again to obtain the exponential mixing with respect to one of the two ergodic measures of maximal entropy. Exponential mixing with respect to the other one follows from the symmetry in the Dyck system (Proposition 5.3). ### Comparison with results on partially hyperbolic systems Like the heterochaos baker maps, skew product maps over uniformly hyperbolic or expanding bases are considered to be simple models of partially hyperbolic systems2. Hence, it is relevant to compare our results with others on exponential decay of correlations for partially hyperbolic systems. 
Footnote 2: The splitting of the tangent bundle of \([0,1]^{2}\) into \(x\)- and \(y\)-directions does not always give rise to a partially hyperbolic splitting for \(f_{a}\): If \(0<a<\frac{m-1}{m^{2}}\) then the minimum expansion rate in the \(x\)-direction is \(\min\{\frac{1}{a},\frac{1}{1-ma}\}\), which does not dominate the expansion rate \(m\) in the \(y\)-direction. Young's method [35, 36] of deducing nice statistical properties using towers with fast decaying tails has been successfully applied to some partially hyperbolic systems, e.g., [3, 8, 9, 13] to deduce exponential decay of correlations. With more functional analytic methods, exponential decay of correlations was proved for certain partially hyperbolic diffeomorphisms [10], and for certain piecewise partially hyperbolic endomorphisms [4]. Clearly the heterochaos baker maps are covered by none of these existing results. ### Comparison with results on subshifts For a large class of subshifts with 'non-uniform specification', Climenhaga [11] constructed towers and applied Young's method to obtain nice statistical properties including exponential decay of correlations. Clearly the Dyck system does not satisfy the 'non-uniform specification', since it would imply the uniqueness of measure of maximal entropy. One can also directly check the breakdown of 'non-uniform specification' using the definition of the Dyck system. We believe it would be difficult to construct towers for the Dyck system to which one can apply Young's result, purely within the symbolic setting with no recourse to the connection with the heterochaos baker maps proved in [32]. The geometric approach to the Dyck system in the proof of Theorem C based on this connection seems promising, in order to establish many other dynamical properties of the measures of maximal entropy for the Dyck system. Moreover, this approach may be extended to some other subshifts with a high symmetry, such as the Motzkin system [23]. ## 2. Dynamics of the heterochaos baker maps In this section we analyze the dynamics of the heterochaos baker map \(f_{a}\), \(a\in(0,\frac{1}{m})\). In Section 2.1 we introduce a Markov diagram for \(f_{a}\) and describe its combinatorial structure. In Section 2.2 we develop path counting arguments in the diagram, and in Section 2.3 provide useful formulas for the Birkhoff sums of the function \(\phi^{c}\) in (1.6). In Section 2.4 we introduce a stopping time \(R\), and express it as 'a twice jump time under a constraint'. In Section 2.5 we give another formula for the stopping time, and show in Section 2.6 the almost sure finiteness of the stopping time for \(a\in(0,\frac{1}{2m}]\). In Section 2.7 we construct an induced Markov map, and in Section 2.8 estimate the Lebesgue measure of the tail of the stopping time. In Section 2.9 we recall the connection between the heterochaos baker maps and the Dyck system established in [32]. ### A Markov diagram Let \(\operatorname{int}\) denote the interior operation in \(\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\). Let \(A\subset[0,1]^{2}\) be a non-empty set with \(A\subset\operatorname{int}(\Omega_{\gamma}^{+})\) for some \(\gamma\in D\). A set \(B\subset[0,1]^{2}\) is a _successor_ of \(A\) if \(B=f(A)\cap\operatorname{int}(\Omega_{\gamma^{\prime}}^{+})\) holds for some \(\gamma^{\prime}\in D\). If \(B\) is a successor of \(A\), we write \(A\to B\). 
We set \[\mathcal{V}_{\alpha}=\{\operatorname{int}(\Omega_{\alpha_{i}}^{+})\colon i \in\{1,\dots,m\}\}\ \text{ and }\ \mathcal{V}_{\beta}=\{\operatorname{int}(\Omega_{\beta_{i}}^{+})\colon i\in \{1,\dots,m\}\},\] and define \(\mathcal{V}_{n}\), \(n=0,1,\dots\) inductively by \[\mathcal{V}_{n}=\begin{cases}\mathcal{V}_{\alpha}\cup\mathcal{V}_{\beta}& \text{ for }n=0,\\ \mathcal{V}_{n-1}\cup\{B\colon B\text{ is a successor of an element of }\mathcal{V}_{n-1}\}& \text{ for }n\geq 1.\end{cases}\] We set \(\mathcal{V}=\bigcup_{n=0}^{\infty}\mathcal{V}_{n}\). The oriented graph \((\mathcal{V},\rightarrow)\) is called a _Markov diagram_ for \(f\). We decompose the set \(\mathcal{V}\) of vertices into infinitely many levels \(\mathcal{L}_{r}\), \(r=-1,0,1,\dots\) as follows: \[\mathcal{L}_{r}=\begin{cases}\mathcal{V}_{\alpha}&\text{ for }r=-1,\\ \mathcal{V}_{1}\setminus\mathcal{V}_{\alpha}&\text{ for }r=0,\\ \mathcal{V}_{r+1}\setminus\mathcal{V}_{r}&\text{ for }r\geq 1.\end{cases}\] See Figure 4. Define \(l\colon\mathcal{V}\rightarrow\mathbb{Z}\) by \[v\in\mathcal{L}_{l(v)}\text{ for each }v\in\mathcal{V}.\] It is possible to describe all outgoing edges and incoming edges for each vertex in \(\mathcal{V}\). However, we only have to estimate from above the number of paths of a given length connecting two given vertices. For this purpose, it suffices to describe the cardinalities of all outgoing edges from each vertex \(v\in\mathcal{V}\) which are either upward, downward or parallel (shown in black, red, blue arrows respectively in Figure 4), in terms of the function \(l\). The cardinalities of the three sets \[\mathcal{V}_{\uparrow}(v) =\{v^{\prime}\in\mathcal{V}\colon v\to v^{\prime},\ l(v^{\prime})=l(v)+1\},\] \[\mathcal{V}_{\downarrow}(v) =\{v^{\prime}\in\mathcal{V}\colon v\to v^{\prime},\ l(v^{\prime})=l( v)-1\},\] \[\mathcal{V}_{\rightarrow}(v) =\{v^{\prime}\in\mathcal{V}\colon v\to v^{\prime},\ l(v^{ \prime})=l(v)\},\] are given as follows: 1. For each \(v\in\mathcal{V}_{\alpha}\), \[\#\mathcal{V}_{\uparrow}(v)=m+1,\ \#\mathcal{V}_{\downarrow}(v)=0,\ \#\mathcal{V}_{\rightarrow}(v)=0.\] 2. For each \(v\in\mathcal{V}_{\beta}\), \[\#\mathcal{V}_{\uparrow}(v)=0,\ \#\mathcal{V}_{\downarrow}(v)=m,\ \#\mathcal{V}_{ \rightarrow}(v)=m.\] 3. For each \(v\in\mathcal{V}\setminus\mathcal{V}_{0}\) with \(v\subset\Omega_{\alpha}^{+}\), \[\#\{v^{\prime}\in\mathcal{V}_{\uparrow}(v)\}=m+1,\ \#\mathcal{V}_{\downarrow}(v)=0,\ \#\mathcal{V}_{\rightarrow}(v)=0.\] 4. For each \(v\in\mathcal{V}\setminus\mathcal{V}_{0}\) with \(v\subset\Omega_{\beta}^{+}\), \[\#\{v^{\prime}\in\mathcal{V}_{\uparrow}(v)\}=0,\ \#\mathcal{V}_{\downarrow}(v)=m+1,\ \#\mathcal{V}_{ \rightarrow}(v)=0.\] ### Counting paths in the Markov diagram Let \(n\geq 1\). A _path_ of length \(n\) is a word \(v_{0}\cdots v_{n}\) of elements of \(\mathcal{V}\) of word length \(n+1\) satisfying \(v_{k}\to v_{k+1}\) for every \(k\in\{0,\ldots,n-1\}\). We say \(k\in\{0,\ldots,n-1\}\) is a _hold time_ of a path \(v_{0}\cdots v_{n}\) if \(\{v_{k},v_{k+1}\}\subset\mathcal{V}_{\beta}\). Let \(P_{n}\) denote the set of paths of length \(n\) which have no hold time. In order to estimate the cardinalities of \(P_{n}\) and its various subsets, we consider projections to paths of the symmetric random walk on \(\mathbb{Z}\). Let \[Z_{n}=\left\{l_{0}\cdots l_{n}\in\mathbb{Z}^{n+1}\colon|l_{k}-l_{k+1}|=1\text{ for }k\in\{0,\ldots,n-1\}\right\}. 
\tag{2.1}\] Define a projection \(\Phi_{n}\colon P_{n}\to Z_{n}\) by \[\Phi_{n}(v_{0}v_{1}\cdots v_{n})=l(v_{0})l(v_{1})\cdots l(v_{n}).\] **Lemma 2.1**.: _Let \(n\geq 1\) and let \(l_{0}l_{1}\cdots l_{n}\in Z_{n}\). We have_ \[\#\Phi_{n}^{-1}(l_{0}l_{1}\cdots l_{n})\leq(m+1)m^{\frac{1}{2}(n+l_{n}-l_{0})}.\] Proof.: A path \(u_{0}\cdots u_{j}\) in the Markov diagram is _upward_ if \(l(u_{k})<l(u_{k+1})\) for all \(k\in\{0,\ldots,j-1\}\), and _downward_ if \(l(u_{k+1})<l(u_{k})\) for all \(k\in\{0,\ldots,j-1\}\). Let \(r\), \(s\) be integers with \(-1\leq r<s\). From the description of the Markov diagram in Section 2.1, the number of upward paths from one vertex in \(\mathcal{L}_{r}\) to another in \(\mathcal{L}_{s}\) does not exceed \(m^{s-r-1}(m+1)\), and the number of upward paths from one vertex in \(\mathcal{L}_{r}\) to another in \(\mathcal{L}_{s}\) which can be concatenated to a downward path does not exceed \(m^{s-r-1}\). Reciprocally, the number of downward paths from one vertex in \(\mathcal{L}_{s}\) to another in \(\mathcal{L}_{r}\) does not exceed \(m+1\), and the number of downward paths from one vertex in \(\mathcal{L}_{s}\) to another in \(\mathcal{L}_{r}\) which can be concatenated to an upward path does not exceed \(m\). Let \(l_{0}\cdots l_{n}\in Z_{n}\). We may assume \(l_{k}\geq-1\) for \(k\in\{0,\ldots,n\}\) for otherwise the desired inequality is trivial. Any path in \(\Phi_{n}^{-1}(l_{0}\cdots l_{n})\) is uniquely written as the alternate concatenation of upward and downward paths, see Figure 5 for example. The sum of the lengths of all the upward paths in this concatenation equals \(\frac{1}{2}(n+l_{n}-l_{0})\). Hence we obtain the desired inequality. For a vertex \(v\in\mathcal{L}_{0}\setminus\mathcal{V}_{\beta}\) and positive integers \(n\), \(j\), let \(P_{n}(j;v)\) denote the set of paths \(v_{0}\cdots v_{n}\) of length \(n\) such that \(v_{0}=v\), \(l(v_{k})\geq 1\) for all \(k\in\{1,\ldots,n\}\) and \(l(v_{n})=j\). Note that \(P_{n}(j;v)\neq\emptyset\) if and only if \(n\equiv j\) mod \(2\). **Lemma 2.2**.: _For any \(v\in\mathcal{L}_{0}\setminus\mathcal{V}_{\beta}\) and all positive integers \(n\), \(j\) with \(n\equiv j\) mod \(2\), we have_ \[\#P_{n}(j;v)\leq\frac{j}{n}\left(\begin{smallmatrix}n\\ \frac{n+j}{2}\end{smallmatrix}\right)\frac{m+1}{m}m^{\frac{n+j}{2}}.\] Proof.: Since each path in \(P_{n}(j;v)\) has no hold time, each element of \(\Phi_{n}(P_{n}(j;v))\) may be viewed as a path of the symmetric random walk on \(\mathbb{Z}\) starting at the position \(j\) which hits the origin \(0\) at step \(n\) for the first time. The reflection principle for the random walk gives \[\#\Phi_{n}(P_{n}(j;v))=\frac{j}{n}\left(\begin{smallmatrix}n\\ \frac{n+j}{2}\end{smallmatrix}\right). \tag{2.2}\] By Lemma 2.1, for each \(v_{0}\cdots v_{n}\in P_{n}(j;v)\) we have \[\#\Phi_{n}^{-1}(l(v_{0})\cdots l(v_{n}))\leq\frac{m+1}{m}m^{\frac{n+j}{2}}. \tag{2.3}\] Combining (2.2) and (2.3) we obtain the desired inequality. ### Formulas for Birkhoff sums of the central Jacobian Let \(n\geq 1\) and let \(v_{0}\cdots v_{n}\) be a path in the Markov diagram \((\mathcal{V},\rightarrow)\). The Birkhoff sum \(S_{n}\phi^{c}\) on the rectangle \(\bigcap_{k=0}^{n}f^{-k}(v_{k})\) is a constant, which we denote by \(S_{n}\phi^{c}(v_{0}\cdots v_{n})\) with a slight abuse of notation. From the structure of the Markov diagram described in Section 2.1, we derive two useful formulas for this constant. 1.
If \(v_{0}\cdots v_{n}\in P_{n}\), then (2.4) \[S_{n}\phi^{c}(v_{0}\cdots v_{n})=(l(v_{0})-l(v_{n}))\log m.\] 2. If \(v_{0}\in\mathcal{V}_{\beta}\) and \(v_{n}\in\mathcal{V}_{\beta}\) then (2.5) \[S_{n}\phi^{c}(v_{0}\cdots v_{n})=\#\{k\in\{0,\ldots,n-1\}\colon k\text{ is a hold time of }v_{0}\cdots v_{n}\}\log m.\] Since any path in \((\mathcal{V},\rightarrow)\) can be written as a concatenation of paths of the above two kinds, the Birkhoff sum of the central Jacobian along any path is kept in track by a combination of the formulas (2.4) and (2.5). ### Stopping time We denote by \(|\cdot|\) the Lebesgue measure on \([0,1]\), \([0,1]^{2}\), \([0,1]^{3}\). A _rectangle_ (resp. _block_) is a product of two (resp. three) non-degenerate intervals in \([0,1]\). For a rectangle \(B=B_{x}\times B_{y}\) we write \[|B|_{x}=|B_{x}|\ \text{ and }\ |B|_{y}=|B_{y}|.\] Similarly, for a block \(B=B_{x}\times B_{y}\times B_{z}\) we write \[|B|_{x}=|B_{x}|,\ |B|_{y}=|B_{y}|\ \text{ and }\ |B|_{z}=|B_{z}|.\] For \(p\in[0,1]^{2}\) and \(n\geq 1\), let \(K_{n}(p)\) denote the maximal rectangle containing \(p\) on which \(f^{n}\) is affine. Clearly \(|K_{1}(p)|_{y}=1\) for \(p\in\Omega_{\alpha}^{+}\), \(|K_{1}(p)|_{y}=\frac{1}{m}\) for \(p\in\Omega_{\beta}^{+}\) and \(|K_{n}(p)|_{y}/|K_{n-1}(p)|_{y}\in\left\{1,\frac{1}{m}\right\}\) for \(n\geq 2\). An integer \(n\geq 2\) is a _cutting time_ of \(p\) if \(|K_{n}(p)|_{y}/|K_{n-1}(p)|_{y}=\frac{1}{m}\). We define a _stopping time_ \[R(p)=\inf\{n\geq 2\colon n\text{ is a cutting time of }p\}. \tag{2.6}\] Clearly we have \(R(p)=2\) if and only if \(p\) is contained in the set \[E=\Omega_{\beta}^{+}\cap f^{-1}(\Omega_{\beta}^{+}).\] The stopping time is expressed as follows. Put \(S_{0}\phi^{c}\equiv 0\) for convenience. **Proposition 2.3**.: _For all \(p\in[0,1]^{2}\) we have_ \[R(p)=\begin{cases}\inf\{n\geq 0\colon S_{n}\phi^{c}(p)=-\log m\text{ and }f^{n}(p)\in E\}+2&\text{ on }\Omega_{\alpha}^{+},\\ \inf\{n\geq 0\colon S_{n}\phi^{c}(p)=0\text{ and }f^{n}(p)\in E\}+2&\text{ on }\Omega_{\beta}^{+}.\end{cases}\] Before proceeding let us record two equalities. First, \[f(\Omega_{\beta}^{+}\setminus E)=\Omega_{\alpha}^{+}. \tag{2.7}\] The definition of the stopping time (2.6) implies \[R\circ f=R-1\ \text{ on }\Omega_{\beta}^{+}\setminus E. \tag{2.8}\] Proof of Proposition 2.3.: By (2.7) and (2.8) it suffices to consider the case \(p\in\Omega_{\beta}^{+}\setminus E\). Clearly we have \(R(p)\geq 3\). We claim \(S_{R(p)-2}\phi^{c}(p)=0\), for otherwise (2.5) implies \(S_{R(p)-2}\phi^{c}(p)>0\), and there is \(k\in\{2,\ldots,R(p)-1\}\) such that \(f^{k-2}(p)\in E\) and \(S_{k-2}\phi^{c}(p)=0\). From Lemma 2.4 below, it follows that \(k\) is a cutting time of \(p\), a contradiction to the minimality in the definition of \(R(p)\). This claim and (2.4) together imply \(S_{n}\phi^{c}(p)<0\) for all \(n\geq 1\) with \(n\leq R(p)-3\), and so the desired equality holds. **Lemma 2.4**.: _Let \(p\in\Omega_{\beta}^{+}\) and \(n\geq 2\). Then \(n\) is a cutting time of \(p\) if and only if \(f^{n-2}(p)\in E\) and \(S_{n-2}\phi^{c}(p)\geq 0\)._ Proof.: To show the 'if' part, suppose \(n\geq 2\), \(f^{n-2}(p)\in E\) and \(S_{n-2}\phi^{c}(p)\geq 0\). Take \(i\in\{1,\ldots,m\}\) with \(f^{n-2}(p)\in\Omega^{+}_{\beta_{i}}\). Let \(B\) denote the connected component of \(f^{-n+2}(\Omega^{+}_{\beta_{i}})\) that contains \(p\). Then \(B\) is a rectangle, \(f^{n-2}|_{B}\) is affine and \(|f^{n-2}(B)|_{y}=\frac{1}{m}\). Hence \(n\) is a cutting time of \(p\). 
To show the 'only if' part, let \(n\geq 2\) be a cutting time of \(p\). The definition of cutting time implies \(f^{n-2}(p)\in E\). Since \(p\in\Omega^{+}_{\beta}\), by (2.5) we obtain \(S_{n-2}\phi^{c}(p)\geq 0\). ### Formula for the stopping time in terms of pullbacks Let \(A\) be a non-empty subset of \([0,1]^{2}\) and let \(n\geq 1\). A connected component of \(f^{-n}(A)\) is called a _pullback_ of \(A\) by \(f^{n}\). If \(B\) is a pullback of \(A\) by \(f^{n}\) and \(f^{n}|_{B}\) is affine, we call \(B\) an affine pullback of \(A\) by \(f^{n}\), or simply an _affine pullback_. If \(A\) is connected and \(B\) is an affine pullback of \(A\) by \(f^{n}\), then \(f^{n}(B)=A\). **Proposition 2.5**.: _For any \(p\in(0,1)^{2}\) such that \(R(p)\) is finite and \(f^{R(p)}(p)\in(0,1)^{2}\), we have_ \[R(p)=\min\left\{\begin{aligned} n\geq 2\colon& \text{there exists an affine pullback $B$ of $(0,1)^{2}$ by $f^{n}$}\\ &\text{such that $p\in B$ and $|B|_{y}=\frac{1}{m}|K_{1}(p)|_{y}$} \end{aligned}\right\}.\] Proof.: In the case \(R(p)=2\), we have \(p\in E\) and the desired equality is obvious. Suppose \(R(p)\geq 3\). Then \(p\notin E\) holds. By Proposition 2.3, there exists \(i\in\{1,\ldots,m\}\) such that \(f^{R(p)-2}(p)\in\Omega^{+}_{\beta_{i}}\cap E\). The pullback of the rectangle \(\Omega^{+}_{\beta_{i}}\cap E\) by \(f^{R(p)-2}\) that contains \(p\), denoted by \(B^{\prime}\), is an affine pullback and satisfies \(|B^{\prime}|_{y}=|K_{1}(p)|_{y}\). Proposition 2.3 implies \(\operatorname{int}(\Omega^{+}_{\beta_{i}}\cap E)\subset f^{R(p)-2}(B^{\prime})\). In particular, \(f^{R(p)-1}|_{B^{\prime}}\) is affine and \(\operatorname{int}(\Omega^{+}_{\beta})\subset f^{R(p)-1}(B^{\prime})\). From the assumption \(f^{R(p)}(p)\in(0,1)^{2}\), there exists an affine pullback \(B\) of \((0,1)^{2}\) by \(f^{R(p)}\) satisfying \(p\in B\subset B^{\prime}\) and \(|B|_{y}=\frac{1}{m}|B^{\prime}|_{y}=\frac{1}{m}|K_{1}(p)|_{y}\). Hence, \(R(p)\) does not exceed the minimum in the right-hand side of the desired equality in the proposition. The reverse inequality is obvious from the property of \(B\). ### Almost sure finiteness of the stopping time Clearly, if \(a\in(\frac{1}{2m},\frac{1}{m})\) then the set \(\{R=\infty\}\) has positive Lebesgue measure. Otherwise this is a null set. **Proposition 2.6**.: _If \(a\in(0,\frac{1}{2m}]\) then the stopping time is finite Lebesgue a.e. on \([0,1]^{2}\)._ Proof.: Proposition 2.3 and (2.4), (2.5) together imply \(\limsup_{n\to\infty}\frac{1}{n}S_{n}\phi^{c}(p)\leq 0\) for all \(p\in\{R=\infty\}\). Recall (1.7) that \(\frac{1}{n}S_{n}\phi^{c}\) converges almost surely to the positive constant \((1-2ma)\log m\). Hence, if \(a\in(0,\frac{1}{2m})\) then \(R\) is finite Lebesgue a.e. on \([0,1]^{2}\). For the rest of the proof of Proposition 2.6 we assume \(a=\frac{1}{2m}\). We need two lemmas. **Lemma 2.7**.: _For Lebesgue a.e. \(x\in[0,1]\), there exists a pair \((M,N)\) of positive integers such that_ \[\sup_{n\geq N}\frac{1}{\sqrt{n}}|S_{n}\phi^{c}(x,0)|\leq M.\] Proof.: Put \(v(\phi^{c})=\int_{[0,1]}|\phi^{c}(x,0)|^{2}\mathrm{d}x\). Let \((a_{k})_{k=2}^{\infty}\) be the increasing sequence of positive reals satisfying \((1/\sqrt{2\pi v(\phi^{c})})\int_{\mathbb{R}\setminus[-a_{k},a_{k}]}\exp\left(- \frac{x^{2}}{2v(\phi^{c})}\right)\mathrm{d}x=1/k^{2}\) for all \(k\geq 2\). 
For \(k\geq 2\) and \(n\geq 1\), define \[G_{k,n}=\left\{x\in[0,1]\colon\frac{1}{\sqrt{n}}S_{n}\phi^{c}(x,0)\in[-a_{k},a_{k}]\right\}.\] The central limit theorem applied to the sequence of independent and identically distributed random variables \(x\in[0,1]\mapsto\phi^{c}(f^{j}(x,0))\) (\(j=0,1,\ldots\)) with mean \(0\) and variance \(v(\phi^{c})\) yields \[\lim_{n\to\infty}|G_{k,n}|=\frac{1}{\sqrt{2\pi v(\phi^{c})}}\int_{[-a_{k},a_{k}]}\exp\left(-\frac{x^{2}}{2v(\phi^{c})}\right)\mathrm{d}x=1-\frac{1}{k^{2}}.\] Choose a sequence \((n_{k})_{k=2}^{\infty}\) of positive integers such that \(\sum_{k=2}^{\infty}|[0,1]\setminus\bigcup_{n=n_{k}}^{\infty}G_{k,n}|<\infty\). By the Borel-Cantelli lemma, \[\left|\limsup_{k\to\infty}\left([0,1]\setminus\bigcup_{n=n_{k}}^{\infty}G_{k,n}\right)\right|=0. \tag{2.9}\] Let \(G\) denote the set of \(x\in[0,1]\) for which there is no positive integer pair \((M,N)\) satisfying \(\sup_{n\geq N}|S_{n}\phi^{c}(x,0)|/\sqrt{n}\leq M\). For any \(x\in G\) we have \(\limsup_{n\to\infty}|S_{n}\phi^{c}(x,0)|/\sqrt{n}=\infty\). Hence, for any \(k\geq 2\) there is \(n\geq n_{k}\) such that \(|S_{n}\phi^{c}(x,0)|/\sqrt{n}>a_{k}\), and so \(x\in[0,1]\setminus\bigcup_{n=n_{k}}^{\infty}G_{k,n}\). Since \(k\geq 2\) is arbitrary, (2.9) yields \(|G|=0\), which verifies Lemma 2.7. Define \(Q\colon[0,1]^{2}\setminus E\to\mathbb{Z}_{+}\cup\{\infty\}\) by \[Q(p)=\begin{cases}\inf\{n\geq 1\colon S_{n}\phi^{c}(p)=-\log m\text{ and }f^{n}(p)\in\Omega_{\beta}^{+}\}&\text{ on }\Omega_{\alpha}^{+},\\ \inf\{n\geq 1\colon S_{n}\phi^{c}(p)=0\text{ and }f^{n}(p)\in\Omega_{\beta}^{+}\}&\text{ on }\Omega_{\beta}^{+}\setminus E.\end{cases}\] **Lemma 2.8**.: \(Q\) _is finite Lebesgue a.e. on \([0,1]^{2}\setminus E\)._ Proof.: By (2.7) and \(Q\circ f=Q-1\) on \(\Omega_{\beta}^{+}\setminus E\), it suffices to show that \(Q\) is finite Lebesgue a.e. on \(\Omega_{\alpha}^{+}\). Recall that \(f_{\frac{1}{2m}}\) is a skew product over the piecewise affine fully branched map \(\tau_{\frac{1}{2m}}\) in (1.1) that preserves the Lebesgue measure on \([0,1]\), and \(\phi^{c}\) is constant on the sets \(\{x\}\times[0,1]\), \(x\in[0,1]\). By Lemma 2.7, for Lebesgue a.e. \(p\in[0,1]^{2}\) there exists a positive integer pair \((M,N)\) satisfying \(\sup_{n\geq N}|S_{n}\phi^{c}(p)|/\sqrt{n}\leq M\). Moreover, the definition of \(Q\) and the formulas (2.4), (2.5) together imply \(\sup_{k\geq 1}S_{k}\phi^{c}(p)\leq-\log m\) for all \(p\in\Omega_{\alpha}^{+}\cap\{Q=\infty\}\). Hence, Lebesgue a.e. point in \(\Omega_{\alpha}^{+}\cap\{Q=\infty\}\) is contained in the set \[W=\bigcup_{M=1}^{\infty}\bigcup_{N=1}^{\infty}\bigcap_{n=N}^{\infty}W_{M,n}, \tag{2.10}\] where \(M\), \(n\) are positive integers and \[W_{M,n}=\left\{p\in\Omega_{\alpha}^{+}\colon\sup_{k\geq 1}S_{k}\phi^{c}(p)\leq-\log m\ \text{ and }\ S_{n}\phi^{c}(p)\geq-M\sqrt{n}\right\}.\] It suffices to show that \(W\) is a null set. For each \(M\geq 1\) we have \[W_{M,n}\subset\bigcup_{v\in\mathcal{L}_{0}\setminus\mathcal{V}_{\beta}}\bigcup_{j=1}^{\lfloor M\sqrt{n}\rfloor}\bigcup_{v_{0}\cdots v_{n}\in P_{n}(j;v)}\bigcap_{k=0}^{n}f^{-k}(v_{k}),\] where \(\lfloor\cdot\rfloor\) denotes the floor function.
Using Lemma 2.2 to bound \(\#P_{n}(j;v)\) and the identity \(|\bigcap_{k=0}^{n}f^{-k}(v_{k})|_{x}=2^{-n}m^{-\frac{n+j}{2}}\) for each path \(v_{0}\cdots v_{n}\) in \(P_{n}(j;v)\), we have \[|W_{M,n}| \leq\sum_{v\in\mathcal{L}_{0}\setminus\mathcal{V}_{\beta}}\sum_{j=1}^{\lfloor M\sqrt{n}\rfloor}\sum_{v_{0}\cdots v_{n}\in P_{n}(j;v)}\left|\bigcap_{k=0}^{n}f^{-k}(v_{k})\right|\] \[\leq\sum_{j=1}^{\lfloor M\sqrt{n}\rfloor}\frac{j}{n}\binom{n}{\frac{n+j}{2}}\frac{m+1}{m}m^{\frac{n+j}{2}}2^{-n}m^{-\frac{n+j}{2}}\leq n^{-\frac{3}{2}}\sum_{j=1}^{\lfloor M\sqrt{n}\rfloor}j\leq\frac{M^{2}}{\sqrt{n}},\] for all sufficiently large \(n\). For the third inequality we have used Stirling's formula for factorials to evaluate the binomial coefficient. This yields \(|\bigcap_{n=N}^{\infty}W_{M,n}|\leq\inf_{n\geq N}|W_{M,n}|=0\) for all \(M\), \(N\geq 1\), and therefore \[|W|\leq\sum_{M=1}^{\infty}\sum_{N=1}^{\infty}\left|\bigcap_{n=N}^{\infty}W_{M,n}\right|=0,\] as required. This completes the proof of Lemma 2.8. By virtue of (2.7), (2.8) and \(R=2\) on \(E\), it suffices to show that \(R\) is finite Lebesgue a.e. on \(\Omega_{\beta}^{+}\setminus E\). Let \(2\mathbb{Z}_{+}\) denote the set of positive even integers. If \(p\in\Omega_{\beta}^{+}\setminus E\) and \(Q(p)\) is finite then \(Q(p)\in 2\mathbb{Z}_{+}\). Put \(M_{1}=E\), and \(M_{k}=\Omega_{\beta}^{+}\cap\{Q=k\}\) for \(k\in 2\mathbb{Z}_{+}\). For each \(k\in\{1\}\cup 2\mathbb{Z}_{+}\), \(M_{k}\) is written as a finite union \(M_{k}=\bigcup_{\ell}M_{k,\ell}\) of pairwise disjoint rectangles \(M_{k,\ell}\) with the properties that \(M_{k,\ell}\subset\Omega_{\beta_{i}}^{+}\) for some \(i\in\{1,\ldots,m\}\), \(|M_{k,\ell}|_{y}=\frac{1}{m}\), \(f^{k}|_{M_{k,\ell}}\) is affine and \(\operatorname{int}(\Omega_{\beta_{j}}^{+})\subset f^{k}(M_{k,\ell})\subset\Omega_{\beta_{j}}^{+}\) for some \(j\in\{1,\ldots,m\}\). Lemma 2.8 gives \(|\Omega_{\beta}^{+}\setminus\bigcup_{k\in\{1\}\cup 2\mathbb{Z}_{+}}M_{k}|=0\). The map \(H\colon\bigcup_{k\in\{1\}\cup 2\mathbb{Z}_{+}}M_{k}\to\Omega_{\beta}^{+}\) given by \(H(x,y)=f^{k}(x,y)\) for \((x,y)\in M_{k}\) has the skew product form \[H(x,y)=(H_{1}(x),H_{2}(x,y)).\] The map \(H_{1}\) is a piecewise affine map on the interval \([ma,1]\) with infinitely many full branches, leaving the normalized Lebesgue measure on the interval invariant and ergodic. From Birkhoff's ergodic theorem for \(H_{1}\), \(n_{1}(p)=\inf\{n\geq 1\colon H^{n}(p)\in E\}\) is finite for Lebesgue a.e. \(p\in\Omega_{\beta}^{+}\setminus E\). Moreover, (2.4) implies \(S_{n_{1}(p)}\phi^{c}(p)=0\) for \(p\in\Omega_{\beta}^{+}\setminus E\). This together with Proposition 2.3 implies \(R(p)=n_{1}(p)+2<\infty\) for Lebesgue a.e. \(p\in\Omega_{\beta}^{+}\setminus E\). The proof of Proposition 2.6 is complete. ### An induced Markov map For each \(n\geq 1\) let \(\mathscr{P}_{n}\) denote the collection of affine pullbacks of \((0,1)^{2}\) by \(f^{n}\) which are contained in \(\{R=n\}\). We set \[\mathscr{P}=\bigcup_{n=1}^{\infty}\mathscr{P}_{n}.\] Elements of \(\mathscr{P}\) are pairwise disjoint, and the stopping time is constant on each element.
We now define an induced map \(f^{R}\colon\,\bigcup_{\omega\in\mathscr{P}}\omega\to[0,1]^{2}\) by \(f^{R}|_{\omega}=f^{R|_{\omega}}|_{\omega}\) on each \(\omega\in\mathscr{P}\), and put \[\Delta_{0}^{+}=\bigcap_{n=0}^{\infty}(f^{R})^{-n}\left(\bigcup_{\omega\in\mathscr{P}}\omega\right).\] Partition \(\Delta_{0}^{+}\) into \(\{\omega\cap\Delta_{0}^{+}\}_{\omega\in\mathscr{P}}\), label the partition elements with an integer \(i\geq 1\), and write \(\{\Delta_{0,i}^{+}\}_{i=1}^{\infty}=\{\omega\cap\Delta_{0}^{+}\}_{\omega\in\mathscr{P}}\) and \(R_{i}=R|_{\Delta_{0,i}^{+}}.\) We have \(\Delta_{0}^{+}=\bigcup_{i=1}^{\infty}\Delta_{0,i}^{+}\), and \(f^{R}\) maps each \(\Delta_{0,i}^{+}\) affinely onto \(\Delta_{0}^{+}\). Proposition 2.6 implies \(|\Delta_{0}^{+}|=1\) for \(a\in(0,\frac{1}{2m}]\). ### Stopping time estimates For \(a\in(0,\frac{1}{m})\) put \[\chi(a)=-\log\sqrt{a(1-ma)}.\] Note that \(\chi(a)\geq\log\sqrt{4m}>0\). Moreover we have \(\sqrt{4m}e^{-\chi(a)}\leq 1\), and the equality holds if and only if \(a=\frac{1}{2m}\). **Proposition 2.9**.: _There exists \(n_{0}\geq 1\) such that for all \(a\in(0,\frac{1}{2m}]\) and all \(n\geq n_{0}\) we have_ \[|\{R=n+2\}|\leq n^{-\frac{3}{2}}(\sqrt{4m}e^{-\chi(a)})^{n}.\] _In particular, if \(a\in(0,\frac{1}{2m})\) then \(|\{R=n+2\}|\) decays exponentially in \(n\)._ To prove Proposition 2.9, we estimate the size of each element of the partition \(\{\Delta_{0,i}^{+}\}_{i=1}^{\infty}\), and the cardinality of the set of elements with a given stopping time. **Lemma 2.10**.: _For all \(a\in(0,\frac{1}{2m}]\) and all \(i\geq 1\),_ \[|\Delta_{0,i}^{+}|\leq\exp(-\chi(a)(R_{i}-2)).\] Proof.: For \(i\geq 1\) write \(\Delta_{0,i}^{+}=\omega\cap\Delta_{0}^{+}\), \(\omega\in\mathscr{P}\) and put \[R_{i,\alpha}=\#\left\{0\leq k\leq R_{i}-1\colon f^{k}(\omega)\subset\Omega_{\alpha}^{+}\right\},\] \[R_{i,\beta}=\#\left\{0\leq k\leq R_{i}-1\colon f^{k}(\omega)\subset\Omega_{\beta}^{+}\right\}.\] Clearly we have \(R_{i,\alpha}+R_{i,\beta}=R_{i}.\) Proposition 2.5 implies the following: (i) If \(\omega\subset\Omega_{\alpha}^{+}\), then \(|\omega|_{y}=\frac{1}{m}\) and \(S_{R_{i}}\phi^{c}=\log m\) on \(\omega\). In particular, \(R_{i}\) is odd and \[R_{i,\alpha}=\frac{1}{2}(R_{i}-1)\ \ \text{and}\ \ R_{i,\beta}=\frac{1}{2}(R_{i}+1).\] (ii) If \(\omega\subset\Omega_{\beta}^{+}\), then \(|\omega|_{y}=\frac{1}{m^{2}}\) and \(S_{R_{i}}\phi^{c}=2\log m\) on \(\omega\). In particular, \(R_{i}\) is even and \[R_{i,\alpha}=\frac{1}{2}(R_{i}-2)\ \ \text{and}\ \ R_{i,\beta}=\frac{1}{2}(R_{i}+2).\] See Figure 6. We also have \(\log|\omega|_{x}=R_{i,\alpha}\log a+R_{i,\beta}\log(1-ma).\) Combining this with (i) and (ii) above yields \(|\omega|_{x}\leq\exp(-\chi(a)(R_{i}-2)).\) Since \(|\Delta_{0,i}^{+}|=|\omega|_{x}|\omega|_{y}\), we obtain the desired inequality.
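For orientation, let us evaluate the decay factor in Proposition 2.9 in a concrete case. Note that \(\sqrt{4m}e^{-\chi(a)}=\sqrt{4ma(1-ma)}\). For \(m=2\) and \(a=\frac{1}{8}\) we have \(a(1-ma)=\frac{1}{8}\cdot\frac{3}{4}=\frac{3}{32}\), hence \(\sqrt{4m}e^{-\chi(a)}=\sqrt{8\cdot\frac{3}{32}}=\frac{\sqrt{3}}{2}\approx 0.87<1\), so \(|\{R=n+2\}|\) eventually decays at least like \(n^{-3/2}(\sqrt{3}/2)^{n}\). At the boundary value \(a=\frac{1}{2m}=\frac{1}{4}\) one finds \(\sqrt{8}\cdot\sqrt{1/8}=1\), and the proposition only gives the polynomial bound \(n^{-3/2}\) (cf. Remark 4.3).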
For each \(n\geq 1\), define \[P_{n}^{*}=\begin{cases}\{v_{0}\cdots v_{n}\in P_{n}\colon l(v_{0})=-1\text{ and }l(v_{n})=0\}&\text{ if $n$ is odd,}\\ \{v_{0}\cdots v_{n}\in P_{n}\colon l(v_{0})=0\text{ and }l(v_{n})=0\}&\text{ if $n$ is even.}\end{cases}\] **Lemma 2.11**.: _For all \(n\geq 2\) we have_ \[\#P_{n}^{*}\leq\frac{2(m+1)}{n+1}\begin{pmatrix}n+2\\ \lfloor\frac{n+4}{2}\rfloor\end{pmatrix}m^{\lfloor\frac{n+1}{2}\rfloor}.\] Proof.: For integers \(n\), \(s\), \(t\) with \(n\geq 1\) and \(s\leq t\), let \[Z_{n}(s,t)=\{l_{0}\cdots l_{n}\in Z_{n}\colon l_{0}=s,\ l_{n}=t\}.\] If \(s<t\), then let \[Z_{n}^{*}(s,t)=\{l_{0}\cdots l_{n}\in Z_{n}(s,t)\colon l_{k}>s\text{ for }k \in\{1,\ldots,n\}\}.\] Note that \(Z_{n}^{*}(s,t)\neq\emptyset\) if and only if \(t-s\equiv n\bmod 2\). The reflection principle of the symmetric random walk on \(\mathbb{Z}\) gives \[\#Z_{n}^{*}(s,t)=\frac{t-s}{n}\begin{pmatrix}n\\ \frac{n+t-s}{2}\end{pmatrix}. \tag{2.11}\] By Lemma 2.1, for each \(v_{0}\cdots v_{n}\in P_{n}^{*}\) we have \[\#\Phi_{n}^{-1}(l(v_{0})\cdots l(v_{n}))\leq\begin{cases}(m+1)m^{\frac{n+1}{2} }&\text{ if $n$ is odd,}\\ (m+1)m^{\frac{n}{2}}&\text{ if $n$ is even.}\end{cases} \tag{2.12}\] Suppose \(n\geq 2\) is odd. For each \(v_{0}\cdots v_{n}\in P_{n}^{*}\) we have \(l(v_{0})\cdots l(v_{n})\in Z_{n}(-1,0)\). Since the map \(l_{0}\cdots l_{n}\in Z_{n}(-1,0)\mapsto(-2)l_{0}\cdots l_{n}\in Z_{n+1}^{*}(- 2,0)\) is injective, (2.11) gives \[\#\Phi_{n}(P_{n}^{*})\leq\#Z_{n+1}^{*}(-2,0)\leq\frac{2}{n+1}\begin{pmatrix} n+1\\ \frac{n+3}{2}\end{pmatrix}. \tag{2.13}\] Suppose \(n\geq 2\) is even. Similarly, for each \(v_{0}\cdots v_{n}\in P_{n}^{*}\) we have \(l(v_{0})\cdots l(v_{n})\in Z_{n}(0,0).\) Since the map \(l_{0}\cdots l_{n}\in Z_{n}(0,0)\mapsto(-2)(-1)l_{0}\cdots l_{n}\in Z_{n+2}^{*} (-2,0)\) is injective, (2.11) gives \[\#\Phi_{n}(P_{n}^{*})\leq\#Z_{n+2}^{*}(-2,0)\leq\frac{2}{n+2}\begin{pmatrix}n+2\\ \frac{n+4}{2}\end{pmatrix}. \tag{2.14}\] Combining (2.12), (2.13) and (2.14) yields the desired inequality in the lemma. Proof of Proposition 2.9.: Let \(n\geq 2\). For each \(i\geq 1\) with \(R_{i}=n+2\), there exists a unique path \(v_{0}\cdots v_{n}\in P_{n}^{*}\) such that \(f^{k}(\Delta_{0,i}^{+})\subset v_{k}\) for \(0\leq k\leq n\). By Lemmas 2.10 and 2.11 we have \[|\{R=n+2\}| =\sum_{i\geq 1\colon R_{i}=n+2}|\Delta_{0,i}^{+}|\leq\#P_{n}^{*}e^{- \chi(a)n}\] \[\leq\frac{2(m+1)}{n+1}\begin{pmatrix}n+2\\ \lfloor\frac{n+4}{2}\rfloor\end{pmatrix}m^{\lfloor\frac{n+1}{2}\rfloor}e^{- \chi(a)n}\leq n^{-\frac{3}{2}}(\sqrt{4m}e^{-\chi(a)})^{n},\] provided \(n\) is large enough. To deduce the last inequality we have evaluated the binomial coefficient using Stirling's formula for factorials. 
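The tail bounds just proved can also be explored by simulation. The sketch below is purely illustrative and rests on assumptions that are not re-derived here: that under the Lebesgue measure the successive values \(\phi^{c}(f^{j}(x,0))\) are independent and identically distributed (a fact used in the proof of Lemma 2.7), that \(\Omega_{\alpha}^{+}\) has Lebesgue measure \(ma\) and \(\Omega_{\beta}^{+}\) has Lebesgue measure \(1-ma\), and that \(\phi^{c}\) equals \(-\log m\) on \(\Omega_{\alpha}^{+}\) and \(\log m\) on \(\Omega_{\beta}^{+}\). Under these assumptions Proposition 2.3 determines \(R\) from the symbolic itinerary alone, which is what the code samples.

```python
import random

def sample_R(a, m, n_max=100_000, rng=random):
    """Draw one value of the stopping time R (Proposition 2.3), simulated on
    the symbolic level under the assumptions stated above: i.i.d. symbols
    'alpha' (prob. m*a) / 'beta' (prob. 1 - m*a), phi^c = -log m on alpha and
    +log m on beta, and f^n(p) in E iff symbols n and n+1 are both beta.
    Returns n_max if no cutting time occurs within n_max steps."""
    p_alpha = m * a
    draw = lambda: 'alpha' if rng.random() < p_alpha else 'beta'
    cur, nxt = draw(), draw()
    target = -1 if cur == 'alpha' else 0   # required value of S_n phi^c / log m
    height = 0                             # S_n phi^c / log m, with S_0 = 0
    for n in range(n_max):
        if height == target and cur == 'beta' and nxt == 'beta':
            return n + 2                   # the formula of Proposition 2.3
        height += 1 if cur == 'beta' else -1
        cur, nxt = nxt, draw()
    return n_max

def tail_estimate(a, m, n, trials=20_000):
    """Monte Carlo estimate of Leb{R > n}."""
    return sum(sample_R(a, m, n_max=n + 1) > n for _ in range(trials)) / trials

# With m = 2: for a = 0.15 < 1/(2m) the estimated tails shrink geometrically,
# at a rate comparable with sqrt(4*m*a*(1 - m*a)) from Proposition 2.9;
# at a = 0.25 = 1/(2m) they decay only polynomially (cf. Remark 4.3).
```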
### Symbolic dynamics For \(a,b\in(0,\frac{1}{m})\) we put \[\Lambda_{a}=\bigcap_{n=0}^{\infty}f_{a}^{-n}\left(\bigcup_{\gamma\in D}\text{int}(\Omega_{\gamma}^{+})\right)\quad\text{ and }\quad\Lambda_{a,b}=\bigcap_{n=-\infty}^{\infty}f_{a,b}^{-n}\left(\bigcup_{\gamma\in D}\text{int}(\Omega_{\gamma})\right).\] Define coding maps \(\pi_{a}\colon(x,y)\in\Lambda_{a}\mapsto(\omega_{n})_{n\in\mathbb{Z}_{+}}\in D^{\mathbb{Z}_{+}}\) and \(\pi_{a,b}\colon(x,y,z)\in\Lambda_{a,b}\mapsto(\omega_{n})_{n\in\mathbb{Z}}\in D^{\mathbb{Z}}\) by \[(x,y)\in\bigcap_{n=0}^{\infty}f_{a}^{-n}(\text{int}(\Omega_{\omega_{n}}^{+}))\text{ \ and \ }(x,y,z)\in\bigcap_{n=-\infty}^{\infty}f_{a,b}^{-n}(\text{int}(\Omega_{\omega_{n}})).\] Note that \(\sigma_{+}\circ\pi_{a}=\pi_{a}\circ f_{a}\) and \(\sigma\circ\pi_{a,b}=\pi_{a,b}\circ f_{a,b}.\) The coding maps are not injective. The next connection between the heterochaos baker maps and the Dyck system lies at the basis of the proof of Theorem C. **Theorem 2.12** ([32], Theorem 1.1).: _For all \(a,b\in(0,\frac{1}{m})\) we have_ \[\Sigma_{D}^{+}=\overline{\pi_{a}(\Lambda_{a})}\text{ \ and \ }\Sigma_{D}=\overline{\pi_{a,b}(\Lambda_{a,b})}.\] ## 3. Mixing of all orders for the heterochaos baker maps In this section we prove Theorem A. In Section 3.1 we show a certain invariance of correlations, which implies that it suffices to consider only \(a\in(0,\frac{1}{2m}]\). In Section 3.2 we establish the exactness and \(K\)-property of the corresponding maps, and in Section 3.3 complete the proof of Theorem A. ### Invariance of correlations Let \((X,\mathscr{B},\nu)\) be a probability space and let \(T\colon X\to X\) be a measurable map preserving \(\nu\). For an integer \(k\geq 2\) and functions \(\phi_{0},\phi_{1},\ldots,\phi_{k-1}\in L^{k}(\nu)\), consider their correlations \[\text{Cor}_{n_{1},\ldots,n_{k-1}}(T;\phi_{0},\ldots,\phi_{k-1};\nu)=\left|\int\prod_{j=0}^{k-1}(\phi_{j}\circ T^{n_{j}})\text{d}\nu-\prod_{j=0}^{k-1}\int\phi_{j}\text{d}\nu\right|,\] where \(0=n_{0}\leq n_{1}\leq\cdots\leq n_{k-1}\). It is well-known that \((T,\nu)\) is \(k\)-mixing if and only if \(\mathrm{Cor}_{n_{1},\ldots,n_{k-1}}(T;\phi_{0},\ldots,\phi_{k-1};\nu)\to 0\) as \(n_{1},n_{2}-n_{1},\ldots,n_{k-1}-n_{k-2}\to\infty\) for all \(\phi_{0},\phi_{1},\ldots,\phi_{k-1}\in L^{k}(\nu)\). Define an involution \(\iota\colon[0,1]^{3}\to[0,1]^{3}\) by \[\iota(x,y,z)=(1-z,1-y,1-x).\] **Proposition 3.1**.: _Let \(a\in(0,\frac{1}{m})\). For all \(k\geq 2\), \(\phi_{0},\ldots,\phi_{k-1}\in L^{k}(\mathrm{Leb})\) and integers \(n_{0},\ldots,n_{k-1}\) with \(0=n_{0}\leq n_{1}\leq n_{2}\leq\cdots\leq n_{k-1}\) we have_ \[\mathrm{Cor}_{n_{1},\ldots,n_{k-1}} (g_{a};\phi_{0},\ldots,\phi_{k-1};\mathrm{Leb})\] \[=\mathrm{Cor}_{n_{k-1},n_{k-1}-n_{1},n_{k-1}-n_{2},\ldots,0}(g_{\frac{1}{m}-a};\phi_{0}\circ\iota^{-1},\ldots,\phi_{k-1}\circ\iota^{-1};\mathrm{Leb}).\] Proof.: Define \(g_{a}^{*}\colon[0,1]^{3}\to[0,1]^{3}\) by \[g_{a}^{*}=\iota^{-1}\circ g_{\frac{1}{m}-a}\circ\iota. \tag{3.1}\] For Lebesgue a.e. \(p\in[0,1]^{3}\) we have \(g_{a}^{*}(g_{a}(p))=p\). Moreover, for Lebesgue a.e. \(p\in[0,1]^{3}\) we have \[\phi_{j}\circ g_{a}^{n_{j}}(p) =\phi_{j}\circ(g_{a}^{*})^{n_{k-1}-n_{j}}\circ g_{a}^{n_{k-1}-n_{j}}\circ g_{a}^{n_{j}}(p)\] \[=\phi_{j}\circ(g_{a}^{*})^{n_{k-1}-n_{j}}\circ g_{a}^{n_{k-1}}(p)\text{ for every }j\in\{0,\ldots,k-1\}.
\tag{3.2}\] Using (3.2) and the \(g_{a}\)-invariance of \(\mathrm{Leb}\), (3.1) and the \(\iota\)-invariance of \(\mathrm{Leb}\) yield \[\int\prod_{j=0}^{k-1}(\phi_{j}\circ g_{a}^{n_{j}})\mathrm{dLeb} =\int\prod_{j=0}^{k-1}(\phi_{j}\circ(g_{a}^{*})^{n_{k-1}-n_{j}})\mathrm{dLeb}\] \[=\int\prod_{j=0}^{k-1}(\phi_{j}\circ\iota^{-1}\circ g_{\frac{1}{m}-a}^{n_{k-1}-n_{j}}\circ\iota)\mathrm{dLeb}\] \[=\int\prod_{j=0}^{k-1}(\phi_{j}\circ\iota^{-1}\circ g_{\frac{1}{m}-a}^{n_{k-1}-n_{j}})\mathrm{dLeb}.\] We also have \(\int\phi_{j}\mathrm{dLeb}=\int\phi_{j}\circ\iota^{-1}\mathrm{dLeb}\), and so the desired equality holds. ### Exactness and \(K\)-property Let \((X,\mathscr{B},\nu)\) be a probability space. For sub-sigma-algebras \(\mathscr{C}\), \(\mathscr{D}\) of \(\mathscr{B}\) we write \(\mathscr{C}\stackrel{{\circ}}{{\subset}}\mathscr{D}\) if for every \(C\in\mathscr{C}\) there exists \(D\in\mathscr{D}\) such that \(\nu(C\ominus D)=0\), where \(\ominus\) denotes the symmetric difference of sets. We write \(\mathscr{C}\stackrel{{\circ}}{{=}}\mathscr{D}\) if \(\mathscr{C}\stackrel{{\circ}}{{\subset}}\mathscr{D}\) and \(\mathscr{D}\stackrel{{\circ}}{{\subset}}\mathscr{C}\). If \(\{\mathscr{B}_{n}\}_{n=1}^{\infty}\) is a family of sub-sigma-algebras of \(\mathscr{B}\), let \(\bigvee_{n=1}^{\infty}\mathscr{B}_{n}\) denote the smallest sub-sigma-algebra that contains all the \(\mathscr{B}_{n}\). If \(\{\mathscr{B}_{n}\}_{n=1}^{\infty}\) is a family of partitions of \(X\) into measurable sets, let \(\bigvee_{n=1}^{\infty}\mathscr{B}_{n}\) denote the smallest sub-sigma-algebra that contains all elements of the \(\mathscr{B}_{n}\). Let \(T\colon X\to X\) be a measurable map preserving \(\nu\). For a sub-sigma-algebra \(\mathscr{C}\) of \(\mathscr{B}\) and \(n\geq 1\), let \(T^{-n}\mathscr{C}=\{T^{-n}(B)\colon B\in\mathscr{C}\}\). If \(T\) has a measurable inverse, let \(T^{n}\mathscr{C}=\{T^{n}(B)\colon B\in\mathscr{C}\}\). We say \((T,\nu)\) is _exact_ if \(T\) has no measurable inverse and \(\bigcap_{n=0}^{\infty}T^{-n}\mathscr{B}\stackrel{{\circ}}{{=}}\{X,\emptyset\}\). We say \((T,\nu)\) has \(K\)_-property_ if \(T\) has a measurable inverse and there exists a sub-sigma-algebra \(\mathscr{K}\) of \(\mathscr{B}\) such that: (i) \(\mathscr{K}\subset T\mathscr{K}\). (ii) \(\bigvee_{n=0}^{\infty}T^{n}\mathscr{K}\stackrel{{\circ}}{{=}}\mathscr{B}\). (iii) \(\bigcap_{n=0}^{\infty}T^{-n}\mathscr{K}\stackrel{{\circ}}{{=}}\{X,\emptyset\}\). If \((T,\nu)\) has \(K\)-property, \(T\) is usually called a Kolmogorov automorphism [33]. Exactness or \(K\)-property implies mixing of all orders [26, Section 2.6]. **Proposition 3.2**.: _For any \(a\in(0,\frac{1}{2m}]\), \((f_{a},\mathrm{Leb})\) is exact and \((g_{a}|_{\Lambda_{a,\frac{1}{m}-a}},\mathrm{Leb}|_{\Lambda_{a,\frac{1}{m}-a}})\) has \(K\)-property._ For a proof of this proposition we need some notation and one preliminary lemma. For \(a\in(0,\frac{1}{m})\) define \(\mathscr{A}=\mathscr{A}_{a}\) by \[\mathscr{A}=\bigcup_{n=1}^{\infty}\{B_{0}\cap(f^{R})^{-1}(B_{1})\cap\cdots\cap(f^{R})^{-n+1}(B_{n-1})\colon B_{0},\ldots,B_{n-1}\in\mathscr{P}|_{\Delta_{0}^{+}}\}.\] Let \(\mathscr{B}(X)\) denote the Borel sigma-algebra on a topological space \(X\).
**Lemma 3.3**.: _If \(a\in(0,\frac{1}{2m}]\), then for any \(A\in\mathscr{B}([0,1]^{2})\) with positive Lebesgue measure and any \(\varepsilon\in(0,1)\), there exists \(\omega\in\mathscr{A}\) such that \(|A\cap\omega|\geq(1-\varepsilon)|\omega|\)._ Proof.: Since the collection of unions of countably many elements of \(\mathscr{A}\) is an algebra on \(\Delta_{0}^{+}\), for any \(\varepsilon>0\) there exist finitely many elements \(\omega_{1},\ldots,\omega_{k}\) of \(\mathscr{A}\) such that \(|\bigcup_{j=1}^{k}\omega_{j}\ominus A|<\varepsilon^{2}\). If \(\varepsilon\) is sufficiently small, there exists \(j_{0}\in\{1,\ldots,k\}\) such that \(|A\cap\omega_{j_{0}}|\geq(1-\varepsilon)|\omega_{j_{0}}|\), for otherwise we obtain the following contradiction: \(|A|<(1-\varepsilon)\sum_{j=1}^{k}|\omega_{j}|+|A\setminus\bigcup_{j=1}^{k}\omega_{j}|<(1-\varepsilon)(|A|+\varepsilon^{2})+\varepsilon^{2}<|A|\). Proof of Proposition 3.2.: Let \(A\in\bigcap_{n=0}^{\infty}f^{-n}\mathscr{B}([0,1]^{2})\) satisfy \(|A|>0\). To verify the exactness of \((f,\mathrm{Leb})\) it suffices to show that \(|A|=1\). By Lemma 3.3, for any \(\varepsilon>0\) there exist \(n\geq 1\) and an affine pullback \(\omega\) of \((0,1)^{2}\) by \(f^{n}\) such that \(|A\cap\omega|>(1-\varepsilon)|\omega|\). There exists \(A^{\prime}\in\mathscr{B}([0,1]^{2})\) such that \(A=f^{-n}(A^{\prime})\), and therefore \(A^{\prime}\supset f^{n}(A)\) and \[|A|=|A^{\prime}|\geq|f^{n}(A)|\geq|f^{n}(A\cap\omega)|\geq(1-\varepsilon)|f^{n}(\omega)|=1-\varepsilon.\] The last inequality is because \(f^{n}|_{\omega}\) is affine. Since \(\varepsilon\) is arbitrary we obtain \(|A|=1\). Put \(\Lambda=\Lambda_{a,\frac{1}{m}-a}\). Note that \(\mathrm{Leb}(\Lambda)=1\), \(g|_{\Lambda}\) has a measurable inverse, and the restriction \(\mathrm{Leb}|_{\Lambda}\) is \(g|_{\Lambda}\)-invariant. Let \(\mathscr{K}\) denote the smallest sub-sigma-algebra of \(\mathscr{B}(\Lambda)\) that contains \(\{(A\times[0,1])\cap\Lambda\colon A\in\mathscr{A}\}\). From the definition of the map (1.4) we obtain \(\mathscr{K}\subset g\mathscr{K}\) and \(\bigvee_{n=0}^{\infty}g^{n}\mathscr{K}\stackrel{{\circ}}{{=}}\mathscr{B}(\Lambda)\). The argument in the previous paragraph shows \(\bigcap_{n=0}^{\infty}g^{-n}\mathscr{K}\stackrel{{\circ}}{{=}}\{\Lambda,\emptyset\}\). We have verified that \((g|_{\Lambda},\mathrm{Leb}|_{\Lambda})\) has \(K\)-property. _Remark 3.4_.: The exactness of \((f_{a},\mathrm{Leb})\) for \(a\in(0,\frac{1}{2m})\) can be shown by applying [36, Theorem 1 (iii)] (see also [35, Lemma 5]) to the tower map \(F\) introduced in Section 4.1. This argument does not work for \(a=\frac{1}{2m}\) since the lift of the Lebesgue measure to the tower becomes an infinite measure. For more details on this point, see Remark 4.3. ### Proof of Theorem A Let \(a\in(0,\frac{1}{2m}]\). By Proposition 3.2, \((f_{a},\mathrm{Leb})\) is exact and \((g_{a},\mathrm{Leb})\) restricted to a set of full Lebesgue measure has \(K\)-property; hence both are mixing of all orders. By Proposition 3.1, \((g_{\frac{1}{m}-a},\mathrm{Leb})\) is mixing of all orders too, and so is \((f_{\frac{1}{m}-a},\mathrm{Leb})\). The proof of Theorem A is complete. ## 4. Exponential mixing for the heterochaos baker maps In this section we prove Theorem B. In Section 4.1 we introduce towers with exponential tail associated with the heterochaos baker maps. In Section 4.2 we apply the results in [35] to the towers and use large deviations to complete the proof of Theorem B. ### Towers Put \(\Delta_{0}=\Delta_{0}^{+}\times[0,1]\).
We extend the stopping time to a function on \(\Delta_{0}\) in the obvious way, and still denote the extension by \(R\): \(R(x,y,z)=R(x,y)\) for \((x,y,z)\in\Delta_{0}\). We define a _tower_ \(\Delta\) associated with \(g=g_{a}\) by \[\Delta=\{(p,\ell)\colon p\in\Delta_{0},\ \ell=0,1,\ldots,R(p)-1\}.\] For each \(\ell\geq 1\), the \(\ell\)-th floor is the set \[\Delta_{\ell}=\{(p,\ell)\in\Delta\colon p\in\Delta_{0}\}.\] We identify the ground floor \(\Delta_{0}\times\{0\}\) with \(\Delta_{0}\). Note that \(\Delta=\bigcup_{\ell=0}^{\infty}\Delta_{\ell}\). Define a _tower map_ \(G\colon\Delta\to\Delta\) by \[G(p,\ell)=\begin{cases}(p,\ell+1)&\text{ if }\ell+1<R(p),\\ (g^{R(p)}(p),0)&\text{ if }\ell+1=R(p).\end{cases}\] Collapsing the \(z\)-coordinate, we obtain the quotient tower \(\Delta^{+}\) and the tower map \(F\colon\Delta^{+}\to\Delta^{+}\). The maps \(g\) and \(G\), \(f\) and \(F\) are semiconjugated by the maps \(\theta\colon(p,\ell)\in\Delta\mapsto g^{\ell}(p)\in[0,1]^{3}\) and \(\theta^{+}\colon(p,\ell)\in\Delta^{+}\mapsto f^{\ell}(p)\in[0,1]^{2}\) respectively. Let \(\operatorname{pr}\colon\Delta\to\Delta^{+}\) denote the canonical projection. In summary, these maps fit into a commutative diagram: \(g\circ\theta=\theta\circ G\), \(f\circ\theta^{+}=\theta^{+}\circ F\) and \(F\circ\operatorname{pr}=\operatorname{pr}\circ G\). We fix a sigma-algebra on \(\Delta\) that is obtained by naturally transplanting the Borel sigma-algebra on \([0,1]^{3}\). Each floor \(\Delta_{\ell}\), \(\ell\geq 0\) is identified with \(\{R>\ell\}\), and so equipped with the restriction of the Lebesgue measure. If \(a\in(0,\frac{1}{2m})\), then in view of Proposition 2.9 let \(\mu\) denote the probability measure on \(\Delta\) given by \[\mu(A)=\frac{1}{\int R\mathrm{d}\mathrm{Leb}}\sum_{\ell=0}^{\infty}|A\cap\Delta_{\ell}|\ \ \text{for any measurable set }A\subset\Delta.\] Since the Lebesgue measure on \([0,1]^{3}\) is \(g\)-invariant, \(\mu\) is \(G\)-invariant. The measure \(\mu\circ\theta^{-1}\) is \(g\)-invariant, and absolutely continuous with respect to the Lebesgue measure, and hence \(\mu\circ\theta^{-1}=\mathrm{Leb}\). The measure \(\mu^{+}=\mu\circ\operatorname{pr}^{-1}\) is \(F\)-invariant. ### Proof of Theorem B Let \(a\in(0,\frac{1}{2m})\). Let \(\eta\in(0,1]\) and let \(\varphi\), \(\psi\in\mathscr{H}_{\eta}([0,1]^{3})\). For a function \(\phi\) on \([0,1]^{3}\), let \(\tilde{\phi}\) denote its lift to the tower \(\Delta\) associated with \(g=g_{a}\), i.e. \(\tilde{\phi}=\phi\circ\theta\). Note that \(\mathrm{Cor}_{n}(g;\varphi,\psi;\mathrm{Leb})=\mathrm{Cor}_{n}(G;\tilde{\varphi},\tilde{\psi};\mu)\). The rest of the proof of Theorem B breaks into three steps, much in parallel to [35, Section 4.1] with one important difference in Step 2. In Step 1 we begin by approximating \(\tilde{\varphi}\circ G^{k}\), \(\tilde{\psi}\circ G^{k}\), \(k\geq 1\) by functions \(\varphi_{k}\), \(\psi_{k}\) on \(\Delta\) which do not depend on the \(z\)-coordinate. In Step 2 we provide error bounds for these approximations. Since [35, p.608 Sublemma] does not hold as a result of the breakdown of the condition (P4) (backward contraction) in [35], we estimate errors of these approximations using large deviations for the map \(\tau_{a}\) in (1.1). In Step 3 we view \(\varphi_{k}\), \(\psi_{k}\) as functions on \(\Delta^{+}\), and show exponential decay of their correlations by applying [35]. We then unify all these estimates. _Step 1: Approximations._ Put \(\Delta_{0,i}=\Delta_{0,i}^{+}\times[0,1]\) for \(i\geq 1\).
Each floor \(\Delta_{\ell}\), \(\ell\geq 0\) is partitioned into \(\{\Delta_{\ell,i}\}_{i\geq 1:\;R_{i}>\ell}\) where \(\Delta_{\ell,i}\) is a copy of \(\Delta_{0,i}\). Let \(\mathscr{D}_{0}\) denote the partition of \(\Delta\) into \(\Delta_{\ell,i}\)-components. It has the Markov property: For every \(A\in\mathscr{D}_{0}\), \(G(A)\) is the union of elements of \(\mathscr{D}_{0}\). For \(k\geq 1\) we put \(\mathscr{D}_{k}=\bigvee_{j=0}^{k-1}G^{-j}\mathscr{D}_{0}\). For \(k\geq 0\), let \(\mathscr{D}_{k}^{+}\) denote the partition of \(\Delta^{+}\) which is obtained as the canonical projection of \(\mathscr{D}_{k}\). For \(\phi\in\mathscr{H}_{\eta}([0,1]^{3})\) and \(k\geq 1\), define \(\phi_{k}\colon\Delta\to\mathbb{R}\) by \[\phi_{k}|_{A}=\inf\{\tilde{\phi}(w)\colon w\in G^{k}(A)\}\quad\text{for every $A\in\mathscr{D}_{2k}$.}\] Clearly \(\phi_{k}\) is constant on each element of \(\mathscr{D}_{2k}\), and \(|\phi_{k}|_{\infty}\leq|\phi|_{\infty}\). Since \(\phi_{k}\) does not depend on the \(z\)-coordinate, we may view \(\phi_{k}\) as a function on \(\Delta^{+}\). Let \(n>k\). By the identities \(\mathrm{Cor}_{n}(G;\tilde{\varphi},\tilde{\psi};\mu)=\mathrm{Cor}_{n-k}(G; \tilde{\varphi},\tilde{\psi}\circ G^{k};\mu)\) and \(\mathrm{Cor}_{n}(G;\varphi_{k},\psi_{k};\mu)=\mathrm{Cor}_{n}(F;\varphi_{k}, \psi_{k};\mu^{+})\), we have \(\mathrm{Cor}_{n}(G;\tilde{\varphi},\tilde{\psi};\mu)\leq I+I\!I+I\!I\!I\) where \[I =|\mathrm{Cor}_{n-k}(G;\tilde{\varphi},\tilde{\psi}\circ G^{k};\mu )-\mathrm{Cor}_{n-k}(G;\tilde{\varphi},\psi_{k};\mu)|,\] \[I\!I =|\mathrm{Cor}_{n-k}(G;\tilde{\varphi},\psi_{k};\mu)-\mathrm{Cor}_ {n-k}(G;\varphi_{k},\psi_{k};\mu)|,\] \[I\!I\!I =\mathrm{Cor}_{n-k}(F;\varphi_{k},\psi_{k};\mu^{+}).\] _Step 2: Estimates of \(I\), \(I\!I\)._ A direct calculation shows \(I\leq 2|\varphi|_{\infty}\int|\tilde{\psi}\circ G^{k}-\psi_{k}|\mathrm{d}\mu\), and a similar argument to the one in [35, p.608] shows \(I\!I\leq 2|\psi|_{\infty}\int|\tilde{\varphi}\circ G^{k}-\varphi_{k}|\mathrm{ d}\mu\). Hence, upper bounds of \(I\), \(I\!I\) follow from the next lemma. For \(\phi\in\mathscr{H}_{\eta}([0,1]^{3})\) let \(|\phi|\) denote the \(\eta\)-Holder norm of \(\phi\), namely \[|\phi|=\sup_{\begin{subarray}{c}p,q\in[0,1]^{3}\\ p\neq q\end{subarray}}\frac{|\phi(p)-\phi(q)|}{|p-q|^{\eta}},\] where \(|p-q|\) denotes the Euclidean distance between \(p\) and \(q\). **Lemma 4.1**.: _For any \(\eta\in(0,1]\) and any \(\phi\in\mathscr{H}_{\eta}([0,1]^{3})\), there exist constants \(C=C(\phi)>0\) and \(\xi=\xi(\eta)\in(0,1)\) such that for every \(k\geq 1\),_ \[\int|\tilde{\phi}\circ G^{k}-\phi_{k}|\mathrm{d}\mu\leq C\xi^{k}.\] Proof.: Put \[\chi^{u}=-\log\max\left\{a,1-ma\right\},\;\chi^{s}=\log\max\left\{b,1-mb \right\},\;\chi^{c}=(1-2ma)\log m.\] The minimal expansion rate by \(g\) in the \(x\)-direction is \(\exp(\chi^{u})\) and the maximal contraction rate by \(g\) in the \(z\)-direction is \(\exp(\chi^{s})\). Since \(a\in(0,\frac{1}{2m})\) we have \(\chi^{c}>0\). Fix \(\varepsilon\in(0,\chi^{c})\), and recall the definition of a block and the notation introduced in the beginning of Section 2.4. 
For each \(A\in\mathscr{D}_{k}\), \(\theta(A)\) is a block satisfying \(|\theta(A)|_{x}\leq\exp(-\chi^{u}k)\) and \(|\theta(A)|_{z}\leq\exp(\chi^{s}k).\) Define \[\mathscr{D}_{k}^{\prime}=\left\{A\in\mathscr{D}_{k}\colon|\theta(A)|_{y}<\exp(-(\chi^{c}-\varepsilon)k)\right\}.\] Put \(\xi_{0}=\exp(\max\{-\chi^{u},-\chi^{c}+\varepsilon,\chi^{s}\})\in(0,1).\) For every \(A\in\mathscr{D}_{2k}\) with \(G^{k}(A)\in\mathscr{D}_{k}^{\prime}\), the Euclidean diameter of \(\theta(G^{k}(A))\) does not exceed \(3\xi_{0}^{k}\). Since \(\phi\in\mathscr{H}_{\eta}([0,1]^{3})\), this bound implies \[\sup_{A}|\tilde{\phi}\circ G^{k}-\phi_{k}|\leq 3^{\eta}|\phi|\xi_{0}^{\eta k}. \tag{4.1}\] For each \(w\in\Delta^{+}\) let \(\mathscr{D}_{k}^{+}(w)\) denote the element of \(\mathscr{D}_{k}^{+}\) that contains \(w\). Then \(\mathscr{D}_{k}^{+}(w)\) is a rectangle satisfying \(|\mathscr{D}_{k}^{+}(w)|_{y}\leq\exp(-S_{k}\phi^{c}(\theta^{+}(w)))\), which implies \[\begin{split}\sum_{A\in\mathscr{D}_{k}\setminus\mathscr{D}_{k}^{\prime}}\mu(A)&\leq\mu^{+}\left\{w\in\Delta^{+}\colon S_{k}\phi^{c}(\theta^{+}(w))\leq(\chi^{c}-\varepsilon)k\right\}\\ &=\left|\left\{p\in[0,1]^{2}\colon S_{k}\phi^{c}(p)\leq(\chi^{c}-\varepsilon)k\right\}\right|.\end{split} \tag{4.2}\] Recall that \(g\) is a skew product over \(\tau\) in (1.1) that preserves the Lebesgue measure on \([0,1]\), and \(\int_{[0,1]}\phi^{c}(x,0)\mathrm{d}x=\chi^{c}\). By a large deviation estimate applied to the sequence of independent and identically distributed random variables \(x\in[0,1]\mapsto\phi^{c}(f^{j}(x,0))\) (\(j=0,1,\ldots\)), the sum in (4.2) decays exponentially in \(k\). There exist \(C>0\) and \(\xi_{1}\in(0,1)\) such that \[\sum_{A\in\mathscr{D}_{k}\setminus\mathscr{D}_{k}^{\prime}}\mu(G^{-k}(A))\leq\sum_{A\in\mathscr{D}_{k}\setminus\mathscr{D}_{k}^{\prime}}\mu(A)\leq C\xi_{1}^{k}. \tag{4.3}\] The first inequality is due to the \(G\)-invariance of \(\mu\). Let \(D_{k}\) denote the union of elements of \(\mathscr{D}_{k}^{\prime}\). For any \(A\in\mathscr{D}_{2k}\), either \(G^{k}(A)\subset D_{k}\) or \(G^{k}(A)\subset\Delta\setminus D_{k}\) holds. We have \[\sum_{A\in\mathscr{D}_{2k}\colon G^{k}(A)\subset\Delta\setminus D_{k}}\mu(A)\leq\sum_{A\in\mathscr{D}_{k}\setminus\mathscr{D}_{k}^{\prime}}\mu(G^{-k}(A)). \tag{4.4}\] From (4.1), (4.2), (4.3) and (4.4) we obtain \[\begin{split}\int|\tilde{\phi}\circ G^{k}-\phi_{k}|\mathrm{d}\mu&=\sum_{A\in\mathscr{D}_{2k}\colon G^{k}(A)\subset D_{k}}\int_{A}|\tilde{\phi}\circ G^{k}-\phi_{k}|\mathrm{d}\mu\\ &\quad+\sum_{A\in\mathscr{D}_{2k}\colon G^{k}(A)\subset\Delta\setminus D_{k}}\int_{A}|\tilde{\phi}\circ G^{k}-\phi_{k}|\mathrm{d}\mu\\ \leq& 3^{\eta}|\phi|\xi_{0}^{\eta k}\sum_{A\in\mathscr{D}_{2k}}\mu(A)+2|\phi|_{\infty}\sum_{A\in\mathscr{D}_{k}\setminus\mathscr{D}_{k}^{\prime}}\mu(A)\\ \leq& 3^{\eta}|\phi|\xi_{0}^{\eta k}+2C|\phi|_{\infty}\xi_{1}^{k}.\end{split}\] Taking \(C(\phi)=3^{\eta}|\phi|+2C|\phi|_{\infty}\) and \(\xi=\max\{\xi_{0}^{\eta},\xi_{1}\}\) yields the desired inequality in Lemma 4.1. _Step 3: Estimate of III and an overall estimate._ It is clear that the greatest common divisor of \(\{R_{i}\colon i\geq 1\}\) is \(1\).
Using the Perron-Frobenius operator \(\mathcal{P}\colon L^{1}(\mu^{+})\to L^{1}(\mu^{+})\) given by \(F_{*}(\phi\mu^{+})=\mathcal{P}(\phi)\mu^{+}\), we write \[\text{{III}}=\operatorname{Cor}_{n-2k}(F;\mathcal{P}^{2k}(\varphi_{k}),\psi_{k};\mu^{+}).\] We introduce a separation time \(s\colon\Delta_{0}^{+}\times\Delta_{0}^{+}\to\mathbb{Z}\) by defining \(s(p,p^{\prime})\) to be the smallest integer \(n\geq 0\) such that \((f^{R})^{n}(p)\), \((f^{R})^{n}(p^{\prime})\) lie in different elements of \(\mathscr{P}|_{\Delta_{0}^{+}}\). For general points \(w=(p,\ell)\), \(w^{\prime}=(p^{\prime},\ell^{\prime})\in\Delta^{+}\), define \(s(w,w^{\prime})=s(p,p^{\prime})\) if \(\ell=\ell^{\prime}\) and \(s(w,w^{\prime})=0\) otherwise. This defines a separation time \(s\colon\Delta^{+}\times\Delta^{+}\to\mathbb{Z}\). For \(\beta\in(0,1]\) define \[C_{\beta}(\Delta^{+})=\{\phi\colon\Delta^{+}\to\mathbb{R}\colon\exists C>0\ \forall w,w^{\prime}\in\Delta^{+},\ |\phi(w)-\phi(w^{\prime})|\leq C\beta^{s(w,w^{\prime})}\}.\] For each \(\phi\in C_{\beta}(\Delta^{+})\) define \[\|\phi\|=|\phi|_{\infty}+\operatorname{esssup}_{\begin{subarray}{c}w,w^{\prime}\in\Delta^{+}\\ w\neq w^{\prime}\end{subarray}}\frac{|\phi(w)-\phi(w^{\prime})|}{\beta^{s(w,w^{\prime})}}.\] **Lemma 4.2**.: _For all \(\beta\in(0,1)\) and all \(k\geq 1\), we have \(\mathcal{P}^{2k}(\varphi_{k})\in C_{\beta}(\Delta^{+})\) and \(\|\mathcal{P}^{2k}(\varphi_{k})\|\leq 3|\varphi|_{\infty}\)._ Proof.: For each \(A\in\mathscr{D}_{k}^{+}\), \(F^{k}|_{A}\) has a measurable inverse, and the pullback of the measure \(\mu^{+}|_{F^{k}(A)}\) by \(F^{k}|_{A}\) is absolutely continuous with respect to \(\mu^{+}|_{A}\). The Radon-Nikodym derivative \(\mathrm{d}(\mu^{+}|_{F^{k}(A)}\circ F^{k}|_{A})/\mathrm{d}\mu^{+}|_{A}\) is constant on \(A\), which we denote by \(JF^{k}(A)\). Note that \[\mathcal{P}^{2k}(\varphi_{k})(w)=\sum_{A\in\mathscr{D}_{2k}^{+}\colon w\in F^{2k}(A)}\frac{\varphi_{k}|_{A}}{JF^{2k}(A)}.\] Moreover, \(JF^{2k}(A)=\mu^{+}(F^{2k}(A))/\mu^{+}(A)\geq 1/\mu^{+}(A)\) for all \(A\in\mathscr{D}_{2k}^{+}\), and so \(\sum_{A\in\mathscr{D}_{2k}^{+}}1/JF^{2k}(A)\leq 1.\) From this and \(|\varphi_{k}|_{\infty}\leq|\varphi|_{\infty}\) we obtain \(|\mathcal{P}^{2k}(\varphi_{k})|_{\infty}\leq|\varphi|_{\infty}\). For all \(w,w^{\prime}\in\Delta^{+}\) with \(w\neq w^{\prime}\) we have \(\left|\mathcal{P}^{2k}(\varphi_{k})(w)-\mathcal{P}^{2k}(\varphi_{k})(w^{\prime})\right|\leq 2|\varphi|_{\infty}\). If \(s(w,w^{\prime})\geq 1\), then for each \(A\in\mathscr{D}_{2k}^{+}\) either \(w,w^{\prime}\in F^{2k}(A)\) or \(w\), \(w^{\prime}\notin F^{2k}(A)\), which implies \(\left|\mathcal{P}^{2k}(\varphi_{k})(w)-\mathcal{P}^{2k}(\varphi_{k})(w^{\prime})\right|=0.\) Hence the conclusion of the lemma holds. By [35, Section 4.2] (see also [24, Theorem 1.5]), there exist \(C>0\) and \(\lambda_{0}\in(0,1)\) such that \[\text{{III}}\leq C\lambda_{0}^{n-2k}.\] Combining this with the estimates of \(I\), _II_ in Step 2, and then substituting \(k=\lfloor n/4\rfloor\), we obtain constants \(C=C(\varphi,\psi)>0\) and \(\lambda\in(0,1)\) such that \[\operatorname{Cor}_{n}(g;\varphi,\psi;\operatorname{Leb})=\operatorname{Cor}_{n}(G;\tilde{\varphi},\tilde{\psi};\mu)\leq C\lambda^{n}\text{ for all }n\geq 1,\] namely, the exponential mixing for \((g_{a},\operatorname{Leb})\). From this and Proposition 3.1 we obtain the exponential mixing for \((g_{\frac{1}{m}-a},\operatorname{Leb})\).
Taking a pair of Holder continuous functions on \([0,1]^{3}\) which are constant on sets \(\{(x,y)\}\times[0,1]\), \((x,y)\in[0,1]^{2}\), we obtain the exponential mixing for \((f_{a},\operatorname{Leb})\) and \((f_{\frac{1}{m}-a},\operatorname{Leb})\). The proof of Theorem B is complete. _Remark 4.3_.: In the case \(a=\frac{1}{2m}\) which is excluded in Theorem B, it is plausible that correlations for Holder continuous functions decay only subexponentially. It would be nice to prove this. A close inspection into the counting argument in Section 2.2 and the stopping time estimate in Section 2.7 reveals \(|\{R=n\}|\asymp n^{-3/2}\), and as a result \(\sum_{n}|\{R>n\}|=\infty\). It follows that the lift of the Lebesgue measure to the tower becomes an infinite measure, and the general result in [36] to draw subexponential decay of correlations is not applicable. ## 5. Exponential mixing for the Dyck system In this last section we prove Theorem C. In Section 5.1, we identify the two ergodic measures of maximal entropy for the Dyck system. In Section 5.2 we show the invariance of correlations (modulo involution) under the replacement of the two measures. Section 5.3 provides two preliminary lemmas needed to transfer invariant measures on the two different spaces. In Section 5.4 we clarify a connection between one of the two ergodic measures of maximal entropy and the Lebesgue measure on \([0,1]^{3}\). In Section 5.5 we complete the proof of Theorem C. ### Measures of maximal entropy Let \(T\) be a Borel map acting on a topological space. For each \(T\)-invariant Borel probability measure \(\mu\), let \(h(T,\mu)\) denote the measure-theoretic entropy of \(\mu\) with respect to \(T\). If \(\sup\{h(T,\mu)\colon\mu\text{ is $T$-invariant}\}\) is finite, a measure which attains this supremum is called _a measure of maximal entropy_. Following [20, Section 4], for each \(i\in\mathbb{Z}\) we define \(H_{i}\colon\Sigma_{D}\to\mathbb{Z}\) by \[H_{i}(\omega)=\begin{cases}\sum_{j=0}^{i-1}\sum_{l=1}^{m}(\delta_{\alpha_{l}, \omega_{j}}-\delta_{\beta_{l},\omega_{j}})&\text{ for }i\geq 1,\\ \sum_{j=i}^{-1}\sum_{l=1}^{m}(\delta_{\beta_{l},\omega_{j}}-\delta_{\alpha_{l },\omega_{j}})&\text{ for }i\leq-1,\\ 0&\text{ for }i=0,\end{cases}\] where the delta denotes Kronecker's delta. These functions are used to indicate whether a bracket in a prescribed position in a sequence in \(\Sigma_{D}\) is closed or not. Consider three shift invariant Borel sets \[\begin{split}& A_{0}=\bigcap_{i=-\infty}^{\infty}\left(\left( \bigcup_{l=1}^{\infty}\{H_{i+l}=H_{i}\}\right)\cap\left(\bigcup_{l=1}^{\infty }\{H_{i-l}=H_{i}\}\right)\right),\\ & A_{\alpha}=\left\{\omega\in\Sigma_{D}\colon\lim_{i\to\infty}H_{i}( \omega)=\infty\ \text{ and }\ \lim_{i\to-\infty}H_{i}(\omega)=-\infty\right\},\\ & A_{\beta}=\left\{\omega\in\Sigma_{D}\colon\lim_{i\to\infty}H_{i}( \omega)=-\infty\ \text{ and }\ \lim_{i\to-\infty}H_{i}(\omega)=\infty\right\}.\end{split} \tag{5.1}\] Any shift invariant ergodic measure on \(\Sigma_{D}\) gives measure \(1\) to one of these three sets (see [20, pp.102-103]). **Theorem 5.1** ([20], Section 4).: _There exist two ergodic measures \(\nu_{\alpha}\), \(\nu_{\beta}\) of entropy \(\log(m+1)\) which are Bernoulli and satisfy \(\nu_{\alpha}(A_{\alpha})=1\) and \(\nu_{\beta}(A_{\beta})=1\). These two measures are precisely all the ergodic measures of maximal entropy for the two-sided Dyck shift._ Let \(\nu_{\alpha}^{+}\) (resp. 
\(\nu_{\beta}^{+}\)) denote the shift invariant measure on \(\Sigma_{D}^{+}\) which is the push-forward of \(\nu_{\alpha}\) (resp. \(\nu_{\beta}\)) under the canonical projection \(\Sigma_{D}\to\Sigma_{D}^{+}\). These two measures are precisely all the ergodic measures of maximal entropy for the one-sided Dyck shift. We recall the result in [32] on measures of maximal entropy for the heterochaos baker maps. Let \(c_{1}=\frac{1}{m(m+1)}\) and \(c_{2}=\frac{1}{m+1}\). **Theorem 5.2** ([32], Theorem 1.2).: _For any \(a,b\in(0,\frac{1}{m})\), there exist two \(f_{a,b}\)-invariant ergodic Borel probability measures \(\mu_{\alpha}\), \(\mu_{\beta}\) of entropy \(\log(m+1)\) which are Bernoulli, give positive weight to any non-empty open subset of \([0,1]^{3}\), and satisfy_ \[\mu_{\alpha}\left(\bigcup_{i=1}^{m}\Omega_{\alpha_{i}}\right)=\mu_{\beta} \left(\bigcup_{i=1}^{m}\Omega_{\beta_{i}}\right)=\frac{m}{m+1}.\] _Moreover, if \(a\in[c_{1},c_{2}]\) or \(b\in[c_{1},c_{2}]\) then \(\mu_{\alpha}\), \(\mu_{\beta}\) are measures of maximal entropy for \(f_{a,b}\). If \(a\in(c_{1},c_{2})\) or \(b\in(c_{1},c_{2})\) then there is no ergodic measure of maximal entropy for \(f_{a,b}\) other than \(\mu_{\alpha}\), \(\mu_{\beta}\)._ ### Invariance of correlations We define an involution \(\rho\colon D\to D\) by \(\rho(\alpha_{i})=\beta_{i}\) and \(\rho(\beta_{i})=\alpha_{i}\) for \(i\in\{1,\dots,m\}\), and \(\iota_{D}\colon\Sigma_{D}\to D^{\mathbb{Z}}\) by \(\iota_{D}((\omega_{n})_{n\in\mathbb{Z}})=(\rho(\omega_{-n}))_{n\in\mathbb{Z}}\). Clearly \(\iota_{D}\) is injective, and \(\iota_{D}(\Sigma_{D})=\Sigma_{D}\) as in Lemma 5.4 below. **Proposition 5.3**.: _For all \(\varphi,\psi\in L^{2}(\nu_{\beta})\) and all \(n\geq 1\) we have_ \[\mathrm{Cor}_{n}(\sigma;\varphi,\psi;\nu_{\beta})=\mathrm{Cor}_{n}(\sigma; \psi\circ\iota_{D},\varphi\circ\iota_{D};\nu_{\alpha}).\] Proof.: By Lemma 5.4 below, for any shift invariant measure \(\nu\) on \(\Sigma_{D}\) we have \(h(\sigma,\nu)=h(\sigma,\nu\circ\iota_{D}^{-1})\) and \(\nu(\bigcup_{i=1}^{m}[\alpha_{i}])=\nu\circ\iota_{D}^{-1}(\bigcup_{i=1}^{m}[ \beta_{i}])\) where \([\gamma]=\{(\omega_{n})_{n\in\mathbb{Z}}\colon\omega_{0}=\gamma\}\) for \(\gamma\in D\). This implies \(\nu_{\beta}=\nu_{\alpha}\circ\iota_{D}^{-1}\). We have \[\int\varphi(\psi\circ\sigma^{n})\mathrm{d}\nu_{\beta} =\int\varphi\circ\iota_{D}(\psi\circ\sigma^{n}\circ\iota_{D}) \mathrm{d}\nu_{\alpha}\] \[=\int\varphi\circ\iota_{D}(\psi\circ\iota_{D}\circ\sigma^{-n}) \mathrm{d}\nu_{\alpha}\] \[=\int\varphi\circ\iota_{D}\circ\sigma^{n}(\psi\circ\iota_{D}) \mathrm{d}\nu_{\alpha},\] and \(\int\varphi\mathrm{d}\nu_{\beta}=\int\varphi\circ\iota_{D}\mathrm{d}\nu_{\alpha}\) and \(\int\psi\mathrm{d}\nu_{\beta}=\int\psi\circ\iota_{D}\mathrm{d}\nu_{\alpha}\). Hence the desired equality holds. **Lemma 5.4**.: _We have \(\iota_{D}(\Sigma_{D})=\Sigma_{D}\), and \(\iota_{D}\circ\sigma=\sigma^{-1}\circ\iota_{D}\)._ Proof.: The second assertion of the lemma is a consequence of the first one, which was proved in [32, Section 3.5]. We include the proof here for the reader's convenience. For \(n\geq 1\) and \(\lambda=\gamma_{1}\cdots\gamma_{n}\in L(\Sigma_{D})\) we set \(\rho^{*}(\lambda)=\rho(\gamma_{n})\cdots\rho(\gamma_{1})\). It is enough to show that \(\rho^{*}(\lambda)\in L(\Sigma_{D})\), namely \(\mathrm{red}(\rho^{*}(\lambda))\neq 0\). 
By the relations (1.5), either (i) \(\mathrm{red}(\lambda)=1\), or (ii) \(\mathrm{red}(\lambda)=\xi\eta\) for some \(\xi\in L(\{\beta_{1},\dots,\beta_{m}\}^{\mathbb{Z}})\) and \(\eta\in L(\{\alpha_{1},\dots,\alpha_{m}\}^{\mathbb{Z}})\). In case (i), clearly we have \(\mathrm{red}(\rho^{*}(\lambda))=1\). In case (ii), we have \(\rho^{*}(\eta)\in L(\{\beta_{1},\dots,\beta_{m}\}^{\mathbb{Z}})\) and \(\rho^{*}(\xi)\in L(\{\alpha_{1},\dots,\alpha_{m}\}^{\mathbb{Z}})\), and so \(\mathrm{red}(\rho^{*}(\lambda))=\rho^{*}(\eta)\rho^{*}(\xi)\neq 0\) ### Transferring invariant measures In order to transfer invariant measures on the two different spaces under the coding map, we need two lemmas. We extend the central Jacobian to a function on \([0,1]^{3}\) in the obvious way, and still denote the extension by \(\phi^{c}\): \(\phi^{c}(x,y,z)=\phi^{c}(x,y)\) for \((x,y,z)\in[0,1]^{3}\). **Lemma 5.5**.: _Let \(a,b\in(0,\frac{1}{m})\) and let \(\mu\) be an \(f_{a,b}\)-invariant ergodic Borel probability measure satisfying \(\mu(\Lambda_{a,b})=1\)._ 1. _If_ \(\int\phi^{c}\mathrm{d}\mu=0\) _then_ \(\mu\circ\pi^{-1}(A_{0})=1\)_._ 2. _If_ \(\int\phi^{c}\mathrm{d}\mu<0\) _then_ \(\mu\circ\pi^{-1}(A_{\alpha})=1\)_._ 3. _If_ \(\int\phi^{c}\mathrm{d}\mu>0\) _then_ \(\mu\circ\pi^{-1}(A_{\beta})=1\)_._ Proof.: Since shift invariant ergodic measures on \(\Sigma_{D}\) give measure \(1\) to one of the three sets in (5.1) (see [20, pp.102-103]), the assertions are consequences of the definition of \(H_{i}\), \(\phi^{c}\) and Birkhoff's ergodic theorem. Let \[A_{\alpha,\beta}=\left\{\omega\in\Sigma_{D}\colon\liminf_{i\to\infty}H_{i}( \omega)=-\infty\ \ \text{or}\ \ \liminf_{i\to-\infty}H_{i}(\omega)=-\infty\right\}.\] Note that \(A_{\alpha,\beta}\) is shift invariant and contains \(A_{\alpha}\cup A_{\beta}\). **Lemma 5.6** ([32], Lemma 3.1).: _For all \(a,b\in(0,\frac{1}{m})\), the restriction of \(\pi\) to \(\pi^{-1}(A_{\alpha,\beta})\) is a homeomorphism onto its image._ ### Connection to the Lebesgue measure Since \(c_{1}+c_{2}=\frac{1}{m}\), \(f_{c_{1},c_{2}}\) and \(f_{c_{2},c_{1}}\) preserve the Lebesgue measure on \([0,1]^{3}\). **Proposition 5.7**.: _We have \(h(g_{c_{1}},\mathrm{Leb})=h(g_{c_{2}},\mathrm{Leb})=\log(m+1)\), \(\mathrm{Leb}\circ\pi_{c_{2},c_{1}}^{-1}=\nu_{\alpha}\) and \(\mathrm{Leb}\circ\pi_{c_{1},c_{2}}^{-1}=\nu_{\beta}\)._ Proof.: Recall that the restriction of \(g_{c_{1}}\) to \(\Lambda_{c_{1},c_{2}}\) is invertible, uniformly expanding in the \(x\)-direction and uniformly contracting in the \(z\)-direction. Using Proposition 2.6 to deal with the dynamics in the \(y\)-direction, it is easy to see that \[\bigvee_{n=-\infty}^{\infty}g^{-n}\{\Omega_{\gamma}\cap\Lambda_{c_{1},c_{2}} \}_{\gamma\in D}\stackrel{{\circ}}{{=}}\mathscr{B}(\Lambda_{c_{ 1},c_{2}}).\] Moreover, \((g_{c_{1}},\mathrm{Leb})\) is ergodic by Theorem A. Therefore, Shannon-McMillan-Breimann's theorem yields \[h(g_{c_{1}},\mathrm{Leb})=-\lim_{n\to\infty}\frac{1}{n}\log\left|\bigcap_{k=0 }^{n-1}g_{c_{1}}^{-k}\left(\Omega_{\omega_{k}}\right)\right|\] for Lebesgue a.e. \(p\in\Lambda_{c_{1},c_{2}}\) where \(\pi_{c_{1},c_{2}}(p)=(\omega_{n})_{n=-\infty}^{\infty}\in\Sigma_{D}\). Clearly we have \[-\lim_{n\to\infty}\frac{1}{n}\log\left|\bigcap_{k=0}^{n-1}g_{c_{1 }}^{-k}\left(\Omega_{\omega_{k}}\right)\right|\geq mc_{1}\log\frac{1}{c_{1}}+(1-mc_{1})\log\frac{1}{1-mc_{1}}\] \[+(1-2mc_{1})\log m=\log(m+1),\] and hence \(h(g_{c_{1}},\mathrm{Leb})\geq\log(m+1)\). The reverse inequality is a consequence of Theorem 5.2. 
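For the reader's convenience, the identity \(mc_{1}\log\frac{1}{c_{1}}+(1-mc_{1})\log\frac{1}{1-mc_{1}}+(1-2mc_{1})\log m=\log(m+1)\) used in the display above follows by direct substitution of \(c_{1}=\frac{1}{m(m+1)}\), which gives \(mc_{1}=\frac{1}{m+1}\), \(1-mc_{1}=\frac{m}{m+1}\) and \(1-2mc_{1}=\frac{m-1}{m+1}\):
\[mc_{1}\log\frac{1}{c_{1}}+(1-mc_{1})\log\frac{1}{1-mc_{1}}+(1-2mc_{1})\log m=\frac{\log m+\log(m+1)}{m+1}+\frac{m\log\frac{m+1}{m}}{m+1}+\frac{(m-1)\log m}{m+1}=\log(m+1).\]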
We have verified \(h(g_{c_{1}},\mathrm{Leb})=\log(m+1)\). By [32, Lemma 3.9] and \(\mathrm{Leb}\circ\iota^{-1}=\mathrm{Leb}\) we have \(h(g_{c_{2}},\mathrm{Leb})=h(g_{c_{1}},\mathrm{Leb})\). By Theorem A and Theorem 2.12, the measure \(\operatorname{Leb}\circ\pi_{c_{2},c_{1}}^{-1}\) on \(\Sigma_{D}\) is shift invariant and ergodic. By \(\int\phi^{c}\mathrm{d}\mathrm{Leb}=-\frac{m-1}{m+1}\log 2<0\) and Lemma 5.5, it gives measure \(1\) to \(A_{\alpha}\), and by Lemma 5.6 has entropy equal to \(h(g_{c_{2}},\operatorname{Leb})\), which equals \(\log(m+1)\) as already proved. Hence we obtain \(\operatorname{Leb}\circ\pi_{c_{2},c_{1}}^{-1}=\nu_{\alpha}\). A proof of the last equality in the proposition is analogous. ### Proof of Theorem C First we show exponential mixing for \((\sigma,\nu_{\alpha})\), \((\sigma,\nu_{\beta})\). To this end, by virtue of Proposition 5.3 it suffices to show that for any pair \(\varphi\), \(\psi\) of Holder continuous functions on \(\Sigma_{D}\), their correlation \(\operatorname{Cor}_{n}(\sigma;\varphi,\psi;\nu_{\beta})\) decays exponentially in \(n\). This does not immediately follow from Theorem B. Indeed, Proposition 5.7 implies \(\operatorname{Cor}_{n}(\sigma;\varphi,\psi;\nu_{\beta})=\operatorname{Cor}_{n} (g_{c_{1}};\varphi\circ\pi_{c_{1},c_{2}},\psi\circ\pi_{c_{1},c_{2}}; \operatorname{Leb})\), but \(\varphi\circ\pi_{c_{1},c_{2}}\) and \(\psi\circ\pi_{c_{1},c_{2}}\) may not be Holder continuous. Our strategy is to transfer to the Dyck system the towers associated with \(f_{c_{1}}\), \(g_{c_{1}}\) constructed in Section 4.1, and mimic the proof of Theorem B. We restrict ourselves to the shift invariant set \(A_{\beta}\) in (5.1). Bear in mind that \(\nu_{\beta}(A_{\beta})=1\). By Lemma 5.6, the restriction of \(\pi_{c_{1},c_{2}}\) to \(\pi_{c_{1},c_{2}}^{-1}(A_{\beta})\) is a homeomorphism onto its image. We put \[\hat{\Delta}_{0}=\pi_{c_{1},c_{2}}(\Delta_{0})\cap A_{\beta}\ \ \text{and}\ \ \hat{\Delta}_{0,i}=\pi_{c_{1},c_{2}}(\Delta_{0,i})\cap A_{\beta}.\] Define \(\hat{R}\colon\pi_{c_{1},c_{2}}([0,1]^{3})\cap A_{\beta}\to\mathbb{Z}_{+}\cup \{\infty\}\) by \(\hat{R}(\omega)=R(\pi_{c_{1},c_{2}}^{-1}(\omega))\). We define a tower \[\hat{\Delta}=\{(\omega,\ell)\colon\omega\in\hat{\Delta}_{0},\ \ell=0,1,\dots,\hat{R}( \omega)-1\},\] and define a tower map \(\hat{G}\colon\hat{\Delta}\to\hat{\Delta}\) by \[\hat{G}(\omega,\ell)=\begin{cases}(\omega,\ell+1)&\text{ if }\ell+1<\hat{R}( \omega),\\ (\sigma^{\hat{R}(\omega)}(\omega),0)&\text{ if }\ell+1=\hat{R}(\omega).\end{cases}\] Collapsing the negative coordinate, we obtain the quotient tower \(\hat{\Delta}^{+}\) and the tower map \(\hat{F}\colon\hat{\Delta}^{+}\to\hat{\Delta}^{+}\). We fix a sigma-algebra on \(\hat{\Delta}\) that is obtained by naturally transplanting the Borel sigma-algebra on \(\Sigma_{D}\). By Proposition 2.6, \(\hat{R}\) is finite \(\nu_{\beta}\)-a.e. and \(\nu_{\beta}\{\hat{R}=n\}=|\{R=n\}|\). Each floor \[\hat{\Delta}_{\ell}=\{(p,\ell)\in\hat{\Delta}\colon p\in\hat{\Delta}_{0}\},\ \ell\geq 0\] can be identified with \(\{\hat{R}>\ell\}\), and so equipped with the restriction of the measure \(\nu_{\beta}\). Let \(\hat{\mu}\) denote the probability measure on \(\hat{\Delta}\) given by \[\hat{\mu}(A)=\frac{1}{\int\hat{R}\mathrm{d}\nu_{\beta}}\sum_{\ell=0}^{\infty} \nu_{\beta}(A\cap\hat{\Delta}_{\ell})\ \ \text{for any measurable set}\ A\subset\hat{\Delta}.\] We repeat Steps 1 to 3 in the proof of Theorem B in Section 3.3. 
Each floor \(\hat{\Delta}_{\ell}\) is partitioned into \(\{\hat{\Delta}_{\ell,i}\}_{i\geq 1\colon\hat{R}_{i}>\ell}\) where \(\hat{\Delta}_{\ell,i}\) is a copy of \(\hat{\Delta}_{0,i}\). Let \(\hat{\mathscr{D}}_{0}\) denote the partition of \(\hat{\Delta}\) into \(\hat{\Delta}_{\ell,i}\)-components. For \(k\geq 1\), let \(\hat{\mathscr{D}}_{k}=\bigvee_{j=0}^{k-1}\hat{G}^{-j}\hat{\mathscr{D}}_{0}\). For \(\phi\in\mathscr{H}_{\eta}(\Sigma_{D})\) and \(k\geq 1\), define \(\phi_{k}\colon\hat{\Delta}\to\mathbb{R}\) by \(\phi_{k}|_{A}=\inf\{\tilde{\phi}(w)\colon w\in\hat{G}^{k}(A)\}\) for \(A\in\hat{\mathscr{D}}_{2k}\). Define \(\hat{\theta}\colon(\omega,\ell)\in\hat{\Delta}\mapsto\sigma^{\ell}(\omega)\in \Sigma_{D}\). For every \(A\in\hat{\mathscr{D}}_{2k}\), the diameter of the set \(\hat{\theta}(\hat{G}^{k}(A))\) with respect to the Hamming metric \(d\) does not exceed \(e^{-k}\). As a counterpart of (4.1) we have \(\sup_{A}|\tilde{\phi}\circ\hat{G}^{k}-\phi_{k}|\leq|\phi|e^{-\eta k}\), where \(|\phi|\) denotes the \(\eta\)-Holder norm of \(\phi\) with respect to \(d\). Therefore, as a counterpart of Lemma 4.1 we obtain \(\int|\tilde{\phi}\circ\hat{G}^{k}-\phi_{k}|\mathrm{d}\hat{\mu}\leq|\phi|e^{-\eta k}\). To finish, the rest of the argument is completely analogous to Step 3 in the proof of Theorem B. We have verified exponential mixing for both \((\sigma,\nu_{\alpha})\) and \((\sigma,\nu_{\beta})\). Taking a pair of Holder continuous functions on \(\Sigma_{D}\) which depend only on positive coordinates, we obtain exponential mixing for both \((\sigma_{+},\nu_{\alpha}^{+})\) and \((\sigma_{+},\nu_{\beta}^{+})\). The proof of Theorem C is complete. ### Acknowledgments The author thanks Yoshitaka Saiki, Toshi Sugiyama, Masato Tsujii, Kenichiro Yamamoto, James A. Yorke for fruitful discussions. This research was supported by the JSPS KAKENHI 19K21835, 20H01811.
2306.07103
Spectral Closure for the Linear Boltzmann-BGK Equation
We give an explicit description of the spectral closure for the three-dimensional linear Boltzmann-BGK equation in terms of the macroscopic fields, density, flow velocity and temperature. This results in a new linear fluid dynamics model which is valid for any relaxation time. The non-local exact fluid dynamics equations are compared to the Euler, Navier--Stokes and Burnett equations. Our results are based on a detailed spectral analysis of the linearized Boltzmann-BGK operator together with a suitable choice of spectral projection.
Florian Kogelbauer, Ilya Karlin
2023-06-12T13:27:48Z
http://arxiv.org/abs/2306.07103v1
# Spectral closure for the linear Boltzmann-Bgk equation ###### Abstract. We give an explicit description of the spectral closure for the three-dimensional linear Boltzmann-BGK equation in terms of the macroscopic fields, density, flow velocity and temperature. This results in a new linear fluid dynamics model which is valid for any relaxation time. The non-local exact fluid dynamics equations are compared to the Euler, Navier-Stokes and Burnett equations. Our results are based on a detailed spectral analysis of the linearized Boltzmann-BGK operator together with a suitable choice of spectral projection. ## 1. Introduction Since the invention of kinetic theory by Boltzmann [4] and Maxwell [30], the fundamental question arose: What is the connection between kinetic equations and the equations for the motion of continua? Or, to phrase it differently: Can the governing equations of fluid dynamics be rigorously derived from kinetic theory? This problem has a long history. Famously, in his speech at the International Congress of Mathematics in Paris in 1900, Hilbert proposed a program to derive the passage from the atomistic view of fluids and gases to the motion of continua [21]. One interpretation of this challenge, known as Hilbert's sixth problem in this context, aims to prove the convergence of kinetic models, such as the Boltzmann equation, to known hydrodynamic models such as the Euler and the Navier-Stokes equations [36, 37, 38]. The derivation of hydrodynamics from kinetic models is often regarded as a closure problem where one seeks a self-consistent expression of the fluxes in the balance equations for primitive (macroscopic) fields of mass density, momentum density and energy density. On a formal level, a well-established approach to the closure problem is the Chapman-Enskog expansion [12, 7, 8], where a Taylor series in powers of the Knudsen number (the molecular mean free path to a characteristic flow scale ratio) is invoked. The lower-order approximations lead to compressible Euler and Navier-Stokes-Fourier systems. The undeniable success of the Chapman-Enskog method is rooted in the evaluation of the phenomenological transport coefficients, viscosity and thermal conductivity for a one-component gas, in terms of the microscopic interaction potential between particles, as well as prediction of the thermodiffusion effect in gas mixtures [5]. On the other hand, extension of the Chapman-Enskog approximation beyond the classical Navier-Stokes-Fourier order, the Burnett and super-Burnett approximations [6, 8], encountered difficulties. Even in the simplest regime, while linearized around a global equilibrium, the higher-order hydrodynamic closure may exhibit an instability, as first shown by Bobylev [3] for the Burnett and the super-Burnett approximations for Maxwell's molecules. Since the global equilibrium is stable by way of the dissipative nature of the Boltzmann equation, Bobylev's instability is an artifact brought about by the Chapman-Enskog procedure. The problem of higher-order hydrodynamics is exacerbated in the non-linear regime. Indeed, as pointed out by Slemrod [39], convergence of a singular expansion to the leading-order equation is by no means obvious: the formation of shocks might be an obstacle to global uniform convergence in the sense of solutions [40]. Furthermore, the expansion of a non-local operator in frequency space in terms of (local) differential operators may be problematic. 
As a remedy, Rosenau suggested a non-local closure [35] based on rational functions rather than polynomial approximations to the Chapman-Enskog solution. A different approach is to address the problem of hydrodynamics from kinetics as a problem of _invariant manifolds_. This viewpoint was first suggested in a short paper by McKean [31] and expanded in a series of works by Gorban & Karlin [16, 18, 15]. For model systems (Grad's moment systems [19]), it was shown that the method of invariant manifold is equivalent to _exact summation_ of the Chapman-Enskog series to all orders [17]. We term the latter case _exact hydrodynamics_ since, once achieved, it furnishes the complete characterization of the hydrodynamic limit of the kinetic equation and hence the rigorous and exact closure. In this setting, the problem remains non-trivial even in the linear case for infinite-dimensional problems. Accurate numerical solutions were found in [25], on the level of the linear Boltzmann-BGK kinetic model [2], and extended to a finite-moment approximation of the linear Boltzmann equation for Maxwell's molecules in [9]. In this work, the derivation of the exact (valid to all admissible scales) linear hydrodynamics is considered in two consecutive steps. First, the slow invariant manifold is identified as the linear subspace spanned over the hydrodynamic spectrum of the linearized Boltzmann-BGK operator. This is achieved on the basis of the explicit solution to the eigenvalue problem presented recently in Kogelbauer & Karlin [28]. Let us refer to [20, 11, 33] for qualitative results on the spectra of general linear kinetic operators, including the existence of hydrodynamic branches, critical wave numbers, and local expansions for small wave numbers. In [26, 27, 29], explicit spectral calculations have been performed for several kinetic models, including explicit expressions for critical wave numbers and branch merging. However, the knowledge of the spectrum alone is not the final step towards the derivation of hydrodynamic equations. The next step is the projection of the dynamics onto the slow manifold in terms of primitive variables, density, momentum and energy (or temperature). To that end, we derive the hydrodynamic projection in two independent ways. First, we demonstrate that all information about the projection is essentially encoded in a function of eigenvalues, which we call spectral temperature. This direct computation uses specific features of the BGK model. On the other hand, the hydrodynamic projection can be equivalently derived on the basis of the Riesz spectral projector, a more general route applicable to a variety of linear kinetic problems. Both approaches are shown to be consistent with one another, resulting in the unique hydrodynamic projection. Let us emphasize that we derive a closed-form expression for the transport coefficients in wave space (transport operators in physical space) in terms of eigenvalues. The structure of the paper is as follows: Preliminaries in Sec. 2 include the notation and nomenclature; in particular, the plasma dispersion function is introduced. Some useful properties of the plasma dispersion function necessary for the spectral analysis of the linearized Boltzmann-BGK operator are collected in Appendix B for the sake of completeness. In Sec. 3, following the invariant manifold formulation of the closure problem [18], we introduce the closure operator for a generic linear kinetic equation.
While the majority of derivations of hydrodynamics proceed in terms of primitive variables to solve the invariance equation, our approach is different. We first recognize the slow invariant manifold from the analysis of the spectral problem and, secondly, induce the dynamics on this manifold in terms of primitive variables by a coordinate change from spectral variables to hydrodynamic fields. The realization of this program starts in Sec. 4 where we first review analytical results on the spectral problem of the linearized Boltzmann-BGK model [28]. Sec. 5 is devoted to the explicit construction of the coordinate change from spectral variables to macroscopic variables, involving a single analytic function depending on eigenvalues, called _spectral temperature_. In Sec. 6, we present the exact hydrodynamic equations for each wave vector in frequency space, as well as in physical space. While, classically, the closure is obtained through transport coefficients relating the dynamics of the macroscopic variables to each other, the exact hydrodynamic equations involve transport operators with finite frequency support acting on the corresponding variables. Finally, in Sec. 7, we compare the exact non-local hydrodynamics to local approximations such as the Euler equation, the Navier-Stokes-Fourier system and the Burnett system. In particular, we recover the approximate slow dynamics obtained through the Chapman-Enskog expansion. We conclude with a discussion in Sec. 8. ## 2. Notation and Basic Definitions For a wave vector \(\mathbf{k}\in\mathbb{Z}^{3}\), \(\mathbf{k}=(k_{1},k_{2},k_{3})\), we denote its wave number as \[k:=|\mathbf{k}|=\sqrt{k_{1}^{2}+k_{2}^{2}+k_{3}^{2}}. \tag{2.1}\] For a given wave vector \(\mathbf{k}\neq 0\), we define a coordinate system with a component parallel and with two components orthogonal to \(\mathbf{k}\) by splitting any vector \(\mathbf{v}\in\mathbb{R}^{3}\) as \[\mathbf{v}=\mathbf{v}_{\parallel}+\mathbf{v}_{\perp}, \tag{2.2}\] where \(\mathbf{v}_{\parallel}=\frac{1}{k^{2}}(\mathbf{v}\cdot\mathbf{k})\mathbf{k}\) and \(\mathbf{v}_{\perp}=-\frac{1}{k^{2}}\mathbf{k}\times(\mathbf{k}\times\mathbf{ v})\), which satisfies \(\mathbf{v}^{\perp}\cdot\mathbf{k}=0\). This can be achieved by a rotation matrix \(\mathbf{Q}_{\mathbf{k}}\) satisfying \(\mathbf{k}=\mathbf{Q}_{\mathbf{k}}(k,0,0)^{T}\) to give \[\mathbf{v}=\mathbf{Q}_{\mathbf{k}}(v_{\parallel},v_{\perp 1},v_{\perp 2}), \tag{2.3}\] where \(v_{\parallel}={\bf v}\cdot{\bf k}\) and \((v_{\perp 1},v_{\perp 2})\) are the components of the unit base vectors of \({\bf v}_{\perp}\). The matrix \({\bf Q}_{\bf k}\) can be determined by, e.g. the Rodrigues' rotation formula [34]: \[{\bf Q}_{\bf k}=\left(\begin{array}{ccc}\frac{k_{1}}{k}&-\frac{k_{2}}{k}&- \frac{k_{3}}{k}\\ \frac{k_{2}}{k}&1-\frac{k_{2}^{2}}{k^{2}+k_{1}k}&-\frac{k_{2}k_{3}}{k^{2}+k_{1 }k}\\ \frac{k_{3}}{k}&-\frac{k_{2}k_{3}}{k^{2}+k_{1}k}&1-\frac{k_{3}^{2}}{k^{2}+k_{1 }k}\end{array}\right). \tag{2.4}\] For later calculations, we also define the \(5\times 5\) block-diagonal matrix \[\tilde{\bf Q}_{\bf k}={\rm diag}(1,{\bf Q}_{\bf k},1). \tag{2.5}\] We introduce the _plasma dispersion function_ as the integral \[Z(\zeta)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\frac{e^{-\frac{v^{2}}{2}}}{v- \zeta}\,dv, \tag{2.6}\] for any \(\zeta\in\mathbb{C}\setminus\mathbb{R}\). The function \(Z\) is analytic on each half plane \(\{\Im(\zeta)>0\}\) and \(\{\Im(\zeta)<0\}\) and satisfies the complex differential equation \[\frac{dZ}{d\zeta}=-\zeta Z-1. 
\tag{2.7}\] As the name suggests, function (2.6) appears in plasma physics in the context of Landau damping [14]. We collect further useful properties of the plasma dispersion function in Appendix B. Let \(\mathcal{H}\) denote a Hilbert space and let \({\bf T}:\mathcal{H}\to\mathcal{H}\) be a linear operator with domain of definition \(\mathcal{D}(\mathcal{H})\). We denote the spectrum of \({\bf T}\) as \(\sigma({\bf T})\) and its resolvent set as \(\rho({\bf T})\). The main operator \(\mathcal{L}_{\bf k}\) of this paper (to be defined later) will be defined on the Hilbert space \[\mathcal{H}_{\bf v}=L_{\bf v}^{2}(\mathbb{R}^{3},e^{-|{\bf v}|^{2}}), \tag{2.8}\] together with the inner product \[\langle f,g\rangle_{\bf v}=(2\pi)^{-\frac{3}{2}}\int_{\mathbb{R}^{3}}f({\bf v})g^{*}({\bf v})e^{-\frac{|{\bf v}|^{2}}{2}}d{\bf v}. \tag{2.9}\] For later calculation and to ease notation, we define the following set of basis vectors: \[e_{0}({\bf v}) =(2\pi)^{-\frac{3}{4}}, \tag{2.10}\] \[e_{1}({\bf v}) =(2\pi)^{-\frac{3}{4}}v_{1},\] \[e_{2}({\bf v}) =(2\pi)^{-\frac{3}{4}}v_{2},\] \[e_{3}({\bf v}) =(2\pi)^{-\frac{3}{4}}v_{3},\] \[e_{4}({\bf v}) =(2\pi)^{-\frac{3}{4}}\frac{|{\bf v}|^{2}{-}3}{\sqrt{6}},\] which satisfy the orthonormality relation \[\langle e_{i},e_{j}\rangle_{\bf v}=\delta_{ij},\quad\text{ for }\quad 0\leq i,j\leq 4, \tag{2.11}\] where \(\delta_{ij}\) is the Kronecker delta. We bundle the basis functions (2.10) into a single vector \[\mathbf{e}=(e_{0},e_{1},e_{2},e_{3},e_{4}). \tag{2.12}\] For a one-body distribution function \(f:\mathbb{T}^{3}\times\mathbb{R}^{3}\times[0,\infty)\to\mathbb{R}^{+}\), we introduce the moments of the distribution function \(f\) as \[\mathbf{M}^{(n)}(\mathbf{x},t)=\int_{\mathbb{R}^{3}}f(\mathbf{x},\mathbf{v},t)\,\mathbf{v}^{\otimes n}d\mathbf{v}, \tag{2.13}\] where \(\mathbf{v}^{\otimes 0}=1\), \(\mathbf{v}^{\otimes 1}=\mathbf{v}\) and \[\mathbf{v}^{\otimes n}=\underbrace{\mathbf{v}\otimes...\otimes\mathbf{v}}_{n-\text{times}}, \tag{2.14}\] for \(n\geq 2\) is the \(n\)-th tensor power. The moment defined in (2.13) is an \(n\)-th order symmetric tensor, depending on space and time. Given a vector \(X=(x_{1},\ldots,x_{n})\), we denote the set of cyclical permutations of \(X\) as \[\circlearrowright(x_{1},\ldots,x_{n})=\{(x_{1},x_{2},\ldots,x_{n}),(x_{2},x_{3},\ldots,x_{n},x_{1}),\ldots,(x_{n},x_{1},\ldots,x_{n-1})\}. \tag{2.15}\] For a matrix \(A\), we denote its adjugate by \(\text{adj}(A)\), which satisfies \(A\,\text{adj}(A)=\det(A)\text{Id}\). We denote the strip between \(-a\) and \(0\) as \[\mathcal{R}_{a}=\{z\in\mathbb{C}:-a<\Re z<0\}\subset\mathbb{C}. \tag{2.16}\] ## 3. The Closure Problem for the Linear BGK Equation In this section, we recall the classical closure problem for kinetic equations in general and illustrate it on the BGK equation in particular. First, we formulate the governing equations suitable for our setting and illustrate the closure problem for the hierarchy of moment equations. Subsequently, we define the closure operator and outline the relation of the existence of a (slow) invariant manifold with an exact closure relation.
We will be interested in the three-dimensional BGK equation linearized around a global Maxwellian: \[\frac{\partial f}{\partial t}+\mathbf{v}\cdot\nabla_{\mathbf{x}}f=-\frac{1}{ \tau}L_{\text{BGK}}[f], \tag{3.1}\] for the deviation relative to the global Maxwellian \(f:\mathbb{T}^{3}\times\mathbb{R}^{3}\times[0,\infty)\to\mathbb{R}\), \(f=f(\mathbf{x},\mathbf{v},t)\) and the BGK collision operator \[L_{\text{BGK}}[f](\mathbf{x},\mathbf{v},t)=\Big{(}f(\mathbf{x},\mathbf{v},t)- \mathbb{P}_{5}[f](\mathbf{x},\mathbf{v},t)\Big{)}. \tag{3.2}\] The projection operator \(\mathbb{P}_{5}:\mathcal{H}_{\mathbf{v}}\to\mathcal{H}_{\mathbf{v}}\) is defined as \[\mathbb{P}_{5}f=\sum_{j=0}^{4}\langle f,e_{j}\rangle_{\mathbf{v}}e_{j}, \tag{3.3}\] i.e., the projection onto the first five basis vectors (2.10). Clearly, (3.3) defines an orthogonal projection with respect to (2.9). Integrating equation (3.1) in \(\mathbf{x}\) shows that the five basis functions (2.10) are center modes (and the dynamic in these directions is conserved), since \[L_{\rm BGK}[e_{j}]=0,\quad\text{ for }\quad 0\leq j\leq 4. \tag{3.4}\] Expanding \(f\) in a Fourier series \[f({\bf x},{\bf v})=\sum_{|{\bf k}|=0}^{\infty}\hat{f}({\bf k},{\bf v})e^{{\rm i} {\bf x}\cdot{\bf k}}, \tag{3.5}\] for the Fourier coefficients \[\hat{f}({\bf k},{\bf v})=\frac{1}{(2\pi)^{3}}\int_{\mathbb{R}^{3}}f({\bf x},{ \bf v})e^{-{\rm i}{\bf x}\cdot{\bf k}}\,d{\bf x}, \tag{3.6}\] the linear operator in (3.1) can be unitarily conjugated to the family of operators \[\mathcal{L}_{\bf k}=-{\rm i}{\bf v}\cdot{\bf k}-\frac{1}{\tau}(1-\mathbb{P}_{5}), \tag{3.7}\] indexed by the wave vector \({\bf k}\). Because of the normalization of the basis functions (2.10), the relation to the _macroscopic variables_ density \(\rho\), velocity \({\bf u}\) and temperature \(T\) is given by \[\begin{split}\rho&=\langle f,e_{0}\rangle_{\bf v}=( 2\pi)^{-\frac{3}{2}}\int_{\mathbb{R}^{3}}f({\bf v})e^{-\frac{|{\bf v}|^{2}}{2}} d{\bf v},\\ {\bf u}&=\langle f,(e_{1},e_{2},e_{3})\rangle_{\bf v }=(2\pi)^{-\frac{3}{2}}\int_{\mathbb{R}^{3}}f({\bf v}){\bf v}e^{-\frac{|{\bf v }|^{2}}{2}}d{\bf v},\\ T&=\sqrt{\frac{2}{3}}\langle f,e_{4}\rangle_{\bf v }=(2\pi)^{-\frac{3}{2}}\int_{\mathbb{R}^{3}}f({\bf v})\frac{|{\bf v}|^{2}{-}3} {3}e^{-\frac{|{\bf v}|^{2}}{2}}d{\bf v},\end{split} \tag{3.8}\] which we bundle into a single vector \[{\bf h}=\left(\begin{array}{c}\rho\\ {\bf u}\\ \sqrt{\frac{3}{2}}T\end{array}\right). \tag{3.9}\] Because of the orthonormality relations (2.11), we prefer to work with the basis \((e_{0},...,e_{4})\). To account for the prefactor \(\sqrt{2/3}\) in the definition of the temperature in (3.8) in the final (physically meaningful) dynamical equations, we have to multiply the last entries accordingly. ### The Closure Problem Let us recall the classical closure problem for kinetic equations illustrated on the BGK equation. Multiplying the BGK equation (3.1) with \({\bf v}^{\otimes n}\) and integrating in velocities gives the following hierarchy of moment equations \[\frac{\partial}{\partial t}{\bf M}^{(n)}=-\nabla\cdot{\bf M}^{(n+1)}-\frac{1} {\tau}{\bf M}^{(n)}+\frac{1}{\tau}{\bf M}^{(n)}_{{\rm eq},{\rm lin}}, \tag{3.10}\] where \[\mathbf{M}^{(n)}_{\text{eq,lin}}=\mathbf{M}^{(0)}\langle\mathbf{v}^{\otimes n},1 \rangle+\langle\mathbf{v}^{\otimes n},\mathbf{v}\cdot\mathbf{M}^{(1)}\rangle+ \frac{1}{3}\text{trace}(\mathbf{M}^{(2)}-\text{Id}_{3\times 3})\langle\mathbf{v}^{ \otimes n},\frac{|\mathbf{v}|^{2}{-}3}{3}\rangle. 
\tag{3.11}\] Ideally, we would like to obtain a closed system for the first few moments, thus allowing for a consistent dynamical description of macroscopic variables. As equation (3.10) illustrates, however, will the rate of change of \(\mathbf{M}^{(n)}\) always be affected by the flux of the next moment \(\nabla\cdot\mathbf{M}^{(n+1)}\) (the term \(\mathbf{M}^{(n)}_{eq,lin}\) only comprises moments up to order two). Consequently, there is no way to obtain a self-consistent moment system from the full dynamics of the kinetic model (3.1). As a way out of this inconvenient matter of facts, we can constrain the dynamics of our system (3.1) by assuming that the full dynamics is given parametrically as a function of, say, the five macroscopic variables \[f(\mathbf{x},\mathbf{v},t)=F(\rho(\mathbf{x},t),\mathbf{u}(\mathbf{x},t),T( \mathbf{x},t);\mathbf{v}). \tag{3.12}\] Writing equation (3.1) a bit more abstractly as \[\frac{\partial f}{\partial t}=\mathcal{L}[f],\quad\mathcal{L}=-\mathbf{v} \cdot\nabla_{\mathbf{x}}-\frac{1}{\tau}L_{\text{BGK}}, \tag{3.13}\] and denoting \(\mathbb{P}_{5}^{\perp}=1-\mathbb{P}_{5}\), assumption (3.12) corresponds to the existence of a linear operator \(\mathcal{C}:\) range \(\mathbb{P}_{5}\rightarrow\) range \(\mathbb{P}_{5}^{\perp}\), called _closure operator_, such that \[f=(1+\mathcal{C})\mathbb{P}_{5}f, \tag{3.14}\] and the dynamics of the macroscopic variables \(\mathbb{P}_{5}f\) can be written self-consistently as \[\frac{\partial\mathbb{P}_{5}f}{\partial t}=\mathbb{P}_{5}\mathcal{L}(1+ \mathcal{C})\mathbb{P}_{5}f, \tag{3.15}\] while the closure operator \(\mathcal{C}\) satisfies the condition of _exact closure_: \[(\mathcal{C}\mathbb{P}_{5}-\mathbb{P}_{5}^{\perp})\mathcal{L}(1+\mathcal{C})=0. \tag{3.16}\] Indeed, applying \(\mathbb{P}_{5}^{\perp}\) to equation (3.13) and using assumption (3.14), we obtain \[\mathbb{P}_{5}^{\perp}\mathcal{L}(1+\mathcal{C})\mathbb{P}_{5}f=\frac{\partial }{\partial t}\mathbb{P}_{5}^{\perp}f=\mathcal{C}\frac{\partial\mathbb{P}_{5} f}{\partial t}. \tag{3.17}\] Using now the reduced dynamics (3.15), we arrive at \[\mathbb{P}_{5}^{\perp}\mathcal{L}(1+\mathcal{C})\mathbb{P}_{5}f=\mathcal{C} \mathbb{P}_{5}\mathcal{L}(1+\mathcal{C})\mathbb{P}_{5}f, \tag{3.18}\] which is equivalent to (3.16). Since the operator \(\mathcal{L}\) can be written as the direct sum over operators \(\mathcal{L}_{\mathbf{k}}\) for \(\mathbf{k}\in\mathbb{Z}^{3}\), we effectively seek a closure operator for each wave vector by writing \(\hat{f}_{\mathbf{k}}=(1+\mathcal{C}_{\mathbf{k}})\mathbb{P}_{5}\hat{f}_{ \mathbf{k}}\) with \[\frac{\partial\mathbb{P}_{5}\hat{f}_{\mathbf{k}}}{\partial t}=\mathbb{P}_{5} \mathcal{L}_{\mathbf{k}}(1+\mathcal{C}_{\mathbf{k}})\mathbb{P}_{5}\hat{f}_{ \mathbf{k}}, \tag{3.19}\] and the condition of being an exact closure at each wave vector: \[(\mathcal{C}_{\mathbf{k}}\mathbb{P}_{5}-\mathbb{P}_{5}^{\perp})\mathcal{L}_{ \mathbf{k}}(1+\mathcal{C}_{\mathbf{k}})=0, \tag{3.20}\] for \(\mathbf{k}\in\mathbb{Z}^{3}\). _Remark 3.1_.: Equation (3.20) for the closure operator is a special case of the _invariance equation_, and is the cornerstone of any derivation of hydrodynamics from kinetic theory. For example, the Chapman-Enskog method is based on a Taylor series expansion in terms of a small parameter \(\epsilon\) after a rescaling \(\tau\to\epsilon\tau\) while the method of invariant manifold [16] uses Newton-type iterations. 
Note that, even in the simplest linear setting addressed here, the invariance equation (3.20) is non-linear (quadratic) in the unknown closure operator. In [25], a numerical solution to (3.20) was obtained for the linear Boltzmann-BGK kinetic model. _Remark 3.2_.: Below, we shall circumvent solving the invariance equation (3.20) directly. Instead, the exact closure operator shall be determined in two steps. First, we identify the slow invariant manifold based on the properties of the spectrum of \(\mathcal{L}_{\mathbf{k}}\) (for details on the spectrum of the BGK equation, we refer to the following section). Indeed, for a certain range of wave numbers \(0<k<k_{\text{crit,min}}\), we will see that the spectral properties of \(\mathcal{L}_{\mathbf{k}}\) allow us to define a closure operator \(\mathcal{C}_{\mathbf{k}}\) which has the property that a general solution (restricted to \(\mathbf{k}\)) approaches the dynamics of the \(\mathcal{C}_{\mathbf{k}}\)-constrained ensemble exponentially fast in time. Second, we shall find a unique projection of the dynamics onto the slow invariant manifold in terms of primitive variables. The latter step is the main aspect of this work. ## 4. Spectrum of the Linear BGK Equation and Spectral Closure In this section, we first recall the properties of the BGK spectrum derived in [28]. This involves the three families of modal branches (diffusion, acoustics and shear), as well as their asymptotic behavior for small wave number. Then we define the hydrodynamic manifold as the eigenspace associated to the hydrodynamic modes and define the spectral closure. This will be achieved by a change of coordinates from spectral variables to macroscopic variables, thus providing a solution to (3.20). Because the hydrodynamic modes are slow, any trajectory of distribution functions (deviations relative to the global Maxwellian in the linear case) will approach this linear manifold exponentially fast in time. Consequently, the general moment dynamics (3.10) will be approximated exponentially well in time by the self-consistent moment system derived from the spectral closure operator. ### Properties of the Spectrum In order to define the spectral closure for the BGK system, we recall the most important implications of the detailed spectral analysis performed in [28], including a complete description of the eigenvalues above the essential spectrum for each wave number as zeros of a holomorphic spectral function, as well as the Taylor expansion in wave number. We define the hydrodynamic manifold as a linear combination of eigenvectors and derive the spectral dynamics on the manifold. _Theorem 4.1_.: The spectrum of the linearized BGK operator \(\mathcal{L}\) with relaxation time \(\tau\) around a global Maxwellian is given by \[\sigma(\mathcal{L})=\left\{-\frac{1}{\tau}+\mathrm{i}\mathbb{R}\right\}\cup\bigcup_{N\in\mathrm{Modes}}\bigcup_{|\mathbf{k}|<k_{\mathrm{crit},N}}\{\lambda_{N}(\tau|\mathbf{k}|)\}, \tag{4.1}\] where \(\mathrm{Modes}=\{\mathrm{shear},\mathrm{diff},\mathrm{ac},\mathrm{ac}*\}\) corresponds to the shear mode, the diffusion mode and the pair of complex conjugate acoustic modes. The essential spectrum is given by the line \(\Re\lambda=-\frac{1}{\tau}\), while the discrete spectrum consists of a _finite_ number of discrete, isolated eigenvalues. Along with each family of modes, there exists a critical wave number \(k_{crit,N}\), limiting the range of wave numbers for which \(\lambda_{N}\) exists.
The modes \(\{\mathrm{diff},\mathrm{ac},\mathrm{ac}*\}\) all have algebraic multiplicity one, while the shear mode has algebraic and geometric multiplicity two. The eigenvalues (in dependence of the wave number \(k\) and the relaxation time \(\tau\)) are given as zeros of the spectral function: \[\begin{split}&\Sigma_{k,\tau}(\lambda)=\frac{1}{6(\mathrm{i}k \tau)^{5}}(Z(\zeta)-\mathrm{i}\tau k)^{2}\\ &\times\Big{(}\zeta+6\mathrm{i}k^{3}\tau^{3}-\zeta(\zeta^{2}+5)k ^{2}\tau^{2}+2\mathrm{i}(\zeta^{2}+3)k\tau-4\mathrm{i}Z^{2}(\zeta)((\zeta^{2} +1)k\tau-\mathrm{i}\zeta)\\ &\qquad+Z(\zeta)(\zeta^{2}-(\zeta^{4}+4\zeta^{2}+11)k^{2}\tau^{2 }+2\mathrm{i}k\tau\zeta^{3}-5))\Big{)}\Big{|}_{\zeta=\mathrm{i}\frac{\tau \lambda+1}{k\tau}}.\end{split} \tag{4.2}\] For a proof, we refer to [28]. A typical argument plot of spectral function (4.2) is shown in Figure 4.1. For \(k=0\), the function (4.2) collapses to a multiple of \(\lambda^{5}\), recovering the center spectrum (conserved quantities) of (3.1), see (3.4). Increasing \(k\), the zeros of \(\Sigma_{k,\tau}\) branch out and decrease monotonically in their real parts. Figure 4.1. Argument plot of the spectral function (4.2) for \(\tau=0.5\) and different values of \(k\). The zeros of the function \(\Sigma_{k,\tau}\) in the complex plane define eigenvalues of the linearized BGK operator. Critical wave numbers for the branches are found to be [28], \[k_{\rm crit}(\lambda_{\rm shear})=\sqrt{\frac{\pi}{2}}\frac{1}{ \tau}\approx 1.2533\frac{1}{\tau}, \tag{4.3}\] \[k_{\rm crit}(\lambda_{\rm diff})\approx 1.3560\frac{1}{\tau},\] \[k_{\rm crit}(\lambda_{\rm ac}){=k_{\rm crit}(\lambda_{\rm ac}^{ *})}\approx 1.3118\frac{1}{\tau},\] Figure 4.2. Modal branches in dependence on the wave number (black solid line) compared to the leading-order polynomial approximation (red dashed line) for \(\tau=0.25\) and the essential spectrum at \(-\frac{1}{\tau}\) (dotted black line). The hydrodynamic branches terminate at the minimal critical wave number \(k_{\rm crit,min}\) (4.4). In particular, we note that the critical wave number of the shear mode is minimal, which implies that all three branches exists for \(0<k<k_{\rm crit}\) and we set \[k_{\rm crit,min}=k_{\rm crit}(\lambda_{\rm shear})=\sqrt{\frac{\pi}{2}}\frac{1}{ \tau}. \tag{4.4}\] The eigenvalues admit the following asymptotic expansions in terms of the wave number: \[\begin{split}\lambda_{\rm diff}(k)&=-\tau k^{2}+ \frac{9}{5}\tau^{3}k^{4}+\mathcal{O}(k^{6}),\\ \lambda_{\rm shear}(k)&=-\tau k^{2}+\tau^{3}k^{4}+ \mathcal{O}(k^{6}),\\ \lambda_{\rm ac}(k)&=\mathrm{i}\sqrt{\frac{5}{3}}k- \tau k^{2}+\mathrm{i}\frac{7\tau^{2}}{6\sqrt{15}}k^{3}+\frac{62}{45}\tau^{3}k^ {4}+\mathcal{O}(k^{5}),\end{split} \tag{4.5}\] which can be seen from Taylor expanding \(\lambda\) in \(k\) and comparing powers in (4.2), see also [28]. _Remark 4.2_.: Since zero is a five-fold degenerate eigenvalue for \(k=0\), we do not expect - in general - that the eigenvalues depend analytically on \(k\). Spectral perturbation theory only guarantees the expansion in a Puiseux series, i.e., a Taylor series in \(k^{1/5}\). For the BGK equation, however, the fractional terms cancel out and only powers in \(k\) remain, which is consistent with Ellis & Pinsky [11]. 
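The shear branch lends itself to a quick numerical illustration of (4.3) and (4.5). By the squared factor \((Z(\zeta)-\mathrm{i}\tau k)^{2}\) in (4.2), the shear eigenvalue satisfies \(Z(\zeta)=\mathrm{i}\tau k\) with \(\zeta=\mathrm{i}\frac{\tau\lambda+1}{k\tau}\); since \(\lambda_{\rm shear}\) is real (cf. (4.5) and Figure 4.2), \(\zeta\) lies on the positive imaginary axis, where \(Z(\mathrm{i}y)=\mathrm{i}\sqrt{\pi/2}\,e^{y^{2}/2}\operatorname{erfc}(y/\sqrt{2})\) (a consequence of (B.6) in Appendix B), so the condition becomes a real scalar equation. The following is a minimal numerical sketch, assuming NumPy and SciPy are available; it solves this equation for \(\lambda_{\rm shear}(k)\) and compares with the leading terms of (4.5). No root exists beyond \(k_{\rm crit}(\lambda_{\rm shear})=\sqrt{\pi/2}\,\tau^{-1}\), in agreement with (4.3).

```python
import numpy as np
from scipy.special import erfcx            # erfcx(x) = exp(x**2) * erfc(x)
from scipy.optimize import brentq

def shear_condition(lam, k, tau):
    # Z(i*y) = i*sqrt(pi/2)*erfcx(y/sqrt(2)) for y > 0, so the shear-branch
    # condition Z(zeta) = i*tau*k with zeta = i*(tau*lam + 1)/(k*tau)
    # reduces to the real equation below.
    y = (tau * lam + 1.0) / (k * tau)
    return np.sqrt(np.pi / 2.0) * erfcx(y / np.sqrt(2.0)) - tau * k

tau = 0.5
k_crit = np.sqrt(np.pi / 2.0) / tau        # critical wave number of the shear mode, cf. (4.3)
for k in (0.5, 1.0, 2.0, 3.0):
    if k >= k_crit:
        print(f"k = {k}: no shear eigenvalue (k >= k_crit = {k_crit:.4f})")
        continue
    lam = brentq(shear_condition, -1.0 / tau + 1e-10, -1e-12, args=(k, tau))
    approx = -tau * k**2 + tau**3 * k**4   # leading terms of the expansion (4.5)
    print(f"k = {k}: lambda_shear = {lam:+.6f}   (expansion (4.5): {approx:+.6f})")
```

For small \(k\tau\) the two numbers nearly coincide, while the polynomial approximation deteriorates as \(k\) approaches \(k_{\rm crit}\), consistent with Figure 4.2.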
_Remark 4.3_.: While the structure of the Boltzmann-BGK spectrum (4.1) agrees with and is a special case of the spectrum of a more general linear Boltzmann equation [10, 33], the knowledge of the spectral function (4.2) allows one to discern more detailed analytical information about the hydrodynamic spectrum, in particular the accurate estimates for the critical wave numbers (4.3). Figure 4.2 shows the dependence of the modes on wave number in comparison to their leading-order polynomial approximations in (4.5), which correspond to the Euler and Navier-Stokes equations. There exist exactly five discrete, isolated eigenvalues of the operator \(\mathcal{L}_{\mathbf{k}}\) with an associated five-dimensional eigenspace, which we will call the _hydrodynamic manifold_. This manifold will serve as our constraint to define a closure operator. Because the five eigenvalues are above the essential spectrum, any solution restricted to the given wave number will decay exponentially fast to the hydrodynamic manifold, rendering it a _slow manifold_. In the following, we will be interested in the linear subspace generated by the eigenfunctions associated to \(\Lambda_{\rm BGK}=\{\lambda_{\rm diff},\lambda_{\rm ac},\lambda_{\rm ac}^{*},\lambda_{\rm shear}\}\) at each wave number \(k\). To ease notation, we bundle the eigenvalues in a vector (counted with multiplicity): \[\boldsymbol{\lambda}=(\lambda_{\rm diff},\lambda_{\rm ac},\lambda_{\rm ac}^{*},\lambda_{\rm shear},\lambda_{\rm shear}), \tag{4.6}\] and define the diagonal matrix \[\boldsymbol{\Lambda}=\mathrm{diag}(\boldsymbol{\lambda}). \tag{4.7}\] Also, we denote the set and vector of simple eigenvalues as \[\Lambda_{\rm simple}=\{\lambda_{\rm diff},\lambda_{\rm ac},\lambda_{\rm ac}^{*}\},\quad\boldsymbol{\lambda}_{\rm simple}=(\lambda_{\rm diff},\lambda_{\rm ac},\lambda_{\rm ac}^{*}). \tag{4.8}\] For each wave vector \(\mathbf{k}\) with \(0<k<k_{\mathrm{crit,min}}\), the eigenspace associated to the modes spans a five-dimensional linear subspace, which we call the _hydrodynamic manifold_: \[\mathcal{M}_{\mathrm{hydro},\mathbf{k}}=\mathrm{span}_{\lambda\in\Lambda_{\mathrm{simple}}}\hat{f}_{\lambda}(\mathbf{k})\oplus\mathrm{span}\{\hat{f}_{\lambda_{\mathrm{shear}},1}(\mathbf{k}),\hat{f}_{\lambda_{\mathrm{shear}},2}(\mathbf{k})\}. \tag{4.9}\] The hydrodynamic manifold (4.9) is invariant with respect to the flow generated by (3.1). We write \[\hat{f}_{\mathrm{hydro}}(\mathbf{k},\mathbf{v},t)=\sum_{\lambda\in\Lambda_{\mathrm{simple}}}\alpha_{\lambda}(t)\hat{f}_{\lambda}(\mathbf{v},\mathbf{k})+\alpha_{\lambda_{\mathrm{shear}},1}(t)\hat{f}_{\lambda_{\mathrm{shear}},1}(\mathbf{v},\mathbf{k})+\alpha_{\lambda_{\mathrm{shear}},2}(t)\hat{f}_{\lambda_{\mathrm{shear}},2}(\mathbf{v},\mathbf{k}), \tag{4.10}\] for a solution on \(\mathcal{M}_{\mathrm{hydro}}\). The vector \[\boldsymbol{\alpha}=(\alpha_{\lambda_{\mathrm{diff}}},\alpha_{\lambda_{\mathrm{ac}}},\alpha_{\lambda_{\mathrm{ac}}^{*}},\alpha_{\lambda_{\mathrm{shear}},1},\alpha_{\lambda_{\mathrm{shear}},2}), \tag{4.11}\] consists of the _spectral variables_ or _spectral coefficients_.
### Spectral Closure for the Boltzmann-BGK Equation Given the hydrodynamic manifold \(\mathcal{M}_{\mathrm{hydro},\mathbf{k}}\) for \(0<k<k_{\mathrm{crit,min}}\), which is spanned by the eigenvectors \(\hat{f}_{\lambda}\), we define the _spectral closure_ as, \[\mathcal{C}_{\mathrm{spectral}}:\mathrm{range}\ \mathbb{P}_{5}|_{ \mathcal{M}_{\mathrm{hydro},\mathbf{k}}}\rightarrow\mathrm{range}\ \mathbb{P}_{5}^{\perp}|_{\mathcal{M}_{\mathrm{hydro},\mathbf{k}}}, \tag{4.12}\] \[\mathcal{C}_{\mathrm{spectral}}(\mathbb{P}_{5}\hat{f}_{\lambda})= \mathbb{P}_{5}^{\perp}\hat{f}_{\lambda}.\] The closure operator (4.12), defined only on the \(\mathcal{M}_{\mathrm{hydro},\mathbf{k}}\), maps - for each eigenvalue - the first five moments of an eigenvector to the orthogonal complement on the same eigenvector. The closure operator (4.12) is defined with respect to the spectral basis, whereas the closure formalism (3.14) is defined with respect to macroscopic variables. To obtain the corresponding change of coordinates, let us denote the first five elements of a simple eigenfunction \(\hat{f}_{\lambda}\) as \[\boldsymbol{\eta}(\lambda)=\mathbb{P}_{5}\hat{f}_{\lambda}, \tag{4.13}\] while we write \[\boldsymbol{\eta}_{1}(\lambda_{\mathrm{shear}})=\mathbb{P}_{5}\hat{f}_{ \lambda_{shear},1},\qquad\boldsymbol{\eta}_{2}(\lambda_{\mathrm{shear}})= \mathbb{P}_{5}\hat{f}_{\lambda_{shear},2}. \tag{4.14}\] Taking projections in (4.10), we have \[\hat{\mathbf{h}}_{\mathrm{hydro}}(\mathbf{v},t)=\sum_{\lambda\in\Lambda_{ \mathrm{simple}}}\alpha_{\lambda}(t)\mathbb{P}_{5}\hat{f}_{\lambda}(\mathbf{v },\mathbf{k})+\alpha_{\lambda_{\mathrm{shear}},1}\mathbb{P}_{5}\hat{f}_{ \lambda_{\mathrm{shear}},1}(\mathbf{v},\mathbf{k})+\alpha_{\lambda_{\mathrm{ shear}},2}\mathbb{P}_{5}\hat{f}_{\lambda_{\mathrm{shear}},2}(\mathbf{v},\mathbf{k}), \tag{4.15}\] where we have suppressed the explicit dependence on the wave vector \(\mathbf{k}\). To ease notation, we define the \(5\times 5\) matrix of spectral basis vectors for the BGK equation (see Theorem 4.1) as \[\mathbf{H}:=[\boldsymbol{\eta}(\lambda_{\mathrm{diff}}),\boldsymbol{\eta}( \lambda_{\mathrm{ac}}),\boldsymbol{\eta}(\lambda_{ac}^{*}),\boldsymbol{\eta}_{ 1}(\lambda_{\mathrm{shear}}),\boldsymbol{\eta}_{2}(\lambda_{\mathrm{shear}})], \tag{4.16}\] which allows us to write the macroscopic variables on the hydrodynamic manifold as \[\hat{\mathbf{h}}_{\mathrm{hydro}}=\mathbf{H}\boldsymbol{\alpha}. \tag{4.17}\] Since (4.17) is composed of eigenvectors entirely, the evolution on the hydrodynamic manifold in terms of spectral variables simply becomes \[\frac{d\boldsymbol{\alpha}}{dt}=\boldsymbol{\Lambda}\boldsymbol{\alpha}, \tag{4.18}\] i.e., the spectral dynamics diagonalize completely since geometric and algebraic multiplicity are equal for each mode. To define the spectral closure for macroscopic variables, we define \[F_{\boldsymbol{\lambda}}=[\hat{f}_{\lambda_{\mathrm{diff}}},\hat{f}_{\lambda_{ \mathrm{ac}}},\hat{f}_{\lambda_{\mathrm{ac}}^{*}},\hat{f}_{\lambda_{\mathrm{ shear}},1},\hat{f}_{\lambda_{\mathrm{shear}},2}], \tag{4.19}\] which implies that \(\mathbf{H}=\mathbb{P}_{5}F_{\boldsymbol{\lambda}}\). 
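For orientation, combining (4.17) with the diagonal spectral dynamics (4.18) already indicates the closed form of the macroscopic system: whenever \(\mathbf{H}\) is invertible, \[\frac{d\hat{\mathbf{h}}_{\mathrm{hydro}}}{dt}=\mathbf{H}\frac{d\boldsymbol{\alpha}}{dt}=\mathbf{H}\boldsymbol{\Lambda}\boldsymbol{\alpha}=\mathbf{H}\boldsymbol{\Lambda}\mathbf{H}^{-1}\hat{\mathbf{h}}_{\mathrm{hydro}},\] consistent with the reduced dynamics (3.19) once the closure operator constructed below is inserted.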
Based on the coordinate change (4.16) and (4.19), the closure operator in macroscopic variables then reads \[\mathcal{C}_{\mathrm{spectral}}\hat{\mathbf{h}}_{\mathrm{hydro}} =\mathcal{C}_{\mathrm{spectral}}\mathbf{H}\boldsymbol{\alpha} \tag{4.20}\] \[=\mathbb{P}_{5}^{\perp}F_{\boldsymbol{\lambda}}\boldsymbol{\alpha}\] \[=\mathbb{P}_{5}^{\perp}F_{\boldsymbol{\lambda}}\mathbf{H}^{-1}\hat{\mathbf{h}}_{\mathrm{hydro}}\] provided that the inverse exists (which will be elaborated in Section 6). We emphasize that the spectral closure for the BGK equation (4.20) is an exact (invariant) closure in the sense of (3.20) by construction. Indeed, we find that evaluating (3.20) on \(\hat{\mathbf{h}}_{\mathrm{hydro}}\) gives: \[(\mathcal{C}_{\mathrm{spectral}}\mathbb{P}_{5}-\mathbb{P}_{5}^{\perp})\mathcal{L}_{\mathbf{k}}(1+\mathcal{C}_{\mathrm{spectral}})\hat{\mathbf{h}}_{\mathrm{hydro}} =(\mathcal{C}_{\mathrm{spectral}}\mathbb{P}_{5}-\mathbb{P}_{5}^{\perp})\mathcal{L}_{\mathbf{k}}(1+\mathcal{C}_{\mathrm{spectral}})\mathbf{H}\boldsymbol{\alpha} \tag{4.21}\] \[=(\mathcal{C}_{\mathrm{spectral}}\mathbb{P}_{5}-\mathbb{P}_{5}^{\perp})\mathcal{L}_{\mathbf{k}}F_{\boldsymbol{\lambda}}\boldsymbol{\alpha}\] \[=(\mathcal{C}_{\mathrm{spectral}}\mathbb{P}_{5}-\mathbb{P}_{5}^{\perp})F_{\boldsymbol{\lambda}}\boldsymbol{\Lambda}\boldsymbol{\alpha}\] \[=(\mathcal{C}_{\mathrm{spectral}}\mathbf{H}-\mathbb{P}_{5}^{\perp}F_{\boldsymbol{\lambda}})\boldsymbol{\Lambda}\boldsymbol{\alpha}\] \[=0,\] where in the third step, we have used that columns of \(F_{\boldsymbol{\lambda}}\) are eigenvectors of \(\mathcal{L}_{\mathbf{k}}\). _Remark 4.4_.: Because of the existence of a critical wave number for each branch of eigenvalues (4.3), a full set of five eigenvalues only exists up to \(k_{\mathrm{crit,min}}\). For \(k>k_{\mathrm{crit,min}}\), the modes vanish one by one, which implies that the full set of five macroscopic variables cannot be resolved uniquely any longer. In particular, the matrix \(\mathbf{H}\) might not be defined as a square matrix (4.16), but rather as a rectangular matrix. Also, the inverse appearing in (4.20) has to be understood as a generalized inverse (e.g., pseudo-inverse). Implications of this degeneracy shall not be further discussed in this paper. The next section will be devoted to the explicit calculation of the basis matrix \(\mathbf{H}\) (the invertibility of \(\mathbf{H}\) will be discussed in Section 6). Through a simple linear change of coordinates, we then obtain the dynamics for the macroscopic moments. ## 5. From Spectral Coordinates to Macroscopic Variables In this section, we construct the exact spectral closure for the BGK equation based on the knowledge of the spectrum (4.1). We derive the coordinate change from spectral parameters to the primitive, macroscopic variables (i.e., the basis matrix \(\mathbf{H}\)) in two consistent ways: First, we derive a general algebraic form of the first five moments of a simple eigenfunction from the interplay of the linear transport and collision operators. This analysis will be specific to the BGK equation and depends on the specific form of the projection operator (3.3). Secondly, we use analytical spectral calculus and Riesz projections to obtain the same result and to show consistency of the two approaches. We emphasize that the approach via spectral projections, although equivalent, can be applied to a more general setting as well.
Before we proceed, let us collect some notation and results from [28] regarding the spectral problem of the linearized Boltzmann-BGK operator (3.7), which will be useful in the following two subsections. We define the Green's function matrices as \[\begin{split} G_{L}(z,n,m)&=\langle(\mathrm{i} \tau\mathbf{v}\cdot\mathbf{k}-\mathbb{P}_{5}-z)^{-1}e_{n},e_{m}\rangle_{ \mathbf{v}},\\ G_{S}(z,n,m)&=\langle(\mathrm{i}\tau\mathbf{v} \cdot\mathbf{k}-z)^{-1}e_{n},e_{m}\rangle_{\mathbf{v}},\end{split} \tag{5.1}\] which satisfy the equation [28], \[G_{L}=G_{S}+G_{L}G_{S}. \tag{5.2}\] Assuming \(\det(\mathrm{Id}-G_{S})\neq 0\), equation (5.2) can be solved to get \[G_{L}=G_{S}(\mathrm{Id}-G_{S})^{-1}=(\mathrm{Id}-G_{S})^{-1}- \mathrm{Id}. \tag{5.3}\] From [28] we know that the matrix \(z\mapsto G_{S}(z)\) can be conjugated using the rotation matrix (2.5): \[G_{S}(z)|_{z=\mathrm{i}k\tau\zeta}-\mathrm{Id}=\frac{1}{\mathrm{i}\tau k} \tilde{\mathbf{Q}}_{\mathbf{k}}(G(\zeta)-\mathrm{i}\tau k)\tilde{\mathbf{Q}} _{\mathbf{k}}^{T}, \tag{5.4}\] where the matrix \(G(\zeta)\) reads, \[G(\zeta)=\begin{pmatrix}Z(\zeta)&1+\zeta Z(\zeta)&0&0&\frac{\zeta+(\zeta^{2} -1)Z(\zeta)}{\sqrt{6}}\\ 1+\zeta Z(\zeta)&\zeta+\zeta^{2}Z(\zeta)&0&0&\frac{\zeta^{2}+(\zeta^{3}-\zeta )Z(\zeta)}{\sqrt{6}}\\ 0&0&Z(\zeta)&0&0\\ 0&0&0&Z(\zeta)&0\\ \frac{\zeta+(\zeta^{2}-1)Z(\zeta)}{\sqrt{6}}&\frac{\zeta^{2}+(\zeta^{3}-\zeta )Z(\zeta)}{\sqrt{6}}&0&0&\frac{\zeta^{3}-\zeta+(\zeta^{4}-2\zeta^{2}+5)Z( \zeta)}{6}\end{pmatrix}, \tag{5.5}\] while \(\zeta\mapsto Z(\zeta)\) is the plasma dispersion function (2.6). Furthermore, the spectral function (4.2) is related to Green's matrix \(G_{S}\) (5.1) as [28], \[\Sigma_{k,\tau}(\zeta)=\left.\det(G_{S}(z)-\mathrm{Id})\right|_{z= \mathrm{i}k\tau\zeta}. \tag{5.6}\] With (5.4), (5.5), (2.5) and (2.4), the determinant in (5.6) is evaluated easily to get the closed-form expression for the spectral function (4.2) obtained in [28]. ### Spectral-to-Hydrodynamic Coordinate Transform by Spectral Temperature An eigenvector \(\hat{f}_{\lambda}\) of (3.7) with eigenvalue \(\lambda\) satisfies the equation \[-{\rm i}{\bf v}\cdot{\bf k}\hat{f}_{\lambda}-\frac{1}{\tau}\hat{f}_{\lambda}+ \frac{1}{\tau}\mathbb{P}_{5}\hat{f}_{\lambda}=\lambda\hat{f}_{\lambda}, \tag{5.7}\] or, equivalently, \[\hat{f}_{\lambda}=\frac{\mathbb{P}_{5}\hat{f}_{\lambda}}{\tau\lambda+1+{\rm i} \tau{\bf v}\cdot{\bf k}}. \tag{5.8}\] We emphasize that the numerator in (5.8) is always non-zero for the range of \(k\) for which \(\lambda\) is defined since \[-\frac{1}{\tau}<\Re\lambda(k)<0, \tag{5.9}\] for \(0<k<k_{\rm crit,min}\). Projecting equation (5.8) via \(\mathbb{P}_{5}\) gives the following implicit equation for the first five entries of an eigenvector: \[\boldsymbol{\eta}(\lambda)=\left\langle\frac{{\bf e}({\bf v})\cdot \boldsymbol{\eta}(\lambda)}{\tau\lambda+1+{\rm i}\tau{\bf v}\cdot{\bf k}},{ \bf e}({\bf v})\right\rangle_{\bf v}. \tag{5.10}\] Writing \(\boldsymbol{\eta}=(\eta_{1},\eta_{2},\eta_{3},\eta_{4},\eta_{5})\) and integrating (5.7) over \({\bf v}\) implies the relation \[-{\rm i}{\bf k}\cdot(\eta_{1},\eta_{2},\eta_{3})=\lambda\eta_{1}, \tag{5.11}\] or, equivalently, in terms of the splitting (2.2): \[(\eta_{1},\eta_{2},\eta_{3})=\frac{{\rm i}\lambda\eta_{1}}{k^{2}}{\bf k}+( \eta_{1},\eta_{2},\eta_{3})_{\perp}. 
\tag{5.12}\] Using the Green's function matrices (5.1), we can rewrite (5.10) as a functional eigenvalue problem: \[\boldsymbol{\eta}(\lambda)=G_{S}(-\tau\lambda-1)\boldsymbol{\eta}(\lambda), \tag{5.13}\] which, in the light of (5.4), can be rewritten as \[\tilde{\bf Q}_{\bf k}^{T}\boldsymbol{\eta}\in\ker(G(\zeta)-{\rm i}k\tau), \tag{5.14}\] for \({\rm i}(1+\tau\lambda)=k\tau\zeta\). Given the structure of (5.5) and the definition of the modes through (4.2), we immediately see that \[\boldsymbol{\eta}_{1}(\lambda_{shear})=\tilde{\bf Q}_{\bf k}\left(\begin{array}{c}0\\ 0\\ 1\\ 0\\ 0\end{array}\right),\qquad\boldsymbol{\eta}_{2}(\lambda_{shear})=\tilde{\bf Q}_{\bf k}\left(\begin{array}{c}0\\ 0\\ 0\\ 1\\ 0\end{array}\right), \tag{5.15}\] since the middle block in \(G(\zeta)\) decouples from the other part of the matrix (which of course corresponds to the factorisation in (4.2)). To obtain the structure of the simple eigenvectors, we first assume, by rescaling \(\hat{f}_{\lambda}\) accordingly, that \[\langle\hat{f}_{\lambda},1\rangle_{\bf v}=1. \tag{5.16}\] We note, using (5.12), that columns one, two and five of (5.5) are scalar multiples of each other for \((\eta_{1},\eta_{2},\eta_{3})_{\perp}=0\). Consequently, for a simple eigenvalue \(\lambda\), we can set \[\boldsymbol{\eta}(\lambda)=\left(\begin{array}{c}1\\ \frac{{\rm i}\lambda}{k^{2}}{\bf k}\\ \theta(\lambda)\end{array}\right), \tag{5.17}\] for some function \(\lambda\mapsto\theta(\lambda)\). We call the basis vectors (5.17) for a simple eigenvalue \(k\)-_aligned_. Since the two eigenvectors for the shear mode are completely explicit (5.15), all non-trivial information of the spectral closure is encoded in the function \(\lambda\mapsto\theta(\lambda)\), which we call _spectral temperature_. Since we know the structure of a simple eigenvector (5.17), we can derive a formula for the spectral temperature as follows. Using (5.17), we take an inner product of (5.8) with \(e_{4}\) to find \[\theta(\lambda)=\left\langle\frac{1+{\rm i}\frac{\lambda}{k^{2}}{\bf k}\cdot{\bf v}+\theta(\lambda)e_{4}({\bf v})}{\tau\lambda+1+{\rm i}\tau{\bf v}\cdot{\bf k}},e_{4}({\bf v})\right\rangle_{\bf v}, \tag{5.18}\] which can be solved to give \[\theta(\lambda)=\frac{\frac{1}{k^{2}}\left\langle\frac{k^{2}+{\rm i}\lambda{\bf k}\cdot{\bf v}}{\tau\lambda+1+{\rm i}\tau{\bf v}\cdot{\bf k}},e_{4}({\bf v})\right\rangle_{\bf v}}{1-\left\langle\frac{e_{4}({\bf v})}{\tau\lambda+1+{\rm i}\tau{\bf v}\cdot{\bf k}},e_{4}({\bf v})\right\rangle_{\bf v}}. \tag{5.19}\] Expression (5.19) can then be evaluated explicitly for each simple \(\lambda\). We refer to Subsection 6.1 for an explicit formula and properties of \(\theta\). In the next subsection, we will give an alternative derivation of (5.17) and (5.15) using spectral projections to emphasize consistency. ### Spectral-to-Hydrodynamic Coordinate Transform by Riesz Projections In the following, we show that the spectral basis (5.15) and (5.17) obtained in the previous section can be derived equivalently through spectral calculus.
Indeed, for any set of discrete, isolated eigenvalues \(k\mapsto\Lambda_{\rm BGK}(k)\subset\mathbb{C}\), depending on wave number, we can define the _Riesz projection_ as \[\mathbb{P}_{\Lambda}=-\frac{1}{2{\rm i}\pi}\oint_{\Gamma(\Lambda_{\rm BGK})}(\mathcal{L}_{\bf k}-w)^{-1}\,dw, \tag{5.20}\] where \(\Gamma(\Lambda_{\rm BGK})\) is a simple contour in the complex plane, encircling the full spectral set \(\Lambda_{\rm BGK}=\{\lambda_{\rm diff},\lambda_{\rm ac},\lambda_{\rm ac}^{*},\lambda_{\rm shear}\}\) once in the positive direction. From analytical spectral calculus [22], we know that (5.20) is indeed a projection, whose range is given by the invariant subspace (generalized eigenspace) associated to \(\Lambda_{\rm BGK}\). In particular, from (4.15) it follows that \[{\bf H}=-\frac{1}{2{\rm i}\pi}\mathbb{P}_{5}\oint_{\Gamma(\Lambda_{\rm BGK})}(\mathcal{L}_{\bf k}-w)^{-1}\,dw\,\mathbb{P}_{5}. \tag{5.21}\] _Remark 5.1_.: Here, we have assumed that the five basis vectors \({\bf e}\) are mapped to five linearly independent vectors on \(\mathcal{M}_{\rm hydro}(\Lambda)\) (which is indeed the case for the BGK equation). For a more general kinetic model, this might not be the case and a non-invertibility of the spectral basis in terms of the macroscopic variables would indicate a restriction of the hydrodynamic dynamics for the given range of wave numbers. Setting \(w=-\frac{1}{\tau}(z+1)\), the resolvent transforms according to \[\begin{split}(\mathcal{L}_{\mathbf{k}}-w)^{-1}&=\left(-\mathrm{i}\mathbf{v}\cdot\mathbf{k}-\frac{1}{\tau}+\frac{1}{\tau}\mathbb{P}_{5}+\frac{1}{\tau}(z+1)\right)^{-1}\\ &=-\tau\left(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-\mathbb{P}_{5}-z\right)^{-1}.\end{split} \tag{5.22}\] Using the second resolvent identity together with (5.22) and \(dw=-\frac{1}{\tau}dz\), we can then write: \[\begin{split}\mathbb{P}_{\Lambda}e_{m}&=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\mathrm{BGK}})}(\mathcal{L}_{\mathbf{k}}-w)^{-1}e_{m}\,dw\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-\mathbb{P}_{5}-z)^{-1}e_{m}\,dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-z)^{-1}e_{m}+(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-z)^{-1}\mathbb{P}_{5}(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-\mathbb{P}_{5}-z)^{-1}e_{m}\,dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-z)^{-1}\sum_{j=0}^{4}\langle(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-\mathbb{P}_{5}-z)^{-1}e_{m},e_{j}\rangle_{\mathbf{v}}e_{j}\,dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}\sum_{j=0}^{4}G_{L}(z,m,j)(\mathrm{i}\tau\mathbf{v}\cdot\mathbf{k}-z)^{-1}e_{j}dz,\end{split} \tag{5.23}\] where we have set \[\Gamma(\Lambda_{\mathrm{BGK}})=-\frac{1}{\tau}(\Gamma(\Lambda_{\tau})+1). \tag{5.24}\] With the notation (2.16), we have that \(w\in\mathcal{R}_{\frac{1}{\tau}}\iff z\in\mathcal{R}_{1}\).
Using relation (5.2) between the Green's function matrices \(G_{S}\) and \(G_{L}\) together with the fact that \(z\mapsto G_{S}(z)\) is holomorphic in \(\mathcal{R}_{1}\), we arrive at \[\begin{split}\langle\mathbb{P}_{\lambda}e_{m},e_{n}\rangle_{ \mathbf{v}}&=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{ \tau})}\sum_{j=0}^{4}G_{L}(z,m,j)\langle(\mathrm{i}\tau\mathbf{v}\cdot \mathbf{k}-z)^{-1}e_{j},e_{n}\rangle_{\mathbf{v}}dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}\sum_{j= 0}^{4}G_{L}(z,m,j)G_{S}(z,j,n)dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}(G_{L}( z)G_{S}(z))_{n,m}dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}G_{L}(z, n,m)dz\\ &=\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}[(G_{S}(z)- \mathrm{Id})^{-1}]_{n,m}dz.\end{split} \tag{5.25}\] By relation (5.6) and by applying the Residue Theorem, we find that \[\begin{split}\langle\mathbb{P}_{\lambda}e_{m},e_{n}\rangle_{\mathbf{v }}&=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}[( \mathrm{Id}-G_{S}(z))^{-1}]_{n,m}dz\\ &=-\frac{1}{2\mathrm{i}\pi}\oint_{\Gamma(\Lambda_{\tau})}\Sigma_{ k,\tau}^{-1}(z)\mathrm{adj}(\mathrm{Id}-G_{S}(z))dz\\ &=\sum_{\lambda_{\tau}\in\Lambda_{\tau}}\mathrm{Res}_{z\to \lambda_{\tau}}\Sigma_{k,\tau}^{-1}(z)\mathrm{adj}(G_{S}(z)-\mathrm{Id}).\end{split} \tag{5.26}\] For a simple eigenvalue \(\lambda_{\tau}\), the function \(\Sigma_{k,\tau}^{-1}\) has a pole of order one at \(\lambda_{\tau}\) and \[\langle\mathbb{P}_{\lambda}e_{m},e_{n}\rangle_{\mathbf{v}}\sim\mathrm{adj}(G_ {S}(\lambda_{\tau})-\mathrm{Id}), \tag{5.27}\] where \(\sim\) indicates equality up to multiplication by a complex number, while at the shifted shear mode \(\lambda_{\mathrm{shear},\tau}\), the function \(\Sigma_{k,\tau}^{-1}\) has a pole of order two and \[\begin{split}\langle\mathbb{P}_{\lambda}e_{m},e_{n}\rangle_{ \mathbf{v}}&=\mathrm{Res}_{z\to\lambda_{\mathrm{shear},\tau}} \Sigma_{k,\tau}^{-1}(z)\mathrm{adj}(G_{S}(z)-\mathrm{Id})\\ &=\lim_{z\to\lambda_{\mathrm{shear},\tau}}\frac{d}{dz}\Big{[}(z- \lambda_{\mathrm{shear},\tau})^{2}\Sigma_{k,\tau}^{-1}(z)\mathrm{adj}(G_{S}(z )-\mathrm{Id})\Big{]}\\ &=\lim_{z\to\lambda_{\mathrm{shear},\tau}}\Big{[}(z-\lambda_{ \mathrm{shear},\tau})^{2}\Sigma_{k,\tau}^{-1}(z)\Big{]}^{\prime}\mathrm{adj}(G_ {S}(\lambda_{\mathrm{shear},\tau})-\mathrm{Id})\\ &\qquad+\lim_{z\to\lambda_{\mathrm{shear},\tau}}(z-\lambda_{ \mathrm{shear},\tau})^{2}\Sigma_{k,\tau}^{-1}(z)\Big{[}\mathrm{adj}(G_{S}(z)- \mathrm{Id})\Big{]}^{\prime}\\ &\sim\lim_{z\to\lambda_{\mathrm{shear},\tau}}\Big{[}\mathrm{ adj}(G_{S}(z)-\mathrm{Id})\Big{]}^{\prime},\end{split} \tag{5.28}\] where prime denotes the derivative \(d/dz\), and where we have used that \(\mathrm{adj}(A)=0\) if \(\dim\ker A\geq 2\). The formula \[\mathrm{adj}(AB)=\mathrm{adj}(B)\mathrm{adj}(A), \tag{5.29}\] in combination with (5.5) allows us to simplify (5.26) further: \[\mathrm{adj}(G_{S}(z)-\mathrm{Id})=\frac{1}{(\mathrm{i}\tau k)^{4}}\tilde{ \mathbf{Q}}_{\mathbf{k}}\mathrm{adj}(G(\zeta)-\mathrm{i}\tau k)\tilde{ \mathbf{Q}}_{\mathbf{k}}^{T}. 
\tag{5.30}\] For a simple isolated eigenvalue \(\lambda\), the kernel of \(G\) is one dimensional and hence, there exists a complex function \(\zeta\mapsto g(\zeta)\) and a complex vector function \(\zeta\mapsto\mathbf{a}(\zeta)\) such that \[\mathrm{adj}(G(\zeta)-\mathrm{i}\tau k)=g(\zeta)\mathbf{a}(\zeta)\otimes \mathbf{a}^{T}(\zeta), \tag{5.31}\] and consequently, \[\begin{split}\mathrm{adj}(G_{S}(z)-\mathrm{Id})&= \frac{1}{(\mathrm{i}\tau k)^{4}}\tilde{\mathbf{Q}}_{\mathbf{k}}\mathrm{adj}(G( \zeta)-\mathrm{i}\tau k)\tilde{\mathbf{Q}}_{\mathbf{k}}^{T}\\ &=\frac{g(\zeta)}{(\mathrm{i}\tau k)^{4}}\tilde{\mathbf{Q}}_{ \mathbf{k}}(\mathbf{a}(\zeta)\otimes\mathbf{a}^{T}(\zeta))\tilde{\mathbf{Q}}_ {\mathbf{k}}^{T}\\ &=\frac{g(\zeta)}{(\mathrm{i}\tau k)^{4}}[\tilde{\mathbf{Q}}_{ \mathbf{k}}\mathbf{a}(\zeta)\otimes(\tilde{\mathbf{Q}}_{\mathbf{k}}\mathbf{a} (\zeta))^{T}].\end{split} \tag{5.32}\] From (5.31) it suffices to know one row or column of \(\mathrm{adj}(G(\zeta)-\mathrm{i}\tau k)\) to deduce \(\mathrm{adj}(G_{S}(\zeta)-\mathrm{Id})\) completely. Indeed, the last column of \(\mathrm{adj}(G(\zeta)-\mathrm{i}\tau k)\) can be calculated easily and we set \[\mathbf{a}(\zeta)=\left(\begin{array}{c}\frac{\mathrm{i}\tau k}{\sqrt{6}}( \zeta+(\zeta^{2}-1)Z(\zeta))\\ \frac{1}{\sqrt{6}}(1+\mathrm{i}k\tau\zeta)(\zeta+(\zeta^{2}-1)Z(\zeta))\\ 0\\ 0\\ -1-k^{2}\tau^{2}-\mathrm{i}k\tau\zeta-(\mathrm{i}k\tau+\zeta+\mathrm{i}k\tau \zeta^{2})Z(\zeta)\end{array}\right). \tag{5.33}\] A lengthy but elementary calculation shows that, \[\frac{d}{d\zeta}\mathrm{adj}(G(\zeta)-\mathrm{i}\tau k)\bigg{|}_{\zeta= \mathrm{i}\frac{\tau\lambda_{\mathrm{shear}}+1}{k\tau}}=\begin{pmatrix}0&0&0& 0&0\\ 0&0&0&0&0\\ 0&0&A&0&0\\ 0&0&0&0&0\end{pmatrix}, \tag{5.34}\] for the non-zero complex number \[A=-\frac{i\lambda_{\mathrm{shear}}\left(k^{4}\tau^{4}+(\lambda_{\mathrm{ shear}}\tau)^{4}+(\tau\lambda_{\mathrm{shear}})^{3}+\lambda_{\mathrm{ shear}}\tau^{3}k^{2}\right)}{6k}, \tag{5.35}\] which gives, again, the two basis vectors (5.15) for the eigenspace associated with the shear mode. In conclusion, we see that the approach via Riesz projections is equivalent to direct calculations performed in the previous section. Indeed, dividing the vector in (5.33) by its first entry, we recover the form (5.17) of the basis vectors for the simple eigenvalues - the exact form of the fifth entry will be determined in the following section. Similarly, we have shown that evaluating the complex residue around the two-fold degenerate eigenvalue \(\lambda_{\mathrm{shear}}\) in (5.34) produces the same basis vectors as (5.15). In the following section, we will put these basis vectors together to give an explicit description of the coordinate change from spectral variables to macroscopic variables. ## 6. Hydrodynamic Equations from Spectral Closure In this section, we derive the evolution equations for the macroscopic variables (3.9) explicitly, based on the change of coordinates (4.17). First, we analyze the spectral temperature in more detail. As a next step, we describe the transport coefficients arising in the hydrodynamic equations qualitatively and show explicitly how they relate to the eigenvalues. ### Properties of the Spectral Temperature In this subsection, we derive an explicit expression of the spectral temperature and prove some symmetry properties. To this end, we could either evaluate the quotient (5.19) or just divide (5.33) by its first entry. 
Indeed, consistency of the two expressions can be checked easily and we proceed by dividing (5.33) by \(\frac{\mathrm{i}\tau k}{\sqrt{6}}(\zeta+(\zeta^{2}-1)Z(\zeta))\) to recover the \(k\)-aligned form (5.17) with explicit spectral temperature \[\begin{split}\theta(\lambda)&=\left.\frac{\mathrm{i}\sqrt{6}\left(k^{2}\tau^{2}+\mathrm{i}\zeta k\tau+Z(\zeta)\left(\zeta+\mathrm{i}\left(\zeta^{2}+1\right)k\tau\right)+1\right)}{k\tau\left(\zeta+(\zeta^{2}-1)Z(\zeta)\right)}\right|_{\zeta=\mathrm{i}\frac{\tau\lambda+1}{\tau k}}\\ &=\frac{\sqrt{6}\left(\left(k^{2}\tau^{2}-\tau\lambda(\tau\lambda+1)\right)Z\left(\frac{\mathrm{i}(\tau\lambda+1)}{k\tau}\right)-\mathrm{i}k\tau\left(k^{2}\tau^{2}-\tau\lambda\right)\right)}{(k^{2}\tau^{2}+(\tau\lambda+1)^{2})\,Z\left(\frac{\mathrm{i}(\tau\lambda+1)}{k\tau}\right)-\mathrm{i}k\tau(\tau\lambda+1)}.\end{split} \tag{6.1}\] Function (6.1) is an analytic function on the strip \(\mathcal{R}_{\frac{1}{\tau}}\), see Figure 6.1.

Figure 6.1. Argument plot of the spectral temperature for \(k=1.5\) and \(\tau=0.5\), showing two complex conjugate zeros from the numerator in (6.1).

Using (B.6) from Appendix B, we find that \[\begin{split}Z\left(\mathrm{i}\frac{\tau\lambda+1}{k\tau}\right)^{*}&=\left[\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[\mathrm{sign}(\Im\zeta)-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right]\bigg{|}_{\zeta=\mathrm{i}\frac{\tau\lambda+1}{k\tau}}\right]^{*}\\ &=-\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[-\mathrm{sign}(\Im\zeta)-\mathrm{erf}\left(\frac{\mathrm{i}\zeta}{\sqrt{2}}\right)\right]\bigg{|}_{\zeta=-\mathrm{i}\frac{\tau\lambda^{*}+1}{k\tau}}\\ &=-\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[-\mathrm{sign}(-\Im\zeta)-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right]\bigg{|}_{\zeta=\mathrm{i}\frac{\tau\lambda^{*}+1}{k\tau}}\\ &=-Z\left(\mathrm{i}\frac{\tau\lambda^{*}+1}{k\tau}\right),\end{split} \tag{6.2}\] which implies that \[\begin{split}\theta(\lambda)^{*}&=\left[\frac{\sqrt{6}\left(\left(k^{2}\tau^{2}-\tau\lambda(\tau\lambda+1)\right)Z\left(\frac{\mathrm{i}(\tau\lambda+1)}{k\tau}\right)-\mathrm{i}k\tau\left(k^{2}\tau^{2}-\tau\lambda\right)\right)}{\left(k^{2}\tau^{2}+(\tau\lambda+1)^{2}\right)Z\left(\frac{\mathrm{i}(\tau\lambda+1)}{k\tau}\right)-\mathrm{i}k\tau(\tau\lambda+1)}\right]^{*}\\ &=-\frac{\sqrt{6}\left(\left(k^{2}\tau^{2}-\tau\lambda^{*}(\tau\lambda^{*}+1)\right)Z\left(\frac{\mathrm{i}(\tau\lambda^{*}+1)}{k\tau}\right)+\mathrm{i}k\tau\left(k^{2}\tau^{2}-\tau\lambda^{*}\right)\right)}{-\left(k^{2}\tau^{2}+(\tau\lambda^{*}+1)^{2}\right)Z\left(\frac{\mathrm{i}(\tau\lambda^{*}+1)}{k\tau}\right)+\mathrm{i}k\tau(\tau\lambda^{*}+1)}\\ &=\theta(\lambda^{*}).\end{split} \tag{6.3}\] In particular, we conclude that \(\theta|_{\mathbb{R}}\subseteq\mathbb{R}\). This symmetry property will be useful in the following subsection, where we evaluate the spectral temperature on the simple eigenvalues to determine the change of coordinates (5.21) explicitly.
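The conjugation symmetry (6.3) can also be checked numerically. The following minimal sketch (an illustration added here, not part of the original derivation) evaluates (6.1) with \(Z\) expressed through the Faddeeva function of Appendix B; the test values of \(k\), \(\tau\) and \(\lambda\) are arbitrary points inside the strip.

```python
import numpy as np
from scipy.special import wofz

# Inside the strip Re(lambda) > -1/tau we have Im(zeta) > 0, so
# Z(zeta) = i*sqrt(pi/2)*w(zeta/sqrt(2)), with w the Faddeeva function (cf. Appendix B).
def Z(zeta):
    return 1j * np.sqrt(np.pi / 2) * wofz(zeta / np.sqrt(2))

def theta(lam, k, tau):
    # second expression in (6.1)
    zeta = 1j * (tau * lam + 1) / (k * tau)
    Zz = Z(zeta)
    num = np.sqrt(6) * ((k**2 * tau**2 - tau * lam * (tau * lam + 1)) * Zz
                        - 1j * k * tau * (k**2 * tau**2 - tau * lam))
    den = (k**2 * tau**2 + (tau * lam + 1)**2) * Zz - 1j * k * tau * (tau * lam + 1)
    return num / den

k, tau = 1.5, 0.5
lam = -0.3 + 0.8j                     # arbitrary test point inside the strip
print(theta(lam, k, tau).conjugate())
print(theta(np.conj(lam), k, tau))    # agrees with the line above, cf. (6.3)
```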
### Hydrodynamics in \(k\)-space Using the \(k\)-aligned basis vectors (5.17) and (5.15), the change of coordinates from spectral to macroscopic variables (5.21) takes the form, \[\mathbf{H}=\tilde{\mathbf{Q}}_{\mathbf{k}}\begin{pmatrix}1&1&1&0&0\\ \frac{\mathrm{i}}{k}\lambda_{\mathrm{diff}}&\frac{\mathrm{i}}{k}\lambda_{\mathrm{ ac}}&\frac{\mathrm{i}}{k}\lambda_{\mathrm{ac}}^{*}&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\\ \theta(\lambda_{\mathrm{diff}})&\theta(\lambda_{\mathrm{ac}})&\theta(\lambda_{ \mathrm{ac}}^{*})&0&0\end{pmatrix}. \tag{6.4}\] Its determinant is given by \[\begin{split}\det\mathbf{H}&=\frac{\mathrm{i}}{k}[(\lambda_{ \mathrm{diff}}-\lambda_{\mathrm{ac}}^{*})\theta(\lambda_{\mathrm{ac}})+(\lambda_ {\mathrm{ac}}-\lambda_{\mathrm{diff}})\theta(\lambda_{\mathrm{ac}}^{*})+( \lambda_{\mathrm{ac}}^{*}-\lambda_{\mathrm{ac}})\theta(\lambda_{\mathrm{diff}} )]\\ &=\frac{\mathrm{i}}{k}\Big{(}-2\mathrm{i}(\Im\lambda_{\mathrm{ ac}})\theta(\lambda_{\mathrm{diff}})+2\mathrm{i}\Im[(\lambda_{\mathrm{ac}}- \lambda_{\mathrm{diff}})\theta(\lambda_{\mathrm{ac}}^{*})]\Big{)}\\ &=\frac{2}{k}\Big{(}(\Im\lambda_{\mathrm{ac}})\theta(\lambda_{ \mathrm{diff}})-\Im[(\lambda_{\mathrm{ac}}-\lambda_{\mathrm{diff}})\theta( \lambda_{\mathrm{ac}}^{*})]\Big{)},\end{split} \tag{6.5}\] which defines a real-valued function of wave number. A plot \(k\mapsto\det\mathbf{H}(k)\) is shown in Figure 6.2, which already indicates the \(\mathbf{H}\) is invertible for all wave numbers \(0\leq k\leq k_{\mathrm{crit,min}}\). Using (4.18) and the invertibility of \(\mathbf{H}\), the dynamics for the macroscopic variables on the hydrodynamic manifold are then given by \[\frac{\partial\hat{\mathbf{h}}_{\mathrm{hydro}}}{\partial t}=\mathbf{H}\mathbf{ \Lambda}\mathbf{H}^{-1}\hat{\mathbf{h}}_{\mathrm{hydro}}. \tag{6.6}\] _Remark 6.1_.: We remark that the change of coordinates (6.4) on the hydrodynamic manifold does not involve any expression depending on the shear mode \(\lambda_{\mathrm{shear}}\) explicitly (see (5.15)). The _closure operator_, however, will inevitably involve terms that depend on \(\lambda_{\mathrm{shear}}\) as well. In particular, the hydrodynamics (6.6) depend on the shear mode through \(\mathbf{\Lambda}\). _Remark 6.2_.: We note that the hydrodynamics (6.10) could be extended beyond the minimal critical wave number by setting \(\lambda_{N}(k)=-1/\tau\) for \(k>k_{\mathrm{crit,N}}\). Even though, strictly speaking, the eigenvalue does not exist beyond that point, we can, nonetheless, define the hydrodynamic equations by requiring the decay rate of a mode to coincide with the overall minimal decay rate \(-1/\tau\). 
A cumbersome but elementary calculation shows that \[\mathbf{H}\mathbf{\Lambda}\mathbf{H}^{-1}=\tilde{\mathbf{Q}}_{\mathbf{k}}\begin{pmatrix}0&-\mathrm{i}k&0&0&0\\ C_{1}&C_{2}&0&0&C_{3}\\ 0&0&\lambda_{\mathrm{shear}}&0&0\\ 0&0&0&\lambda_{\mathrm{shear}}&0\\ C_{4}&C_{5}&0&0&C_{6}\end{pmatrix}\tilde{\mathbf{Q}}_{\mathbf{k}}^{T}, \tag{6.7}\] for the following cyclic quantities \[\begin{split}C_{1}&=\frac{1}{k^{2}\det\mathbf{H}}\sum_{(\lambda_{1},\lambda_{2},\lambda_{3})\in\circlearrowright\boldsymbol{\lambda}_{\text{simple}}}\lambda_{1}\lambda_{3}(\lambda_{1}-\lambda_{3})\theta(\lambda_{2}),\\ C_{2}&=\frac{\mathrm{i}}{k\det\mathbf{H}}\sum_{(\lambda_{1},\lambda_{2},\lambda_{3})\in\circlearrowright\boldsymbol{\lambda}_{\text{simple}}}(\lambda_{1}^{2}-\lambda_{3}^{2})\theta(\lambda_{2}),\\ C_{3}&=-\frac{1}{k^{2}\det\mathbf{H}}\prod_{(\lambda_{1},\lambda_{2},\lambda_{3})\in\circlearrowright\boldsymbol{\lambda}_{\text{simple}}}(\lambda_{1}-\lambda_{2}),\\ C_{4}&=\frac{\mathrm{i}}{k\det\mathbf{H}}\sum_{(\lambda_{1},\lambda_{2},\lambda_{3})\in\circlearrowright\boldsymbol{\lambda}_{\text{simple}}}\lambda_{2}(\lambda_{1}-\lambda_{3})\theta(\lambda_{1})\theta(\lambda_{3}),\\ C_{5}&=\frac{1}{\det\mathbf{H}}\sum_{(\lambda_{1},\lambda_{2},\lambda_{3})\in\circlearrowright\boldsymbol{\lambda}_{\text{simple}}}(\lambda_{3}-\lambda_{1})\theta(\lambda_{1})\theta(\lambda_{3}),\\ C_{6}&=-\frac{\mathrm{i}}{k\det\mathbf{H}}\sum_{(\lambda_{1},\lambda_{2},\lambda_{3})\in\circlearrowright\boldsymbol{\lambda}_{\text{simple}}}\lambda_{1}\theta(\lambda_{1})(\lambda_{2}-\lambda_{3}),\end{split} \tag{6.8}\] where we have used the notation for cyclical permutations outlined in (2.15). In Appendix A, an explicit expansion of the quantities in (6.8) is performed. We can show that \(C_{1},C_{3}\) and \(C_{5}\) are purely imaginary numbers, whereas \(C_{2},C_{4}\) and \(C_{6}\) are purely real numbers. Thus we set \[C_{1}=\mathrm{i}c_{1},\quad C_{2}=c_{2},\quad C_{3}=\mathrm{i}c_{3},\quad C_{4}=c_{4},\quad C_{5}=\mathrm{i}c_{5},\quad C_{6}=c_{6}, \tag{6.9}\] for \(c_{j}\in\mathbb{R}\), \(1\leq j\leq 6\), and the full hydrodynamics become \[\frac{\partial\hat{\mathbf{h}}_{\text{hydro}}}{\partial t}=\tilde{\mathbf{Q}}_{\mathbf{k}}\begin{pmatrix}0&-\mathrm{i}k&0&0&0\\ \mathrm{i}c_{1}&c_{2}&0&0&\mathrm{i}c_{3}\\ 0&0&\lambda_{\text{shear}}&0&0\\ 0&0&0&\lambda_{\text{shear}}&0\\ c_{4}&\mathrm{i}c_{5}&0&0&c_{6}\end{pmatrix}\tilde{\mathbf{Q}}_{\mathbf{k}}^{T}\hat{\mathbf{h}}_{\text{hydro}}. \tag{6.10}\] Figure 6.3 depicts the coefficients (6.9) as functions of the wave number (compared to the Navier-Stokes-Fourier approximation, see Section 7). In summary, the hydrodynamic equations in \(k\)-space are explicitly related to the spectral problem for the linear part through the six transport coefficients \(\{c_{j}\}_{1\leq j\leq 6}\).

### Hydrodynamics in real space

Let us transform equation (6.10) back to physical coordinates. To this end, we note that \[\begin{split}\mathbf{Q}_{\mathbf{k}}\text{diag}(C_{2},\lambda_{\text{shear}},\lambda_{\text{shear}})\mathbf{Q}_{\mathbf{k}}^{T}&=\lambda_{\text{shear}}\text{Id}_{3\times 3}+\mathbf{Q}_{\mathbf{k}}\text{diag}(C_{2}-\lambda_{\text{shear}},0,0)\mathbf{Q}_{\mathbf{k}}^{T}\\ &=\lambda_{\text{shear}}\text{Id}_{3\times 3}+\frac{1}{k^{2}}(C_{2}-\lambda_{\text{shear}})\mathbf{k}\otimes\mathbf{k}^{T},\end{split} \tag{6.11}\] where we have used the definition of \(\mathbf{Q}_{\mathbf{k}}\) in (2.4).
Consequently, in physical space, the right-hand side of equation (6.10) translates to a linear integral operator, which can be written as \[\frac{\partial}{\partial t}\left(\begin{array}{c}\rho\\ \mathbf{u}\\ T\end{array}\right)=\left(\begin{array}{c}-\nabla\cdot\mathbf{u}\\ I_{1}(\Delta)\nabla\rho+I_{\mathrm{shear}}(\Delta)\mathbf{u}+I_{2}(\Delta)\nabla(\nabla\cdot\mathbf{u})+I_{3}(\Delta)\nabla T\\ I_{4}(\Delta)\rho+I_{5}(\Delta)\nabla\cdot\mathbf{u}+I_{6}(\Delta)T\end{array}\right), \tag{6.12}\] where the integral operators \(\{I_{j}\}_{1\leq j\leq 6}\) and \(I_{\mathrm{shear}}\) are related to (6.8) via Fourier series, multiplication/division by \(\sqrt{\frac{3}{2}}\) and the rotation matrix (2.5). The differential operators are to be understood with respect to \(\mathbf{x}\), and \(\Delta=\nabla\cdot\nabla\) is the Laplacian. We have also used the symmetry properties \(k\mapsto-k\) from the explicit form in Appendix A and the symmetry of the eigenvalues.

Figure 6.3. The transport coefficients calculated in Appendix A in dependence on wave number (\(0\leq k\leq k_{\mathrm{crit,min}}\)) for \(\tau=0.25\) (solid black line) compared to their leading-order approximation at the origin (Navier–Stokes/Euler, dashed red line).

_Remark 6.3_.: Let us summarize the derivation of the non-local, exact hydrodynamics (6.12). The construction begins with the evaluation of the eigenvalues by finding zeros of the spectral function (4.2). Since they are given explicitly as the solutions of a transcendental equation, we can analyse them and use them in the further analysis. Indeed, up to the minimal critical wave number, there always exists a five-dimensional invariant plane spanned by the eigenvectors of \(\mathcal{L}_{\mathbf{k}}\). Using these eigenfunctions, we can construct the spectral closure (4.20) and define the coordinate change (5.21) from spectral variables to macroscopic variables. Once this is achieved, we can write down the exact hydrodynamic equations on the hydrodynamic manifold (6.10), which will attract all generic trajectories exponentially fast. The properties of the transport coefficients (6.9) can then be analysed in detail. _Remark 6.4_.: Since we posed the governing equations (3.1) on the three-dimensional torus, only finitely-many wave numbers contribute in the hydrodynamic equation (6.10). Consequently, the action of the integral operators \(\{I_{j}\}_{1\leq j\leq 6}\) and \(I_{\mathrm{shear}}\) can be written as the convolution with an integral kernel of the form \[K_{j}(\mathbf{x})=\sum_{k=0}^{k_{\mathrm{crit},\mathrm{min}}}\hat{K}_{j}(k)e^{\mathrm{i}\mathbf{x}\cdot\mathbf{k}}, \tag{6.13}\] where \(1\leq j\leq 6\) or \(j=\mathrm{shear}\) and for coefficients \(\hat{K}_{j}(k)\in\mathbb{R}\). Because of the symmetry properties of the eigenvalues, we actually have that \(\hat{K}_{j}(k)=\hat{K}_{j}(k^{2})\), corresponding to the dependence of \(I_{j}\) on the Laplacian \(\Delta\) only. An effective approximation of the coefficient functions \(\hat{K}_{j}\) will be discussed in a forthcoming paper. The quantities \(c_{2},\lambda_{\mathrm{shear}}\) are viscosity terms, while the term \(c_{6}\) can be regarded as a non-local (wave-number dependent) version of the thermal diffusivity.
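Returning to the \(k\)-space coefficients, the parity stated after (6.8) only relies on \(\lambda_{\mathrm{ac}}^{*}\) being the complex conjugate of \(\lambda_{\mathrm{ac}}\), on \(\theta(\lambda^{*})=\theta(\lambda)^{*}\), and on \(\lambda_{\mathrm{diff}}\), \(\theta(\lambda_{\mathrm{diff}})\) and \(\det\mathbf{H}\) being real. A minimal numerical sketch of this fact is given below; the eigenvalue and temperature values are placeholders chosen only to respect these symmetries, not actual BGK data.

```python
import numpy as np

# Placeholder "simple" eigenvalues and spectral temperatures respecting the
# symmetries used in Appendix A (values are illustrative, not BGK data).
k = 1.0
lams = np.array([-0.12, -0.20 + 0.95j, -0.20 - 0.95j])    # lam_diff, lam_ac, lam_ac*
thetas = np.array([-1.40 + 0.0j, -0.90 + 0.30j, -0.90 - 0.30j])

cyc = [(i, (i + 1) % 3, (i + 2) % 3) for i in range(3)]   # cyclic permutations, cf. (2.15)

detH = (1j / k) * sum((lams[a] - lams[c]) * thetas[b] for a, b, c in cyc)        # (6.5)
C1 = sum(lams[a] * lams[c] * (lams[a] - lams[c]) * thetas[b] for a, b, c in cyc) / (k**2 * detH)
C2 = 1j * sum((lams[a]**2 - lams[c]**2) * thetas[b] for a, b, c in cyc) / (k * detH)
C3 = -np.prod([lams[a] - lams[b] for a, b, c in cyc]) / (k**2 * detH)
C4 = 1j * sum(lams[b] * (lams[a] - lams[c]) * thetas[a] * thetas[c] for a, b, c in cyc) / (k * detH)
C5 = sum((lams[c] - lams[a]) * thetas[a] * thetas[c] for a, b, c in cyc) / detH
C6 = -1j * sum(lams[a] * thetas[a] * (lams[b] - lams[c]) for a, b, c in cyc) / (k * detH)

print(abs(detH.imag))                                 # ~ 0: det H is real, cf. (6.5)
print([abs(C.real) for C in (C1, C3, C5)])            # ~ 0: purely imaginary
print([abs(C.imag) for C in (C2, C4, C6)])            # ~ 0: purely real, cf. (6.9)
```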
We may re-introduce units by scaling according to \[\mathbf{x}\mapsto L^{-1}\mathbf{x},\quad k\mapsto Lk,\quad\mathbf{v}\mapsto v _{\mathrm{thermal}}^{-1}\mathbf{v},\quad\tau\mapsto t_{\mathrm{thermal}}^{-1} \tau_{\mathrm{relax}}, \tag{6.14}\] for a specific length scale \(L\), the _thermal velocity_\(v_{\mathrm{thermal}}\), the _thermal time_\(t_{\mathrm{thermal}}\) and the _relaxation time_\(\tau_{\mathrm{relax}}\). Given the Boltzmann constant \(k_{\mathrm{B}}\approx 10^{-23}m^{2}kgs^{-2}K^{-1}\), a specific particle mass \(m\) and a reference temperature \(T_{0}\), the thermal quantities are defined as \[t_{\mathrm{thermal}}=L\sqrt{\frac{m}{k_{\mathrm{B}}T_{0}}},\quad v_{\mathrm{ thermal}}=\sqrt{\frac{k_{\mathrm{B}}}{m}T_{0}}. \tag{6.15}\] We also define the _mean-free path length_ as \[l_{\mathrm{mfp}}=\tau L=\tau_{\mathrm{relax}}v_{\mathrm{thermal}}. \tag{6.16}\] The macroscopic variables re-scale according to \[\rho\mapsto\rho_{0}^{-1}\rho,\quad\mathbf{u}\mapsto v_{\mathrm{thermal}}^{-1} \mathbf{u},\quad T\mapsto T_{0}^{-1}T, \tag{6.17}\] where \(\rho_{0}\) is a reference density and \(T_{0}\) is a reference temperature. We note that the transport coefficients (6.9) can be written as \[\begin{split}& c_{1}(k,\tau)=k\tilde{c}_{1}[(\tau k)^{2}],\quad c _{2}(k,\tau)=\tau^{-1}\tilde{c}_{2}[(\tau k)^{2}],\quad c_{3}(k,\tau)=k\tilde{c }_{3}[(\tau k)^{2}],\\ & c_{4}(k,\tau)=\tau^{-1}\tilde{c}_{4}[(\tau k)^{2}],\quad c_{5} (k,\tau)=k\tilde{c}_{5}[(\tau k)^{2}],\quad c_{6}(k,\tau)=\tau^{-1}\tilde{c}_{ 6}[(\tau k)^{2}],\end{split} \tag{6.18}\] see also the asymptotic expansions (7.4) in the next section. Finally, the hydrodynamic equations (6.10) can be cast in the form \[\begin{split}\frac{\partial\rho}{\partial t}&=- \rho_{0}\nabla\cdot\mathbf{u},\\ \frac{\partial\mathbf{u}}{\partial t}&=\frac{k_{ \mathrm{B}}T_{0}}{m\rho_{0}}\mathcal{I}_{1}[l_{\mathrm{mfp}}^{2}\Delta]\nabla \rho+\frac{1}{\tau_{\mathrm{relax}}}\mathcal{I}_{\mathrm{shear}}(l_{ \mathrm{mfp}}^{2}\Delta)\mathbf{u}+\frac{l_{\mathrm{mfp}}^{2}}{\tau_{ \mathrm{relax}}}\mathcal{I}_{2}(l_{\mathrm{mfp}}^{2}\Delta)\nabla(\nabla \cdot\mathbf{u})+\frac{k_{\mathrm{B}}}{m\tau_{\mathrm{relax}}}\mathcal{I}_{3} (l_{\mathrm{mfp}}^{2}\Delta)\nabla T,\\ \frac{\partial T}{\partial t}&=\frac{T_{0}}{\rho_{0} \tau_{\mathrm{relax}}}\mathcal{I}_{4}[l_{\mathrm{mfp}}^{2}\Delta]\rho+T_{0} \mathcal{I}_{5}[l_{\mathrm{mfp}}^{2}\Delta](\nabla\cdot\mathbf{u})+\frac{1}{ \tau_{\mathrm{relax}}}\mathcal{I}_{6}[l_{\mathrm{mfp}}^{2}\Delta]T,\end{split} \tag{6.19}\] where the integral operators \(\{\mathcal{I}_{j}\}_{1\leq j\leq 6}\) and \(\mathcal{I}_{\mathrm{shear}}\) are defined through Fourier series and (6.18). ## 7. Comparison to Existing Fluid Models: Small Wave-Number Limit In this section, we compare the exact hydrodynamic system (6.10) to fluid models derived from the Chapmann-Enskog expansion. Because of the coupling between the wave number and the relaxation time through eigenvalues and the k-aligned spectral basis (6.4), the terms of the Chapmann-Enskog series correspond to an expansion in wave number (4.5). We write \[\lambda(k)=\sum_{n=1}^{\infty}\lambda_{n}k^{n}, \tag{7.1}\] for any of the four modal branches, for the Taylor expansion of a mode in terms of wave number. 
Invoking (7.1) into the the spectral temperature (6.1) and using the asymptotic expansion (B.16) (in the limit \(k\to 0\)), we can expand \[\begin{split}\theta(\lambda(k))&\sim\frac{\sqrt{6} \left(\left(k^{2}\tau^{2}-\tau\lambda(\tau\lambda+1)\right)Z\left(\frac{\mathrm{ i}(\tau\lambda+1)}{k\tau}\right)-\mathrm{i}k\tau\left(k^{2}\tau^{2}-\tau \lambda\right)\right)}{\left(k^{2}\tau^{2}+(\tau\lambda+1)^{2}\right)Z\left( \frac{\mathrm{i}(\tau\lambda+1)}{k\tau}\right)-\mathrm{i}k\tau(\tau\lambda+1) }\\ &\sim\frac{\sqrt{6}\left(\left(k^{2}\tau^{2}-\tau\lambda(\tau \lambda+1)\right)\left(-\frac{k\tau}{\mathrm{i}(\tau\lambda+1)}-\frac{(k\tau) ^{3}}{[\mathrm{i}(\tau\lambda+1)]^{3}}+\mathcal{O}(k^{5})\right)-\mathrm{i}k \tau\left(k^{2}\tau^{2}-\tau\lambda\right)\right)}{\left(k^{2}\tau^{2}+(\tau \lambda+1)^{2}\right)\left(-\frac{k\tau}{\mathrm{i}(\tau\lambda+1)}-\frac{(k \tau)^{3}}{[\mathrm{i}(\tau\lambda+1)]^{3}}+\mathcal{O}(k^{5})\right)- \mathrm{i}k\tau(\tau\lambda+1)}\\ &\sim-\sqrt{\frac{3}{2}}\left(\lambda_{1}^{2}+1\right)-\sqrt{ \frac{3}{2}}k\lambda_{1}\left(\tau\lambda_{1}^{2}+2\lambda_{2}+3\tau\right)\\ &\quad-\sqrt{\frac{3}{2}}k^{2}\left(3\tau\lambda_{1}^{2}\left( \lambda_{2}+\tau\right)+2\lambda_{1}\lambda_{3}+3\tau\lambda_{2}+\lambda_{2}^{ 2}+3\tau^{2}\right)\\ &\quad+\sqrt{\frac{3}{2}}k^{3}\left(3\tau^{3}\lambda_{1}^{3}+3 \tau\lambda_{1}\left(-2\tau\lambda_{2}-\lambda_{2}^{2}+\tau^{2}\right)-3\tau \lambda_{1}^{2}\lambda_{3}-\lambda_{3}\left(2\lambda_{2}+3\tau\right)\right) \\ &\quad+\mathcal{O}(k^{4}),\end{split} \tag{7.2}\] for \(k\) sufficiently small. Plugging (7.2) together with (4.5) into (6.4) leads to the following asymptotic expansions for the closure coefficients (6.8): \[\begin{split} C_{1}&\sim\frac{ik\left(357k^{6}\tau^{ 6}+991k^{4}\tau^{4}-1620k^{2}\tau^{2}-900\right)}{60\left(7k^{2}\tau^{2}+15 \right)}+\text{h.o.t.},\\ C_{2}&\sim\frac{k^{2}\tau\left(203k^{4}\tau^{4}+520 k^{2}\tau^{2}-600\right)}{30\left(7k^{2}\tau^{2}+15\right)}+\text{h.o.t.},\\ C_{3}&\sim-\frac{ik\left(7k^{2}\tau^{2}+30\right)^{ 2}}{60\left(7k^{2}\tau^{2}+15\right)}+\text{h.o.t.},\\ C_{4}&\sim\frac{k^{4}\tau^{3}\left(4437k^{4}\tau^{4}- 89k^{2}\tau^{2}-3000\right)}{90\left(7k^{2}\tau^{2}+15\right)}+\text{h.o.t.}, \\ C_{5}&\sim-\frac{ik\left(2523k^{6}\tau^{6}+1670k^{4} \tau^{4}+360k^{2}\tau^{2}+450\right)}{45\left(7k^{2}\tau^{2}+15\right)}+\text{ h.o.t.},\\ C_{6}&\sim-\frac{k^{2}\tau\left(203k^{4}\tau^{4}+1 150k^{2}\tau^{2}+750\right)}{30\left(7k^{2}\tau^{2}+15\right)}+\text{h.o.t.}, \end{split} \tag{7.3}\] for small \(k\), here h.o.t. indicates terms of higher order in \(k\), either polynomials or rational functions of \(k\). Expanding the quotients in (7.3) in Taylor series around zero, we obtain \[\begin{split}& C_{1}\sim-\mathrm{i}k-\mathrm{i}\frac{4}{3}\tau^{2}k ^{3}+\mathcal{O}(k^{5}),\\ & C_{2}\sim-\frac{4}{3}k^{2}\tau+\frac{16}{9}\tau^{3}k^{4}+ \mathcal{O}(k^{6}),\\ & C_{3}\sim-\sqrt{\frac{2}{3}}\mathrm{i}k+\mathcal{O}(k^{5}),\\ & C_{4}\sim-\sqrt{\frac{2}{3}}\frac{10}{3}\tau^{3}k^{4}+\mathcal{ O}(k^{6}),\\ & C_{5}\sim-\mathrm{i}\sqrt{\frac{2}{3}}k-\mathrm{i}\frac{1}{3} \sqrt{\frac{2}{3}}\tau^{2}k^{3}+\mathcal{O}(k^{5}),\\ & C_{6}\sim-\frac{5}{3}\tau k^{2}-\frac{16}{9}\tau^{3}k^{4}+ \mathcal{O}(k^{6}),\end{split} \tag{7.4}\] for small \(k\). 
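The Taylor coefficients in (7.4) can be reproduced directly from the leading rational approximations in (7.3). A small symbolic sketch (added here for illustration, using the sympy library), shown for \(C_{2}\) and \(C_{6}\):

```python
import sympy as sp

k, tau = sp.symbols('k tau', positive=True)

# Leading rational approximations of C_2 and C_6 from (7.3)
C2 = k**2 * tau * (203 * k**4 * tau**4 + 520 * k**2 * tau**2 - 600) / (30 * (7 * k**2 * tau**2 + 15))
C6 = -k**2 * tau * (203 * k**4 * tau**4 + 1150 * k**2 * tau**2 + 750) / (30 * (7 * k**2 * tau**2 + 15))

print(sp.series(C2, k, 0, 6))   # -4/3*tau*k**2 + 16/9*tau**3*k**4 + O(k**6), cf. (7.4)
print(sp.series(C6, k, 0, 6))   # -5/3*tau*k**2 - 16/9*tau**3*k**4 + O(k**6), cf. (7.4)
```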
At first order in \(k\), (7.4) shows that we have recovered the Euler equation, \[\frac{\partial}{\partial t}\left(\begin{array}{c}\hat{\rho}\\ \hat{\mathbf{u}}\\ \hat{T}\end{array}\right)_{\text{Euler}}=\tilde{\mathbf{Q}}_{\mathbf{k}} \left(\begin{matrix}0&-\mathrm{i}k&0&0&0\\ -\mathrm{i}k&0&0&0&-\mathrm{i}k\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&-\mathrm{i}\frac{2}{3}k&0&0&0\end{matrix}\right)\tilde{\mathbf{Q}}_{\mathbf{ k}}^{T}\left(\begin{array}{c}\hat{\rho}\\ \hat{\mathbf{u}}\\ \hat{T}\end{array}\right) \tag{7.5}\] while at second order in \(k\), we recover the Navier-Stokes equation in wave space: \[\frac{\partial}{\partial t}\left(\begin{array}{c}\hat{\rho}\\ \hat{\mathbf{u}}\\ \hat{T}\end{array}\right)_{\text{Navier-Stokes}}=\tilde{\mathbf{Q}}_{\mathbf{ k}}\left(\begin{matrix}0&-\mathrm{i}k&0&0&0\\ -\mathrm{i}k&-\frac{4}{3}\tau k^{2}&0&0&-\mathrm{i}k\\ 0&0&-\tau k^{2}&0&0\\ 0&0&0&-\tau k^{2}&0\\ 0&-\mathrm{i}\frac{2}{3}k&0&0&-\frac{5}{3}\tau k^{2}\end{matrix}\right)\tilde {\mathbf{Q}}_{\mathbf{k}}^{T}\left(\begin{array}{c}\hat{\rho}\\ \hat{\mathbf{u}}\\ \hat{T}\end{array}\right) \tag{7.6}\] Transforming back according to (2.4) and summing to Fourier series gives the well-known expressions \[\frac{\partial}{\partial t}\left(\begin{array}{c}\rho\\ \mathbf{u}\\ T\end{array}\right)_{\text{Euler}}=\left(\begin{array}{c}-\nabla\cdot \mathbf{u}\\ -\nabla(\rho+T)\\ -\frac{2}{3}\nabla\cdot\mathbf{u}\end{array}\right) \tag{7.7}\] as well as \[\frac{\partial}{\partial t}\left(\begin{array}{c}\rho\\ \mathbf{u}\\ T\end{array}\right)_{\text{Navier-Stokes}}=\left(\begin{array}{c}-\nabla \cdot\mathbf{u}\\ -\nabla(\rho+T)+\tau\Delta\mathbf{u}+\frac{\tau}{3}\nabla(\nabla\cdot\mathbf{u })\\ -\frac{2}{3}\nabla\cdot\mathbf{u}+\frac{5}{3}\tau\Delta T\end{array}\right) \tag{7.8}\] Let us comment on the third-order approximation of (6.10) in \(k\), the Burnett equation: \[\frac{\partial}{\partial t}\left(\begin{array}{c}\hat{\rho}\\ \hat{\mathbf{u}}\\ \hat{T}\end{array}\right)_{\text{Burnett}}=\tilde{\mathbf{Q}}_{\mathbf{k}} \begin{pmatrix}0&-\mathrm{i}k&0&0&0\\ -\mathrm{i}k-\mathrm{i}\frac{4}{3}\tau^{2}k^{3}&-\frac{4}{3}\tau k^{2}&0&- \mathrm{i}k\\ 0&0&-\tau k^{2}&0&0\\ 0&0&0&-\tau k^{2}&0\\ 0&-\mathrm{i}\frac{2}{3}k-\mathrm{i}\frac{2}{9}\tau^{2}k^{3}&0&0&-\frac{5}{3} \tau k^{2}\end{pmatrix}\tilde{\mathbf{Q}}_{\mathbf{k}}^{T}\left(\begin{array} []{c}\hat{\rho}\\ \hat{\mathbf{u}}\\ \hat{T}\end{array}\right) \tag{7.9}\] or, equivalently, in physical space \[\frac{\partial}{\partial t}\left(\begin{array}{c}\rho\\ \mathbf{u}\\ T\end{array}\right)_{\text{Burnett}}=\left(\begin{array}{c}-\nabla\cdot \mathbf{u}\\ -\nabla(\rho+T)+\frac{4}{3}\tau^{2}\nabla\Delta\rho+\tau\Delta\mathbf{u}+ \frac{\tau}{3}\nabla(\nabla\cdot\mathbf{u})\\ -\frac{2}{3}\nabla\cdot\mathbf{u}-\frac{2}{9}\tau^{2}\Delta\nabla\cdot\mathbf{ u}+\frac{5}{3}\tau\Delta T\end{array}\right) \tag{7.10}\] We compare equation (7.10) with p. 
105 equation (34) in [43], the non-dimensional linearized Burnett equation for a more general ellipsoid-statistical BGK (ES-BGK) kinetic model of Holway [23], \[\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{u}=0, \tag{7.11}\] \[\frac{\partial\mathbf{u}}{\partial t}+\nabla(\rho+T)-\Delta\mathbf{u}+\frac{1}{3}\nabla(\nabla\cdot\mathbf{u})-\frac{4}{3}\nabla\Delta\rho-\frac{3b}{3}\nabla\Delta T=0,\] \[\frac{3}{2}\frac{\partial T}{\partial t}+\nabla\cdot\mathbf{u}-\frac{5(1-b)}{2}\Delta T-\frac{(1-b)(1-5b)}{3}\Delta\nabla\cdot\mathbf{u}=0,\] where the parameter \(b\) is related to the Prandtl number via \(\Pr=\frac{1}{1-b}\). Since the ES-BGK model reduces to the BGK model (for which \(\Pr=1\)) at \(b=0\), we find that equation (7.11) is exactly the same as system (7.10) for \(b=0\) and \(\tau=1\). _Remark 7.1_.: As a special feature of the BGK equation, we find that the coefficient \(k\mapsto C_{3}(k)\) (7.4) happens to be very close to the Euler approximation, deviating only at order \(k^{5}\). This explains why there is no contribution of the temperature in the non-classical terms of the Burnett approximation, which also implies that the Burnett approximation is globally stable. Indeed, the cubic terms enter only through \(C_{1}\) and \(C_{5}\), which are purely imaginary, and thus only contribute to the higher-order wave motion, not altering the amplitude dynamics. This, of course, is merely a coincidence for the BGK system, since the Shakhov, Maxwell and hard-sphere models are not expected to share that property, leading to the well-known Burnett instability. We conjecture that this is due to the temperature coupling back to the velocity dynamics with opposite sign, producing an unstable term. _Remark 7.2_.: Since the equations (6.10) are linear and derived as the invariant dynamics of a globally well-posed system (3.1) (which is linear itself), the exact hydrodynamics are obviously hyperbolic as a Cauchy problem. The decay rates of solutions on the hydrodynamic manifold, however, will be _weaker_ than the decay rate of a general solution, as the real parts of the hydrodynamic eigenvalues are negative but larger than the overall relaxation rate \(-1/\tau\).

## 8. Conclusions and Further Perspectives

We have given an explicit and complete description of the full, non-local hydrodynamic closure of the BGK equation. Based on an explicit description of the spectrum of the linear BGK operator [28], we obtain an invariant, slow manifold as the space spanned by the hydrodynamic eigenvectors. On this manifold, we are able to explicitly define a closure operator relating the spectral dynamics to the dynamics of the macroscopic variables (density, velocity and temperature) through a linear change of coordinates. The full non-local dynamics are compared to the Euler, the Navier-Stokes-Fourier and the Burnett equations (which may be obtained through the Chapman-Enskog expansion) and full consistency is demonstrated in the small wave-number regime. The explicit form of the transport coefficients in (6.10) allows us to derive effective approximations in frequency space through polynomials matching both derivatives of the eigenvalues close to zero and the essential spectrum, in combination with cut-off functions in wave number. These effective approximations will be non-local as well (involving the convolution with a Dirichlet-type integral kernel), while considerably simplifying the form of the transport coefficients, thus rendering them an interesting candidate for linear gaseous hydrodynamics across all Knudsen numbers.
## Acknowledgement This work was supported by European Research Council (ERC) Advanced Grant 834763-PonD. Computational resources at the Swiss National Super Computing Center CSCS were provided under the grant s1066. ## Declaration of Interest The authors declare that there is no conflict of interests. ## Appendix A Explicit Form of the Closure Coefficients In this section we expand the cyclical expressions for the closure coefficients (6.8) explicitly. This allows us to infer the purely imaginary/real nature in (6.9). We expand the first relation in (6.8), \[\begin{split} C_{1}&=\frac{1}{k^{2}\det\mathbf{H}} \Big{[}\lambda_{\text{diff}}\lambda_{\text{ac}}^{*}(\lambda_{\text{diff}}- \lambda_{\text{ac}}^{*})\theta(\lambda_{\text{ac}})+\lambda_{\text{ac}} \lambda_{\text{diff}}(\lambda_{\text{ac}}-\lambda_{\text{diff}})\theta( \lambda_{\text{ac}}^{*})+\lambda_{\text{ac}}^{*}\lambda_{\text{ac}}(\lambda_{ \text{ac}}^{*}-\lambda_{\text{ac}})\theta(\lambda_{\text{diff}})\Big{]}\\ &=\frac{2\text{i}}{k^{2}\det\mathbf{H}}\Big{[}\lambda_{\text{ diff}}\Im[\lambda_{\text{ac}}^{*}(\lambda_{\text{diff}}-\lambda_{\text{ac}}) \theta(\lambda_{\text{ac}})]-|\lambda_{\text{ac}}|^{2}(\Im\lambda_{\text{ac}} )\theta(\lambda_{\text{diff}})\Big{]},\end{split}\] (A.1) to find that \(C_{1}\) is purely imaginary. We expand the second relation in (6.8), \[\begin{split} C_{2}&=\frac{\text{i}}{k\det\mathbf{H }}\Big{[}(\lambda_{\text{diff}}^{2}-(\lambda_{\text{ac}}^{*})^{2})\theta( \lambda_{\text{ac}})+(\lambda_{\text{ac}}^{2}-\lambda_{\text{diff}}^{2}) \theta(\lambda_{\text{ac}}^{*})+((\lambda_{\text{ac}}^{*})^{2}-\lambda_{\text {ac}}^{2})\theta(\lambda_{\text{diff}})\Big{]}\\ &=-\frac{2}{k\det\mathbf{H}}\Big{[}\Im[(\lambda_{\text{diff}}^{2} -(\lambda_{\text{ac}}^{*})^{2})\theta(\lambda_{\text{ac}})]-2(\Re\lambda_{ \text{ac}})(\Im\lambda_{\text{ac}})\theta(\lambda_{\text{diff}})\Big{]},\end{split}\] (A.2) to find that \(C_{2}\) is purely real. We expand the third relation in (6.8), \[\begin{split} C_{3}&=-\frac{1}{k^{2}\det\mathbf{H}}( \lambda_{\text{diff}}-\lambda_{\text{ac}})(\lambda_{\text{ac}}-\lambda_{\text{ ac}}^{*})(\lambda_{\text{ac}}^{*}-\lambda_{\text{diff}})\\ &=\frac{2\text{i}}{k^{2}\det\mathbf{H}}|\lambda_{\text{diff}}- \lambda_{\text{ac}}|^{2}(\Im\lambda_{\text{ac}}),\end{split}\] (A.3) to find that \(C_{3}\) is purely imaginary. We expand the fourth relation in (6.8), \[\begin{split} C_{4}&=\frac{\text{i}}{k\det\mathbf{H }}\Big{[}\lambda_{\text{ac}}(\lambda_{\text{diff}}-\lambda_{\text{ac}}^{*}) \theta(\lambda_{\text{ac}}^{*})\theta(\lambda_{\text{diff}})+\lambda_{\text{ diff}}(\lambda_{\text{ac}}^{*}-\lambda_{\text{ac}})\theta(\lambda_{\text{ac}}^{*}) \theta(\lambda_{\text{ac}})\\ &\qquad+\lambda_{\text{ac}}^{*}(\lambda_{\text{ac}}-\lambda_{ \text{diff}})\theta(\lambda_{\text{ac}})\theta(\lambda_{\text{diff}})\Big{]}\\ &=\frac{2}{k\det\mathbf{H}}\Big{[}\Im[\lambda_{\text{ac}}(\lambda _{\text{diff}}-\lambda_{\text{ac}})\theta(\lambda_{\text{ac}})\theta(\lambda_{ \text{diff}})]+\lambda_{\text{diff}}(\Im\lambda_{\text{ac}})|\theta(\lambda_{ \text{ac}})|^{2}\Big{]},\end{split}\] (A.4) to find that \(C_{4}\) is purely real. 
We expand the fifth relation in (6.8), \[\begin{split} C_{5}&=\frac{1}{\det\mathbf{H}}\Big{[} (\lambda_{\text{ac}}^{*}-\lambda_{\text{diff}})\theta(\lambda_{\text{diff}}) \theta(\lambda_{\text{ac}}^{*})+(\lambda_{\text{ac}}-\lambda_{\text{ac}}^{*}) \theta(\lambda_{\text{ac}})\theta(\lambda_{\text{ac}}^{*})+(\lambda_{\text{ diff}}-\lambda_{\text{ac}})\theta(\lambda_{\text{diff}})\theta(\lambda_{\text{ac}})\Big{]}\\ &=\frac{2\text{i}}{\det\mathbf{H}}\Big{[}\theta(\lambda_{\text{ diff}})\Im[\theta(\lambda_{ac})(\lambda_{\text{diff}}-\lambda_{\text{ac}})]+( \Im\lambda_{\text{ac}})|\theta(\lambda_{ac})|^{2}\Big{]},\end{split}\] (A.5) to find that \(c_{5}\) is purely imaginary. We expand the sixth relation in (6.8), \[\begin{split} C_{6}&=-\frac{\text{i}}{k\det\mathbf{ H}}\Big{[}\lambda_{\text{diff}}\theta(\lambda_{\text{diff}})(\lambda_{\text{ac}}- \lambda_{\text{ac}}^{*})+\lambda_{\text{ac}}\theta(\lambda_{\text{ac}})( \lambda_{\text{ac}}^{*}-\lambda_{\text{diff}})+\lambda_{\text{ac}}^{*}\theta( \lambda_{\text{ac}}^{*})(\lambda_{\text{diff}}-\lambda_{\text{ac}})\Big{]}\\ &=\frac{2}{k\det\mathbf{H}}\Big{[}\lambda_{\text{diff}}\theta( \lambda_{\text{diff}})(\Im\lambda_{\text{ac}})+\Im[\lambda_{\text{ac}}\theta( \lambda_{\text{ac}})(\lambda_{\text{ac}}^{*}-\lambda_{\text{diff}})]\Big{]}, \end{split}\] (A.6) to find that \(C_{6}\) is purely real. ## Appendix B Properties of the Plasma Dispersion Function \(Z\) In the following, we collect some properties of the plasma dispersion function \(Z\), defined through the integral expression (2.6). In our presentation, we will closely follow the calculations performed in [28]. First, let us derive an expression of the integral (2.6) in terms of less exotic functions. To this end, we rely on the identities in [1, p.297]. Let \[w(\zeta)=e^{-\zeta^{2}}(1-\text{erf}(-\text{i}\zeta)),\quad\zeta\in\mathbb{C},\] (B.1) which satisfies the functional identity \[w(-\zeta)=2e^{-\zeta^{2}}-w(\zeta),\quad\zeta\in\mathbb{C}.\] (B.2) Function (B.1) is called _Faddeeva function_ and is frequently encountered in problems related to kinetic equations [14]. We then have that \[w(\zeta)=\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{\zeta-s}\,ds, \quad\Im\zeta>0,\] (B.3) and, by relation (B.2), we have for \(\Im\zeta<0\): \[\begin{split}\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^ {2}}}{\zeta-s}\,ds&=-\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e ^{-s^{2}}}{(-\zeta)+s}\,ds\\ &=-\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{(- \zeta)-s}\,ds\\ &=-w(-\zeta)\\ &=e^{-\zeta^{2}}[-1-\mathrm{erf}(-\mathrm{i}\zeta)].\end{split}\] (B.4) Consequently, we obtain \[\begin{split}\int_{\mathbb{R}}\frac{1}{s-\zeta}e^{-\frac{s^{2}}{2 }}\,ds&=\int_{\mathbb{R}}\frac{e^{-s^{2}}}{s-\frac{\zeta}{\sqrt{2 }}}\,ds\\ &=\mathrm{i}\pi\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s ^{2}}}{\frac{\zeta}{\sqrt{2}}-s}\,ds\\ &=\begin{cases}\mathrm{i}\pi e^{-\frac{\zeta^{2}}{2}}\left[1- \mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],&\text{if }\Im \zeta>0,\\ \mathrm{i}\pi e^{-\frac{\zeta^{2}}{2}}\left[-1-\mathrm{erf}\left(\frac{- \mathrm{i}\zeta}{\sqrt{2}}\right)\right],&\text{if }\Im\zeta<0,\end{cases}\end{split}\] (B.5) where in the first step, we have re-scaled \(s\mapsto\sqrt{2}s\) in the integral. 
Written more compactly, we arrive at \[Z(\zeta)=\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[\mathrm{sign}(\Im\zeta)-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],\quad\Im\zeta\neq 0.\] (B.6) An argument plot together with a modulus-argument plot of \(Z\) is shown in Figure B.1. Clearly, \(Z\) is discontinuous across the real line (albeit that \(Z|_{\mathbb{R}}\) exists in the sense of principal values as the Hilbert transform of a real Gaussian [14]). The properties \[\begin{split}|Z(\zeta)|\leq\sqrt{\frac{\pi}{2}},\,\text{for }\zeta\in\mathbb{C}\setminus\mathbb{R},\\ 0<\arg Z(\zeta)<\pi\text{ for }\Im\zeta>0,\\ -\pi<\arg Z(\zeta)<0\text{ for }\Im\zeta<0,\end{split}\] (B.7) are easy to show and can be read off from the plots in Figure B.1 directly as well. We also note that \[\begin{split}\lim_{\zeta\to 0,\Im\zeta>0}Z(\zeta)=\mathrm{i}\sqrt{\frac{\pi}{2}},\\ \lim_{\zeta\to 0,\Im\zeta<0}Z(\zeta)=-\mathrm{i}\sqrt{\frac{\pi}{2}},\end{split}\] (B.8) as can be seen from (B.6). Function (B.6) satisfies an ordinary differential equation (in the sense of complex analytic functions) on the upper and on the lower half-plane. Indeed, integrating (2.6) by parts gives \[\begin{split}1&=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}(v-\zeta)\frac{e^{-\frac{v^{2}}{2}}}{v-\zeta}\,dv=-\zeta Z+\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}v\frac{e^{-\frac{v^{2}}{2}}}{v-\zeta}\,dv\\ &=-\zeta Z-\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\frac{e^{-\frac{v^{2}}{2}}}{(v-\zeta)^{2}}\,dv=-\zeta Z-\frac{d}{d\zeta}Z,\end{split}\] (B.9) which implies that \(Z\) satisfies the differential equation \[\frac{d}{d\zeta}Z=-\zeta Z-1,\] (B.10) for \(\zeta\in\mathbb{C}\setminus\mathbb{R}\). Formula (B.10) can also be used as a recurrence relation for the higher derivatives of \(Z\).

Figure B.1. Complex plots of the function \(Z\).

Since we will be interested in function (B.6) for \(\Im\zeta\) positive and negative as global functions, we define \[\begin{split}&Z_{+}(\zeta)=\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[1-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],\\ &Z_{-}(\zeta)=\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[-1-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],\end{split}\] (B.11) for all \(\zeta\in\mathbb{C}\). Both functions can be extended to analytic functions on the whole complex plane via analytic continuation. Recall that the error function has the properties that \[\mathrm{erf}(-\zeta)=-\mathrm{erf}(\zeta),\qquad\mathrm{erf}(\zeta^{*})=\mathrm{erf}(\zeta)^{*},\] (B.12) for all \(\zeta\in\mathbb{C}\), which implies that for \(x\in\mathbb{R}\), \[\mathrm{erf}(\mathrm{i}x)=-\mathrm{erf}(-\mathrm{i}x)=-\mathrm{erf}(\mathrm{i}x)^{*},\] (B.13) i.e., the error function maps imaginary numbers to imaginary numbers. Defining the _imaginary error function_, \[\mathrm{erfi}(\zeta):=-\mathrm{i}\,\mathrm{erf}(\mathrm{i}\zeta),\] (B.14) for \(\zeta\in\mathbb{C}\), which, by (B.13), satisfies \(\mathrm{erfi}|_{\mathbb{R}}\subset\mathbb{R}\), it follows that for \(x\in\mathbb{R}\): \[\Re Z_{+}(x)=-\sqrt{\frac{\pi}{2}}e^{-\frac{x^{2}}{2}}\,\mathrm{erfi}\left(\frac{x}{\sqrt{2}}\right),\quad\Im Z_{+}(x)=\sqrt{\frac{\pi}{2}}e^{-\frac{x^{2}}{2}},\] (B.15) and similarly for \(Z_{-}(x)\).
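Representation (B.6) and the decomposition (B.15) are easy to verify numerically. The sketch below (an added illustration; the test points are arbitrary) uses the Faddeeva function \(w\) from (B.1), available as `scipy.special.wofz`, together with the relation \(Z_{+}(\zeta)=\mathrm{i}\sqrt{\pi/2}\,w(\zeta/\sqrt{2})\) that follows from combining (B.1) and (B.11).

```python
import numpy as np
from scipy.special import wofz, erfi

def Z_plus(zeta):
    # Z_+(zeta) = i*sqrt(pi/2) * w(zeta/sqrt(2)), combining (B.1) and (B.11)
    return 1j * np.sqrt(np.pi / 2) * wofz(zeta / np.sqrt(2))

# Compare against a brute-force quadrature of the defining integral (2.6)
zeta = 0.7 + 0.4j
v = np.linspace(-30.0, 30.0, 400001)
quad = np.sum(np.exp(-v**2 / 2) / (v - zeta)) * (v[1] - v[0]) / np.sqrt(2 * np.pi)
print(abs(quad - Z_plus(zeta)))            # small (quadrature error only)

# Check the real and imaginary parts on the real line, cf. (B.15)
x = 1.3
print(Z_plus(x).real, -np.sqrt(np.pi / 2) * np.exp(-x**2 / 2) * erfi(x / np.sqrt(2)))
print(Z_plus(x).imag, np.sqrt(np.pi / 2) * np.exp(-x**2 / 2))
```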
Next, let us prove the following asymptotic expansion of \(Z_{+}\): \[Z_{+}(\zeta)\sim-\sum_{n=0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\qquad\text{ for }|\mathrm{arg}(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] (B.16) for any \(0<\delta\leq\frac{\pi}{2}\), see also [24]. The proof will be based on a generalized version of Watson's Lemma [41]. To this end, let us define the Laplace transform \[\mathcal{L}[f](\zeta)=\int_{0}^{\infty}f(x)e^{-\zeta x}\,dx,\quad\zeta\in\mathbb{C},\] (B.17) of an integrable function \(f:[0,\infty)\to\mathbb{C}\). _Lemma B.1_.: [Generalized Watson's Lemma] Assume that (B.17) exists for some \(\zeta=\zeta_{0}\in\mathbb{C}\) and assume that \(f\) admits an asymptotic expansion of the form \[f(x)=\sum_{n=0}^{N}a_{n}x^{\beta_{n}-1}+o(x^{\beta_{N}-1}),\qquad x>0,\quad x\to 0,\] (B.18) where \(a_{n}\in\mathbb{C}\) and \(\beta_{n}\in\mathbb{C}\) with \(\Re\beta_{0}>0\) and \(\Re\beta_{n}>\Re\beta_{n-1}\) for \(1\leq n\leq N\). Then \(\mathcal{L}[f](\zeta)\) admits an asymptotic expansion of the form \[\mathcal{L}[f](\zeta)=\sum_{n=0}^{N}a_{n}\Gamma(\beta_{n})\zeta^{-\beta_{n}}+o(\zeta^{-\beta_{N}}),\qquad\text{ for }|\mathrm{arg}(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] (B.19) for any real number \(0<\delta\leq\frac{\pi}{2}\), where \(\Gamma\) is the standard Gamma function. For a proof of the above Lemma, we refer e.g. to [13]. Classically, Lemma B.1 is applied to prove that the imaginary error function admits an asymptotic expansion for \(x\in\mathbb{R}\) of the form \[\text{erfi}(x)\sim\frac{e^{x^{2}}}{\sqrt{\pi}x}\sum_{k=0}^{\infty}\frac{(2k-1)!!}{(2x^{2})^{k}},\qquad\text{ for }x>0,\quad x\to\infty,\] (B.20) see also [32], based on the classical version of Watson's Lemma, whose assumptions are, however, unnecessarily restrictive [42]. For completeness, we recall the derivation of (B.16) based on Lemma B.1. First, let us rewrite \(\text{erfi}\) as a Laplace transform using the change of variables \(t=\sqrt{1-s}\) with \(dt=-\frac{ds}{2\sqrt{1-s}}\) (the orientation of the integral absorbing the sign): \[\begin{split}\text{erfi}(\zeta)&=\int_{0}^{1}\frac{d}{dt}\text{erfi}(t\zeta)\,dt=\frac{2\zeta}{\sqrt{\pi}}\int_{0}^{1}e^{t^{2}\zeta^{2}}\,dt=\frac{2\zeta}{\sqrt{\pi}}\int_{0}^{1}e^{\zeta^{2}(1-s)}\,\frac{ds}{2\sqrt{1-s}}\\ &=\frac{\zeta e^{\zeta^{2}}}{\sqrt{\pi}}\int_{0}^{1}\frac{1}{\sqrt{1-s}}e^{-s\zeta^{2}}\,ds=\frac{\zeta e^{\zeta^{2}}}{\sqrt{\pi}}\int_{0}^{\infty}\frac{\chi_{[0,1]}(s)}{\sqrt{1-s}}e^{-s\zeta^{2}}\,ds.\end{split}\] (B.21) From the Taylor expansion of the Binomial function, we know that \[\frac{1}{\sqrt{1-s}}=\sum_{n=0}^{\infty}\binom{-1/2}{n}(-s)^{n}=\sum_{n=0}^{\infty}4^{-n}\binom{2n}{n}s^{n},\] (B.22) which allows us to apply Lemma B.1 with \(\beta_{n}=n+1\) and \(a_{n}=4^{-n}\binom{2n}{n}\), thus leading to \[\begin{split}\text{erfi}(\zeta)&\sim\frac{\zeta e^{\zeta^{2}}}{\sqrt{\pi}}\sum_{n=0}^{\infty}4^{-n}\binom{2n}{n}\Gamma(n+1)\zeta^{-2(n+1)}\\ &\sim\frac{e^{\zeta^{2}}}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(2n)!}{4^{n}n!}\zeta^{-2n-1}\\ &\sim\frac{e^{\zeta^{2}}}{\zeta\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(2n-1)!!}{(2\zeta^{2})^{n}},\end{split}\] (B.23) for \(\zeta\to\infty\) and \(|\text{arg}(\zeta)|\leq\frac{\pi}{2}-\delta\), \(0<\delta\leq\frac{\pi}{2}\). This is consistent with formula (B.20) for the limit along the real line.
Finally, we arrive at the following asymptotic expansion for \(Z\): \[Z_{+}(\zeta)\sim\text{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}-\sum_{n= 0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\qquad\text{ for }|\text{arg}(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] (B.24) which is, of course, equivalent to \[Z_{+}(\zeta)\sim-\sum_{n=0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\qquad\text { for }|\text{arg}(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] (B.25) since \(|e^{-\zeta^{2}}|^{2}=e^{-2(x^{2}-y^{2})}\to 0\) for \(\Re\zeta=x\to\infty\).
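A quick numerical sanity check of (B.25) (added here for illustration; the evaluation points are arbitrary) compares \(Z_{+}\), evaluated via the Faddeeva function, with the truncated asymptotic series:

```python
import numpy as np
from scipy.special import wofz

def Z_plus(zeta):
    # Z_+(zeta) = i*sqrt(pi/2) * w(zeta/sqrt(2)), cf. (B.1) and (B.11)
    return 1j * np.sqrt(np.pi / 2) * wofz(zeta / np.sqrt(2))

def Z_asymptotic(zeta, N=6):
    # truncation of (B.25): -sum_{n=0}^{N} (2n-1)!! / zeta^(2n+1)
    total, double_factorial = 0.0, 1.0       # with (-1)!! = 1
    for n in range(N + 1):
        total += double_factorial / zeta**(2 * n + 1)
        double_factorial *= 2 * n + 1
    return -total

for x in (4.0, 6.0, 10.0):
    print(x, abs(Z_plus(x) - Z_asymptotic(x)))   # error decreases rapidly with x
```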
2303.13970
Rough ideal convergence of double sequences in intuitionistic fuzzy normed spaces
The idea of rough statistical convergence for double sequences was studied by Ozcan and Or [29] in an intuitionistic fuzzy normed space. Recently, the same has been generalized in the ideal context by Hossain and Banerjee [15] for sequences. In this paper we discuss the idea of rough ideal convergence of double sequences in intuitionistic fuzzy normed spaces, generalizing the idea of rough statistical convergence of double sequences. We also define rough I2-cluster points of a double sequence and investigate some of the basic properties associated with the rough I2-limit set of a double sequence in an intuitionistic fuzzy normed space.
Rahul Mondal, Nesar Hossain
2023-03-24T12:43:01Z
http://arxiv.org/abs/2303.13970v1
# Rough ideal convergence of double sequences in intuitionistic fuzzy normed spaces

###### Abstract.

The idea of rough statistical convergence for double sequences was studied by Ozcan and Or [29] in an intuitionistic fuzzy normed space. Recently, the same has been generalized in the ideal context by Hossain and Banerjee [15] for sequences. In this paper we discuss the idea of rough ideal convergence of double sequences in intuitionistic fuzzy normed spaces, generalizing the idea of rough statistical convergence of double sequences. We also define rough \(\mathcal{I}_{2}\)-cluster points of a double sequence and investigate some of the basic properties associated with the rough \(\mathcal{I}_{2}\)-limit set of a double sequence in an intuitionistic fuzzy normed space.

Key words and phrases: Ideal, filter, Intuitionistic fuzzy normed space, double sequence, rough \(\mathcal{I}_{2}\)-convergence, rough \(\mathcal{I}_{2}\)-cluster point

Recall that a family \(\mathcal{I}\subset 2^{X}\) of subsets of a non-empty set \(X\) is called an ideal in \(X\) if (i) \(\emptyset\in\mathcal{I}\), (ii) \(A,B\in\mathcal{I}\) implies \(A\cup B\in\mathcal{I}\), and (iii) \(A\in\mathcal{I}\), \(B\subset A\) implies \(B\in\mathcal{I}\); the ideal \(\mathcal{I}\) is called non trivial if \(\mathcal{I}\neq\{\emptyset\}\) and \(X\notin\mathcal{I}\), and a non trivial ideal is called admissible if it contains all singletons \(\{x\}\), \(x\in X\). A non-empty family \(\mathcal{F}\subset 2^{X}\) is called a filter on \(X\) if (i) \(\emptyset\notin\mathcal{F}\), (ii) \(A,B\in\mathcal{F}\) implies \(A\cap B\in\mathcal{F}\), and (iii) \(A\in\mathcal{F}\), \(A\subset B\) implies \(B\in\mathcal{F}\). If \(\mathcal{I}\) is a non trivial ideal of \(X\) then the family \(\mathcal{F}(\mathcal{I})=\{X\setminus A:A\in\mathcal{I}\}\) is a filter on \(X\), called the filter associated with the ideal \(\mathcal{I}\). Throughout the paper \(\mathcal{I}\) will stand for a non trivial admissible ideal in \(\mathbb{N}\). **Definition 2.3**.: _[_6_]_ _A non trivial ideal \(\mathcal{I}_{2}\) of \(\mathbb{N}\times\mathbb{N}\) is called strongly admissible if \(\{i\}\times\mathbb{N}\) and \(\mathbb{N}\times\{i\}\) belong to \(\mathcal{I}_{2}\) for each \(i\in\mathbb{N}\)._ It is clear that a strongly admissible ideal is admissible also. **Definition 2.4**.: _Let \(K\subset\mathbb{N}\). Then the natural density of \(K\) is denoted by \(\delta(K)\) and is defined by_ \[\delta(K)=\lim_{n\to\infty}\frac{1}{n}|\{k\leq n:k\in K\}|,\] _provided the limit exists, where \(|\cdot|\) designates the number of elements of the enclosed set._ It is clear that if \(K\) is finite then \(\delta(K)=0\). **Definition 2.5**.: _[_25_]_ _The double natural density of the set \(A\subset\mathbb{N}\times\mathbb{N}\) is defined by_ \[\delta_{2}(A)=\lim_{m,n\to\infty}\frac{|\{(i,j)\in A:i\leq m,j\leq n\}|}{mn},\] _where \(|\{(i,j)\in A:i\leq m,j\leq n\}|\) denotes the number of elements of \(A\) not exceeding \(m\) and \(n\), respectively. It is clear that if \(A\) is finite then \(\delta_{2}(A)=0\)._ **Definition 2.6**.: _[_6_]_ _A non trivial ideal \(\mathcal{I}_{2}\) of \(\mathbb{N}\times\mathbb{N}\) is said to be strongly admissible if \(\{i\}\times\mathbb{N}\) and \(\mathbb{N}\times\{i\}\) belong to \(\mathcal{I}_{2}\) for each \(i\in\mathbb{N}\)._ It is clear that a strongly admissible ideal is also admissible. Throughout the discussion \(\mathcal{I}_{2}\) stands for an admissible ideal of \(\mathbb{N}\times\mathbb{N}\).
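As a small numerical illustration of Definition 2.5 (added here; not part of the original paper), the diagonal \(\{(i,i):i\in\mathbb{N}\}\) has double natural density \(0\), while the set of pairs with even sum has density \(1/2\):

```python
def double_density(indicator, m, n):
    # finite-stage approximation of delta_2 in Definition 2.5
    count = sum(indicator(i, j) for i in range(1, m + 1) for j in range(1, n + 1))
    return count / (m * n)

print(double_density(lambda i, j: i == j, 500, 500))            # 0.002, tends to 0
print(double_density(lambda i, j: (i + j) % 2 == 0, 500, 500))  # 0.5
```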
**Definition 2.7**.: _(see [6]) A double sequence \(\{x_{mn}\}\) of real numbers is said to be convergent to \(\xi\in\mathbb{R}\) if for any \(\varepsilon>0\) there exists \(\mathcal{N}_{\varepsilon}\in\mathbb{N}\) such that \(|x_{mn}-\xi|<\varepsilon\) for all \(m,n\geq\mathcal{N}_{\varepsilon}\)._ **Definition 2.8**.: _[_25_]_ _A double sequence \(\{x_{mn}\}_{m,n\in\mathbb{N}}\) of real numbers is said to be statistically convergent to \(\xi\in\mathbb{R}\) if for any \(\varepsilon>0\), we have \(\delta_{2}(A(\varepsilon))=0\) where \(A(\varepsilon)=\{(m.n)\in\mathbb{N}\times\mathbb{N}:|x_{mn}-\xi|\geq\varepsilon\}\)._ **Definition 2.9**.: _(see [6]) A double sequence \(\{x_{mn}\}_{m,n\in\mathbb{N}}\) of real numbers is said to be \(\mathcal{I}_{2}\)-convergent to \(\xi\in\mathbb{R}\) if for every \(\varepsilon>0\), the set \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:|x_{mn}-\xi|\geq\varepsilon\}\in\mathcal{ I}_{2}\)._ **Remark 2.1**.: (see [6]) \((a)\) If we take \(\mathcal{I}_{2}=\mathcal{I}_{2}^{0}\), where \(\mathcal{I}_{2}^{0}=\{A\subset\mathbb{N}\times\mathbb{N}:\exists\ m(A)\in \mathbb{N}:i,j\geq m(A)\implies(i,j)\notin A\}\), then \(\mathcal{I}_{2}^{0}\) will be a non trivial strongly admissible ideal. In this case \(\mathcal{I}_{2}\)-convergence coincides with ordinary convergence of double sequences of real numbers. \((b)\) If we take \(\mathcal{I}_{2}=\mathcal{I}_{2}^{\delta}\), where \(\mathcal{I}_{2}^{\delta}=\{A\subset\mathbb{N}\times\mathbb{N}:\delta_{2}(A)=0\}\), then \(\mathcal{I}_{2}^{\delta}\)-convergence becomes statistical convergence of double sequences of real numbers. Now, we recall some basic definitions and notations which will be useful in the sequal. **Definition 2.10**.: _[_36_]_ _A binary operation \(\star:[0,1]\times[0,1]\to[0,1]\) is said to be a continuous \(t\)-norm if the following conditions hold:_ _(1) \(\star\) is associative and commutative;_ _(2) \(\star\) is continuous;_ _(3) \(x\star 1=x\) for all \(x\in[0,1]\);_ _(4) \(x\star y\leq z\star w\) whenever \(x\leq z\) and \(y\leq w\) for each \(x,y,z,w\in[0,1]\)._ **Definition 2.11**.: _[_36_]_ _A binary operation \(\circ:[0,1]\times[0,1]\to[0,1]\) is said to be a continuous \(t\)-conorm if the following conditions are satisfied:_ _(1) \(\circ\) is associative and commutative;_ _(2) \(\circ\) is continuous;_ _(3) \(x\circ 0=x\) for all \(x\in[0,1]\);_ _(4) \(x\circ y\leq z\circ w\) whenever \(x\leq z\) and \(y\leq w\) for each \(x,y,z,w\in[0,1]\)._ **Example 2.1**.: _[_17_]_ _The following are the examples of \(t\)-norms:_ 1. \(x\star y=min\{x,y\}\); 2. \(x\star y=x.y\); 3. \(x\star y=max\{x+y-1,0\}\). This \(t\)-norm is known as Lukasiewicz \(t\)-norm. **Example 2.2**.: _[_17_]_ _The following are the examples of \(t\)-conorms:_ 1. \(x\circ y=max\{x,y\}\); 2. \(x\circ y=x+y-x.y\); 3. \(x\circ y=min\{x+y,1\}\). This is known as Lukasiewicz \(t\)-conorm. **Definition 2.12**.: _[_37_]_ _The \(5\)-tuple \((X,\mu,\nu,\star,\circ)\) is said to be an intuitionistic fuzzy normed space (in short, IFNS) if \(X\) is a normed linear space, \(\star\) is a continuous \(t\)-norm, \(\circ\) is a continuous \(t\)-conorm and \(\mu\) and \(\nu\) are the fuzzy sets on \(X\times(0,\infty)\) satisfying the following conditions for every \(x,y\in X\) and \(s,t>0\):_ 1. \(\mu(x,t)+\nu(x,t)\leq 1\)_;_ 2. \(\mu(x,t)>0\)_;_ 3. \(\mu(x,t)=1\) _if and only if_ \(x=0\)_;_ 4. \(\mu(\alpha x,t)=\mu(x,\frac{t}{|\alpha|})\) _for each_ \(\alpha\neq 0\)_;_ 5. \(\mu(x,t)\star\mu(y,s)\leq\mu(x+y,t+s)\)_;_ 6. 
\(\mu(x,t):(0,\infty)\to[0,1]\) _is continuous in_ \(t\)_;_ 7. \(\lim_{t\to\infty}\mu(x,t)=1\) _and_ \(\lim_{t\to 0}\mu(x,t)=0\)_;_ 8. \(\nu(x,t)<1\)_;_ 9. \(\nu(x,t)=0\) _if and only if_ \(x=0\)_;_ 10. \(\nu(\alpha x,t)=\nu(x,\frac{t}{|\alpha|})\) _for each_ \(\alpha\neq 0\)_;_ 11. \(\nu(x,t)\circ\nu(y,s)\geq\nu(x+y,s+t)\)_;_ 12. \(\nu(x,t):(0,\infty)\to[0,1]\) _is continuous in_ \(t\)_;_ 13. \(\lim_{t\to\infty}\nu(x,t)=0\) _and_ \(\lim_{t\to 0}\nu(x,t)=1\)_._ _In this case \((\mu,\nu)\) is called an intuitionistic fuzzy norm on \(X\)._ **Example 2.3**.: [37] Let \((X,\|\cdot\|)\) be a normed space. Denote \(a\star b=ab\) and \(a\circ b=min\{a+b,1\}\) for all \(a,b\in[0,1]\) and let \(\mu\) and \(\nu\) be fuzzy sets on \(X\times(0,\infty)\) defined as follows: \[\mu(x,t)=\frac{t}{t+\|x\|},\ \nu(x,t)=\frac{\|x\|}{t+\|x\|}.\] Then \((X,\mu,\nu,\star,\circ)\) is an intuitionistic fuzzy normed space. **Definition 2.13**.: [38] _Let \((X,\mu,\nu,\star,\circ)\) be an IFNS with intuitionistic fuzzy norm \((\mu,\nu)\). For \(r>0\), the open ball \(B(x,\lambda,r)\) with center \(x\in X\) and radius \(0<\lambda<1\), is the set_ \[B(x,\lambda,r)=\{y\in X:\mu(x-y,r)>1-\lambda,\ \nu(x-y,r)<\lambda\}.\] Similarly, closed ball is the set \(\overline{B(x,\lambda,r)}=\{y\in X:\mu(x-y,r)\geq 1-\lambda,\ \nu(x-y,r)\leq \lambda\}\). **Definition 2.14**.: [39] _Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then a point \(\gamma\in X\) is called a \(\mathcal{I}\)-cluster point of \(\{x_{n}\}_{n\in\mathbb{N}}\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\), \(\lambda\in(0,1)\), the set \(\{n\in\mathbb{N}:\mu(x_{n}-\gamma,\varepsilon)>1-\lambda\ \text{and}\ \nu(x_{n}-\gamma, \varepsilon)<\lambda\}\notin\mathcal{I}\)._ **Definition 2.15**.: [26] _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then \(\{x_{mn}\}\) is said to be convergent to \(\xi\in X\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\) there exists \(\mathcal{N}_{\varepsilon}\in\mathbb{N}\) such that \(\mu(x_{mn}-\xi,\varepsilon)>1-\lambda\) and \(\nu(x_{mn}-\xi,\varepsilon)<\lambda\) for all \(m,n\geq\mathcal{N}_{\varepsilon}\). In this case we write \((\mu,\nu)\)-\(\lim x_{mn}=\xi\) or \(x_{mn}\xrightarrow{(\mu,\nu)}\xi\)._ **Definition 2.16**.: _[_28_]_ _Let \(\mathcal{I}_{2}\) be a non trivial ideal of \(\mathbb{N}\times\mathbb{N}\) and \((X,\mu,\nu,\star,\circ)\) be an intuitionistic fuzzy normed space. A double sequence \(x=\{x_{mn}\}\) of elements of \(X\) is said to be \(\mathcal{I}_{2}\)-convergent to \(L\in X\) if for each \(\varepsilon>0\) and \(t>0\), \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-L,t)\leq 1-\varepsilon\text{ or }\nu(x_{mn}-L,t)\geq \varepsilon\}\in\mathcal{I}_{2}\). In this case we write \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}\lim x=L\text{ or }x_{mn}\xrightarrow{ \mathcal{I}_{2}^{(\mu,\nu)}}L\)._ **Definition 2.17**.: _[_2_]_ _Let \((X,\mu,\nu,\star,\circ)\) be an IFNS with intuitionistic fuzzy norm \((\mu,\nu)\). 
A sequence \(\{x_{n}\}_{n\in\mathbb{N}}\) in \(X\) is said to be rough statistically convergent to \(\xi\in X\) with respect to the norm \((\mu,\nu)\) for some non-negative number \(r\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\delta(\{n\in\mathbb{N}:\mu(x_{n}-\xi,r+\varepsilon)\leq 1-\lambda\text{ or }\nu(x_{n}-\xi,r+\varepsilon)\geq\lambda\})=0\)._ **Definition 2.18**.: _[_15_]_ _Let \(\{x_{n}\}_{n\in\mathbb{N}}\) be a sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) and \(r\) be a non-negative number. Then \(\{x_{n}\}_{n\in\mathbb{N}}\) is said to be rough \(\mathcal{I}\)-convergent to \(\xi\in X\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\{n\in\mathbb{N}:\mu(x_{n}-\xi,r+\varepsilon)\leq 1-\lambda\text{ or }\nu(x_{n}-\xi,r+\varepsilon)\geq\lambda\}\in\mathcal{I}\). In this case \(\xi\) is called \(r\)-\(\mathcal{I}_{(\mu,\nu)}\)-limit of \(\{x_{n}\}_{n\in\mathbb{N}}\) and we write \(r\)-\(\mathcal{I}_{(\mu,\nu)}\text{-}\lim_{n\to\infty}x_{n}=\xi\text{ or }x_{n}\xrightarrow{r\text{-}\mathcal{I}_{(\mu,\nu)}}\xi\)._ **Definition 2.19**.: _[_29_]_ _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) and \(r\) be a non negative real number. Then \(\{x_{mn}\}\) is said to be rough convergent (in short \(r\)-convergent) to \(\xi\in X\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\) there exists \(\mathcal{N}_{\lambda}\in\mathbb{N}\) such that \(\mu(x_{mn}-\xi,r+\varepsilon)>1-\lambda\) and \(\nu(x_{mn}-\xi,r+\varepsilon)<\lambda\) for all \(m,n\geq\mathcal{N}_{\lambda}\). In this case we write \(r_{2}^{(\mu,\nu)}-\lim x_{mn}=\xi\text{ or }x_{mn}\xrightarrow{r_{2}^{(\mu,\nu)}}\xi\)._ **Definition 2.20**.: _[_29_]_ _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) and \(r\) be a non negative real number. Then \(\{x_{mn}\}\) is said to be rough statistically convergent to \(\xi\in X\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\delta_{2}(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\xi,r+\varepsilon)\leq 1-\lambda\text{ or }\nu(x_{mn}-\xi,r+\varepsilon)\geq\lambda\})=0\). In this case we write \(r\)-\(st_{2}^{(\mu,\nu)}\text{-}\lim x_{mn}=\xi\text{ or }x_{mn}\xrightarrow{r-st_{2}^{(\mu,\nu)}}\xi\)._

## 3. Main Results

We first introduce the notion of rough ideal convergence of double sequences in an IFNS and then investigate some important results associated with rough \(\mathcal{I}_{2}\)-cluster points in the same space.

**Definition 3.1**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) and \(r\) be a non negative real number. Then \(\{x_{mn}\}\) is said to be rough \(\mathcal{I}_{2}\)-convergent to \(\xi\in X\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\xi,r+\varepsilon)\leq 1-\lambda\) or \(\nu(x_{mn}-\xi,r+\varepsilon)\geq\lambda\}\in\mathcal{I}_{2}\).
In this case \(\xi\) is called \(r\)-\(\mathcal{I}_{2}^{(\mu,\nu)}\)-limit of \(\{x_{mn}\}\) and we write \(r\)-\(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(\lim x_{mn}=\xi\) or \(x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\)._

**Remark 3.1**.: \((a)\) If we put \(r=0\) in Definition 3.1, then the notion of rough \(\mathcal{I}_{2}\)-convergence with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) coincides with the notion of \(\mathcal{I}_{2}\)-convergence with respect to the intuitionistic fuzzy norm \((\mu,\nu)\). So, our main interest is in the case \(r>0\).

\((b)\) If we use \(\mathcal{I}_{2}=\mathcal{I}_{2}^{0}\) in Definition 3.1, then the notion of rough \(\mathcal{I}_{2}\)-convergence with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) coincides with the notion of rough convergence of double sequences with respect to the intuitionistic fuzzy norm \((\mu,\nu)\).

\((c)\) If we take \(\mathcal{I}_{2}=\mathcal{I}_{2}^{\delta}\) in Definition 3.1, then the notion of rough \(\mathcal{I}_{2}\)-convergence with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) coincides with the notion of rough statistical convergence of double sequences with respect to the intuitionistic fuzzy norm \((\mu,\nu)\).

**Note 3.1**.: From Definition 3.1, we see that the \(r\)-\(\mathcal{I}_{2}^{(\mu,\nu)}\)-limit of \(\{x_{mn}\}\) need not be unique. So, in this regard we denote by \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) the set of all \(r\)-\(\mathcal{I}_{2}^{(\mu,\nu)}\)-limits of \(\{x_{mn}\}\), i.e., \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}=\{\xi\in X:x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\}\). The double sequence \(\{x_{mn}\}\) is called rough \(\mathcal{I}_{2}\)-convergent if \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\neq\emptyset\). We denote by \(LIM_{x_{mn}}^{r_{(\mu,\nu)}}\) the set of all rough limits of the double sequence \(\{x_{mn}\}\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\). The sequence \(\{x_{mn}\}\) is called rough convergent if \(LIM_{x_{mn}}^{r_{(\mu,\nu)}}\neq\emptyset\). If the sequence is unbounded then \(LIM_{x_{mn}}^{r_{(\mu,\nu)}}=\emptyset\)[29], although in such cases \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\neq\emptyset\) may still happen, as shown in the following example.

**Example 3.1**.: Let \((X,\|\cdot\|)\) be a real normed linear space and \(\mu(x,t)=\frac{t}{t+\|x\|}\) and \(\nu(x,t)=\frac{\|x\|}{t+\|x\|}\) for all \(x\in X\) and \(t>0\). Also, let \(a\star b=ab\) and \(a\circ b=\min\{a+b,1\}\). Then \((X,\mu,\nu,\star,\circ)\) is an IFNS. Now let us consider the ideal \(\mathcal{I}_{2}\) consisting of all those subsets of \(\mathbb{N}\times\mathbb{N}\) whose double natural density is zero. Let us consider the double sequence \(\{x_{mn}\}\) given by \(x_{mn}=\begin{cases}(-1)^{m+n},\text{ if }m,n\neq i^{2},i\in\mathbb{N}\\ mn,\text{ otherwise}\end{cases}\). Then \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}=\begin{cases}\emptyset,\ r<1\\ [1-r,r-1],\ r\geq 1\end{cases}\) and \(LIM_{x_{mn}}^{r_{(\mu,\nu)}}=\emptyset\) for any \(r\).

**Remark 3.2**.: From Example 3.1, we see that \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\neq\emptyset\) does not imply that \(LIM_{x_{mn}}^{r_{(\mu,\nu)}}\neq\emptyset\). But, whenever \(\mathcal{I}_{2}\) is an admissible ideal, \(LIM_{x_{mn}}^{r_{(\mu,\nu)}}\neq\emptyset\) implies \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\neq\emptyset\), as \(\mathcal{I}_{2}^{0}\subset\mathcal{I}_{2}\).
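The following elementary computation is not part of the original argument; we record it only to make the membership conditions of Definition 3.1 concrete for the specific norm pair \(\mu(x,t)=\frac{t}{t+\|x\|}\), \(\nu(x,t)=\frac{\|x\|}{t+\|x\|}\) used in Example 3.1. Since \(\nu(x,t)=1-\mu(x,t)\) for this pair, for every \(t>0\) and \(\lambda\in(0,1)\) we have

\[\mu(x,t)\leq 1-\lambda\ \text{ or }\ \nu(x,t)\geq\lambda\quad\Longleftrightarrow\quad\frac{t}{t+\|x\|}\leq 1-\lambda\quad\Longleftrightarrow\quad\|x\|\geq\frac{\lambda t}{1-\lambda}.\]

Hence, in the IFNS of Example 3.1, \(x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\) holds if and only if \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\|x_{mn}-\xi\|\geq\frac{\lambda(r+\varepsilon)}{1-\lambda}\}\in\mathcal{I}_{2}\) for every \(\varepsilon>0\) and \(\lambda\in(0,1)\).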
Now we define \(\mathcal{I}_{2}\)-boundedness of double sequences in an IFNS, analogous to ([29], Definition 3.6).

**Definition 3.2**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then \(\{x_{mn}\}\) is said to be \(\mathcal{I}_{2}\)-bounded with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\lambda\in(0,1)\) there exists a positive real number \(M\) such that \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn},M)\leq 1-\lambda\text{ or }\nu(x_{mn},M)\geq\lambda\}\in\mathcal{I}_{2}\)._

**Theorem 3.1**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then \(\{x_{mn}\}\) is \(\mathcal{I}_{2}\)-bounded if and only if \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}\neq\emptyset\) for all \(r>0\)._

Proof.: First suppose that \(\{x_{mn}\}\) is \(\mathcal{I}_{2}\)-bounded in \(X\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\). Then for every \(\lambda\in(0,1)\) there exists a positive real number \(M\) such that \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn},M)\leq 1-\lambda\text{ or }\nu(x_{mn},M)\geq\lambda\}\in\mathcal{I}_{2}\). Let \(K=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn},M)\leq 1-\lambda\text{ or }\nu(x_{mn},M)\geq\lambda\}\). Now for \((i,j)\in K^{c}\), we have \(\mu(x_{ij}-\theta,r+M)\geq\mu(x_{ij},M)\star\mu(\theta,r)>(1-\lambda)\star 1=1-\lambda\) and \(\nu(x_{ij}-\theta,r+M)\leq\nu(x_{ij},M)\circ\nu(\theta,r)<\lambda\circ 0=\lambda\), where \(\theta\) is the zero element of \(X\). Therefore \(\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}-\theta,r+M)\leq 1-\lambda\text{ or }\nu(x_{ij}-\theta,r+M)\geq\lambda\}\subset K\). Since \(K\in\mathcal{I}_{2}\), this set also belongs to \(\mathcal{I}_{2}\). Hence \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}\neq\emptyset\). Conversely, suppose that \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}\neq\emptyset\). Then there exists \(\beta\in\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}\) such that for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\beta,r+\varepsilon)\leq 1-\lambda\text{ or }\nu(x_{mn}-\beta,r+\varepsilon)\geq\lambda\}\in\mathcal{I}_{2}\). This shows that almost all \(x_{mn}\) are contained in some ball with center \(\beta\). Hence \(\{x_{mn}\}\) is \(\mathcal{I}_{2}\)-bounded. This completes the proof.

Now we discuss some algebraic characterizations of rough \(\mathcal{I}_{2}\)-convergence in an IFNS.

**Theorem 3.2**.: _Let \(\{x_{mn}\}\) and \(\{y_{mn}\}\) be two double sequences in an IFNS \((X,\mu,\nu,\star,\circ)\). Then for some \(r>0\), the following statements hold:_

1. _If_ \(x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\) _and_ \(y_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\eta\)_, then_ \(x_{mn}+y_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi+\eta\)_._
2. _If_ \(x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\) _and_ \(k(\neq 0)\in\mathbb{R}\)_, then_ \(kx_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}k\xi\)_._

Proof.: Let \(\{x_{mn}\}\) and \(\{y_{mn}\}\) be two double sequences in an IFNS \((X,\mu,\nu,\star,\circ)\), \(r>0\) and \(\lambda\in(0,1)\).

1. Let \(x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\) and \(y_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\eta\). Also, let \(\varepsilon>0\) be given. Now, for a given \(\lambda\in(0,1)\), choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\).
Then \(A,B\in\mathcal{I}_{2}\), where \(A=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\xi,\frac{r+\varepsilon}{2})\leq 1-s\text{ or }\nu(x_{mn}-\xi,\frac{r+\varepsilon}{2})\geq s\}\) and \(B=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(y_{mn}-\eta,\frac{r+\varepsilon}{2})\leq 1-s\text{ or }\nu(y_{mn}-\eta,\frac{r+\varepsilon}{2})\geq s\}\). So, \(A^{c}\cap B^{c}\in\mathcal{F}(\mathcal{I}_{2})\). Now for \((i,j)\in A^{c}\cap B^{c}\), we have \(\mu(x_{ij}+y_{ij}-(\xi+\eta),r+\varepsilon)\geq\mu(x_{ij}-\xi,\frac{r+\varepsilon}{2})\star\mu(y_{ij}-\eta,\frac{r+\varepsilon}{2})>(1-s)\star(1-s)>1-\lambda\) and \(\nu(x_{ij}+y_{ij}-(\xi+\eta),r+\varepsilon)\leq\nu(x_{ij}-\xi,\frac{r+\varepsilon}{2})\circ\nu(y_{ij}-\eta,\frac{r+\varepsilon}{2})<s\circ s<\lambda\). Therefore \(\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}+y_{ij}-(\xi+\eta),r+\varepsilon)\leq 1-\lambda\text{ or }\nu(x_{ij}+y_{ij}-(\xi+\eta),r+\varepsilon)\geq\lambda\}\subset A\cup B\). Since \(A\cup B\in\mathcal{I}_{2}\), \(\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}+y_{ij}-(\xi+\eta),r+\varepsilon)\leq 1-\lambda\text{ or }\nu(x_{ij}+y_{ij}-(\xi+\eta),r+\varepsilon)\geq\lambda\}\in\mathcal{I}_{2}\). Therefore \(x_{mn}+y_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi+\eta\).

2. Let \(x_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}\xi\) and \(k(\neq 0)\in\mathbb{R}\). Then \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\xi,\frac{r+\varepsilon}{|k|})\leq 1-\lambda\text{ or }\nu(x_{mn}-\xi,\frac{r+\varepsilon}{|k|})\geq\lambda\}\in\mathcal{I}_{2}\). Therefore, \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(kx_{mn}-k\xi,r+\varepsilon)\leq 1-\lambda\text{ or }\nu(kx_{mn}-k\xi,r+\varepsilon)\geq\lambda\}\in\mathcal{I}_{2}\). Hence \(kx_{mn}\xrightarrow{r-\mathcal{I}_{2}^{(\mu,\nu)}}k\xi\).

This completes the proof.

Now we prove some topological and geometrical properties of the set \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\).

**Theorem 3.3**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then for all \(r>0\), the set \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) is closed._

Proof.: If \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}=\emptyset\) then there is nothing to prove. So, let \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\neq\emptyset\). Suppose that \(\{z_{mn}\}\) is a double sequence in \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) such that \(z_{mn}\xrightarrow{(\mu,\nu)}y_{0}\). Now, for a given \(\lambda\in(0,1)\), choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). Let \(\varepsilon>0\) be given. Then there exists \(m_{0}\in\mathbb{N}\) such that \(\mu(z_{mn}-y_{0},\frac{\varepsilon}{2})>1-s\) and \(\nu(z_{mn}-y_{0},\frac{\varepsilon}{2})<s\) for all \(m,n\geq m_{0}\). Suppose \(i,j>m_{0}\). Then \(\mu(z_{ij}-y_{0},\frac{\varepsilon}{2})>1-s\) and \(\nu(z_{ij}-y_{0},\frac{\varepsilon}{2})<s\). Also, \(P=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-z_{ij},r+\frac{\varepsilon}{2})\leq 1-s\text{ or }\nu(x_{mn}-z_{ij},r+\frac{\varepsilon}{2})\geq s\}\in\mathcal{I}_{2}\). Now, for \((p,q)\in P^{c}\), we have \(\mu(x_{pq}-y_{0},r+\varepsilon)\geq\mu(x_{pq}-z_{ij},r+\frac{\varepsilon}{2})\star\mu(z_{ij}-y_{0},\frac{\varepsilon}{2})>(1-s)\star(1-s)>1-\lambda\) and \(\nu(x_{pq}-y_{0},r+\varepsilon)\leq\nu(x_{pq}-z_{ij},r+\frac{\varepsilon}{2})\circ\nu(z_{ij}-y_{0},\frac{\varepsilon}{2})<s\circ s<\lambda\).
Therefore \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-y_{0},r+\frac{\varepsilon}{2}) \leq 1-s\text{ or }\nu(x_{mn}-y_{0},r+\frac{\varepsilon}{2})\geq s\}\subset P\). Since \(P\in\mathcal{I}_{2}\), \(y_{0}\in\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\). Therefore \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) is closed. This completes the proof. **Theorem 3.4**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then for all \(r>0\), the set \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) is convex._ Proof.: Let \(x_{1},x_{2}\in\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) and \(\kappa\in(0,1)\). Let \(\lambda\in(0,1)\). Choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). Then for every \(\varepsilon>0\), the sets \(H,T\in\mathcal{I}_{2}\) where \(H=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{1},\frac{r+\varepsilon}{ 2(1-\kappa)})\leq 1-s\text{ or }\nu(x_{mn}-x_{1},\frac{r+\varepsilon}{2(1-\kappa)})\geq s\}\) and \(T=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{2},\frac{r+\varepsilon}{ 2\kappa})\leq 1-s\text{ or }\nu(x_{mn}-x_{2},\frac{r+\varepsilon}{2\kappa})\geq s\}\). Now for \((m,n)\in H^{c}\cap T^{c}\) we have \(\mu(x_{mn}-[(1-\kappa)x_{1}+\kappa x_{2}],r+\varepsilon)\geq\mu((1-\kappa)(x_{mn}- x_{1}),\frac{r+\varepsilon}{2})\star\mu(\kappa(x_{mn}-x_{2}),\frac{r+\varepsilon}{2})=\mu(x_{mn} -x_{1},\frac{r+\varepsilon}{2(1-\kappa)})\star\mu(x_{mn}-x_{2},\frac{r+ \varepsilon}{2\kappa})>(1-s)\star(1-s)>1-\lambda\) and \(\nu(x_{mn}-[(1-\kappa)x_{1}+\kappa x_{2}],r+\varepsilon)\leq\nu((1-\kappa)(x_{ mn}-x_{1}),\frac{r+\varepsilon}{2})\circ\nu(\kappa(x_{mn}-x_{2}),\frac{r+ \varepsilon}{2})=\nu(x_{mn}-x_{1},\frac{r+\varepsilon}{2(1-\kappa)})\circ\nu(x _{mn}-x_{2},\frac{r+\varepsilon}{2\kappa})<s\circ s<\lambda\), which gives that \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-[(1-\kappa)x_{1}+\kappa x_{2}],r+\varepsilon)\leq 1-\lambda\) or \(\nu(x_{mn}-[(1-\kappa)x_{1}+\kappa x_{2}],r+\varepsilon)\geq\lambda\}\subset H \cup T\). Since \(H\cup T\in\mathcal{I}_{2}\), \((1-\kappa)x_{1}+\kappa x_{2}\in\mathcal{I}_{2}^{(\mu,\nu)}-LIM_{x_{mn}}^{r}\). Therefore \(\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) is convex. This completes the proof. **Theorem 3.5**.: _A double sequence \(\{x_{mn}\}\) in an IFNS \((X,\mu,\nu,\star,\circ)\) is rough \(\mathcal{I}_{2}\)-convergent to \(\beta\in X\) with respect to the intuitionistic fuzzy normed spaces \((\mu,\nu)\) for some \(r>0\) if there exists a double sequence \(\{y_{mn}\}\) in \(X\) such that \(y_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}\beta\) and for every \(\lambda\in(0,1)\), \(\mu(x_{mn}-y_{mn},r)>1-\lambda\) and \(\nu(x_{mn}-y_{mn},r)<\lambda\) for all \(m,n\in\mathbb{N}\)._ Proof.: Let \(\varepsilon>0\) be given. Now, for a given \(\lambda\in(0,1)\) choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). Suppose that \(y_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}\beta\) and \(\mu(x_{mn}-y_{mn},r)>1-s\) and \(\nu(x_{mn}-y_{mn},r)<s\) for all \(m,n\in\mathbb{N}\). Then the set \(P=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(y_{mn}-\beta,\varepsilon)\leq 1-s\) or \(\nu(y_{mn}-\beta,\varepsilon)\geq s\}\in\mathcal{I}_{2}\). Now for \((i,j)\in P^{c}\), we have \(\mu(x_{ij}-\beta,r+\varepsilon)\geq\mu(x_{ij}-y_{ij},r)\star\mu(y_{ij}-\beta, \varepsilon)>(1-s)\star(1-s)>1-\lambda\) and \(\nu(x_{ij}-\beta,r+\varepsilon)\leq\nu(x_{ij}-y_{ij},r)\circ\nu(y_{ij}-\beta, \varepsilon)<s\circ s<\lambda\). 
Therefore \(\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}-\beta,r+\varepsilon)\leq 1-\lambda\) or \(\nu(x_{ij}-\beta,r+\varepsilon)\geq\lambda\}\subset P\). Since \(P\in\mathcal{I}_{2}\), \(\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}-\beta,r+\varepsilon)\leq 1-\lambda\) or \(\nu(x_{ij}-\beta,r+\varepsilon)\geq\lambda\}\in\mathcal{I}_{2}\). Therefore \(\{x_{mn}\}\) is rough \(\mathcal{I}_{2}\)-convergent to \(\beta\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\). This completes the proof.

**Theorem 3.6**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then there do not exist \(\beta_{1},\beta_{2}\in\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) for some \(r>0\) and every \(\lambda\in(0,1)\) such that \(\mu(\beta_{1}-\beta_{2},mr)\leq 1-\lambda\) and \(\nu(\beta_{1}-\beta_{2},mr)\geq\lambda\) for \(m(\in\mathbb{R})>2\)._

Proof.: We prove it by contradiction. If possible, let there exist \(\beta_{1},\beta_{2}\in\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\) such that \(\mu(\beta_{1}-\beta_{2},mr)\leq 1-\lambda\) and \(\nu(\beta_{1}-\beta_{2},mr)\geq\lambda\) for \(m(\in\mathbb{R})>2\). Now, for a given \(\lambda\in(0,1)\) choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). Then for every \(\varepsilon>0\), the sets \(A,B\in\mathcal{I}_{2}\), where \(A=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\beta_{1},r+\frac{\varepsilon}{2})\leq 1-s\text{ or }\nu(x_{mn}-\beta_{1},r+\frac{\varepsilon}{2})\geq s\}\) and \(B=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\beta_{2},r+\frac{\varepsilon}{2})\leq 1-s\text{ or }\nu(x_{mn}-\beta_{2},r+\frac{\varepsilon}{2})\geq s\}\). Then \(A^{c}\cap B^{c}\in\mathcal{F}(\mathcal{I}_{2})\). Now for \((m,n)\in A^{c}\cap B^{c}\), we have \(\mu(\beta_{1}-\beta_{2},2r+\varepsilon)\geq\mu(x_{mn}-\beta_{1},r+\frac{\varepsilon}{2})\star\mu(x_{mn}-\beta_{2},r+\frac{\varepsilon}{2})>(1-s)\star(1-s)>1-\lambda\) and \(\nu(\beta_{1}-\beta_{2},2r+\varepsilon)\leq\nu(x_{mn}-\beta_{1},r+\frac{\varepsilon}{2})\circ\nu(x_{mn}-\beta_{2},r+\frac{\varepsilon}{2})<s\circ s<\lambda\). Therefore, \[\mu(\beta_{1}-\beta_{2},2r+\varepsilon)>1-\lambda\text{ and }\nu(\beta_{1}-\beta_{2},2r+\varepsilon)<\lambda \tag{3.1}\] Now if we put \(\varepsilon=mr-2r,m>2\) in Equation 3.1 then we have \(\mu(\beta_{1}-\beta_{2},mr)>1-\lambda\) and \(\nu(\beta_{1}-\beta_{2},mr)<\lambda\), which is a contradiction. This completes the proof.

Now we define the notion of \(\mathcal{I}_{2}\)-cluster point, analogous to Definition 2.14. Ozcan and Or [29] defined the rough statistical cluster point of double sequences in an IFNS and, here, we give its ideal version in the same space. Also, we prove an important result, analogous to ([39], Theorem 4.7), in the same space which will be useful in the sequel.

**Definition 3.3**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then a point \(\zeta\in X\) is said to be an \(\mathcal{I}_{2}\)-cluster point of \(\{x_{mn}\}\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\zeta,\varepsilon)>1-\lambda\) and \(\nu(x_{mn}-\zeta,\varepsilon)<\lambda\}\notin\mathcal{I}_{2}\)._

We denote by \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\) the set of all \(\mathcal{I}_{2}\)-cluster points of \(\{x_{mn}\}\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\).

**Definition 3.4**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) and \(r\geq 0\). Then a point \(\beta\in X\) is said to be a rough \(\mathcal{I}_{2}\)-cluster point of \(\{x_{mn}\}\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\) if for every \(\varepsilon>0\) and \(\lambda\in(0,1)\), \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\beta,r+\varepsilon)>1-\lambda\) and \(\nu(x_{mn}-\beta,r+\varepsilon)<\lambda\}\notin\mathcal{I}_{2}\)._

We denote by \(\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})\) the set of all rough \(\mathcal{I}_{2}\)-cluster points of \(\{x_{mn}\}\) with respect to the intuitionistic fuzzy norm \((\mu,\nu)\).

**Remark 3.3**.: Now if we put \(r=0\) in Definition 3.4, then \(\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})=\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\).

**Theorem 3.7**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) such that \(x_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}L\). Then \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})=\{L\}\)._

Proof.: If possible, let \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})=\{L,\mathcal{J}\}\), where \(L\neq\mathcal{J}\). For a given \(\lambda\in(0,1)\), choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). Then for every \(\varepsilon>0\), \(K_{1}=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-L,\frac{\varepsilon}{2})>1-s\) and \(\nu(x_{mn}-L,\frac{\varepsilon}{2})<s\}\notin\mathcal{I}_{2}\) and \(K_{2}=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\mathcal{J},\frac{\varepsilon}{2})>1-s\) and \(\nu(x_{mn}-\mathcal{J},\frac{\varepsilon}{2})<s\}\notin\mathcal{I}_{2}\).
Clearly \(K_{1}\cap K_{2}=\emptyset\), If not, let \((i,j)\in K_{1}\cap K_{2}\). Then \(\mu(L-\mathcal{J},\varepsilon)\geq\mu(x_{ij}-L,\frac{\varepsilon}{2})\star\mu( x_{ij}-\mathcal{J},\frac{\varepsilon}{2})>(1-s)\star(1-s)>1-\lambda\) and \(\nu(L-\mathcal{J},\varepsilon)\leq\nu(x_{ij}-L,\frac{\varepsilon}{2})\circ \nu(x_{ij}-\mathcal{J},\frac{\varepsilon}{2})<s\circ s<\lambda\). Since \(\lambda\in(0,1)\) is arbitrary, \(\mu(L-\mathcal{J},\varepsilon)=1\), which gives \(L=\mathcal{J}\) and \(\nu(L-\mathcal{J},\varepsilon)=0\), which gives \(L=\mathcal{J}\) for all \(\varepsilon>0\). This yields to a contradiction. Therefore \(K_{2}\subset K_{1}^{c}\). Since \(x_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}L\), then \(K_{1}^{c}=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-L,\frac{\varepsilon }{2})\leq 1-s\) or \(\nu(x_{mn}-L,\frac{\varepsilon}{2})\geq s\}\in\mathcal{I}_{2}\). Hence \(K_{2}\in\mathcal{I}_{2}\), which contradicts \(K_{2}\not\in\mathcal{I}_{2}\). Therefore, \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})=\{L\}\). This completes the proof. **Theorem 3.8**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then, for all \(r>0\) the set \(\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})\) is closed with respect to the intuitionistic fuzzy norm \((\mu,\nu)\)._ Proof.: The proof is almost similar to the proof of Theorem 3.3. So we omit details. **Theorem 3.9**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then for an arbitrary \(x_{1}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\) and \(\lambda\in(0,1)\) we have \(\mu(x_{2}-x_{1},r)>1-\lambda\) and \(\nu(x_{2}-x_{1},r)<\lambda\) for all \(x_{2}\in\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})\)._ Proof.: For given \(\lambda\in(0,1)\), choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). Since \(x_{1}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\), for every \(\varepsilon>0\), we get \[\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{1},\varepsilon)>1-s\text{ and }\nu(x_{mn}-x_{1},\varepsilon)<s\}\notin\mathcal{I}_{2}. \tag{3.2}\] Now, let \((i,j)\in\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{1},\varepsilon)>1-s\) and \(\nu(x_{mn}-x_{1},\varepsilon)<s\}\). Then we have \(\mu(x_{ij}-x_{2},r+\varepsilon)\geq\mu(x_{ij}-x_{1},\varepsilon)\star\mu(x_{1 }-x_{2},r)>(1-s)\star(1-s)>1-\lambda\) and \(\nu(x_{ij}-x_{2},r+\varepsilon)\leq\nu(x_{ij}-x_{1},\varepsilon)\circ\nu(x_{2 }-x_{1},r)<s\circ s<\lambda\). Therefore \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{1},\varepsilon)>1-s\text{ and }\nu(x_{mn}-x_{1},\varepsilon)<s\}\subset\{(m,n)\in\mathbb{N}\times \mathbb{N}:\mu(x_{mn}-x_{2},r+\varepsilon)>1-s\text{ and }\nu(x_{mn}-x_{2},r+\varepsilon)<s\}\). So, from Equation 3.2, we get \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{2},r+\varepsilon)>1-s\text{ and }\nu(x_{mn}-x_{2},r+\varepsilon)<s\}\notin\mathcal{I}_{2}\). Hence \(x_{2}\in\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})\). This completes the proof. **Theorem 3.10**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then for some \(r>0\), \(\lambda\in(0,1)\) and fixed \(x_{0}\in X\) we have_ \[\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})=\bigcup_{x_{0}\in\Gamma_{(x _{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B(x_{0},\lambda,r)}.\] Proof.: For a given \(\lambda\in(0,1)\), choose \(s\in(0,1)\) such that \((1-s)\star(1-s)>1-\lambda\) and \(s\circ s<\lambda\). 
Let \(y_{0}\in\bigcup_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})} \overline{B(x_{0},\lambda,r)}\). Then there is \(x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\) such that \(\mu(x_{0}-y_{0},r)>1-s\) and \(\nu(x_{0}-y_{0},r)<s\). Now, since \(x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\), for every \(\varepsilon>0\) there exists a set \(M=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{0},\varepsilon)>1-s\text{ and }\nu(x_{mn}-x_{0},\varepsilon)<s\}\) with \(M\notin\mathcal{I}_{2}\). Let \((i,j)\in M\). Now we have \(\mu(x_{ij}-y_{0},r+\varepsilon)\geq\mu(x_{ij}-x_{0},\varepsilon)\star\mu(x_{0}-y_ {0},r)>(1-s)\star(1-s)>1-\lambda\) and \(\nu(x_{ij}-y_{0},r+\varepsilon)\leq\nu(x_{ij}-x_{0},\varepsilon)\circ\nu(x_{0} -y_{0},r)<s\circ s<\lambda\). Therefore \(M\subset\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}-y_{0},r+\varepsilon)> 1-\lambda\text{ and }\nu(x_{ij}-y_{0},r+\varepsilon)<\lambda\}\). Since \(M\notin\mathcal{I}_{2}\), \(\{(i,j)\in\mathbb{N}\times\mathbb{N}:\mu(x_{ij}-y_{0},r+\varepsilon)>1-\lambda\text { and }\nu(x_{ij}-y_{0},r+\varepsilon)<\lambda\}\notin\mathcal{I}_{2}\). Hence \(y_{0}\in\Gamma^{r}_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\). Therefore \(\bigcup_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B( x_{0},\lambda,r)}\subseteq\Gamma^{r}_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\). Conversely, suppose that \(y_{*}\in\Gamma^{r}_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\). We shall show that \(y_{*}\in\bigcup_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})} \overline{B(x_{0},\lambda,r)}\). If possible, let \(y_{*}\notin\bigcup_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})} \overline{B(x_{0},\lambda,r)}\). So, \(\mu(x_{0}-y_{*},r)\leq 1-\lambda\) and \(\nu(x_{0}-y_{*},r)\geq\lambda\) for every \(x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\). Now, by Theorem 3.9, we have \(\mu(x_{0}-y_{*},r)>1-\lambda\) and \(\nu(x_{0}-y_{*},r)<\lambda\), which is a contradiction. Therefore, \(\Gamma^{r}_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\subseteq\bigcup_{x_{0}\in \Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B(x_{0},\lambda,r)}\). Hence \(\Gamma^{r}_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})=\bigcup_{x_{0}\in\Gamma_{ (x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B(x_{0},\lambda,r)}\). This completes the proof. **Theorem 3.11**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\). Then for any \(\lambda\in(0,1)\) the following statements hold:_ 1. _If_ \(x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\) _then_ \(\mathcal{I}_{2}^{(\mu,\nu)}\)_-LIM_\({}^{r}_{x_{mn}}\subseteq\overline{B(x_{0},\lambda,r)}\)_._ 2. \(\mathcal{I}_{2}^{(\mu,\nu)}\)_-LIM_\({}^{r}_{x_{mn}}=\bigcap_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{( \mu,\nu)})}\overline{B(x_{0},\lambda,r)}=\{\eta\in X:\Gamma_{(x_{mn})}( \mathcal{I}_{2}^{(\mu,\nu)})\subseteq\overline{B(\eta,\lambda,r)}\}\)_._ Proof.: 1. For a given \(\lambda\in(0,1)\), choose \(s_{1},s_{2}\in(0,1)\) such that \((1-s_{1})\star(1-s_{2})>1-\lambda\) and \(s_{1}\circ s_{2}<\lambda\). If possible, we suppose that there exists an element \(x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\) and \(\gamma\in\mathcal{I}_{2}^{(\mu,\nu)}\)-LIM_\({}^{r}_{x_{mn}}\) such that \(\gamma\notin\overline{B(x_{0},\lambda,r)}\) i.e., \(\mu(\gamma-x_{0},r)<1-\lambda\) and \(\nu(\gamma-x_{0},r)>\lambda\). Let \(\varepsilon>0\) be given. 
Since \(\gamma\in\mathcal{I}_{2}^{(\mu,\nu)}\)-LIM_\({}^{r}_{x_{mn}}\), the sets \(M_{1}=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-x_{0},\varepsilon)>1-s_{1}\text { and }\nu(x_{mn}-x_{0},\varepsilon)<s_{1}\}\notin\mathcal{I}_{2}\) and \(M_{2}=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\gamma,r+\varepsilon)\leq 1-s_{2} \text{ or }\nu(x_{mn}-\gamma,r+\varepsilon)\geq s_{2}\}\in\mathcal{I}_{2}\). Now for \((i,j)\in M_{1}\cap M_{2}^{c}\), we have \(\mu(\gamma-x_{0},r)\geq\mu(x_{ij}-x_{0},\varepsilon)\star\mu(x_{ij}-\gamma,r+ \varepsilon)>(1-s_{1})\star(1-s_{2})>1-\lambda\) and \(\nu(\gamma-x_{0},r)\leq\nu(x_{ij}-x_{0},\varepsilon)\circ\nu(x_{ij}-\gamma,r+ \varepsilon)<s_{1}\circ s_{2}<\lambda\), which is a contradiction. Therefore \(\gamma\in\overline{B(x_{0},\lambda,r)}\). Hence \(\mathcal{I}_{2}^{(\mu,\nu)}\)-LIM_\({}^{r}_{x_{mn}}\subseteq\overline{B(x_{0},\lambda,r)}\). 2. Using Part 1, we get \[\mathcal{I}_{2}^{(\mu,\nu)}\text{-LIM}^{r}_{x_{mn}}\subseteq\bigcap_{x_{0}\in \Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B(x_{0},\lambda,r)}.\] (3.3) Now, let \(\beta\in\bigcap_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})} \overline{B(x_{0},\lambda,r)}\). So, we have \(\mu(\beta-x_{0},r)\geq 1-\lambda\) and \(\nu(\beta-x_{0},r)\leq\lambda\) for \(x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\) and, therefore \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\subseteq\overline{B(\beta, \lambda,r)}\) i.e., we can write \[\bigcap_{x_{0}\in\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B(x_{0}, \lambda,r)}\subseteq\{\eta\in X:\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)}) \subseteq\overline{B(\eta,\lambda,r)}\} \tag{3.4}\] Now we shall show that \(\{\eta\in X:\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\subseteq\overline{B( \eta,\lambda,r)}\}\subseteq\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\). Let \(\beta\notin\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\). Then for \(\varepsilon>0\), \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\beta,r+\varepsilon)\leq 1- \lambda\text{ or }\nu(x_{mn}-\beta,r+\varepsilon)\geq\lambda\}\notin\mathcal{I}_{2}\), which gives there exists an \(\mathcal{I}_{2}\)-cluster point \(x_{0}\) for the double sequence \(\{x_{mn}\}\) with \(\mu(\beta-x_{0},r+\varepsilon)\leq 1-\lambda\) and \(\beta-x_{0},r+\varepsilon)\geq\lambda\). Hence \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\nsubseteq\overline{B(\beta, \lambda,r)}\) and so, \(\beta\notin\{\eta\in X:\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\subseteq \overline{B(\eta,\lambda,r)}\}\). So, \[\{\eta\in X:\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\subseteq\overline{B (\eta,\lambda,r)}\}\subseteq\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}. \tag{3.5}\] Therefore from Equations 3.3, 3.4 and 3.5, we have \[\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}=\bigcap_{x_{0}\in\Gamma_{ (x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})}\overline{B(x_{0},\lambda,r)}=\{\eta \in X:\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})\subseteq\overline{B(\eta, \lambda,r)}\}.\] This completes the proof. **Theorem 3.12**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) such that \(x_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}y_{*}\). Then, for any \(\lambda\in(0,1)\) and \(r>0\), we have \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}=\overline{B(y_{*},\lambda, r)}\)._ Proof.: Let \(\lambda_{2}\in(0,1)\). Choose \(\lambda_{1}\in(0,1)\) such that \(\lambda_{1}\star\lambda>\lambda_{2}\) and \(\lambda_{1}\circ\lambda<\lambda_{2}\). 
Let \(\varepsilon>0\) be given. Since \(x_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}y_{*}\), the set \(P=\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-y_{*},\varepsilon)\leq 1- \lambda_{1}\text{ or }\nu(x_{mn}-y_{*},\varepsilon)\geq\lambda_{1}\}\in\mathcal{I}_{2}\). Now, let \(\xi\in\overline{B(y_{*},\lambda,r)}\). Then \(\mu(\xi-y_{*},r)\geq 1-\lambda\) and \(\nu(\xi-y_{*},r)\leq\lambda\). Now for \((m,n)\in P^{c}\), we have \(\mu(x_{mn}-\xi,r+\varepsilon)\geq\mu(x_{mn}-y_{*},\varepsilon)\star\mu(\xi-y_ {*},r)>(1-\lambda_{1})\star(1-\lambda)>1-\lambda_{2}\) and \(\nu(x_{mn}-\xi,r+\varepsilon)\leq\nu(x_{mn}-y_{*},\varepsilon)\circ\nu(\xi-y_ {*},r)<\lambda_{1}\circ\lambda<\lambda_{2}\). Therefore \(\{(m,n)\in\mathbb{N}\times\mathbb{N}:\mu(x_{mn}-\xi,r+\varepsilon)\leq 1- \lambda_{2}\text{ or }\nu(x_{mn}-\xi,r+\varepsilon)\geq\lambda_{2}\}\subset P\). Hence \(\xi\in\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}\). Consequently, \(\overline{B(y_{*},\lambda,r)}\subseteq\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_ {x_{mn}}^{r}\). Now, using Theorem 3.7 and 3.11, we have \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}\subseteq\overline{B(y_{*}, \lambda,r)}\). Therefore \(\mathcal{I}_{2}^{(\mu,\nu)}\text{-}LIM_{x_{mn}}^{r}=\overline{B(y_{*},\lambda, r)}\). This completes the proof. **Theorem 3.13**.: _Let \(\{x_{mn}\}\) be a double sequence in an IFNS \((X,\mu,\nu,\star,\circ)\) such that \(x_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}L\). Then, for any \(\lambda\in(0,1)\), \(\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})=\mathcal{I}_{2}^{(\mu,\nu)} \text{-}LIM_{x_{mn}}^{r}\) for some \(r>0\)._ Proof.: Since \(x_{mn}\xrightarrow{\mathcal{I}_{2}^{(\mu,\nu)}}L\), therefore from Theorem 3.7, \(\Gamma_{(x_{mn})}(\mathcal{I}_{2}^{(\mu,\nu)})=\{L\}\). Again from Theorem 3.10, \(\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})=\overline{B(L,\lambda,r)}\). And, from Theorem 3.12, \(\overline{B(L,\lambda,r)}=\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\). Therefore \(\Gamma_{(x_{mn})}^{r}(\mathcal{I}_{2}^{(\mu,\nu)})=\mathcal{I}_{2}^{(\mu,\nu)}\)-\(LIM_{x_{mn}}^{r}\). This completes the proof. ### Acknowledgments The second author is grateful to The Council of Scientific and Industrial Research (CSIR), HRDG, India, for the grant of Senior Research Fellowship during the preparation of this paper.
2307.10756
Monge solutions for discontinuous Hamilton-Jacobi equations in Carnot groups
In this paper we study Monge solutions to stationary Hamilton-Jacobi equations associated to discontinuous Hamiltonians in the framework of Carnot groups. After showing the equivalence between Monge and viscosity solutions in the continuous setting, we prove existence and uniqueness for the Dirichlet problem, together with a comparison principle and a stability result.
Fares Essebei, Gianmarco Giovannardi, Simone Verzellesi
2023-07-20T10:41:32Z
http://arxiv.org/abs/2307.10756v2
# Monge solutions for discontinuous Hamilton-Jacobi equations in Carnot groups

###### Abstract.

In this paper we study Monge solutions to stationary Hamilton-Jacobi equations associated to discontinuous Hamiltonians in the framework of Carnot groups. After showing the equivalence between Monge and viscosity solutions in the continuous setting, we prove existence and uniqueness for the Dirichlet problem, together with a comparison principle and a stability result.

Keywords. Hamilton-Jacobi equations, Carnot groups, Monge solutions, discontinuous Hamiltonians

MSC. 35F21, 35R03

G. Giovannardi has been supported by INdAM-GNAMPA 2023 Project _Variational and non-variational problems with lack of compactness_. S. Verzellesi has been supported by INdAM-GNAMPA 2023 Project _Equazioni differenziali alle derivate parziali di tipo misto o dipendenti da campi di vettori_.

## 1. Introduction

In what follows, \(\mathbb{G}\) denotes a Carnot group and we refer to \(X_{1},\dots,X_{m}\) as its _generating horizontal vector fields_. Moreover, we recall that an absolutely continuous \(\gamma\colon[a,b]\to\mathbb{G}\) is said to be _horizontal_ if there exists \(a(t)=(a_{1}(t),\dots,a_{m}(t))\in L^{\infty}((a,b),\mathbb{R}^{m})\) such that \[\dot{\gamma}(t)=\sum_{i=1}^{m}a_{i}(t)X_{i}(\gamma(t)) \tag{1.7}\] for a.e. \(t\in(a,b)\). The structural assumptions (H) allow us to associate a suitable norm to the Hamiltonian \(H\). More precisely, inspired by [14], we define \(\sigma^{\star}:\Omega\times\mathbb{R}^{m}\longrightarrow[0,\infty)\) by \[\sigma^{\star}(x,p)=\sup\{\langle-\xi,p\rangle\ :\ \xi\in Z(x)\} \tag{1.8}\] for any \(x\in\Omega\) and any \(p\in\mathbb{R}^{m}\). It is easy to observe that \(\sigma^{\star}\) is a sub-Finsler norm defined on the _horizontal bundle_ \(H\Omega\), that is the subbundle of \(T\Omega\) of the horizontal vector fields. Accordingly, we exploit \(\sigma^{\star}\) to induce a distance \(d_{\sigma^{\star}}\) on \(\Omega\), whose Euclidean counterpart is known in the literature as _optical length function_, by \[d_{\sigma^{\star}}(x,y)=\inf\left\{\int_{0}^{1}\sigma^{\star}(\gamma(t),\dot{\gamma}(t))\,dt\ :\ \gamma:[0,1]\longrightarrow\Omega,\,\gamma\text{ is horizontal, }\gamma(0)=x,\,\gamma(1)=y\right\} \tag{1.9}\] for each \(x,y\in\Omega\). Chow's connectivity theorem (cf. [31]) implies that \(d_{\sigma^{\star}}\) is well defined. Again inspired by [38, 14], we are ready to state the main definition of this paper.

**Definition 1.1** (Monge solution).: Let \(\Omega\subset\mathbb{G}\) be an open and connected subset of \(\mathbb{G}\). If \(u\in C(\Omega)\), we say that \(u\) is a _Monge solution_ (resp. _subsolution_, _supersolution_) to (3.1) in \(\Omega\) if \[\liminf_{x\to x_{0}}\frac{u(x)-u(x_{0})+d_{\sigma^{\star}}(x_{0},x)}{d_{\Omega}(x_{0},x)}=0\quad(\text{resp. }\geqslant,\leqslant) \tag{1.10}\] for any \(x_{0}\in\Omega\), where \(d_{\Omega}\) is the standard _Carnot-Caratheodory distance_ on \(\Omega\) (cf. Section 2).

The aim of this paper is to investigate the main aspects of this definition in the sub-Riemannian setting, recovering the Euclidean results achieved in [14]. A first step consists in relating this notion to the classical sub-Riemannian notion of viscosity solution. To this aim, after describing some properties of the optical length function (1.9) (cf. Section 3) and of viscosity solutions in Carnot groups (cf.
Section 4), we will show that the theory of Monge solutions embeds the theory of viscosity solutions, proving the equivalence of these two notion as soon as the Hamiltonian is continuous. **Theorem 1.2**.: _Let \(\Omega\subset\mathbb{G}\) be a domain. Let \(H\) be a continuous Hamiltonian satisfying_ (H)_. Then \(u\in C(\Omega)\) is a Monge subsolution (resp. supersolution) to_ (1.5) _if and only if it is a viscosity subsolution (resp. supersolution) to_ (1.5)_._ In order to prove Theorem 1.2, we will first need to recover a suitable Hopf-Lax formula for the Dirichlet problem associated to (1.5). Anyway, it is interesting to observe that the solution of this particular issue let our sub-Riemannian approach differs from the Euclidean one. Indeed, in the classical theory of Monge solutions (cf. [38, 14]) the optical length function is defined on the whole \(\overline{\Omega}\). This possibility relies on the fact that every two points in \(\overline{\Omega}\) can be joined by an Euclidean Lipschitz curve as soon as the boundary of \(\Omega\) is locally Lipschitz. Unfortunately this property is no longer true in our setting, since it is not always the case that two points on \(\partial\Omega\) can be connected by a horizontal curve. A useful consequence of the Euclidean approach is that the optical length function is a geodesic distance, which is no longer true in our case. To solve this problem we will often need to localize our arguments to suitable neighborhoods. The key point in which one would like to exploit the fact that the optical length function is defined up to the boundary is the validity of the classical Hopf-Lax formula. To be more precise, assume that \(g\in C(\partial\Omega)\) satisfies the compatibility condition \[g(x)-g(y)\leqslant d_{\sigma^{\star}}(x,y)\] for any \(x,y\in\partial\Omega\). Then (cf. [14, Theorem 5.3]) the function \(w\) defined by \[w(x)=\inf_{y\in\partial\Omega}\{d_{\sigma^{\star}}(x,y)+g(y)\}\] is a Monge solution to (1.5) and coincides with \(g\) on \(\partial\Omega\). Since our optical length function is defined only on \(\Omega\), this formula would become meaningless. To overcome this difficulty, we propose a possible solution by suitably extending our original Hamiltonian. To this aim, we let \[\mathcal{K}_{0}(H,\Omega):=\{K:\Omega_{0}\times\mathbb{R}^{m}\longrightarrow \mathbb{R}\,:\,\Omega\Subset\Omega_{0}\text{ is a domain, }K\text{ satisfies (H), }K\equiv H\text{ on }\Omega\times\mathbb{R}^{m}\}. \tag{1.11}\] For any fixed \(K\in\mathcal{K}_{0}(H,\Omega)\), we consider the associated metric \(\sigma^{\star}_{K}\) and optical length function \(d_{\sigma^{\star}_{K}}\). Since now \(\Omega\Subset\Omega_{0}\), \(d_{\sigma^{\star}_{K}}\) is defined on the whole \(\overline{\Omega}\). Moreover, we will show (cf. Proposition 5.2) that the definition of Monge solution on \(\Omega\) is invariant by replacing \(H\) with any \(K\in\mathcal{K}_{0}(H,\Omega)\). These facts motivate the following result. **Theorem 1.3** (Hopf-Lax formula).: _Let \(\Omega\subseteq\mathbb{G}\) be a domain and let \(H\) satisfy_ (H)_. Let \(g\in C(\partial\Omega)\) be such that there exists \(K\in\mathcal{K}_{0}(\Omega)\) for which_ \[g(x)-g(y)\leqslant d_{\sigma^{*}_{K}}(x,y) \tag{1.12}\] _for any \(x,y\in\partial\Omega\). Let us define_ \[w(x):=\inf_{y\in\partial\Omega}\{d_{\sigma^{*}_{K}}(x,y)+g(y)\}. 
\tag{1.13}\] _Then \(w\in\operatorname{Lip}(\Omega,d_{\Omega})\cap C(\overline{\Omega})\) and \(w\) is a Monge solution to the Dirichlet problem_ \[H(x,Xw) =0\quad\text{in }\Omega\] \[w =g\quad\text{on }\partial\Omega.\] After proving Theorem 1.3, we move on the study of the Hamilton-Jacobi equation in the discontinuous setting. First, we show the validity of the following comparison principle for Monge solutions. **Theorem 1.4** (Comparison Principle).: _Let \(H\) be an Hamiltonian satisfying_ (H)_, let \(u\in C(\widetilde{\Omega})\) be a Monge subsolution of (3.1) and \(v\in C(\widetilde{\Omega})\) be a Monge supersolution of (3.1). If \(u\leqslant v\) on \(\partial\Omega\), then \(u\leqslant v\) in \(\Omega\)._ Notice that, combining Theorem 1.3 and Theorem 1.4, we guarantee existence and uniqueness for the Dirichlet problem associated to (1.5) under the compatibility condition (1.12). Finally, inspired by [14], we show that the notion of Monge solution is stable under suitable notions of convergence for sequences of Hamiltonians and Monge solutions. **Theorem 1.5** (Stability).: _Let \((H_{n})_{n\in\mathbb{N}}\) and \(H_{\infty}\) satisfy_ (H) _with a uniform choice of \(\alpha\). For any \(n\in\mathbb{N}\), let \(u_{n}\in C(\Omega)\) be a Monge solution to_ \[H_{n}(x,Xu_{n}(x))=0\] _on \(\Omega\). Assume that \(d_{\sigma^{*}_{n}}\to d_{\sigma^{*}_{\infty}}\) locally uniformly on \(\Omega\times\Omega\), where, for any \(n\in\mathbb{N}\), \(d_{\sigma^{*}_{n}}\) is the optical length function associated to \(H_{n}\) and \(d_{\sigma^{*}_{\infty}}\) is the optical length function associated to \(H_{\infty}\). Assume that there exists \(u_{\infty}\in C(\Omega)\) such that \(u_{n}\to u_{\infty}\) locally uniformly on \(\Omega\). Then \(u_{\infty}\) is a Monge solution to_ \[H_{\infty}(x,Xu_{\infty}(x))=0\] _on \(\Omega\)._ The paper is organized as follows. In Section 2 we recall some basic facts and properties about Carnot groups. In Section 3 we study some properties of the induced metric \(\sigma^{\star}\) and the optical length function \(d_{\sigma^{\star}}\). Section 4 is devoted to a short survey about viscosity solutions in Carnot groups. In Section 5 we introduce the Hopf-Lax formula (1.13) and we prove Theorem 1.3. In Section 6 we show the equivalence between Monge and viscosity solutions, proving Theorem 1.2. Finally, Section 7 is devoted to the proof of Theorem 1.4 and Theorem 1.5. ### Acknowledgements The authors would like to thank Andrea Pinamonti for suggesting them the study of this problem and Eugenio Vecchi for useful discussions about these topics. ## 2. Preliminaries ### Carnot groups Here and in the following, we fix a Carnot group \(\mathbb{G}\equiv(\mathbb{R}^{n},\cdot)\) of step \(k\), rank \(m\) and dimension \(n\geqslant m\), where \(\cdot\) is the group law. Moreover, we fix an adapted basis \(X_{1},\dots,X_{n}\) that coincides with the canonical basis of \(\mathbb{R}^{n}\) at the origin, and we denote points \(y\in\mathbb{G}\) by \[y=(y^{(1)},\dots,y^{(k)}),\] where \(y^{(j)}=(y_{h_{j-1}+1},\dots,y_{h_{j}})\) for each \(j=1,\dots,k\), and where we recall that the usual identification between \(\mathbb{G}\) an its Lie algebra is given by \[y=\exp\left(\sum_{j=1}^{k}\sum_{\ell=h_{j-1}+1}^{h_{j}}y_{\ell}X_{\ell}\right).\] Moreover, if \(d\) is a distance, we denote by \(B_{r}(x,d)\) the \(d\)-metric ball of radius \(r>0\) centered at \(x\in\mathbb{G}\). 
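For concreteness, and purely as an illustration of the notation just introduced (this example is not part of the original text and is not used in the sequel), we recall the standard model case of the first Heisenberg group \(\mathbb{H}^{1}\equiv(\mathbb{R}^{3},\cdot)\), a Carnot group of step \(k=2\), rank \(m=2\) and dimension \(n=3\). Writing \(y=(y^{(1)},y^{(2)})\) with \(y^{(1)}=(y_{1},y_{2})\) and \(y^{(2)}=y_{3}\), the group law in exponential coordinates reads

\[y\cdot z=\left(y_{1}+z_{1},\,y_{2}+z_{2},\,y_{3}+z_{3}+\tfrac{1}{2}(y_{1}z_{2}-y_{2}z_{1})\right),\]

an adapted basis coinciding with the canonical basis of \(\mathbb{R}^{3}\) at the origin is given by

\[X_{1}=\partial_{y_{1}}-\frac{y_{2}}{2}\,\partial_{y_{3}},\qquad X_{2}=\partial_{y_{2}}+\frac{y_{1}}{2}\,\partial_{y_{3}},\qquad X_{3}=[X_{1},X_{2}]=\partial_{y_{3}},\]

and the horizontal bundle is spanned at each point by \(X_{1}\) and \(X_{2}\).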
For any \(\lambda>0\) and any \(x\in\mathbb{G}\), we define the _left translation_\(\tau_{x}:\mathbb{G}\longrightarrow\mathbb{G}\) and the _intrinsic dilation_\(\delta_{\lambda}:\mathbb{G}\longrightarrow\mathbb{G}\) by \[\tau_{x}(z):=x\cdot z\qquad\text{and}\qquad\delta_{\lambda}(y):=(\lambda y^{(1 )},\lambda^{2}y^{(2)},\dots,\lambda^{k}y^{(k)})\] for any \(y,z\in\mathbb{G}\). Both \(\tau_{x}\) and \(\delta_{\lambda}\) are smooth diffeomorphisms, and \(\delta_{\lambda}\) is a Lie group isomorphism. The subbundle of the tangent bundle \(T\mathbb{G}\) that is spanned by the vector fields \(X_{1},\dots,X_{m}\) is called the _horizontal bundle_\(H\mathbb{G}\), with fibers given by \[H_{x}\mathbb{G}=\operatorname{span}\left\{X_{1}(x),\dots,X_{m}(x)\right\}.\] As costumary, being the horizontal bundle trivial, we identify each fiber \(H_{x}\mathbb{G}\) with \(\mathbb{R}^{m}\), considering an horizontal vector field \(\sum_{j=1}^{m}a_{j}(x)X_{j}|_{x}\) as the vector-valued function \(x\mapsto(a_{1}(x),\dots,a_{m}(x))\). In this way, a sub-Riemannian structure can be defined on \(\mathbb{G}\) considering a scalar product \(\langle\cdot,\cdot\rangle_{x}\) that makes \(\{X_{1},\dots,X_{m}\}\) orthonormal at each point \(x\in\mathbb{G}\). Moreover, we denote by \(|\cdot|_{x}\) the norm induced by \(\langle\cdot,\cdot\rangle_{x}\), namely \(|v|_{x}:=\sqrt{\langle v,v\rangle_{x}}\) for every \(v\in H_{x}\mathbb{G}\). Notice that, with the above identification between \(H_{x}\mathbb{G}\) and \(\mathbb{R}^{m}\), \(\langle\cdot,\cdot\rangle_{x}\) coincides with Euclidean scalar product \(\langle\cdot,\cdot\rangle\) on \(\mathbb{R}^{m}\). Finally, we denote by \(\pi:\mathbb{G}\longrightarrow H\mathbb{G}\) the smooth section defined by \[\pi(y)=\sum_{j=1}^{m}y_{j}X_{j}(y)\] for any \(y\in\mathbb{G}\). ### Sub-Riemannian distances We recall that a horizontal curve \(\gamma:[a,b]\longrightarrow\mathbb{G}\) is called _sub-unit_ whenever the vector-valued function \(a\) as in (1.7) satisfies \(\|a\|_{\infty}\leqslant 1\). In the following, for any domain \(\Omega\subseteq\mathbb{G}\), we denote by \(\mathcal{H}(\Omega)\) the set \[\mathcal{H}(\Omega):=\{\gamma:[0,T]\longrightarrow\Omega\,:\,\gamma\text{ is sub-unit, }T>0\}.\] and we define the _Carnot-Caratheodory distance_ on \(\Omega\) by \[d_{\Omega}(x,y)\coloneqq\inf\left\{L_{\Omega}(\gamma)\colon\gamma:[0,T] \longrightarrow\Omega,\ \gamma\in\mathcal{H}(\Omega),\,\gamma(0)=x,\,\gamma(T)=y\right\},\] where \[L_{\Omega}(\gamma):=\int_{0}^{T}|\dot{\gamma}(t)|\,dt\] and by \(\dot{\gamma}(t)\) we mean as usual the coordinates of \(\dot{\gamma}(t)\) with respect to \(X_{1}|_{\gamma(t)},\dots,X_{m}|_{\gamma(t)}\). Let us notice (cf. [36]) that an absolutely continuous curve is horizontal if and only if it is \(d_{\Omega}\)-Lipschitz. We recall that when \(\Omega=\mathbb{G}\) then \(d_{\mathbb{G}}\) is a geodesic distance, while \(d_{\Omega}\) is not geodesic in general. The following crucial result is a consequence of [37, Proposition 1.1]. **Theorem 2.1**.: _Let \(\mathbb{G}\) be a Carnot group of step \(k\), and let \(\Omega\subseteq\mathbb{G}\) be open and connected. Then the following properties hold._ 1. \((\Omega,d_{\Omega})\) _is a Carnot-Caratheodory space._ 2. 
_For any domain_ \(\tilde{\Omega}\Subset\Omega\) _there exists a positive constant_ \(C_{\tilde{\Omega}}\) _such that_ \[C_{\tilde{\Omega}}^{-1}|x-y|\leqslant d_{\Omega}(x,y)\leqslant C_{\tilde{\Omega}}|x-y|^{\frac{1}{k}}\qquad\text{ for any }x,y\in\tilde{\Omega}.\]

The rich algebraic structure of \(\mathbb{G}\) allows us to define the well known _Gauge-Koranyi distance_ on \(\mathbb{G}\) (cf. [13]). To this aim, consider the function \[\|(y^{(1)},\dots,y^{(k)})\|=\left(\sum_{j=1}^{k}|y^{(j)}|^{\frac{2k!}{j}}\right)^{\frac{1}{2k!}}\] for any \(y\in\mathbb{G}\). This is a _homogeneous norm_ (cf. [13]) on \(\mathbb{G}\), which induces the homogeneous distance \[d_{\mathfrak{g}}(y,z)=\|y^{-1}\cdot z\| \tag{2.1}\] for any \(y,z\in\mathbb{G}\). It is well known (cf. [37]) that \(d_{\mathfrak{g}}\) and \(d_{\mathbb{G}}\) are equivalent distances on \(\mathbb{G}\). Moreover, if \(\Omega\subseteq\mathbb{G}\) is any domain, then clearly \(d_{\mathbb{G}}\leqslant d_{\Omega}\). Thanks to Theorem 2.1, the following holds.

**Proposition 2.2**.: _Let \(\mathbb{G}\) be a Carnot group of step \(k\), and let \(\Omega\subseteq\mathbb{G}\) be open and connected. Then \(d_{\Omega}\), \(d_{\mathbb{G}}\) and \(d_{\mathfrak{g}}\) are locally equivalent on \(\Omega\)._

To prove Proposition 2.2, we need the following result.

**Lemma 2.3**.: _Let \(\mathbb{G}\) be a Carnot group of step \(k\), and let \(\Omega\subseteq\mathbb{G}\) be open and connected. Then, for any \(x_{0}\in\Omega\), there exists \(r>0\) such that, for any \(x,y\in B_{r}(x_{0},d_{\mathbb{G}})\), any optimal curve for \(d_{\mathbb{G}}(x,y)\) lies in \(\Omega\)._

Proof.: Assume by contradiction that there exist \(x_{0}\in\Omega\) and sequences \((x_{h})_{h},(y_{h})_{h}\), \((\gamma_{h})_{h}\) such that \(d_{\mathbb{G}}(x_{0},x_{h}),d_{\mathbb{G}}(x_{0},y_{h})<\frac{1}{h}\), \(\gamma_{h}:[0,T_{h}]\longrightarrow\mathbb{G}\) is sub-unit and is optimal for \(d_{\mathbb{G}}(x_{h},y_{h})\), and there exists \(0<t_{h}<T_{h}\) such that \(z_{h}:=\gamma_{h}(t_{h})\in\partial\Omega\). Up to a subsequence, there exists \(R>0\) such that \((x_{h})_{h},(y_{h})_{h}\subseteq B_{R}(x_{0},d_{\mathbb{G}})\Subset\Omega\). Set \[D=\inf\{d_{\mathbb{G}}(z,w)\,:\,z\in\partial\Omega,\;w\in\partial B_{R}(x_{0},d_{\mathbb{G}})\}.\] Since \(B_{R}(x_{0},d_{\mathbb{G}})\Subset\Omega\), we have \(D>0\). On one hand \[d_{\mathbb{G}}(x_{h},y_{h})\leqslant d_{\mathbb{G}}(x_{0},x_{h})+d_{\mathbb{G}}(x_{0},y_{h})\to 0\] as \(h\to\infty\). On the other hand, in view of the choice of \(\gamma_{h}\), \[d_{\mathbb{G}}(x_{h},y_{h})=d_{\mathbb{G}}(x_{h},z_{h})+d_{\mathbb{G}}(y_{h},z_{h})\geqslant 2D>0.\] A contradiction then follows.

Proof of Proposition 2.2.: In view of the previous considerations, we are left to show that for any domain \(\tilde{\Omega}\Subset\Omega\) there exists \(K_{\tilde{\Omega}}>0\) such that \(d_{\mathbb{G}}\geqslant K_{\tilde{\Omega}}d_{\Omega}\). Assume by contradiction that there exist a domain \(\tilde{\Omega}\Subset\Omega\) and two sequences \((x_{h})_{h},(y_{h})_{h}\subseteq\tilde{\Omega}\) such that \[d_{\mathbb{G}}(x_{h},y_{h})<\frac{1}{h}d_{\Omega}(x_{h},y_{h})\] for any \(h\in\mathbb{N}\). Let \(D\) be the Euclidean diameter of \(\tilde{\Omega}\). Since \(\tilde{\Omega}\) is bounded, \(D<\infty\).
Thanks to Theorem 2.1, we have that \[d_{\mathbb{G}}(x_{h},y_{h})<\frac{1}{h}d_{\Omega}(x_{h},y_{h})\leqslant\frac{1}{h}\sup_{x,y\in\tilde{\Omega}}d_{\Omega}(x,y)\leqslant\frac{C_{\tilde{\Omega}}}{h}\sup_{x,y\in\tilde{\Omega}}|x-y|^{\frac{1}{k}}\leqslant\frac{C_{\tilde{\Omega}}D^{\frac{1}{k}}}{h}.\] This implies that \(d_{\mathbb{G}}(x_{h},y_{h})\to 0\). Therefore, up to a subsequence, we can assume that \(x_{h},y_{h}\to x_{0}\) for some \(x_{0}\in\Omega\). Choose \(r\) as in Lemma 2.3, and assume up to a subsequence that \((x_{h})_{h},(y_{h})_{h}\subseteq B_{r}(x_{0},d_{\mathbb{G}})\). Then Lemma 2.3 implies that \[d_{\mathbb{G}}(x_{h},y_{h})=d_{\Omega}(x_{h},y_{h}),\] a contradiction.

### Calculus on Carnot groups

Given \(u\in L^{1}_{\rm loc}(\Omega)\), we define its distributional _horizontal gradient_ \(Xu\) by \[Xu(\varphi):=-\int_{\Omega}u\sum_{j=1}^{m}X_{j}\varphi_{j}\,dx\] for any \(\varphi=(\varphi_{1},\dots,\varphi_{m})\in C^{\infty}_{c}(\Omega,\mathbb{R}^{m})\). This notion allows us to define in the obvious way the classical functional spaces \(W^{1,\infty}_{X}(\Omega)\), \(W^{1,\infty}_{X,{\rm loc}}(\Omega)\) and \(C^{1}_{X}(\Omega)\). More specifically, we say that \(u\in W^{1,\infty}_{X}(\Omega)\) if \(u\in L^{\infty}(\Omega)\) and \(Xu\in L^{\infty}(\Omega,\mathbb{R}^{m})\), and that \(u\in C^{1}_{X}(\Omega)\) if \(u\) is continuous and \(Xu\) is continuous. The space \(W^{1,\infty}_{X,{\rm loc}}(\Omega)\) is defined accordingly. If \(d\) is a distance, we define the space \[\operatorname{Lip}(\Omega,d):=\left\{u\in C(\Omega)\,:\,\text{there exists $C>0$ such that }\sup_{x\neq y\in\Omega}\frac{|u(x)-u(y)|}{d(x,y)}\leqslant C\right\}.\] The space \(\operatorname{Lip}_{\rm loc}(\Omega,d)\) is defined in the obvious way. It is well known (cf. [29]) that \[W^{1,\infty}_{X,{\rm loc}}(\Omega)=\operatorname{Lip}_{\rm loc}(\Omega,d_{\Omega}).\] We conclude this section recalling the following differentiability result due to Pansu (cf. [39]).

**Theorem 2.4**.: _Let \(\Omega\subset\mathbb{G}\) be an open set. Let \(u\in W^{1,\infty}_{X,{\rm loc}}(\Omega)\). Then \(u\) is Pansu-differentiable at almost every \(x_{0}\in\Omega\), that is_ \[\lim_{x\to x_{0}}\frac{u(x)-u(x_{0})-\langle Xu(x_{0}),\pi(x_{0}^{-1}\cdot x)\rangle}{d_{\Omega}(x_{0},x)}=0\] _for almost every \(x_{0}\in\Omega\)._

### Subgradient in Carnot groups

In this section we recall some properties of the so-called \((X,N)\)_-subgradient_ of a function \(u\in W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\), introduced in [40] as a generalization of the classical Clarke's subdifferential (cf. [19]) and defined by \[\partial_{X,N}u(x):=\overline{\mathrm{co}}\left\{\lim_{n\to\infty}Xu(y_{n})\,:\,y_{n}\to x,\,y_{n}\notin N\text{ and }\lim_{n\to\infty}Xu(y_{n})\text{ exists}\right\}\] for any \(x\in\Omega\), where \(N\subseteq\Omega\) is any Lebesgue negligible set containing the non-Lebesgue points of \(Xu\) and \(\overline{\mathrm{co}}\) denotes the closure of the convex hull. In the sequel we will need the following result, which can be found as [40, Proposition 2.5].

**Proposition 2.5**.: _Let \(u\in W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\) and let \(\gamma\in\mathrm{AC}([-\beta,\beta],\Omega)\) be a horizontal curve as in (1.7). The function \(t\mapsto u(\gamma(t))\) belongs to \(W^{1,\infty}(-\beta,\beta)\), and there exists a function \(g\in L^{\infty}((-\beta,\beta),\mathbb{R}^{m})\) such that_ \[\frac{d(u\circ\gamma)(t)}{dt}=\langle g(t),a(t)\rangle\] _for a.e. \(t\in(-\beta,\beta)\). Moreover_ \[g(t)\in\partial_{X,N}u(\gamma(t))\] _for a.e.
\(t\in(-\beta,\beta)\)._ ## 3. Some properties of \(\sigma^{\star}\) and \(d_{\sigma^{\star}}\) Here and in the following we will be focused on Hamilton-Jacobi equations as in (1.5), that is \[H(x,Xu)=0 \tag{3.1}\] on \(\Omega\), where \(\Omega\) is a subdomain of \(\mathbb{G}\) and \(H\) satisfies the structural assumptions (H). Since the notion of Monge solution heavily depends on the properties of the associated optical length function, and hence on the properties of \(\sigma^{\star}\), let us make some preliminary considerations on these objects. First, notice that condition (H\({}_{3}\)) is equivalent to the estimate \[\frac{1}{\alpha}|v|_{x}\leqslant\sigma^{\star}(x,v)\leqslant\alpha|v|_{x} \quad\text{for every }(x,v)\in H\mathbb{G}. \tag{3.2}\] Moreover the following simple result, which is the sub-Riemannian analogous of [14, Lemma 4.2], will be useful to state the equivalence between Monge and viscosity solutions in the continuous setting. We refer to [26] for an account of sub-Finsler metrics. **Lemma 3.1**.: \(\sigma^{\star}:H\Omega\longrightarrow\mathbb{R}\) _is a sub-Finsler convex metric. Moreover, for any \(v\in\mathbb{R}^{m}\), the following hold._ 1. _If_ \(H\) _is upper semicontinuous on_ \(H\Omega\)_, then_ \(\sigma^{\star}(\cdot,v)\) _is lower semicontinuous on_ \(\Omega\)_._ 2. _If_ \(H\) _is lower semicontinuous on_ \(H\Omega\)_, then_ \(\sigma^{\star}(\cdot,v)\) _is upper semicontinuous on_ \(\Omega\)_._ Regarding the optical length function, an easy computation shows that \[d_{\sigma^{\star}}(x,y)=\inf\left\{\int_{0}^{T}\sigma^{\star}(\gamma(t),\dot{ \gamma}(t))\,dt\ :\,\gamma:[0,T]\longrightarrow\Omega,\,\gamma\in\mathcal{H}(\Omega),\,\gamma(0 )=x\,,\gamma(T)=y\right\} \tag{3.3}\] for any \(x,y\in\Omega\). The quantity (3.3) is well-defined, both because the map \(t\mapsto\sigma^{\star}(\gamma(t),\dot{\gamma}(t))\) is Borel measurable on the horizontal bundle, and because, as already mentioned, every two points in \(\Omega\) can be connected by a horizontal curve. However, \(d_{\sigma^{\star}}\) can presents some pathological behaviour without some semicontinuity assumptions (see [26, Example 5.5]). Let us discuss some properties of \(d_{\sigma^{\star}}\) which will be useful in the sequel. **Lemma 3.2**.: _The following properties hold._ 1. \(d_{\sigma^{\star}}\) _is a non-symmetric distance on_ \(\Omega\)_._ 2. \(d_{\sigma^{\star}}\) _is equivalent to_ \(d_{\Omega}\) _on_ \(\Omega\)_, i.e._ \[\frac{1}{\alpha}\,d_{\Omega}(x,y)\leqslant d_{\sigma^{\star}}(x,y)\leqslant \alpha\,d_{\Omega}(x,y)\] _for any_ \(x,y\in\Omega\)_._ 3. \(d_{\sigma^{\star}}\) _is_ \(d_{\Omega}\)_-Lipschitz on_ \(\Omega\times\Omega\)_, that is_ \[|d_{\sigma^{\star}}(x,y)-d_{\sigma^{\star}}(z,w)|\leqslant\alpha(d_{\Omega}(x, z)+d_{\Omega}(y,w))\] _for any_ \(x,y,z,w\in\Omega\) Proof.: The proof of \((i)\) follows as in [26, Lemma 5.7]. \((ii)\) is an easy consequence of estimate (3.2). Let us show \((iii)\). To this aim, fix \(x,y,z,w\in\Omega\). 
Being \(d_{\sigma^{*}}\) a distance and thanks to point \((ii)\), we have that \[d_{\sigma^{*}}(x,y)-d_{\sigma^{*}}(z,w) =d_{\sigma^{*}}(x,y)-d_{\sigma^{*}}(z,y)+d_{\sigma^{*}}(z,y)-d_{ \sigma^{*}}(z,w)\] \[\leqslant d_{\sigma^{*}}(x,z)+d_{\sigma^{*}}(w,y)\] \[\leqslant\alpha(d_{\Omega}(x,z)+d_{\Omega}(y,w))\] and \[d_{\sigma^{*}}(z,w)-d_{\sigma^{*}}(x,y) =d_{\sigma^{*}}(z,w)-d_{\sigma^{*}}(x,w)+d_{\sigma^{*}}(x,w)-d_{ \sigma^{*}}(x,y)\] \[\leqslant d_{\sigma^{*}}(z,x)+d_{\sigma^{*}}(y,w)\] \[\leqslant\alpha(d_{\Omega}(x,z)+d_{\Omega}(y,w)).\] Therefore \((\Omega,d_{\sigma^{*}})\) is a non-symmetric metric space. The definition of \(d_{\sigma^{*}}\) trivially implies that it is a length space. However, we already know that it is not geodesic in general, since, for instance, \((\Omega,d_{\Omega})\) may not be geodesic. Nevertheless, exploiting standard arguments of analysis in metric spaces (cf. [3]) it can be shown that \((\Omega,d_{\sigma^{*}})\) is locally geodesic in the following sense. **Proposition 3.3**.: _For any \(x_{0}\in\Omega\) there exists \(r>0\) such that for any \(x,y\in B_{r}(x_{0},d_{\Omega})\) there exists \(\gamma\in\mathcal{H}(\Omega)\) such that \(\gamma:[0,T]\longrightarrow\Omega\), \(\gamma(0)=x\), \(\gamma(T)=y\) and_ \[d_{\sigma^{*}}(x,y)=L_{\sigma^{*}}(\gamma),\] _where_ \[L_{\sigma^{*}}(\gamma)=\int_{0}^{T}\sigma^{*}(\gamma(t),\dot{\gamma}(t))\ dt.\] We first need the following technical lemma. **Lemma 3.4**.: _For any \(x_{0}\in\Omega\), and for any \(R>0\) such that \(B_{R}(x_{0},d_{\Omega})\Subset\Omega\), there exists \(0<r<R\) and \(\bar{\varepsilon}>0\) such that, for any \(x,y\in B_{r}(x_{0},d_{\Omega})\) and for any \(0<\varepsilon<\bar{\varepsilon}\), every horizontal curve \(\gamma:[0,T]\longrightarrow\Omega\) such that \(\gamma(0)=x\), \(\gamma(T)=y\) and_ \[d_{\sigma^{*}}(x,y)\geqslant L_{\sigma^{*}}(\gamma)-\varepsilon\] _lies in \(B_{R}(x_{0},d_{\Omega})\)._ Proof.: Assume by contradiction that there exists \(x_{0}\in\Omega\), \(R>0\) with \(B_{R}(x_{0},d_{\Omega})\Subset\Omega\) and sequences \((x_{h})_{h},(y_{h})_{h}\) and \((\gamma_{h})_{h}\) such that \(d_{\sigma^{*}}(x_{0},x_{h}),d_{\sigma^{*}}(x_{0},y_{h})<\frac{1}{h}\), \(\gamma_{h}:[0,T_{h}]\longrightarrow\Omega\) is sub-unit, \(\gamma_{h}(0)=x_{h}\), \(\gamma_{h}(T_{h})=y_{h}\) and \[d_{\sigma^{*}}(x_{h},y_{h})\geqslant L_{\sigma^{*}}(\gamma_{h})-\frac{1}{h},\] and moreover there exists \(0<t_{h}<T_{h}\) such that \(z_{h}\coloneqq\gamma_{h}(t_{h})\) satisfies \(d_{\Omega}(x_{0},z_{h})=R\). Up to a subsequence, \((x_{h})_{h},(y_{h})_{h}\subseteq B_{\frac{R}{2}}(x_{0},d_{\Omega})\Subset\Omega\). Set \[D:=\inf\{d_{\Omega}(z,w)\,:\,z\in B_{\frac{R}{2}}(x_{0},d_{\Omega}),\,w\in \partial B_{R}(x_{0},d_{\Omega})\}>0.\] On one hand \[d_{\Omega}(x_{h},y_{h})\leqslant d_{\Omega}(x_{h},x_{0})+d_{\Omega}(x_{0},y_{ h})\to 0\] as \(h\to\infty\). On the other hand, in view of the choice of the curves \(\gamma_{h}\), \[d_{\Omega}(x_{h},y_{h}) \geqslant\frac{1}{\alpha}d_{\sigma^{*}}(x_{h},y_{h})\] \[\geqslant\frac{1}{\alpha}d_{\sigma^{*}}(x_{h},z_{h})+\frac{1}{ \alpha}d_{\sigma^{*}}(z_{h},y_{h})-\frac{1}{h\alpha}\] \[\geqslant\frac{1}{\alpha^{2}}d_{\Omega}(x_{h},z_{h})+\frac{1}{ \alpha^{2}}d_{\Omega}(z_{h},y_{h})-\frac{1}{h\alpha}\] \[\geqslant\frac{D}{\alpha^{2}}\] for \(h\) big enough. A contradiction then follows. Proof of Proposition 3.3.: Let \(x_{0}\in\Omega\) and \(R>0\) be such that \(B_{R}(x_{0},d_{\Omega})\Subset\Omega\). Then let \(r>0\) be as in Lemma 3.4. 
Let \(x,y\in B_{r}(x_{0},d_{\Omega})\) and let \((\gamma_{h})_{h}\) be a sequence of horizontal curves such that \(\gamma_{h}(0)=x\), \(\gamma_{h}(1)=y\) and \[L_{\sigma^{*}}(\gamma_{h})\leqslant d_{\sigma^{*}}(x,y)+\frac{1}{h}. \tag{3.4}\] In view of Lemma 3.4, we can assume that \(\gamma_{h}([0,1])\subseteq\overline{B_{R}(x_{0},d_{\Omega})}\subseteq\Omega\) for any \(h\in\mathbb{N}\). Notice that (3.4) and \((H_{3})\) imply that \((\gamma_{h})_{h}\) is equicontinuous on \((\overline{B_{R}(x_{0},d_{\Omega})},d_{\sigma^{*}})\). Since the latter is a compact metric space, the Ascoli-Arzela Theorem implies the existence of a horizontal curve \(\gamma:[0,1]\longrightarrow\Omega\) such that \((\gamma_{h})_{h}\) converges uniformly to \(\gamma\). Since \(d_{\sigma^{*}}\) is lower semicontinuous with respect to the uniform convergence of curves, (3.4) allows us to conclude. **Proposition 3.5**.: _Assume that \(H\) is upper semicontinuous on \(H\Omega\). Then it holds that_ \[\liminf_{t\to 0^{+}}\frac{d_{\sigma^{*}}(x,x\cdot\delta_{t}(\xi,\eta))}{t}\geqslant\sigma^{*}(x,\xi)\] _for any \(x\in\Omega\), \(\xi\in\mathbb{R}^{m}\) and \(\eta\in\mathbb{R}^{n-m}\)._ Proof.: Let us fix \(x\in\Omega\), \(\xi\in\mathbb{R}^{m}\) and \(\eta\in\mathbb{R}^{n-m}\). Since \(H\) is upper semicontinuous on \(H\Omega\), then \(\sigma^{*}(\cdot,\xi)\) is lower semicontinuous on \(\Omega\) by Lemma 3.1. This is equivalent to saying that, for any \(\varepsilon>0\) and for any \(\tilde{\xi}\in S^{m-1}\), there exists \(r=r(x,\varepsilon,\tilde{\xi})\) such that \(\sigma^{*}(y,\tilde{\xi})\geqslant\sigma^{*}(x,\tilde{\xi})-\varepsilon\) for any \(y\in B_{r}(x,d_{\Omega})\). Recalling that \(\sigma^{*}\) is Lipschitz in the second entry and exploiting a standard compactness argument, we infer that for any \(\varepsilon>0\) there exists \(r=r(x,\varepsilon)>0\) such that \[\sigma^{*}(y,\tilde{\xi})\geqslant\sigma^{*}(x,\tilde{\xi})-\varepsilon \tag{3.5}\] for any \(y\in B_{r}(x,d_{\Omega})\) and any \(\tilde{\xi}\in S^{m-1}\). Let us choose a sequence of sub-unit curves \(\gamma_{h}:[0,t_{h}]\longrightarrow\Omega\) in such a way that \(\gamma_{h}(0)=x\), \(\gamma_{h}(t_{h})=x\cdot\delta_{t_{h}}(\xi,\eta)\) and \[\liminf_{t\to 0^{+}}\frac{d_{\sigma^{*}}(x,x\cdot\delta_{t}(\xi,\eta))}{t}\leq\liminf_{h\to\infty}\fint_{0}^{t_{h}}\sigma^{*}(\gamma_{h}(t),\dot{\gamma}_{h}(t))\,dt.\] Since \(\lim_{t\to 0^{+}}x\cdot\delta_{t}(\xi,\eta)=x\), and in view of Lemma 3.4, the sequence of curves can be chosen in such a way that \(\gamma_{h}([0,t_{h}])\subseteq B_{r}(x,d_{\Omega})\) for any \(h\in\mathbb{N}\). 
Therefore, exploiting (3.5), we infer that \[\liminf_{h\to\infty}\fint_{0}^{t_{h}}\sigma^{*}(\gamma_{h}(t),\dot{\gamma}_{h}(t))\,dt\geqslant\liminf_{h\to\infty}\fint_{0}^{t_{h}}\sigma^{*}(x,\dot{\gamma}_{h}(t))\,dt-\varepsilon.\] For any \(h\in\mathbb{N}\), set \(\gamma_{h}=(\gamma_{h}^{1},\ldots,\gamma_{h}^{m},\gamma_{h}^{m+1},\ldots,\gamma_{h}^{n})\). We recall that in the previous equations, with a slight abuse of notation, \(\dot{\gamma}_{h}\) denotes the \(m\)-tuple of the components of \(\dot{\gamma}_{h}\) along the generating vector fields. In other words, we mean \(\dot{\gamma}_{h}(t)=(a_{h}^{1}(t),\ldots,a_{h}^{m}(t))\), where \(\dot{\gamma}_{h}(t)=\sum_{j=1}^{m}a_{h}^{j}(t)X_{j}(\gamma_{h}(t))\). It is then easy to see that \(\dot{\gamma}_{h}^{j}=a_{h}^{j}\) for any \(j=1,\ldots,m\). Therefore, by the convexity properties of \(\sigma^{*}\) and Jensen's inequality, we get that \[\liminf_{h\to\infty}\fint_{0}^{t_{h}}\sigma^{*}(x,\dot{\gamma}_{h}(t))\,dt-\varepsilon\geqslant\liminf_{h\to\infty}\sigma^{*}\left(x,\fint_{0}^{t_{h}}\dot{\gamma}_{h}(t)\,dt\right)-\varepsilon=\sigma^{*}(x,\xi)-\varepsilon.\] The thesis follows letting \(\varepsilon\) go to \(0\). ## 4. Viscosity solutions for continuous Hamilton-Jacobi equations When the Hamiltonian \(H\) is continuous, the study of (3.1) can be carried out in the setting of sub-Riemannian viscosity solutions. To introduce this notion, we recall that the _first order superjet_ of \(u\in C(\Omega)\) at a point \(x_{0}\in\Omega\) is defined by \[\partial_{X}^{+}u(x_{0})=\{v\in\mathbb{R}^{m}\,:\,u(x)\leqslant u(x_{0})+\langle v,\pi(x_{0}^{-1}\cdot x)\rangle+o(d_{\Omega}(x_{0},x))\},\] while the _first order subjet_ of \(u\) at \(x_{0}\) is defined by \[\partial_{X}^{-}u(x_{0})=\{v\in\mathbb{R}^{m}\,:\,u(x)\geqslant u(x_{0})+\langle v,\pi(x_{0}^{-1}\cdot x)\rangle+o(d_{\Omega}(x_{0},x))\}.\] It is easy to see that \(\partial_{X}^{+}u(x_{0})\) and \(\partial_{X}^{-}u(x_{0})\) are closed and convex, and that they may be empty in general. Moreover, in view of Proposition 2.2, in the previous definition \(d_{\Omega}\) can be equivalently replaced by \(d_{\mathfrak{g}}\) or \(d_{\mathbb{G}}\). In the Euclidean setting (cf. [21]) the notion of viscosity solution can be equivalently given exploiting either jets or suitable test functions (cf. [22, 20]). Following this path (cf. [35, 43]) we say that a function \(u\in C(\Omega)\) is a _jet subsolution_ to (3.1) in \(\Omega\) if \[H(x_{0},v)\leqslant 0\] for every \(x_{0}\in\Omega\) and every \(v\in\partial_{X}^{+}u(x_{0})\). Similarly, \(u\) is a _jet supersolution_ to (3.1) in \(\Omega\) if \[H(x_{0},v)\geqslant 0\] for every \(x_{0}\in\Omega\) and every \(v\in\partial_{X}^{-}u(x_{0})\). 
Finally, \(u\) is a _jet solution_ to (3.1) if it is both a jet subsolution and a jet supersolution. On the other hand, we say that \(u\) is a _viscosity subsolution_ to (3.1) if \[H(x_{0},X\psi(x_{0}))\leqslant 0\] for any \(x_{0}\in\Omega\) and for any \(\psi\in C^{1}_{X}(\Omega)\) such that \[u(x_{0})-\psi(x_{0})\geqslant u(x)-\psi(x)\] for any \(x\) in a neighborhood of \(x_{0}\). We say that \(u\) is a _viscosity supersolution_ to (3.1) if \[H(x_{0},X\psi(x_{0}))\geqslant 0\] for any \(x_{0}\in\Omega\) and for any \(\psi\in C^{1}_{X}(\Omega)\) such that \[u(x_{0})-\psi(x_{0})\leqslant u(x)-\psi(x),\] for any \(x\) in a neighborhood of \(x_{0}\). Again, we say that \(u\) is a _viscosity solution_ to (3.1) if it is both a viscosity subsolution and a viscosity supersolution. The next proposition shows that even in this sub-Riemannian setting these two definitions are equivalent. The following proof is inspired by [35]. **Proposition 4.1**.: _Let \(\Omega\subseteq\mathbb{G}\) be open. Assume that \(H\) is continuous. Then \(u\in C(\Omega)\) is a jet subsolution (resp. supersolution) to (3.1) if and only if it is a viscosity subsolution (resp. supersolution) to (3.1)._ Proof.: We prove only the half of the claim concerning subsolutions, the other half being analogous. The fact that a jet subsolution is a viscosity subsolution follows from [18, Proposition 3.2]. Conversely, assume that \(u\) is a viscosity subsolution to (3.1), let \(x_{0}\in\Omega\) and \(p\in\partial_{X}^{+}u(x_{0})\). Let \(d_{\mathfrak{g}}\) be as in (2.1). It is well known that \(y\mapsto d_{\mathfrak{g}}(x_{0},y)\) is smooth outside \(x_{0}\) and its horizontal gradient is bounded near \(x_{0}\). Therefore \[u(x)\leqslant u(x_{0})+\langle p,\pi(x_{0}^{-1}\cdot x)\rangle+o(d_{\mathfrak{g}}(x_{0},x)). \tag{4.1}\] Let \(R>0\) be such that \(B_{R}(x_{0},d_{\mathfrak{g}})\Subset\Omega\), and define \(g:(0,R]\longrightarrow\mathbb{R}\) by \[g(r):=\sup_{x\in B_{r}(x_{0},d_{\mathfrak{g}})}\frac{\max\{0,u(x)-u(x_{0})-\langle p,\pi(x_{0}^{-1}\cdot x)\rangle\}}{d_{\mathfrak{g}}(x_{0},x)}.\] Then \(g\) is nondecreasing and, by the choice of \(p\), \(\lim_{r\to 0}g(r)=0\). Hence there exists \(\tilde{g}\in C([0,R])\) such that \(\tilde{g}\) is nondecreasing, \(\tilde{g}(0)=0\) and \(\tilde{g}\geqslant g\). Let \(G(r):=\int_{0}^{r}\tilde{g}(\tau)\,d\tau\). Then \(G\in C^{1}([0,R[)\) and \(G(0)=G^{\prime}(0)=0\). Moreover, for any \(0<r<\frac{R}{2}\), it holds that \[G(2r)\geqslant\int_{r}^{2r}\tilde{g}(\tau)\,d\tau\geqslant r\tilde{g}(r)\geqslant rg(r). \tag{4.2}\] Let us define \(\varphi(x)=u(x_{0})+\langle p,\pi(x_{0}^{-1}\cdot x)\rangle+G(2d_{\mathfrak{g}}(x,x_{0}))\). Then \(\varphi\in C^{1}_{X}(B_{\frac{R}{2}}(x_{0},d_{\mathfrak{g}}))\), \(u(x_{0})=\varphi(x_{0})\) and \(X\varphi(x_{0})=p\). Finally, notice that (4.2) and the definition of \(g\) imply that \(u(x)\leqslant\varphi(x)\) on \(B_{\frac{R}{2}}(x_{0},d_{\mathfrak{g}})\). Therefore, being \(u\) a viscosity subsolution, we conclude that \[H(x_{0},p)=H(x_{0},X\varphi(x_{0}))\leqslant 0.\] Moreover, when \(H\) enjoys some mild convexity properties in its second entry, a locally Lipschitz function is a viscosity subsolution if and only if it satisfies (3.1) pointwise almost everywhere. **Proposition 4.2**.: _Let \(\Omega\) be an open subset of \(\mathbb{G}\). Assume that \(H\) is continuous and that \(Z(x)\) is convex for any \(x\in\Omega\). Let \(u\in W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\). Then the following conditions are equivalent._ 1. 
\(u\) _is a viscosity subsolution to (_3.1_)._ 2. \(u\) _is a jet subsolution to (_3.1_)._ 3. \(H(x,Xu(x))\leqslant 0\) _for almost every_ \(x\in\Omega\)_._ Proof.: The implication \((i)\iff(ii)\) follows from Proposition 4.1. Moreover, \((iii)\implies(i)\) follows from [18, Theorem 3.7]. Finally, we prove \((ii)\implies(iii)\). Let \(x\in\Omega\) be such that \(u\) is Pansu-differentiable at \(x\). Then clearly \(Xu(x)\in\partial_{X}^{+}u(x)\), and so \(H(x,Xu(x))\leqslant 0\). To conclude this section, we point out that the sub-Riemannian Hamilton-Jacobi equation (3.1) can be viewed as an Euclidean equation in the following sense. Let \(C(x)\) denote the \(m\times n\) matrix whose rows correspond to the coefficients of the generating vector fields of \(\mathfrak{g}_{1}\) at \(x\). We define the auxiliary Hamiltonian \(\tilde{H}:\Omega\times\mathbb{R}^{n}\longrightarrow\mathbb{R}\) by \[\tilde{H}(x,v)=H(x,v\cdot C(x)^{T}) \tag{4.3}\] for any \((x,v)\in\Omega\times\mathbb{R}^{n}\). It is easy to see that \(\tilde{H}\in C(\Omega\times\mathbb{R}^{n})\) when \(H\) is continuous. With the next result, we show that sub-Riemannian viscosity solutions to (3.1) coincides with Euclidean viscosity solutions to the Hamilton-Jacobi equation associated to (4.3). **Proposition 4.3**.: _Let \(\Omega\) be an open subset of \(\mathbb{G}\). Let \(\tilde{H}\) be as in (4.3). Then \(u\in C(\Omega)\) is a viscosity solution to_ \[H(x,Xu)=0 \tag{4.4}\] _if and only if \(u\) is a viscosity solution to_ \[\tilde{H}(x,\nabla u)=0. \tag{4.5}\] Proof.: Since \(C^{1}(\Omega)\subseteq C^{1}_{X}(\Omega)\), then a viscosity solution to (4.4) is a viscosity solution to (4.5). To prove the converse implication we only show that viscosity subsolutions to (4.5) are viscosity subsolutions to (4.4), being the other part of the proof analogous. Therefore, assume that \(u\) is a viscosity subsolution to (4.5), let \(x_{0}\in\Omega\) and let \(\varphi\in C^{1}_{X}(\Omega)\) be such that \(u(x_{0})=\varphi(x_{0})\) and \(\varphi(x)>u(x)\) for any \(x\in B_{2r}(x_{0},d_{\mathfrak{g}})\), for some \(r>0\) small enough to ensure that \(B_{2r}(x_{0},d_{\mathfrak{g}})\Subset\Omega\). Thanks to [18, Proposition 2.4] (cf. also [27, Proposition 1.20]), there exists a sequence \((\varphi_{h})_{h}\subseteq C^{\infty}(\Omega)\) converging to \(\varphi\) in \(C^{1}_{X}(B_{2r}(x_{0},d_{\mathfrak{g}}))\). For any \(h\in\mathbb{N}\), let \(x_{h}\) be a maximum point for \(u-\varphi_{h}\) in \(\overline{B_{r}(x_{0},d_{\mathfrak{g}})}\). We claim that \(x_{h}\to x_{0}\) as \(h\to+\infty\). Otherwise, we can assume that, up to a subsequence, \(x_{h}\to x_{1}\) for some \(x_{1}\neq x_{0}\) such that \(x_{1}\in\overline{B_{r}(x_{0},d_{\mathfrak{g}})}\). Recalling that \(u(x_{h})-\varphi_{h}(x_{h})\geqslant u(x_{0})-\varphi_{h}(x_{0})\) for any \(h\in\mathbb{N}\), and since \(x_{h}\to x_{1}\) and \(\varphi_{h}\to\varphi\) uniformly on \(B_{2r}(x_{0},d_{\mathfrak{g}})\), we pass to the limit and we infer that \(u(x_{1})-\varphi(x_{1})\geqslant u(x_{0})-\varphi(x_{0})=0.\) Therefore \(\varphi(x_{1})\leqslant u(x_{1})\), a contradiction. 
By our choice of \(x_{h}\), and thanks to (4.5), we get that \[H(x_{h},X\varphi_{h}(x_{h}))=\tilde{H}(x_{h},\nabla\varphi_{h}(x_{h}))\leqslant 0.\] Therefore, since \(H\) is continuous, \(x_{h}\to x_{0}\) and \(X\varphi_{h}\to X\varphi\) uniformly on \(B_{2r}(x_{0},d_{\mathfrak{g}})\), passing to the limit in the previous inequality we conclude that \[H(x_{0},X\varphi(x_{0}))\leqslant 0.\] Hence \(u\) is a viscosity subsolution to (4.4). ## 5. A Sub-Riemannian Hopf-Lax Formula for the Dirichlet Problem As already mentioned, the properties of Monge subsolutions and supersolutions strictly depend on those enjoyed by the optical length function \(d_{\sigma^{*}}\). Moreover, as it happens in the viscosity setting, \(d_{\Omega}\) can be equivalently replaced by \(d_{\mathfrak{g}}\) or \(d_{\mathbb{G}}\). Now we explain how to replace \(d_{\sigma^{*}}\) with suitable extensions, as anticipated in the introduction. Set \[\mathcal{K}(H,\Omega):=\{K:\Omega_{0}\times\mathbb{R}^{m}\longrightarrow\mathbb{R}\,:\,\Omega\subseteq\Omega_{0}\text{ is a domain, $K$ satisfies (H), $K\equiv H$ on $\Omega\times\mathbb{R}^{m}$}\}.\] Notice that \(\mathcal{K}_{0}(H,\Omega)\subseteq\mathcal{K}(H,\Omega)\), where \(\mathcal{K}_{0}(H,\Omega)\) is defined by (1.11). For a fixed \(K\in\mathcal{K}\), we can consider the associated \(\sigma^{*}_{K}\) and \(d_{\sigma^{*}_{K}}\). We want to show that the notion of Monge solution is independent of the choice of \(K\in\mathcal{K}\). To this aim, we prove the following preliminary result. **Lemma 5.1**.: _For any \(K\in\mathcal{K}\), for any \(x_{0}\in\Omega\) and for any \(R>0\) such that \(B_{R}(x_{0},d_{\Omega})\subseteq\Omega\) there exist \(r>0\) and \(\bar{\varepsilon}>0\) such that, for any \(x\in B_{r}(x_{0},d_{\Omega})\) and for any \(0<\varepsilon<\bar{\varepsilon}\), any curve \(\gamma:[0,T]\longrightarrow\Omega_{0}\) such that \(\gamma\) is sub-unit, \(\gamma(0)=x_{0}\), \(\gamma(T)=x\) and_ \[d_{\sigma^{*}_{K}}(x_{0},x)\geqslant L_{\sigma^{*}_{K}}(\gamma)-\varepsilon\] _lies in \(B_{R}(x_{0},d_{\Omega})\)._ Proof.: Assume by contradiction that there exists \(K:\Omega_{0}\times\mathbb{R}^{m}\longrightarrow\mathbb{R}\) such that \(K\in\mathcal{K}\), \(x_{0}\in\Omega\), \(R>0\) with \(B_{R}(x_{0})\subseteq\Omega\) and sequences \((x_{h})_{h}\) and \((\gamma_{h})_{h}\subseteq\mathcal{H}(\Omega_{0})\), with \(\gamma_{h}:[0,T_{h}]\longrightarrow\Omega_{0}\), \(\gamma_{h}(0)=x_{0}\), \(\gamma_{h}(T_{h})=x_{h}\) and \[d_{\sigma^{*}_{K}}(x_{0},x_{h})\geqslant L_{\sigma^{*}_{K}}(\gamma_{h})-\frac{1}{h}\] such that \(x_{h}\to x_{0}\) and for any \(h\) there exists \(0<t_{h}<T_{h}\) such that \(z_{h}:=\gamma_{h}(t_{h})\in\partial B_{R}(x_{0},d_{\Omega})\). Since \(K\) satisfies (H) on \(\Omega_{0}\), it follows that \[d_{\sigma^{*}_{K}}(x_{0},x_{h})\leqslant\alpha d_{\Omega_{0}}(x_{0},x_{h})\leqslant\alpha d_{\Omega}(x_{0},x_{h})\to 0\] as \(h\to\infty\). On the other hand, in view of the choice of \(\gamma_{h}\) and Proposition 2.2, there exists \(C>0\) such that \[d_{\sigma^{*}_{K}}(x_{0},x_{h})\geqslant d_{\sigma^{*}_{K}}(x_{0},z_{h})+d_{\sigma^{*}_{K}}(z_{h},x_{h})-\frac{1}{h}\geqslant\frac{1}{\alpha}d_{\Omega_{0}}(x_{0},z_{h})-\frac{1}{h}\geqslant\frac{C}{\alpha}d_{\Omega}(x_{0},z_{h})-\frac{1}{h}\geqslant\frac{CR}{2\alpha}\] for any \(h\) big enough. A contradiction then follows. **Proposition 5.2**.: _Let \(K:\Omega_{0}\times\mathbb{R}^{m}\longrightarrow\mathbb{R}\) be such that \(K\in\mathcal{K}\). A function \(u\in C(\Omega)\) is a Monge solution (resp. 
subsolution, supersolution) to (3.1) in \(\Omega\) if and only if_ \[\liminf_{x\to x_{0}}\frac{u(x)-u(x_{0})+d_{\sigma^{*}_{K}}(x_{0},x)}{d_{\Omega_{0}}(x_{0},x)}=0\quad(\text{resp. }\geqslant,\leqslant) \tag{5.1}\] _for any \(x_{0}\in\Omega\)._ Proof.: It suffices to observe that, thanks to Lemma 5.1 and the definition of \(\mathcal{K}(\Omega)\), for any \(x_{0}\in\Omega\) there exists \(r>0\) such that \[d_{\sigma^{*}}(x_{0},x)=d_{\sigma^{*}_{K}}(x_{0},x)\] for any \(x\in B_{r}(x_{0})\) and for any \(K\in\mathcal{K}(\Omega)\). Thanks to the results of Section 3, we are in a position to prove Theorem 1.3. The proof of this result is inspired by [14]. Proof of Theorem 1.3.: Let \(K:\Omega_{0}\times\mathbb{R}^{m}\longrightarrow\mathbb{R}\) be as in the statement. Fix \(x,z\in\Omega\) and, for any \(h\in\mathbb{N}_{+}\), let \(y_{h}\in\partial\Omega\) be such that \(w(z)\geqslant d_{\sigma^{*}_{K}}(z,y_{h})+g(y_{h})-\frac{1}{h}\). Then \[w(x)-w(z)\leqslant d_{\sigma^{*}_{K}}(x,y_{h})-d_{\sigma^{*}_{K}}(z,y_{h})+\frac{1}{h}\leqslant d_{\sigma^{*}_{K}}(x,z)+\frac{1}{h}\leqslant d_{\sigma^{*}}(x,z)+\frac{1}{h}\leqslant\alpha d_{\Omega}(x,z)+\frac{1}{h}.\] Letting \(h\to\infty\), and since \(w(z)-w(x)\) can be estimated similarly, we conclude that \(w\in\operatorname{Lip}(\Omega,d_{\Omega})\). Fix \(x\in\partial\Omega\). Then, by definition of \(w\), it follows that \(w(x)\leqslant g(x)\). On the other hand, if \(y\in\partial\Omega\), (1.12) implies that \[d_{\sigma^{*}_{K}}(x,y)+g(y)\geqslant g(x),\] and so, taking the infimum over \(\partial\Omega\), we conclude that \(w(x)\geqslant g(x)\). Therefore \(w=g\) on \(\partial\Omega\). Let now \(x\in\partial\Omega\) and let \((x_{h})_{h}\subseteq\Omega\) be such that \(x_{h}\to x\) as \(h\to\infty\). Then \[w(x_{h})-w(x)\leqslant d_{\sigma^{*}_{K}}(x_{h},x)+g(x)-g(x)\leqslant\alpha d_{\Omega_{0}}(x_{h},x)\] and there exists \((y_{h})_{h}\subseteq\partial\Omega\) such that \[w(x)-w(x_{h})\leqslant g(x)-g(y_{h})-d_{\sigma^{*}_{K}}(x_{h},y_{h})+\frac{1}{h}\leqslant d_{\sigma^{*}_{K}}(x,y_{h})-d_{\sigma^{*}_{K}}(x_{h},y_{h})+\frac{1}{h}\leqslant d_{\sigma^{*}_{K}}(x,x_{h})+\frac{1}{h}.\] Hence we conclude that \(w\in C(\overline{\Omega})\). Let us show that \(w\) is a Monge subsolution. To this aim, let \(x_{0}\in\Omega\) and let \((x_{h})_{h}\subseteq\Omega\) be such that \(x_{h}\to x_{0}\) as \(h\to\infty\). For any \(h\in\mathbb{N}\), by definition of \(w\), there exists \(y_{h}\in\partial\Omega\) such that \[w(x_{h})\geqslant d_{\sigma^{*}_{K}}(x_{h},y_{h})+g(y_{h})-\frac{\|x_{0}^{-1}\cdot x_{h}\|}{h}.\] Therefore we infer that \[\frac{w(x_{h})-w(x_{0})+d_{\sigma^{*}_{K}}(x_{0},x_{h})}{\|x_{0}^{-1}\cdot x_{h}\|}\geqslant\frac{d_{\sigma^{*}_{K}}(x_{0},y_{h})+g(y_{h})-w(x_{0})}{\|x_{0}^{-1}\cdot x_{h}\|}-\frac{1}{h}\geqslant-\frac{1}{h}.\] Letting \(h\to\infty\), since the sequence \((x_{h})_{h}\) is arbitrary, and recalling Proposition 5.2, we infer that \(w\) is a Monge subsolution. Conversely, let \(x_{0}\in\Omega\) and assume without loss of generality that \(B_{\frac{1}{h}}(x_{0},d_{\mathfrak{g}})\subseteq\Omega\) for any \(h\in\mathbb{N}_{+}\). 
Fix such an \(h\) and choose \(y_{h}\in\partial\Omega\) such that \[w(x_{0})\geqslant d_{\sigma^{*}_{K}}(x_{0},y_{h})+g(y_{h})-\frac{1}{h^{2}}.\] Moreover, for any \(h\), let \(\gamma_{h}:[0,T_{h}]\longrightarrow\Omega_{0}\) be a sub-unit curve with the property that \(\gamma_{h}(0)=x_{0}\), \(\gamma_{h}(T_{h})=y_{h}\) and \[d_{\sigma^{*}_{K}}(x_{0},y_{h})\geqslant\int_{0}^{T_{h}}\sigma^{*}_{K}(\gamma_{h}(t),\dot{\gamma}_{h}(t))\,dt-\frac{1}{h^{2}}.\] Pick \(t_{h}\in(0,T_{h})\) such that \(\gamma_{h}(t_{h})\in\partial B_{\frac{1}{h}}(x_{0},d_{\mathfrak{g}})\) and set \(x_{h}:=\gamma_{h}(t_{h})\). Then clearly \(x_{h}\to x_{0}\) as \(h\to\infty\) and therefore, by definition of \(w\) and the choice of \((\gamma_{h})_{h}\), we infer that \[w(x_{h})-w(x_{0})+d_{\sigma^{*}_{K}}(x_{0},x_{h})\leqslant d_{\sigma^{*}_{K}}(x_{h},y_{h})-d_{\sigma^{*}_{K}}(x_{0},y_{h})+d_{\sigma^{*}_{K}}(x_{0},x_{h})+\frac{1}{h^{2}}\leqslant\frac{2}{h^{2}}.\] Noticing that \(\|x_{0}^{-1}\cdot x_{h}\|=\frac{1}{h}\), we conclude that \[\liminf_{h\to\infty}\frac{w(x_{h})-w(x_{0})+d_{\sigma^{*}_{K}}(x_{0},x_{h})}{\|x_{0}^{-1}\cdot x_{h}\|}\leqslant\liminf_{h\to\infty}\frac{2}{h}=0,\] and so \(w\) is a Monge supersolution. ## 6. Monge and Viscosity Solutions In this section we show that, as in the Euclidean setting (cf. [38, 14]), when \(H\) is continuous the notions of Monge and viscosity solution coincide. We begin by proving that Monge solutions are viscosity solutions. **Proposition 6.1**.: _Let \(H\) be continuous. If \(u\in C(\Omega)\) is a Monge subsolution (resp. supersolution) to (3.1), then \(u\) is a viscosity subsolution (resp. supersolution) to (3.1)._ Proof.: Let \(u\) be a Monge supersolution to (3.1), fix \(x_{0}\in\Omega\) and \(p\in\partial_{X}^{-}u(x_{0})\). Then it follows that \[0\geqslant\liminf_{x\to x_{0}}\frac{u(x)-u(x_{0})+d_{\sigma^{*}}(x_{0},x)}{\|x_{0}^{-1}\cdot x\|}\geqslant\liminf_{x\to x_{0}}\frac{\langle p,\pi(x_{0}^{-1}\cdot x)\rangle+d_{\sigma^{*}}(x_{0},x)}{\|x_{0}^{-1}\cdot x\|}.\] Let \((x_{h})_{h}\) be a minimizing sequence for the right hand side. Let us set \(t_{h}:=\|x_{0}^{-1}\cdot x_{h}\|\) and \(\xi_{h}:=\frac{1}{t_{h}}\pi(x_{0}^{-1}\cdot x_{h})\). In this way, \(t_{h}\to 0^{+}\) when \(h\to\infty\). For any \(h\in\mathbb{N}\), let \(\eta_{h}\in\mathbb{R}^{n-m}\) be such that \[\delta_{\frac{1}{t_{h}}}(x_{0}^{-1}\cdot x_{h})=(\xi_{h},\eta_{h}).\] By construction, \((\delta_{\frac{1}{t_{h}}}(x_{0}^{-1}\cdot x_{h}))_{h}\) is bounded. Then there exist \(\xi\in\mathbb{R}^{m}\) and \(\eta\in\mathbb{R}^{n-m}\) such that, up to a subsequence, \((\xi_{h},\eta_{h})\to(\xi,\eta)\) as \(h\to\infty\). Then, by Proposition 3.5 and the choice of \((x_{h})_{h}\), we infer that \[\liminf_{x\to x_{0}}\frac{\langle p,\pi(x_{0}^{-1}\cdot x)\rangle+d_{\sigma^{*}}(x_{0},x)}{\|x_{0}^{-1}\cdot x\|}=\liminf_{h\to\infty}\left(\langle p,\xi_{h}\rangle+\frac{d_{\sigma^{*}}(x_{0},x_{h})}{t_{h}}\right)=\langle p,\xi\rangle+\liminf_{h\to\infty}\frac{d_{\sigma^{*}}(x_{0},x_{0}\cdot\delta_{t_{h}}(\xi_{h},\eta_{h}))}{t_{h}}=\langle p,\xi\rangle+\liminf_{h\to\infty}\frac{d_{\sigma^{*}}(x_{0},x_{0}\cdot\delta_{t_{h}}(\xi,\eta))}{t_{h}}\geqslant\langle p,\xi\rangle+\sigma^{*}(x_{0},\xi).\] Therefore we conclude that \(\langle-\xi,p\rangle\geqslant\sigma^{*}(x_{0},\xi).\) If it were the case that \(H(x_{0},p)<0\), then \(p\) is an interior point of \(Z(x_{0})\). 
But then \(\langle-\xi,p\rangle<\sigma^{*}(x_{0},\xi)\), since \(q\mapsto\langle-\xi,q\rangle\) is a linear and non-constant, and so it achieves its maximum on \(\partial Z(x_{0})\). A contradiction then follows. Assume now that \(u\) is a Monge subsolution to (3.1), let \(x_{0}\in\Omega\) and \(p\in\partial_{X}^{+}u(x_{0})\). Assume by contradiction that \(H(x_{0},p)>0\). Hence, by Hahn-Banach Theorem, there exists \(\xi\in S^{m-1}\) such that \(\langle-\xi,p\rangle>\sigma^{*}(x_{0},\xi).\) For any \(h\in\mathbb{N}\setminus\{0\}\), let \(x_{h}:=x_{0}\cdot\delta_{t_{h}}(\xi,0)\), where \((t_{h})_{h}\subseteq(0,1)\) goes to \(0\) as \(h\to\infty\). Then \(x_{h}\to x_{0}\) as \(h\to\infty\), and moreover \(x_{0}^{-1}\cdot x_{h}=(t_{h}\xi,0)\). Therefore, being \(u\) a Monge subsolution, it follows that \[0 \leqslant\liminf_{x\to x_{0}}\frac{u(x)-u(x_{0})+d_{\sigma^{*}}(x_{0},x)}{ \|x_{0}^{-1}\cdot x\|}\] \[\leqslant\liminf_{x\to x_{0}}\frac{\langle p,\pi(x_{0}^{-1}\cdot x )\rangle+d_{\sigma^{*}}(x_{0},x)}{\|x_{0}^{-1}\cdot x\|}\] \[\leqslant\liminf_{h\to\infty}\frac{\langle p,\pi(x_{0}^{-1}\cdot x _{h})\rangle+d_{\sigma^{*}}(x_{0},x_{h})}{\|x_{0}^{-1}\cdot x_{h}\|}\] \[=\langle p,\xi\rangle+\liminf_{h\to\infty}\frac{d_{\sigma^{*}}(x _{0},x_{0}\cdot\delta_{t_{h}}(\xi,0))}{t_{h}}.\] Let us set \(\gamma:[0,1]\longrightarrow\Omega\) by \(\gamma(t):=x_{0}\cdot\delta_{t}(\xi,0)\). Notice that \(\dot{\gamma}(t)\equiv\xi\), and so \(\gamma\) is sub-unit. Moreover \(\gamma(0)=x_{0}\) and \(\gamma(t_{h})=x_{h}\). Hence, since the continuity of \(H\) implies the continuity of \(\sigma^{\star}(\cdot,\xi)\), we infer that \[\liminf_{h\to\infty}\frac{d_{\sigma^{\star}}(x_{0},x_{0}\cdot\delta_{t_{h}}( \xi,0))}{t_{h}}\leqslant\liminf_{h\to\infty}\,\fint_{0}^{t_{h}}\sigma^{\star}( \gamma(t),\xi)\,dt=\sigma^{\star}(x_{0},\xi).\] Therefore we conclude that \(\langle-\xi,p\rangle\leqslant\sigma^{\star}(x_{0},\xi)\), a contradiction. In order to prove the converse implication, we need some preliminary results. **Proposition 6.2**.: _Let \(H\) be continuous. Let \(u\in C(\Omega)\) and assume that \(u\) is a viscosity subsolution to (3.1). Then \(u\in W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\)._ Proof.: Let \(x_{0}\in\Omega\) and \(p\in\partial_{X}^{+}u(x_{0})\) with \(p\neq 0\). Then \(H(x_{0},p)\leqslant 0\), which implies that \(p\in Z(x_{0})\). Therefore it holds that \(|p|\leqslant\alpha\) by \((H_{3})\). Hence \(u\) is a viscosity subsolution to \[|Xu|\leqslant\alpha \tag{6.1}\] on \(\Omega\). Thanks to Proposition 4.3 and [42, Proposition 2.1], we conclude that \(u\in W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\). **Proposition 6.3**.: _Assume that \(H\) is continuous. If \(u\) is a viscosity subsolution to (3.1) in \(\Omega\), then_ \[u(x)-u(y)\leqslant d_{\sigma^{\star}}(x,y) \tag{6.2}\] _for any \(x,y\in\Omega\)._ Proof.: Let \(x,y\in\Omega\). If \(x=y\) the thesis is trivial. If instead \(x\neq y\), let \(\gamma:[0,T]\longrightarrow\Omega\) be a sub-unit curve such that \(\gamma(0)=x\) and \(\gamma(T)=y\) for some \(T>0\). Thanks to Proposition 6.2 and Proposition 4.2 we know that \(u\in W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\) and that \[H(z,Xu(z))\leqslant 0 \tag{6.3}\] for almost every \(z\in\Omega\). Let \(N\) be a Lebesgue negligible subset of \(\Omega\) containing all the non-Lebesgue points of \(Xu\) and all the points where (6.3) does not hold. Then, in view of [40, Lemma 2.7], we infer that \(H(z,p)\leqslant 0\) for any \(z\in\Omega\) and for any \(p\in\partial_{X,N}u(z)\). 
Therefore, in particular, \[p\in Z(z) \tag{6.4}\] for any \(z\in\Omega\) and for any \(p\in\partial_{X,N}u(z)\). Hence, thanks to (6.4) and Proposition 2.5, we conclude that \[u(x)-u(y)=\int_{0}^{T}\langle\dot{\gamma}(t),-g(t)\rangle\,dt\leqslant\int_{0}^{T}\sigma^{\star}(\gamma(t),\dot{\gamma}(t))\,dt.\] Since \(\gamma\) is arbitrary, the thesis follows. We also need the following sub-Riemannian comparison principle, whose proof is inspired by [8]. **Proposition 6.4**.: _Let \(\Omega\) be a bounded domain. Assume that \(H\) is continuous and satisfies_ (H)_. Assume that \(u\in C(\overline{\Omega})\cap W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\) is a viscosity subsolution to (3.1) on \(\Omega\) and that \(v\in C(\overline{\Omega})\) is a viscosity supersolution to (3.1) on \(\Omega\). If \(u\leqslant v\) on \(\partial\Omega\), then \(u\leqslant v\) on \(\overline{\Omega}\)._ Proof.: We can assume without loss of generality that \(u,v>0\). Let us fix \(\delta\in(0,1)\) and set \(w:=\delta u\). Clearly \(w\in C(\overline{\Omega})\cap W^{1,\infty}_{X,\mathrm{loc}}(\Omega)\) and \(w\leqslant v\) on \(\partial\Omega\). If we prove that \(w\leqslant v\) on \(\Omega\), then the thesis follows letting \(\delta\to 1\). **Step 1.** We first claim that for any \(\tilde{\Omega}\Subset\Omega\) there exists \(\eta>0\) such that \(w\) is a viscosity subsolution to \[H(x,Xw)+\eta=0\quad\text{on}\quad\tilde{\Omega}.\] If this were not the case, then there exist \(\tilde{\Omega}\Subset\Omega\) and sequences \((x_{h})_{h}\subseteq\tilde{\Omega}\), \((p_{h})_{h}\subseteq\mathbb{R}^{m}\) such that \(p_{h}\in\partial_{X}^{+}w(x_{h})\) and \[H(x_{h},p_{h})+\frac{1}{h}>0\] for any \(h\in\mathbb{N}_{+}\). Since by assumption \(Z(x_{h})\subseteq\hat{B}_{\alpha}(0)\) for any \(h\in\mathbb{N}_{+}\), we can assume up to a subsequence that \(x_{h}\to\tilde{x}\in\Omega\) and \(p_{h}\to\tilde{p}\in\mathbb{R}^{m}\). Being \(H\) continuous, we infer that \(H(\tilde{x},\tilde{p})\geqslant 0.\) On the other hand, notice that \(\frac{p_{h}}{\delta}\in\partial_{X}^{+}u(x_{h})\) for any \(h\in\mathbb{N}_{+}\), and so, being \(u\) a subsolution, we infer that \(H\left(x_{h},\frac{p_{h}}{\delta}\right)\leqslant 0.\) Since \(H\) is continuous, we conclude that \[H\left(\tilde{x},\frac{\tilde{p}}{\delta}\right)\leqslant 0.\] The last equation implies that \(\frac{\tilde{p}}{\delta}\in Z(\tilde{x})\). But then, being \(Z(\tilde{x})\) convex and since \(|\tilde{p}|<\frac{|\tilde{p}|}{\delta}\), we conclude that \(\tilde{p}\) is an interior point of \(Z(\tilde{x})\), and so \(H(\tilde{x},\tilde{p})<0\), a contradiction. **Step 2.** Let us define \(M:=\max_{\overline{\Omega}}(w-v)\), and assume by contradiction that \(M>0.\) Let us define, for any \(\varepsilon\in(0,1)\), \[\varphi_{\varepsilon}(x,y):=w(x)-v(y)-\frac{d_{\mathfrak{g}}(x,y)^{2r!}}{\varepsilon^{2}}.\] Being \(\varphi_{\varepsilon}\) continuous on \(\overline{\Omega}\times\overline{\Omega}\), there exists \((x_{\varepsilon},y_{\varepsilon})\in\overline{\Omega}\times\overline{\Omega}\) such that \[M_{\varepsilon}:=\max_{\overline{\Omega}\times\overline{\Omega}}\varphi_{\varepsilon}=\varphi_{\varepsilon}(x_{\varepsilon},y_{\varepsilon}).\] **Step 3**. We claim the following facts. 1. \(M_{\varepsilon}\to M\) as \(\varepsilon\to 0\). 2. \(w(x_{\varepsilon})-v(y_{\varepsilon})\to M\) as \(\varepsilon\to 0\). 3. \(\frac{d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!}}{\varepsilon^{2}}\to 0\) as \(\varepsilon\to 0\). 4. 
Let us set \[p_{\varepsilon}:=\frac{(2r!)d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!-1}Xd_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})}{\varepsilon^{2}}.\] Then \((p_{\varepsilon})_{\varepsilon}\) is bounded. 5. There exists \(\tilde{\Omega}\Subset\Omega\) such that \(x_{\varepsilon},y_{\varepsilon}\in\tilde{\Omega}\) for any \(\varepsilon\) small enough. Indeed, from the choice of \((x_{\varepsilon},y_{\varepsilon})\) it is easy to see that \(M\leqslant M_{\varepsilon}\) for any \(\varepsilon\in(0,1)\). Let us set \(R:=\max\{\|w\|_{\infty},\|v\|_{\infty}\}\). Then we have that \[M\leqslant 2R-\frac{d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!}}{\varepsilon^{2}}.\] Since we assumed that \(M>0\), we infer that \[\frac{d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!}}{\varepsilon^{2}}\leqslant 2R.\] This implies in particular that \(d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})\to 0\) as \(\varepsilon\to 0\). This fact, together with the compactness of \(\overline{\Omega}\), allows us to assume up to a subsequence that there exists \(\bar{x}\in\overline{\Omega}\) such that \[\lim_{\varepsilon\to 0}d_{\mathfrak{g}}(x_{\varepsilon},\bar{x})=\lim_{\varepsilon\to 0}d_{\mathfrak{g}}(y_{\varepsilon},\bar{x})=0. \tag{6.5}\] Moreover, notice that \(M\leqslant M_{\varepsilon}\) implies that \(M\leqslant w(x_{\varepsilon})-v(y_{\varepsilon})\) for any \(\varepsilon>0\). This last inequality, together with (6.5), implies that \[M\leqslant\liminf_{\varepsilon\to 0}w(x_{\varepsilon})-v(y_{\varepsilon})\leqslant\limsup_{\varepsilon\to 0}w(x_{\varepsilon})-v(y_{\varepsilon})\leqslant M. \tag{6.6}\] This proves \((ii)\). The fact that \(M\leqslant M_{\varepsilon}\), combined with (6.6), allows us to conclude that \[M\leqslant\liminf_{\varepsilon\to 0}M_{\varepsilon}\leqslant\lim_{\varepsilon\to 0}w(x_{\varepsilon})-v(y_{\varepsilon})=M.\] This proves \((i)\) and \((iii)\). To prove \((v)\), it suffices to observe that \[M=\lim_{\varepsilon\to 0}w(x_{\varepsilon})-v(y_{\varepsilon})=w(\bar{x})-v(\bar{x}),\] and thus, recalling that \(M>0\) and that \(w\leqslant v\) on \(\partial\Omega\), \((v)\) follows. Finally, we prove \((iv)\). Indeed, notice that, in view of the choice of \(x_{\varepsilon},y_{\varepsilon}\), \[w(y_{\varepsilon})-v(y_{\varepsilon})=\varphi_{\varepsilon}(y_{\varepsilon},y_{\varepsilon})\leqslant\varphi_{\varepsilon}(x_{\varepsilon},y_{\varepsilon})=w(x_{\varepsilon})-v(y_{\varepsilon})-\frac{d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!}}{\varepsilon^{2}},\] which implies that \[\frac{d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!}}{\varepsilon^{2}}\leqslant w(x_{\varepsilon})-w(y_{\varepsilon})\leqslant Cd_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon}),\] where \(C>0\) is the \(d_{\mathfrak{g}}\)-Lipschitz constant of \(w\) on \(\tilde{\Omega}\). Therefore \[\frac{d_{\mathfrak{g}}(x_{\varepsilon},y_{\varepsilon})^{2r!-1}}{\varepsilon^{2}}\leqslant C.\] The proof is concluded noticing that \(z\mapsto Xd_{\mathfrak{g}}(z_{0},z)\) is bounded on \(\Omega\setminus\{z_{0}\}\) uniformly with respect to \(z_{0}\in\Omega\). **Step 4.** Let us define \[\varphi_{\varepsilon}^{1}(y):=w(x_{\varepsilon})-\frac{d_{\mathfrak{g}}(x_{\varepsilon},y)^{2r!}}{\varepsilon^{2}}\qquad\text{and}\qquad\varphi_{\varepsilon}^{2}(x):=v(y_{\varepsilon})+\frac{d_{\mathfrak{g}}(x,y_{\varepsilon})^{2r!}}{\varepsilon^{2}}\] for any \(x,y\in\Omega\). These are smooth functions on \(\Omega\). 
Moreover, \(x_{\varepsilon}\) is a maximum point for \(x\mapsto w(x)-\varphi_{\varepsilon}^{2}(x)\) and \(y_{\varepsilon}\) is a maximum point for \(y\mapsto-v(y)+\varphi_{\varepsilon}^{1}(y)\). Therefore, if \(\eta>0\) is the constant coming from Step 1 and relative to \(\tilde{\Omega}\) as in \((v)\), then \[H(x_{\varepsilon},p_{\varepsilon})+\eta\leqslant H(y_{\varepsilon},p_{\varepsilon}).\] Being \((p_{\varepsilon})_{\varepsilon}\) bounded, we can assume that \(p_{\varepsilon}\to\bar{p}\) as \(\varepsilon\to 0\). Therefore we conclude from the previous inequality that \[H(\bar{x},\bar{p})+\eta\leqslant H(\bar{x},\bar{p}),\] a contradiction. **Proposition 6.5**.: _Let \(H\) be continuous. Let \(u\in C(\Omega)\) be a viscosity supersolution to (3.1). Then \(u\) is a Monge supersolution to (3.1)._ Proof.: Let \(u\) be as in the statement. If by contradiction \(u\) is not a Monge supersolution to (3.1), there exist \(x_{0}\in\Omega\), \(r>0\) and \(\delta>0\) such that \[u(x)-u(x_{0})+d_{\sigma^{*}}(x_{0},x)\geqslant\delta\|x_{0}^{-1}\cdot x\| \tag{6.7}\] for any \(x\in B_{r}(x_{0},d_{\mathfrak{g}})\). Notice that, without loss of generality, we can assume that \(u(x_{0})=0\). Set \(\psi(x)=-d_{\sigma^{*}}(x_{0},x)+\delta r\). Notice that, as \(B_{r}(x_{0},d_{\mathfrak{g}})\Subset\Omega\), then \(H\in\mathcal{K}_{0}(B_{r}(x_{0},d_{\mathfrak{g}}))\). Moreover, notice that \[\psi(x)-\psi(y)=d_{\sigma^{*}}(x_{0},y)-d_{\sigma^{*}}(x_{0},x)\leqslant d_{\sigma^{*}}(x,y)\] for any \(x,y\in\partial B_{r}(x_{0},d_{\mathfrak{g}})\), and so (1.12) is satisfied by \(\psi\). Therefore we know from Theorem 2.2 that, if we define \(w:\overline{B_{r}(x_{0},d_{\mathfrak{g}})}\longrightarrow\mathbb{R}\) as in (1.13) with \(\Omega=B_{r}(x_{0},d_{\mathfrak{g}})\) and \(g=\psi\), then \(w\in C(\overline{B_{r}(x_{0},d_{\mathfrak{g}})})\) and \(w\) solves in the viscosity sense the Dirichlet problem \[H(x,Xw)=0\quad\text{in }B_{r}(x_{0},d_{\mathfrak{g}}),\qquad w=\psi\quad\text{on }\partial B_{r}(x_{0},d_{\mathfrak{g}}).\] Moreover, in view of (6.7), \(u\geqslant\psi\) on \(\partial B_{r}(x_{0},d_{\mathfrak{g}})\). Therefore, recalling that \(w\in C(\overline{B_{r}(x_{0},d_{\mathfrak{g}})})\cap W^{1,\infty}_{X,\text{loc}}(B_{r}(x_{0},d_{\mathfrak{g}}))\), we conclude from Proposition 6.4 that \(w(x_{0})\leqslant u(x_{0})=0\), but this is impossible, since \(w(x_{0})=\delta r>0\). Proof of Theorem 1.2.: It follows from Proposition 6.1, Proposition 2.2 and Proposition 6.5. ## 7. Comparison Principle and Stability ### Comparison Principle In this section we prove Theorem 1.4. This result, as customary, yields uniqueness for the Dirichlet problem associated to (3.1). The proof of Theorem 1.4, strongly inspired by [23], is based on the validity of the following two properties of Monge subsolutions. **Proposition 7.1**.: _Let \(u\in C(\Omega)\). Assume that \(u\) is a Monge subsolution to (3.1). Then \(u\in W^{1,\infty}_{X,\text{loc}}(\Omega)\)._ Proof.: Assume that \(u\in C(\Omega)\) is a Monge subsolution to (3.1). Then \[\liminf_{x\to x_{0}}\frac{u(x)-u(x_{0})+\alpha d_{\Omega}(x_{0},x)}{\|x_{0}^{-1}\cdot x\|}\geqslant\liminf_{x\to x_{0}}\frac{u(x)-u(x_{0})+d_{\sigma^{*}}(x_{0},x)}{\|x_{0}^{-1}\cdot x\|}\geqslant 0\] for any \(x_{0}\in\Omega\). Let \(K(x,\xi):=|\xi|-\alpha\). Then \(\sigma^{*}_{K}(x,\xi)=\alpha|\xi|\) and \(d_{\sigma^{*}_{K}}(x,y)=\alpha d_{\Omega}(x,y)\). This implies that \(u\) is a Monge subsolution to (6.1) on \(\Omega\). 
Since \(K\) is continuous, then \(u\) is also a viscosity subsolution to (6.1), in view of Proposition 6.1. The conclusion follows as in the proof of Proposition 6.2. **Proposition 7.2**.: _If \(u\) is a Monge subsolution to (3.1) in \(\Omega\), then for any \(x_{0}\in\Omega\) there exists \(r>0\) such that_ \[u(x)-u(y)\leqslant d_{\sigma^{*}}(x,y)\] _for any \(x,y\in\overline{B_{r}(x_{0},d_{\Omega})}\)._ Proof.: Let \(r>0\) be as in Proposition 3.3. Then in particular \(B_{r}(x_{0},d_{\Omega})\Subset\Omega\). Moreover, since \(u\) and \(d_{\sigma^{*}}\) are continuous on \(\overline{B_{r}(x_{0},d_{\Omega})}\), it suffices to consider points in \(B_{r}(x_{0},d_{\Omega})\). Let \(x,y\in B_{r}(x_{0},d_{\Omega})\). If \(x=y\) the thesis is trivial. If instead \(x\neq y\), in view of Proposition 3.3 there exists a sub-unit curve \(\gamma:[0,T]\longrightarrow\Omega\) such that \(\gamma(0)=x\), \(\gamma(T)=y\) for some \(T>0\) and \(\gamma\) is optimal for \(d_{\sigma^{*}}(x,y)\). Set \(f(t)\coloneqq d_{\sigma^{*}}(x_{0},\gamma(t))\) and \(g(t):=u(\gamma(t))\). Therefore Proposition 7.1 implies that both \(f,g\in W^{1,\infty}_{\text{loc}}(0,T)\). We infer that the derivative of \(f+g\) exists almost everywhere on \((0,T)\). To conclude, it suffices to show that it is non-negative. To this aim, recalling that \(u\) is a Monge subsolution to (3.1) and by the choice of \(\gamma\), we observe that \[\frac{d}{dt}(f+g)\Big{|}_{t=t_{0}} =\lim_{h\to 0^{+}}\frac{g(t_{0}+h)-g(t_{0})+f(t_{0}+h)-f(t_{0})}{h}\] \[=\lim_{h\to 0^{+}}\frac{u(\gamma(t_{0}+h))-u(\gamma(t_{0}))+d_{ \sigma^{*}}(x_{0},\gamma(t_{0}+h))-d_{\sigma^{*}}(x_{0},\gamma(t_{0}))}{\| \gamma(t_{0})^{-1}\cdot\gamma(t_{0}+h)\|}\] \[\geqslant\liminf_{h\to 0^{+}}\frac{u(\gamma(t_{0}+h))-u(\gamma(t_{0} ))+d_{\sigma^{*}}(\gamma(t_{0}),\gamma(t_{0}+h))}{\|\gamma(t_{0})^{-1}\cdot \gamma(t_{0}+h)\|}\cdot\frac{\|\gamma(t_{0})^{-1}\cdot\gamma(t_{0}+h)\|}{h}\] \[\geqslant 0\] for almost every \(t_{0}\in(0,T)\). Finally, integrating \(\frac{d}{dt}(f+g)\) in \([0,T]\) we get the result. **Lemma 7.3**.: _Let \(H,K:\Omega\times\mathbb{R}^{m}\longrightarrow\mathbb{R}\) satisfy (H), and assume that there exists \(\delta\in(0,1)\) such_ \[Z_{K}(x)\subseteq\delta Z_{H}(x) \tag{7.1}\] _for any \(x\in\Omega\). Assume that \(u\in C(\overline{\Omega})\) is a Monge subsolution to \(K(x,Xu)=0\) and that \(v\in C(\overline{\Omega})\) is a Monge supersolution to \(H(x,Xv)=0\). If \(u\leqslant v\) on \(\partial\Omega\), then \(u\leqslant v\) on \(\overline{\Omega}\)._ Proof.: Assume by contradiction that there exists \(x_{0}\in\Omega\) such that \(u(x_{0})>v(x_{0})\). Let us define \(\tilde{H},\tilde{K}\) by \[\tilde{H}(x,\xi):=\begin{cases}H(x,\xi)&\text{ if }(x,\xi)\in\Omega\times \mathbb{R}^{m}\\ |\xi|-\alpha&\text{ otherwise}\end{cases}\] and \[\tilde{K}(x,\xi):=\begin{cases}K(x,\xi)&\text{ if }(x,\xi)\in\Omega\times \mathbb{R}^{m}\\ |\xi|-\frac{1}{\alpha}&\text{ otherwise}.\end{cases}\] Then \(\tilde{H}\in\mathcal{K}_{0}(H,\Omega)\) and \(\tilde{K}\in\mathcal{K}_{0}(K,\Omega)\). Notice that, since \(\tilde{H},\tilde{K}\) are defined on the whole \(\mathbb{G}\times\mathbb{R}^{m}\), then \(d_{\sigma^{*}_{\tilde{H}}},d_{\sigma^{*}_{\tilde{K}}}\) are geodesic distances (cf. [26]). Moreover, (7.1) and the definition of \(\tilde{H},\tilde{K}\) imply that \[Z_{\tilde{K}}(x)\subseteq\delta Z_{\tilde{H}}(x) \tag{7.2}\] holds for any \(x\in\Omega\). 
We claim that there exists \(\varepsilon>0\) such that \[f_{\varepsilon}(x,y):=u(x)-v(y)-\frac{d_{\sigma^{*}_{\tilde{H}}}(x,y)^{2}}{\varepsilon}\] achieves its maximum over \(\overline{\Omega}\times\overline{\Omega}\) on \(\Omega\times\Omega\). If not, then for any \(h\in\mathbb{N}_{+}\) there exists \((x_{h},y_{h})\in(\overline{\Omega}\times\overline{\Omega})\backslash\Omega\times\Omega\) which realizes the maximum for \(f_{\frac{1}{h}}\). Up to a subsequence, we can assume that \(x_{h}\to\bar{x}\) and that \(y_{h}\to\bar{y}\). Moreover, we can assume without loss of generality that \(\bar{x}\in\partial\Omega\). Notice that \[0<f_{\frac{1}{h}}(x_{0},x_{0})\leqslant u(x_{h})-v(y_{h})-hd_{\sigma^{*}_{ \tilde{H}}}(x_{h},y_{h})^{2}. \tag{7.3}\] Therefore \(hd_{\sigma^{*}_{\tilde{H}}}(x_{h},y_{h})\) is bounded, an hence \(d_{\mathbb{G}}(x_{h},y_{h})\to 0\). This implies that \(\bar{x}=\bar{y}\). Hence, noticing that \(f_{\frac{1}{h}}(x_{0},x_{0})\) does not depend on \(h\), (7.3) implies that \(u(\bar{x})>v(\bar{x})\), which is impossible since \(\bar{x}\in\partial\Omega\). Let then \((\tilde{x},\tilde{y})\in\Omega\times\Omega\) be a maximum point for \(f_{\varepsilon}\), and let \(\gamma:[0,T]\longrightarrow\mathbb{G}\) be a sub-unit curve such that \(\gamma(0)=\tilde{x}\), \(\gamma(T)=\tilde{y}\) and \[d_{\sigma^{*}_{\tilde{H}}}(\tilde{x},\tilde{y})=L_{\sigma^{*}_{\tilde{H}}}( \gamma),\] and set \[h(t):=\frac{1}{\varepsilon}(d_{\sigma^{*}_{\tilde{H}}}(\tilde{x},\tilde{y})+d_{ \sigma^{*}_{\tilde{H}}}(\gamma(t),\tilde{y})).\] We claim that \(h(0)\leqslant\delta\). If \(\tilde{x}=\tilde{y}\), the thesis is trivial. So assume \(\tilde{x}\neq\tilde{y}\). Notice that \(f_{\varepsilon}(\tilde{x},\tilde{y})\geqslant f_{\varepsilon}(\gamma(t), \tilde{y})\) for any \(t\) small enough, and so \[u(\tilde{x})-u(\gamma(t))\geqslant h(t)(d_{\sigma^{*}_{\tilde{H}}}(\tilde{x}, \tilde{y})-d_{\sigma^{*}_{\tilde{H}}}(\gamma(t),\tilde{y}))\geqslant h(t)d_{ \sigma^{*}_{\tilde{H}}}(\tilde{x},\gamma(t))\] for any \(t\) small enough. Since \(u\) is a subsolution to \(K(x,Xu)=0\), we can apply Proposition 7.2 to infer that \[d_{\sigma^{*}_{\tilde{K}}}(\tilde{x},\gamma(t))\geqslant h(t)d_{\sigma^{*}_{ \tilde{H}}}(\tilde{x},\gamma(t))\] for any \(t>0\) small enough. Moreover (7.2) implies that \(d_{\sigma^{*}_{\tilde{K}}}(\tilde{x},\gamma(t))\leqslant\delta d_{\sigma^{*}_{ \tilde{H}}}(\tilde{x},\gamma(t))\) for any \(t\in[0,T]\). We conclude that \[\delta d_{\sigma^{*}_{\tilde{H}}}(\tilde{x},\gamma(t))\geqslant h(t)d_{\sigma^{*}_ {\tilde{H}}}(\tilde{x},\gamma(t))\] for any \(t>0\) small enough, which yields the claim. Noticing that \(f_{\varepsilon}(\tilde{x},\tilde{y})\geqslant f_{\varepsilon}(\tilde{x},y)\) for any \(y\) close enough to \(\tilde{y}\), we see that \[v(\tilde{y})-v(y) =f_{\varepsilon}(\tilde{x},y)-f_{\varepsilon}(\tilde{x},\tilde{ y})+\frac{1}{\varepsilon}(d_{\sigma^{*}_{\tilde{H}}}(\tilde{x},y)^{2}-d_{ \sigma^{*}_{\tilde{H}}}(\tilde{x},\tilde{y})^{2})\] \[\leqslant\frac{1}{\varepsilon}(d_{\sigma^{*}_{\tilde{H}}}( \tilde{x},y)+d_{\sigma^{*}_{\tilde{H}}}(\tilde{x},\tilde{y}))d_{\sigma^{*}_{ \tilde{H}}}(\tilde{y},y)\] \[<\frac{1+\delta}{2}d_{\sigma^{*}_{\tilde{H}}}(\tilde{y},y)\] for any \(y\) in a neighborhood of \(\tilde{y}\), where the last inequality follows from \(h(0)\leqslant\delta\) and from the local uniform continuity of \(d_{\sigma^{*}_{\tilde{H}}}(\tilde{x},\cdot)\). 
Therefore we can conclude that \[v(y)-v(\tilde{y})+d_{\sigma^{*}_{\tilde{H}}}(\tilde{y},y)\geqslant\frac{1- \delta}{2}d_{\sigma^{*}_{\tilde{H}}}(\tilde{y},y)\geqslant\frac{1-\delta}{2 \alpha}d_{\mathbb{G}}(\tilde{y},y),\] which is a contradiction since \(v\) is a supersolution to \(H(x,Xv)=0\). Proof of Theorem 1.4.: The proof, in view of Proposition 7.1, Proposition 7.2 and Lemma 7.3, follows with the obvious modifications as in [23, Theorem 5.8]. ### Stability Finally, following [14], we prove Theorem 1.5, which is the analogue of [14, Theorem 6.4]. Proof of Theorem 1.5.: Fix \(x_{0}\in\Omega\) and let \(r>0\) be such that \(B_{r}(x_{0},d_{\Omega})\Subset\Omega\) and Proposition 3.3 holds. Then \(H_{n}\in\mathcal{K}_{0}(H_{n},B_{r}(x_{0},d_{\Omega}))\) for any \(n\in\mathbb{N}\) and \(H_{\infty}\in\mathcal{K}_{0}(H_{\infty},B_{r}(x_{0},d_{\Omega}))\). Moreover, in view of Proposition 7.2, \(u_{n}(x)-u_{n}(y)\leqslant d_{\sigma^{*}_{H_{n}}}(x,y)\) for any \(n\in\mathbb{N}\) and for any \(x,y\in\partial B_{r}(x_{0},d_{\Omega})\). Hence, in view of Theorem 1.3, \[u_{n}(x)=\inf_{y\in\partial B_{r}(x_{0})}\{d_{\sigma^{*}_{\tilde{H}_{n}}}(x,y )+u_{n}(y)\}\] for any \(x\in\overline{B_{r}(x_{0},d_{\Omega})}\) and any \(n\in\mathbb{N}\). By the local uniform convergence assumptions we infer that \[u_{\infty}(x)=\inf_{y\in\partial B_{r}(x_{0})}\{d_{\sigma^{*}_{\tilde{H}_{ \infty}}}(x,y)+u_{\infty}(y)\}\] for any \(x\in\overline{B_{r}(x_{0},d_{\Omega})}\), and so we conclude thanks to Theorem 1.3. **Remark 7.4**.: The convergence condition in the hypotheses of Theorem 1.5 is based on the optical length functions rather than on the Hamiltonians. Arguing as in [14], one can easily find sufficient conditions on the Hamiltonians in order to guarantee the local uniform convergence of the optical length functions.
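**Example 7.5** (an elementary illustration). The following example is not needed for the results above and is only meant to fix ideas; everything below follows directly from (3.2), (3.3), (1.12), (1.13) and the computation carried out in the proof of Proposition 7.1. Let \(\Omega\subseteq\mathbb{G}\) be a bounded domain and consider the eikonal Hamiltonian \(H(x,\xi):=|\xi|-1\). In this case \(Z(x)\) is the closed Euclidean unit ball for every \(x\) and, arguing as for \(K(x,\xi)=|\xi|-\alpha\) in the proof of Proposition 7.1, \[\sigma^{\star}(x,\xi)=|\xi|\qquad\text{and}\qquad d_{\sigma^{\star}}(x,y)=d_{\Omega}(x,y).\] Choosing, for instance, the constant-in-\(x\) extension \(K(x,\xi):=|\xi|-1\) on \(\mathbb{G}\times\mathbb{R}^{m}\), the associated optical length function is \(d_{\sigma^{\star}_{K}}=d_{\mathbb{G}}\). The boundary datum \(g\equiv 0\) trivially satisfies the compatibility condition (1.12), and the Hopf-Lax function (1.13) becomes \[w(x)=\inf_{y\in\partial\Omega}d_{\mathbb{G}}(x,y),\] i.e. the Carnot-Carathéodory distance from \(\partial\Omega\). Theorem 1.3 then exhibits \(w\) as a Monge solution of \(|Xu|-1=0\) in \(\Omega\) with \(u=0\) on \(\partial\Omega\), and, since the Hamiltonian is continuous, Theorem 1.2 shows that \(w\) is also a viscosity solution, in accordance with the well-known properties of the distance function from the boundary.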
2306.08230
Unbiased Learning of Deep Generative Models with Structured Discrete Representations
By composing graphical models with deep learning architectures, we learn generative models with the strengths of both frameworks. The structured variational autoencoder (SVAE) inherits structure and interpretability from graphical models, and flexible likelihoods for high-dimensional data from deep learning, but poses substantial optimization challenges. We propose novel algorithms for learning SVAEs, and are the first to demonstrate the SVAE's ability to handle multimodal uncertainty when data is missing by incorporating discrete latent variables. Our memory-efficient implicit differentiation scheme makes the SVAE tractable to learn via gradient descent, while demonstrating robustness to incomplete optimization. To more rapidly learn accurate graphical model parameters, we derive a method for computing natural gradients without manual derivations, which avoids biases found in prior work. These optimization innovations enable the first comparisons of the SVAE to state-of-the-art time series models, where the SVAE performs competitively while learning interpretable and structured discrete data representations.
Harry Bendekgey, Gabriel Hope, Erik B. Sudderth
2023-06-14T03:59:21Z
http://arxiv.org/abs/2306.08230v2
# Unbiased Learning of Deep Generative Models ###### Abstract By composing graphical models with deep learning architectures, we learn generative models with the strengths of both frameworks. The structured variational autoencoder (SVAE) inherits structure and interpretability from graphical models, and flexible likelihoods for high-dimensional data from deep learning, but poses substantial optimization challenges. We propose novel algorithms for learning SVAEs, and are the first to demonstrate the SVAE's ability to handle multimodal uncertainty when data is missing by incorporating discrete latent variables. Our memory-efficient implicit differentiation scheme makes the SVAE tractable to learn via gradient descent, while demonstrating robustness to incomplete optimization. To more rapidly learn accurate graphical model parameters, we derive a method for computing natural gradients without manual derivations, which avoids biases found in prior work. These optimization innovations enable the first comparisons of the SVAE to state-of-the-art time series models, where the SVAE performs competitively while learning interpretable and structured discrete data representations. ## 1 Introduction Advances in deep learning have dramatically increased the expressivity of machine learning models at great cost to their interpretability. This trade-off can be seen in deep generative models that produce remarkably accurate synthetic data, but often fail to illuminate the data's underlying factors of variation, and cannot easily incorporate domain knowledge. The _structured variational autoencoder_ (SVAE, Johnson et al. [24]) aims to elegantly address these issues by combining probabilistic graphical models [50] with the VAE [27], gaining both flexibility and interpretability. But since its 2016 introduction, SVAEs have seen few applications because their expressivity leads to optimization challenges. This work proposes three key fixes that enable efficient training of general SVAEs. SVAE inference requires iterative optimization [50; 17] of variational parameters for latent variables associated with every observation. Johnson et al. [24] backpropagate gradients through this multi-stage optimization, incurring prohibitive memory cost. We resolve this issue via an implicit differentiation scheme that shows empirical robustness even when inference has not fully converged. Prior work [24] also identifies natural gradients [2; 20] as an important accelerator of optimization convergence, but apply natural gradients in a manner that requires dropping parts of the SVAE loss, yielding biased learning updates. We instead derive unbiased natural gradient updates that are easily and efficiently implemented for any SVAE model via automatic differentiation. Basic VAEs require carefully tuned continuous relaxations [23; 38] for discrete latent variables, but SVAEs can utilize them seamlessly. We incorporate adaptive variational inference algorithms [21] to robustly avoid local optima when learning SVAEs with discrete structure, enabling data clustering. SVAE inference easily accommodates missing data, leading to accurate and multimodal imputations. We further improve training speed by generalizing prior work on parallel Kalman smoothers [45]. We begin in Sec. 2 and 3 by linking variational inference in graphical models and VAEs. Our optimization innovations (implicit differentiation in Sec. 4, unbiased natural gradients in Sec. 5, variational inference advances in Sec. 
6) then enable SVAE models to be efficiently trained to their full potential. Although SVAEs may incorporate any latent graphical structure, we focus on temporal data. In Sec. 8, we are the first to compare SVAE performance to state-of-the-art recurrent neural network- and transformer-based architectures on time series benchmarks [18], and the first to demonstrate that SVAEs provide a principled method for multimodal interpolation of missing data. ## 2 Background: Graphical Models and Variational Inference We learn generative models that produce complex data \(x\) via lower-dimensional latent variables \(z\). The distribution \(p(z|\theta)\) is defined by a graphical model (as in Fig. 2) with parameters \(\theta\), and \(z\) is processed by a (deep) neural network with weights \(\gamma\) to compute the data likelihood \(p_{\gamma}(x|z)\). Exact evaluation or simulation of the posterior \(p_{\gamma}(z,\theta|x)\) is intractable due to the neural network likelihood. _Variational inference_ (VI [50]) defines a family of approximate posteriors, and finds the distribution that best matches the true posterior by optimizing the _evidence lower bound_ (ELBO): \[\mathcal{L}[q(\theta;\eta)q(z;\omega),\gamma]=\mathbb{E}_{q(\theta;\eta)q(z; \omega)}\bigg{[}\log\frac{p(\theta)p(z|\theta)p_{\gamma}(x|z)}{q(\theta;\eta) q(z;\omega)}\bigg{]} \tag{1}\] Here, \(q(\theta;\eta)q(z;\omega)\approx p_{\gamma}(z,\theta|x)\) are known as _variational factors_. We parameterize these distributions via arbitrary exponential families with _natural parameters_\(\eta,\omega\). This implies that \[q(z;\omega)=\exp\{\langle\omega,t(z)\rangle-\log Z(\omega)\}, \hskip 28.452756ptZ(\omega)=\int_{z}\exp\{\langle\omega,t(z)\rangle\}\ dz. \tag{2}\] An exponential family is log-linear in its sufficient statistics \(t(z)\), where the normalizing constant \(Z(\omega)\) ensures it is a proper distribution. For models where \(p_{\gamma}(x|z)\) has a restricted conjugate form (rather than a deep neural network), we can maximize Eq. (1) by alternating optimization of \(\eta,\omega\); these coordinate ascent updates have a closed form [50]. _Stochastic VI_[20] improves scalability (for models with exponential-family likelihoods) by sampling batches of data \(x\), fitting a locally-optimal \(q(z;\omega)\) to the latent variables in that batch, and updating \(q(\theta;\eta)\) by the resulting (natural) gradient. **Amortized VI.** Because it is costly to optimize Eq. (1) with respect to \(\omega\) for each batch of data, VAEs employ _amortized VI_[26, 39, 42] to approximate the parameters of the optimal \(q(z;\omega)\) via a neural network _encoding_ of \(x\). The inference network weights \(\phi\) for this approximate posterior \(q_{\phi}(z|x)\) are jointly trained with the generative model. A potentially substantial _amortization gap_ exists [13, 30]: the inference network does not globally optimize the ELBO of Eq. (1) for all \(x\). **Structured VAEs.** Rather than approximating the full posterior \(p_{\gamma}(z|x,\theta)\propto p(z|\theta)p_{\gamma}(x|z)\), the SVAE [24] only approximates the neural-network likelihood, and leverages classic VI updates using the graphical structure of the prior \(p(z|\theta)\). Let \(\tilde{\ell}_{\phi}(z|x)\) approximate the unnormalized likelihood \(\ell(z)=p_{\gamma}(x|z)\) with form chosen to be conjugate to \(p(z|\theta)\), and parameters produced by an inference network with input \(x\) and weights \(\phi\). 
We determine an approximate SVAE posterior by optimizing a surrogate problem instead of Eq. (1), where the true likelihood is replaced by the conjugate surrogate: \[q_{\phi}(z|x;\eta)=\operatorname*{arg\,max}_{q(z)}\hat{\mathcal{L}}[q(\theta;\eta) q(z),\phi]=\operatorname*{arg\,max}_{q(z)}\mathbb{E}_{q(\theta;\eta)q(z)}\bigg{[} \log\frac{p(\theta)p(z|\theta)\hat{\ell}_{\phi}(z|x)}{q(\theta;\eta)q(z)}\bigg{]}.\] Given conjugacy, this surrogate objective can be optimized in closed form. The resulting variational factor \(q_{\phi}(z|x;\eta)\propto\hat{\ell}_{\phi}(z|x)\exp\{\mathbb{E}_{q(\theta;\eta) }[\log p(z|\theta)]\}\) is surrogate-loss optimal across _all probability distributions_, not just members of a restricted family. The overall ELBO is then \(\mathcal{L}[q(\theta;\eta)q_{\phi}(z|x;\eta),\gamma]\). If \(\hat{\ell}_{\phi}(z|x)\) is proportional to the true likelihood \(p_{\gamma}(x|z)\), we recover the true global maximizer of the ELBO, \(q_{\phi}(z|x)\propto\exp\{\mathbb{E}_{q(\theta)}[\log p_{\gamma}(z|x,\theta)]\}\); see appendix for derivations. SVAE inference does not introduce any additional approximations compared to amortized VI, and has the advantage that \(q_{\phi}(z|x;\eta)\) depends on the learned posterior \(q(\theta;\eta)\) of graphical model parameters. As we take gradient steps to improve the graphical generative model, the coupled inference model remains closely tied to the generative process. The ladder VAE [46] and related hierarchical VAEs [48; 11] also incorporate generative parameters in amortized variational inference, but impose a restricted generative hierarchy that is less flexible and interpretable than the SVAE.

Figure 1: The SVAE SLDS segments each sequence of human motion, which we display as a sequence of discrete colors. **Discrete variables are interpretable:** Below each segmentation, we show 5 segmentations of _other_ subjects performing the same action, noting similarity across semantically similar series. **Discrete variables are compact representations:** Samples from the generative model conditioned on ground-truth segmentations yield the stick figures in grey, which track closely with the observed data.

Example 1: Standard Normal. For a basic VAE, \(\theta\) is fixed and \(z\sim\mathcal{N}(0,I)\). The SVAE inference network outputs \(\hat{\ell}_{\phi}(z\mid\mu=\mu(x;\phi),\tau=\tau(x;\phi))\propto\mathcal{N}(z; \mu,\text{diag}(\tau^{-1}))\), which combines with the prior so that \(q_{\phi}(z|x)=\mathcal{N}(z;\frac{\tau}{\tau+1}\mu,\text{diag}((\tau+1)^{-1}))\). This reparameterization of the standard VAE posterior imposes the (useful) constraint that posterior variances must be smaller than prior variances.

Example 2: Linear Dynamical System (LDS). An LDS model for temporal data assumes the latent _state_ variables evolve linearly with Gaussian noise: \(z_{t}\sim\mathcal{N}(A_{t}z_{t-1}+b_{t},Q_{t})\), \(z_{1}\sim\mathcal{N}(\mu_{1},\Sigma_{1})\). In this case, we expand the exponential family distribution \(q(z;\omega)\) as a sum across time steps: \[q(z;\omega)=q(z_{1})\prod_{t=2}^{T}q(z_{t}|z_{t-1})=\exp\Big{\{}\langle \omega_{1},t(z_{1})\rangle+\sum_{t=2}^{T}\langle\omega_{t},t(z_{t-1},z_{t}) \rangle-\log Z(\omega)\Big{\}}, \tag{3}\] where \(\omega=\texttt{concat}(\omega_{t})\), and the prior \(p(z|\theta)\) belongs to this family. The surrogate loss-optimal \(q_{\phi}(z|x)\) incorporates linear dynamics from \(p(z|\theta)\) and independent Gaussian likelihoods \(\hat{\ell}_{\phi}(z|x)\) for each time step.
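Treating each per-step Gaussian potential as a pseudo-observation, the surrogate-optimal LDS posterior just described can be computed with a standard Kalman filter followed by a Rauch-Tung-Striebel smoother, as the next sentence notes. The sketch below is a minimal, generic version of that computation; names are illustrative, and it omits the expectations over \(q(\theta)\) that the variational smoother used for the SVAE would additionally take.

```python
import numpy as np

def lds_posterior(A, b, Q, mu1, S1, pot_means, pot_covs):
    """Gaussian posterior marginals for Example 2: linear latent dynamics
    combined with independent per-step Gaussian potentials.

    A, b, Q   : dynamics z_t ~ N(A z_{t-1} + b, Q)
    mu1, S1   : initial prior N(mu1, S1)
    pot_means[t], pot_covs[t] : recognition potential at step t, treated as a
    pseudo-observation of z_t with the stated mean and covariance.
    Returns smoothed means and covariances (standard Kalman / RTS recursions).
    """
    T, D = pot_means.shape
    I = np.eye(D)
    fm, fP = np.zeros((T, D)), np.zeros((T, D, D))   # filtered
    pm, pP = np.zeros((T, D)), np.zeros((T, D, D))   # predicted
    for t in range(T):
        if t == 0:
            pm[t], pP[t] = mu1, S1
        else:
            pm[t] = A @ fm[t - 1] + b
            pP[t] = A @ fP[t - 1] @ A.T + Q
        K = pP[t] @ np.linalg.inv(pP[t] + pot_covs[t])   # pseudo-observation update
        fm[t] = pm[t] + K @ (pot_means[t] - pm[t])
        fP[t] = (I - K) @ pP[t]
    sm, sP = fm.copy(), fP.copy()                        # backward (RTS) pass
    for t in range(T - 2, -1, -1):
        G = fP[t] @ A.T @ np.linalg.inv(pP[t + 1])
        sm[t] = fm[t] + G @ (sm[t + 1] - pm[t + 1])
        sP[t] = fP[t] + G @ (sP[t + 1] - pP[t + 1]) @ G.T
    return sm, sP

# tiny usage example with made-up potentials
T, D = 50, 2
A, b, Q = 0.95 * np.eye(D), np.zeros(D), 0.1 * np.eye(D)
sm, sP = lds_posterior(A, b, Q, np.zeros(D), np.eye(D),
                       np.random.randn(T, D), np.tile(np.eye(D), (T, 1, 1)))
```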
Inference and simulation of \(q_{\phi}(z|x)\) is then feasible via a Kalman smoothing algorithm [4] that efficiently aggregates information across time steps to smooth our estimate of \(z\). ## 3 Structured Variational Inference For complex graphical models, the distributions \(p(z|\theta)\) and \(q(z)\) typically factorize across subsets of the latent variables \(z\), as illustrated in Fig. 2. We thus generalize Eq. (3) by partitioning \(z\) into local variable groups, and representing the dependencies between them via a set of factors \(\mathcal{F}\): \[q(z;\omega)=\exp\Big{\{}\sum_{f\in\mathcal{F}}\langle\omega_{f},t(z_{f}) \rangle-\log Z(\omega)\Big{\}}. \tag{4}\] For certain factor graphs, we can efficiently compute marginals and draw samples via the _belief propagation_ (BP) algorithm [40; 32]. However, exact inference is intractable for many important graphical models, and SVAE training accommodates this via _structured_ variational inference [17; 51; 50]. We connect structured VI to SVAEs in this section, and provide detailed proofs in the appendix.

Figure 2: _Left:_ Generative (above) and inference (below) graphical models for LDS and SLDS SVAEs. For the SLDS, we show the prior and posterior as factor graphs [32]. \(q_{\phi}(z|x)\) combines potentials from the inference network with its prior. Structured variational inference separates continuous from discrete latent variables for tractability, and mean field messages are propagated across residual edges between disentangled factors.

### Background: Block Coordinate Ascent for Mean Field Variational Inference Let \(\{z_{m}\}_{m=1}^{M}\) be a partition of the variables in the graphical model, chosen so that inference within each \(z_{m}\) is tractable. We infer factorized (approximate) marginals \(q_{\phi}(z_{m}|x;\eta)\) for each mean field cluster by maximizing \(\hat{\mathcal{L}}[q(\theta;\eta)\prod_{m}q(z_{m}),\phi]\). The optimal \(q_{\phi}(z_{m}|x;\eta)\) inherit the structure of the joint optimizer \(q_{\phi}(z|x;\eta)\), replacing any factors which cross cluster boundaries with factorized approximations (see Fig. 2). The optimal parameters for these disentangled factors are a linear function of the expected statistics of clusters connected to \(m\) via residual edges. These expectations in turn depend on their clusters' parameters, defining a stationary condition for the optimal \(\omega\): \[\omega_{m}=\texttt{MF}(\mu_{-m};\eta),\hskip 28.452756pt\mu_{m}=\texttt{BP}( \omega_{m};\eta,\phi,x). \tag{5}\] Here, BP is a belief propagation algorithm which computes expected statistics \(\mu_{m}\) for cluster \(m\), and the linear _mean field_ function MF updates parameters of cluster \(m\) given the expectations of _other_ clusters \(\mu_{-m}\) along residual edges. We solve this optimization problem via the _block updating_ coordinate ascent in Alg. 1, which is guaranteed to converge to a local optimum of \(\hat{\mathcal{L}}[q(\theta)\prod_{m}q(z_{m})]\). ### Reparameterization and Discrete Latent Variables While optimizing \(q_{\phi}(z_{m}|x;\eta)\) at inference time requires some computational overhead, it allows us to bypass the typical obstacles to training VAEs with discrete latent variables. To learn the parameters \(\phi\) of the inference network, VAE training must backpropagate through samples of latent variables via a smooth reparameterization [26], which is impossible for discrete variables.
Continuous relaxations of discrete variables [38; 23] exist, but these methods produced biased approximations of the true discrete ELBO, and are sensitive to annealing schedules for temperature hyperparameters. SVAE training only requires reparameterized samples of those latent variables which are direct inputs to the generative network \(p_{\gamma}(x|z)\). By restricting these inputs to continuous variables, and using other latent discrete variables to capture their dependencies, discrete variables are marginalized via structured VI _without_ any need for biased relaxations. With a slight abuse of notation, we will denote continuous variables in \(z\) by \(z_{m}\), and discrete variables by \(k_{m}\). Example 3: Gaussian Mixture.Consider a generalized VAE where the latent state is sampled from a mixture model: \(k\sim\text{Cat}(\pi)\), \(z\sim\mathcal{N}(\mu_{k},\Sigma_{k})\). Variational inference produces disentangled factors \(q_{\phi}(z|x)q_{\phi}(k|x)\), and we only require continuous samples \(z\sim q_{\phi}(z|x)\) for gradient estimation. Example 4: Switching Linear Dynamical System (SLDS).Consider a set of discrete states which evolve according to a Markov chain \(k_{1}\sim\text{Cat}(\pi_{0}),k_{t}\sim\text{Cat}(\pi_{k_{t-1}})\), and a continuous state evolving according to switching linear dynamics: \(z_{0}\sim\mathcal{N}(\mu_{0},\Sigma_{0}),z_{t}\sim\mathcal{N}(A_{k_{t}}z_{t-1} +b_{k_{t}},Q_{k_{t}})\). The transition matrix, offset, and noise at step \(t\) depends on \(k_{t}\). Exact inference in SLDS is intractable [33], but structured VI [17] learns a partially factorized posterior \(q_{\phi}(z|x;\eta)q_{\phi}(k|x;\eta)\) that exactly captures dependencies _within_ the continuous and discrete Markov chains. The BP step uses variational extensions [5; 4] of the Kalman smoother to compute means and variances in the continuous LDS, and forward-backward message-passing to compute marginals for the discrete Markov model. Let \(k_{tj}=1\) if the SLDS is in discrete state \(j\) at time \(t\), \(k_{tj}=0\) otherwise, and \(\bar{\theta}_{j}=\mathbb{E}_{q(\theta;\eta)}[\theta_{j}]\) equal the expected (natural) parameters of the LDS for discrete state \(j\). Structured MF then updates the natural parameters of discrete states \(\omega_{k_{tj}}\), and continuous states \(\omega_{z_{t},z_{t+1}}\), as follows: \[\omega_{z_{t},z_{t+1}}=\sum_{j}\mathbb{E}_{q}[k_{tj}]\bar{\theta}_{j},\hskip 42.679134pt \omega_{k_{tj}}=\langle\bar{\theta}_{j},\mathbb{E}_{q}[t(z_{t-1},z_{t})]\rangle. \tag{6}\] ## 4 Stable and Memory-Efficient Learning via Implicit Gradients When \(q_{\phi}(z|x)\) is computed via closed-form inference, gradients of the SVAE ELBO may be obtained via automatic differentiation. This requires backpropagating through the encoder and decoder networks, as well as through reparameterized sampling \(z\sim q_{\phi}(z|x;\eta)\) from the variational posterior. For more complex models where structured VI approximations are required, gradients of the loss become difficult to compute because we must backpropagate through Alg. 1. For the SLDS this _unrolled_ gradient computation must backpropagate through repeated application of the Kalman smoother and discrete BP, which often has prohibitive memory cost (see Table 1). We instead apply the _implicit function theorem_ (IFT [28]) to compute implicit gradients \(\frac{\partial\omega}{\partial\eta}\), \(\frac{\partial\omega}{\partial\phi}\) without storing intermediate states. 
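For reference, the forward pass whose gradients are at issue in this section is the block-update loop of Alg. 1 and Eq. (5). A minimal sketch of that loop is given below; `bp_update` and `mf_update` are placeholders for the model-specific belief-propagation and mean-field maps, and the code is only meant to fix notation, not to reproduce the paper's implementation.

```python
import numpy as np

def block_coordinate_ascent(omega_init, bp_update, mf_update, num_steps=10, tol=1e-8):
    """Sequential block updates of Eq. (5) until a (local) fixed point.

    omega_init : dict mapping block name m -> natural parameters omega_m (arrays)
    bp_update  : callable(m, omega_m) -> expected statistics mu_m for block m
    mf_update  : callable(m, mu_other) -> updated omega_m from other blocks' stats
    Returns the optimized parameters and the visited iterates (the quantities an
    "unrolled" gradient computation would have to store).
    """
    omega = dict(omega_init)
    iterates = [dict(omega)]
    for _ in range(num_steps):
        max_change = 0.0
        for m in omega:
            mu_other = {n: bp_update(n, omega[n]) for n in omega if n != m}
            new_m = mf_update(m, mu_other)
            max_change = max(max_change, float(np.max(np.abs(new_m - omega[m]))))
            omega[m] = new_m
        iterates.append(dict(omega))
        if max_change < tol:
            break
    return omega, iterates
```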
We focus on gradients with respect to \(\eta\) for compactness, but gradients with respect to \(\phi\) are computed similarly. Let \(\omega^{(1)},\ldots,\omega^{(L)}\) be the sequence of \(\omega\) values produced during the "forward" pass of block coordinate ascent, where \(\omega^{(L)}\) are the optimized structured VI parameters. The IFT expresses gradients via the solution of a set of linear equations: \[\frac{\partial\omega^{(L)}}{\partial\eta}=-\left(\frac{\partial g(\omega;\eta, \phi,x)}{\partial\omega}\right)^{-1}\frac{\partial g(\omega;\eta,\phi,x)}{ \partial\eta},\hskip 28.452756ptg(\omega)=\omega-\texttt{MF}(\texttt{BP}(\omega; \eta,\phi,x);\eta). \tag{7}\] Here we apply the BP and MF updates in _parallel_ for all variable blocks \(m\), rather than sequentially as in Eq. (5). At a VI fixed point, these parallel updates leave parameters unchanged and \(g(\omega)=0\). For an SLDS with latent dimension \(D\) and \(K\) discrete states, \(\omega\) has \(O(K+D^{2})\) parameters at each time step. Over \(T\) time steps, \(\frac{\partial g}{\partial\omega}\) is thus a matrix with \(O(T(D^{2}+K))\) rows/columns and \(O(T^{2}D^{2}K)\) non-zero elements. For even moderate-sized models, this is infeasible to explicitly construct or solve. We numerically solve Eq. (7) via a Richardson iteration [43, 52] that repeatedly evaluates matrix-vector products \((I-A)v^{\prime}\) to solve \(A^{-1}v\). Such numerical methods have been previously used for other tasks, like hyperparameter optimization [37] and meta-learning [41], but not for the training of SVAEs. The resulting algorithm resembles unrolled gradient estimation, but we repeatedly backpropagate through updates at the _endpoint_ of optimization instead of along the optimization trajectory. _Richardson:_ \[\frac{\partial\omega^{(L)}}{\partial\eta} \approx-\sum_{j=0}^{J}\left(I-\frac{\partial g(\omega^{(L)};\eta, \phi,x)}{\partial\omega}\right)^{j}\frac{\partial g(\omega^{(L)};\eta,\phi,x) }{\partial\eta}.\] (8) _Unrolled:_ \[\frac{\partial\omega^{(L)}}{\partial\eta} \approx-\sum_{\ell=0}^{L}\bigg{[}\prod_{i=\ell}^{L}\bigg{(}I- \frac{\partial g(\omega^{(i)};\eta,\phi,x)}{\partial\omega}\bigg{)}\bigg{]} \frac{\partial g(\omega^{(\ell)};\eta,\phi,x)}{\partial\eta}.\] (9) Lorraine et al. [37] tune the number of Richardson steps \(J\) to balance speed and accuracy. However, there is another reason to limit the number of iterations: when the forward pass is not iterated until convergence, \(\omega^{(L)}\) is not a stationary point of \(g(\omega)\) and therefore Eq. (8) is not guaranteed to converge as \(J\to\infty\). For batch learning, waiting for _all_ VI routines to converge to a (local) optimum might be prohibitively slow, so we might halt VI before \(\omega^{(L)}\) converges to numerical precision. Seeking robustness even when the forward pass has not converged, we propose a _capped implicit_ gradient estimator that runs one Richardson iteration for every step of forward optimization, so that \(J=L\). In this regime, implicit gradient computation has a one-to-one correspondence to unrolled gradient computation, while requiring a small fraction of the memory. This can be thought of as a form of gradient regularization: if we take very few steps in the forward pass, we should have low confidence in the optimality of our end-point and compute fewer terms of the Neumann series (8). **Experiments.** In Fig. 3 (left, middle) we compute the accuracy of different approximate gradient estimators for training an SLDS SVAE as in Fig. 3.
To our knowledge, we are the first to investigate the quality of implicit gradients evaluated away from an optimum, and we compare our _capped implicit_ proposal to other gradient estimators. Ground truth gradients are computed as the implicit gradient at the optimum, and we compare the root-mean-squared-error (rMSE) of various gradient estimators to that of the naive _No-Solve_ solution, which replaces the inverted matrix in Eq. (7) with Identity. \begin{table} \begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|c|}{Time of gradient step (ms)} \\ Method & \(B=1\) & \(B=32\) & \(B=64\) & \(B=128\) \\ \hline Implicit + Parallel & 603 & 922 & 1290 & 2060 \\ Unrolled + Parallel & 659 & 1080 & n/a & n/a \\ \hline Implicit + Sequential & 2560 & 3160 & 3290 & 3530 \\ Unrolled + Sequential & 2660 & 3290 & 3980 & n/a \\ \hline \end{tabular} \end{table} Table 1: Time cost of ELBO backpropagation in the SVAE SLDS with batch size \(B\) and \(L=10\) block update steps. We compare _capped implicit_ gradients to unrolled gradients using two different inference algorithms: standard sequential BP, and our parallel extension. For large batch sizes, the unrolled method crashed because it attempted to allocate more than 48GB of GPU memory. Here, the latent variable \(z\) has dimension \(D=16\) at each of \(T=250\) time steps, and there are \(50\) discrete states. We consider two models: one with randomly initialized parameters, and one that has been trained for 20 epochs. The newly-initialized model requires more forward steps for the block updating routine to converge. We compare the memory-intensive unrolled estimator (_Unrolled_) to three versions of the implicit gradient estimator. First, an uncapped version (_Implicit_) always performs \(J=50\) Richardson iterations regardless of the number of forward iterations, thus incurring high computation time. Note that evaluating implicit gradient far from an optimum can produce high error; in the newly-initialized model, many of these iterations diverge to infinity when fewer than 20 forward steps are taken. Second, we consider a capped implicit estimator (_Implicit+Cap_) which sets \(J=L\) to match the number of forward steps. Finally, we consider a capped implicit estimator which also includes a threshold (_Implicit+Cap+Threshold_): if the forward pass has not converged in the specified number of steps, the thresholded estimator simply returns the _No-Solve_ solution. This gradient is stable in all regimes while retaining desirable asymptotic properties [52]. Our remaining experiments therefore use this method for computing gradients for SVAE training. **Prior work.** Johnson et al. [24] consider implicit differentiation, but only very narrowly. They derive implicit gradients by hand in cases (like the LDS) where exact inference is tractable, so the linear solve in Eq. (7) cancels with other terms, and gradients may be evaluated via standard automatic differentiation. For models requiring structured VI (like the SLDS), [24] instead computes _unrolled_ gradients for inference network weights \(\phi\), suffering high memory overhead. They compute neither unrolled nor implicit gradients with respect to generative model parameters \(\eta\); in practice they set the gradient of the inner optimization to 0, yielding a biased training signal. Our innovations instead enable memory-efficient and unbiased gradient estimates for all parameters, for all graphical models. 
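A compact numerical illustration of the capped implicit estimator of Eqs. (7)-(8), including the No-Solve fallback, is given below. It uses small dense matrices purely so that the sketch is runnable; the actual SVAE computation is matrix-free and driven by automatic differentiation, and all names here are illustrative rather than taken from any released code.

```python
import numpy as np

def capped_implicit_vjp(v, A, B, num_forward_steps, converged=True, threshold=True):
    """Approximate v^T (d omega / d eta) at a fixed point of g(omega) = 0.

    Implements the truncated Neumann series of Eq. (8),
        d omega / d eta  ~  -sum_{j=0}^{J} (I - dg/domega)^j dg/deta,
    with the cap J = L (one Richardson term per forward block update).
    A : dg/domega at omega^(L), shape (n, n);  B : dg/deta, shape (n, p).
    """
    if threshold and not converged:
        return -(v @ B)                    # "No-Solve" fallback: inverse -> identity
    u = v.copy()
    acc = v.copy()
    for _ in range(num_forward_steps):     # J = L Richardson terms
        u = u - u @ A                      # u <- u (I - dg/domega)
        acc = acc + u
    return -(acc @ B)

# Self-check on a linear fixed point omega = M omega + c(eta), where
# g(omega) = omega - M omega - c(eta)  =>  d omega / d eta = (I - M)^{-1} dc/deta.
rng = np.random.default_rng(0)
M = 0.1 * rng.standard_normal((4, 4))
dc = rng.standard_normal((4, 2))
A, B = np.eye(4) - M, -dc
v = rng.standard_normal(4)
approx = capped_implicit_vjp(v, A, B, num_forward_steps=50)
exact = v @ np.linalg.solve(np.eye(4) - M, dc)
print(np.allclose(approx, exact, atol=1e-6))
```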
## 5 Rapid Learning via Unbiased Natural Gradients Another central challenge in optimizing the SVAE lies in the optimization of the parameters of probability distributions. Gradient descent implicitly uses Euclidean distance as its notion of distance between parameter vectors, which is sensible for a neural network's parameters but a poor indicator for the divergence between two distributions. The natural gradient [2] resolves this issue by rescaling the gradient by the Fisher information matrix \(F_{\eta}\) of \(q(\theta;\eta)\), given by: \[F_{\eta}=\mathbb{E}_{q(\theta;\eta)}\big{[}\big{(}\nabla_{\eta}q(\theta;\eta) \big{)}\cdot\big{(}\nabla_{\eta}q(\theta;\eta)\big{)}^{T}\big{]}. \tag{10}\] Johnson et al. [24] demonstrate the advantages of natural gradients for the SVAE, drawing parallels to the natural gradients of stochastic VI (SVI, [20]). SVI extends the variational EM algorithm to mini-batch learning: similar to the SVAE, it fits \(q(z)\) in an inner optimization loop and learns \(q(\theta;\eta)\) in an outer loop by natural gradient descent. The key difference between SVI and the SVAE is that SVI's inner optimization is done with respect to the true loss function \(\mathcal{L}\), whereas the SVAE uses a surrogate \(\hat{\mathcal{L}}\). SVI can only do this inner optimization by restricting all distributions to be conjugate exponential family members, giving up the flexibility provided by neural networks in the SVAE. Figure 3: We compare implicit gradient estimators’ stability (left, middle), and gradient conditioning methods’ loss trajectory (right), on human motion capture data (Sec. 8). _Stability:_ Gradient estimate rMSE relative to the _No-Solve_ estimator (smaller is better) for various numbers of VI block updates \(L\), and SVAE models taken from the start of training (left) and after 20 epochs (middle). Solid lines show median rMSE ratio across a batch of 128 data points, and dashed lines show \(90^{th}\) percentiles. _Conditioning:_ Convergence of negative-ELBO loss versus number of optimization steps (log-scale) for conventional (non-natural) gradients, biased natural gradients [24], and unbiased natural gradients computed via automatic differentiation (Sec. 5). Let \(\mu_{\eta}\) be the expected sufficient statistics of \(q(\theta;\eta)\). Exponential family theory tells us that \(\frac{\partial\mu}{\partial\eta}=F_{\eta}\), allowing Johnson et al. [24] to derive the natural gradients of the SVAE loss: \[\frac{\partial\mathcal{L}}{\partial\eta}F_{\eta}^{-1}=\frac{\partial\mathcal{L }}{\partial\mu}\frac{\partial\mu}{\partial\eta}F_{\eta}^{-1}=\frac{\partial \mathcal{L}}{\partial\mu},\qquad\qquad\frac{\partial\mathcal{L}}{\partial\mu }=\overbrace{\eta_{0}+\mathbb{E}_{q_{\phi}(z|x;\eta)}[t(z)]-\eta}^{\text{SV update}}+\overbrace{\frac{\partial\mathcal{L}}{\partial\omega}\cdot\frac{ \partial\omega}{\partial\eta}}^{\text{correction term}}. \tag{11}\] This gradient differs from the SVI gradient by the final term: because SVI's inner loop optimizes \(\omega\) with respect to the true loss \(\mathcal{L}\), \(\frac{\partial\mathcal{L}}{\partial\omega}=0\) for conjugate models. Johnson et al. [24] train their SVAE by dropping the correction term and optimizing via the SVI update equation, yielding biased gradients. There are two challenges to computing unbiased gradients in the SVAE. First, in the structured mean field case \(\frac{\partial\omega}{\partial\eta}\) involves computing an implicit or unrolled gradient, as described in Sec. 3. 
Second, including the correction term in the gradient costs us a desirable property of the SVI natural gradient: for step size less than 1, any constraints on the distribution's natural parameters are guaranteed to be preserved, such as positivity or positive-definiteness. We resolve this issue by reparameterizing \(\eta\) into an unconstrained space, and computing natural gradients with respect to those new parameters. Letting \(\tilde{\eta}\) be an unconstrained reparameterization of \(\eta\), such as \(\eta=\texttt{Softplus}\{\tilde{\eta}\}=\log(1+e^{\tilde{\eta}})\) for a positive precision parameter, we have: \[\frac{\partial\mathcal{L}}{\partial\tilde{\eta}}F_{\tilde{\eta}}^{-1}=\frac{ \partial\mathcal{L}}{\partial\mu}\cdot\left(\frac{\partial\eta}{\partial\tilde{ \eta}}\right)^{-T}=\left(\frac{\partial\tilde{\eta}}{\partial\eta}\cdot\nabla_{\mu} \mathcal{L}\right)^{T}. \tag{12}\] See appendix for proof. This differs from the non-natural gradient in two ways. First, the Jacobian of the \(\eta\to\mu\) map is dropped, as before. Unlike Johnson et al. [24], we do not hand-derive the solution; we employ a _straight-through gradient estimator_[6] to replace this Jacobian with the identity. Then, the Jacobian of the \(\tilde{\eta}\to\eta\) map is replaced by its inverse-transpose. This new gradient can be computed without any matrix arithmetic by noting that the inverse of a Jacobian is the Jacobian of the inverse function. Thus Eq. (12) can be computed by replacing the reverse-mode backpropagation through the \(\tilde{\eta}\to\eta\) map with a forward-mode differentiation through the inverse \(\eta\to\tilde{\eta}\) map. In Fig. 3 (right) we show the performance benefits of our novel unbiased natural gradients (with stochastic gradient descent) compared to regular gradients with an Adam optimizer [25] and stochastic gradient descent via biased natural gradients of Johnson et al. [24] which drop the correction term. Results are shown for an SVAE-LDS model whose pre-trained encoder and decoder are fixed. ## 6 Adapting Graphical Model Innovations Efficient implementations of BP inference, parameter initializations that avoid poor local optima, and principled handling of missing data are well-established advantages of the graphical model framework. We incorporate all of these to make SVAE training more efficient and robust. Parallel inference. The BP algorithm processes temporal data sequentially, making it poorly suited for large-scale learning of SVAEs on modern GPUs. Sarkka & Garcia-Fernandez [45] developed a method to parallelize the usually-sequential Kalman smoother algorithm across time for jointly Gaussian data. Their algorithm is not directly applicable to our VI setting where we take expectations over \(q(\theta)\) instead of having fixed parameters \(\hat{\theta}\), but we derive an analogous parallelization of variational BP in the appendix. We demonstrate large speed gains from this adaptation in Table 1. Initialization. Poor initialization of discrete clusters can cause SLDS training to collapse to a single discrete state. This problem becomes worse when the graphical model is trained on the output of a neural network, which when untrained produces outputs which do not capture meaningful statistics of the high-dimensional data. We therefore propose a three-stage training routine: a basic VAE is trained to initialize \(p_{\gamma}(x|z)\), and then the output of the corresponding inference network is used for variational learning of graphical model parameters [21].
Once the deep network and graphical model are sensibly initialized, we refine them via joint optimization while avoiding collapse. Missing data.The structure provided by the SVAE graphical model allows us to solve marginal and conditional inference queries not seen at training time. In particular, we explore the ability of a trained SVAE to impute data that is missing for an extended interval of time. By simply setting \(\hat{\ell}_{\phi}(z_{t}|x_{t};\eta)\) to be uniform at a particular timestep \(t\), our posterior estimate of \(z_{t}\) is only guided by the prior, which aggregates information across time to produce a smooth estimate of the posterior on \(z_{t}\) ## 7 Related Work The most directly comparable model to the SVAE is the _Stochastic Inference Network_ (SIN [35]), which employs a graphical model prior \(p(z|\theta)\) but estimates \(q(z)\) through traditional amortized inference; a parameterized function that shares no parameters with the graphical model produces variational posteriors. The authors consider discrete latent variable models like the SLDS, but due to the intractability of discrete reparameterization, discrete dynamics cannot be used for reconstruction. In recent independent work, Zhao & Linderman [55] also revisit the capabilities of the SVAE. However, their work differs from ours in a few key respects. First, because their experiments are restricted to the LDS graphical model (which requires no mean field factorization nor block updating), they do not need implicit differentiation, and do not explore the capacity of the SVAE to include discrete latent variables. Second, because they optimize point-estimates \(\hat{\theta}\) of parameters instead of variational factors \(q(\theta)\), they do not make use of natural gradients. In this point-estimate formulation, they apply the parallel Kalman smoother [45] off-the-shelf, whereas we derive a novel extension for our variational setting. Finally, their experimental results are confined to toy and synthetic data sets. Girin et al. [18] provide a comprehensive survey of dyanamal VAEs (DVAEs) for time series data which use recurrent neural networks to model temporal dependencies. The _Stochastic Recurrent Neural Network_ (SRNN [14]), which has similar structure to the _Variational Recurrent Neural Network_ (VRNN [12]), is the highest-performing model in their survey; it models data via one-step-ahead prediction, producing probabilities \(p(x_{t}|z,x_{t-1})\). This model therefore reconstructs \(x\) using more information than is encoded in \(z\) by skipping over the latent state and directly connecting ground truth to reconstruction, reducing the problem of sequence generation to a sum of very-local one-step predictions. On the other hand, the _Deep Kalman Smoother_ (DKS [29]) extension of the Deep Kalman Filter [31] is the best-performing model which generates observations independently given only information stored in the latent code \(z\). Transformers [49] have achieved state-of-the-art generative performance on sequential language modeling. However, Zeng et al. [54] argue that their permutation-invariance results in weak performance for time series data where each observation carries low semantic meaning. Unlike text, many time series models are characterized by their temporal dynamics rather than the collection of possibly-permutable tokens. Lin et al. 
[36] propose a variational autoencoder with encoder \(q(z_{t}|x_{1:T})\), decoder \(p(x_{t+1}|x_{1:t},z_{1:t+1})\), and latent dynamics \(p(z_{t+1}|x_{1:t},z_{1:t})\) parameterized by transformers. RNNs lack principled options for handling missing data. Heuristics such as constructing a dummy neural network input of all zeros for unobserved time steps, or interpolating with exponentially decayed observations [10], effectively require training to learn these imputation heuristics. Such RNNs must thus be trained on missing-ness that is similar to test missing-ness, unlike the SVAE. We, as in Johnson et al. [24], only consider SVAEs where the inference network output factorizes across latent variables. Orthogonal to our contributions, Yu et al. [53] investigate the advantages of taking the SVAE beyond this restriction, and building models where the recognition network outputs proxy-likelihood functions on _groups_ of latent variables. The Graphical Generative Adversarial Network [34] integrates graphical models with GANs for structured data. Experiments in [34] solely used image data; we compared to their implementation, but it performed poorly on our time-series data, generating unrecognizable samples with huge FID. Figure 4: Unconstrained generation of speech spectrogram data with data dimension \(513\) and \(T=500\) time-steps (models are trained on data with \(T=50\)). An example sequence is shown. Like the true data, the SVAE-SLDS moves between discrete structures over time (horizontal) representing individual sounds, while the single-discrete-state SIN-SLDS [35] blends together continuous dynamics. The autoregressive Transformer DVAE [36] degenerates to constant prediction across long time horizons. While discriminative methods may be explicitly trained to impute time series, we use imputation performance as a measure of generative model quality, so do not compare to these approaches. Unlike discriminative methods, SVAE imputation does _not_ require training data with aligned missing-ness. ## 8 Experiments We compare models on test likelihood, the quality of their generations, and the quality of their interpolations on joint-tracking human motion capture data (MOCAP [9; 22]) and audio spectrogram data of people reading Wall Street Journal headlines (WSJ0 [16]); see Table 2. MOCAP data has data dimension \(84\) and \(T=250\) time steps. WSJ0 has \(513\)-dimensional data and \(T=50\). Experimental details are provided in the supplement. Generation quality is judged via a modified _Frechet inception distance_[15] metric. We replace the InceptionV3 network with appropriate classifiers for motion capture and speech data (see appendix). SVAE-SLDS-Bias runs the SVAE as presented by Johnson et al. [24], with unrolled gradients, dropped correction term, sequential Kalman smoother, and no pre-training scheme. We match encoder-decoder architectures for all SVAE and SIN models using small networks (about 100,000 total parameters for motion data). The DKS, SRNN, and Transformer DVAE have approximately 300,000, 500,000, and 1.4 million parameters each; see appendix for details. For every non-SVAE model, we interpolate by drawing missing \(z\) from the generative prior \(p(z|\theta)\), noting that an alternative approach of in-filling missing data with zeros causes the model to reconstruct the zeros; see appendix. In Fig. 
5, we see the SVAE-SLDS uses discrete states to sample variable interpolations, while the SRNN's one-step-ahead prediction scheme cannot incorporate future information in imputation, leading to discontinuity at the end of the missing region. We also note that despite achieving the highest test likelihoods, the SRNN produces some degenerate sequences when we iterate next-step prediction, and has inferior FID.

\begin{table} \begin{tabular}{|c||c||c|c c c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\(\log p(x)\geq\) (\(\uparrow\))} & \multirow{2}{*}{Sample FID (\(\downarrow\))} & \multicolumn{3}{c|}{Interpolation FIDs (\(\downarrow\))} \\ \cline{4-6} & & & 0.0-0.8 & 0.2-1.0 & 0.2-0.8 \\ \hline \multicolumn{6}{|c|}{**Human motion capture (h3.6m)**} \\ \hline SVAE-SLDS & 2.39 & \(\mathbf{12.3\pm 0.2}\) & \(\mathbf{7.9\pm 0.2}\) & \(\mathbf{7.5\pm 0.2}\) & \(\mathbf{2.8\pm 0.02}\) \\ SVAE-SLDS-Bias [24] & 2.36 & \(34.6\pm 0.7\) & \(28.8\pm 0.2\) & \(25.8\pm 0.3\) & \(6.71\pm 0.12\) \\ SVAE-LDS & 2.28 & \(34.0\pm 0.3\) & \(19.3\pm 0.2\) & \(21.9\pm 0.2\) & \(7.90\pm 0.13\) \\ \hline SIN-SLDS [35] & 2.36 & \(33.7\pm 0.4\) & \(12.38\pm 0.12\) & \(8.97\pm 0.08\) & \(3.27\pm 0.05\) \\ SIN-LDS [35] & 2.33 & \(65.2\pm 1.4\) & \(18.3\pm 0.2\) & \(15.5\pm 0.2\) & \(6.24\pm 0.09\) \\ \hline Transformer [36] & 2.82 & \(421\pm 11\) & \(234\pm 9\) & \(228\pm 5\) & \(113\pm 5\) \\ SRNN [14] & \(\mathbf{2.94}\) & \(62.7\pm 0.7\) & \(43.5\pm 0.7\) & \(24.2\pm 0.6\) & \(14.2\pm 0.3\) \\ DKS [29] & 2.31 & \(136\pm 6\) & \(46.7\pm 1.7\) & \(33.3\pm 1.1\) & \(9.0\pm 0.3\) \\ \hline \multicolumn{6}{|c|}{**Audio Spectrogram (WSJ0)**} \\ \hline SVAE-SLDS & 1.54 & \(\mathbf{10.15\pm 0.15}\) & \(\mathbf{7.13\pm 0.07}\) & \(\mathbf{7.38\pm 0.09}\) & \(\mathbf{4.55\pm 0.09}\) \\ SVAE-SLDS-Bias & 1.45 & \(12.3\pm 0.06\) & \(7.95\pm 0.08\) & \(8.82\pm 0.09\) & \(6.32\pm 0.09\) \\ SVAE-LDS & 1.56 & \(11.75\pm 0.06\) & \(8.28\pm 0.08\) & \(8.34\pm 0.12\) & \(5.79\pm 0.08\) \\ SIN-SLDS & 1.54 & \(16.2\pm 0.2\) & \(13.55\pm 0.09\) & \(12.66\pm 0.11\) & \(7.81\pm 0.06\) \\ SIN-LDS & 1.53 & \(14.9\pm 0.2\) & \(13.56\pm 0.11\) & \(13.85\pm 0.05\) & \(8.22\pm 0.05\) \\ \hline Transformer & 1.88 & \(\mathbf{10.23\pm 0.11}\) & \(10.20\pm 0.11\) & \(\mathbf{7.48\pm 0.08}\) & \(8.03\pm 0.03\) \\ SRNN & \(\mathbf{1.94}\) & \(17.81\pm 0.06\) & \(20.07\pm 0.09\) & \(15.88\pm 0.1\) & \(17.80\pm 0.08\) \\ DKS & \(1.55\) & \(12.77\pm 0.08\) & \(11.64\pm 0.11\) & \(8.6\pm 0.2\) & \(8.02\pm 0.14\) \\ \hline \end{tabular} \end{table}

Table 2: Comparison of model performance on log-likelihood (higher is better), FIDs of unconditionally generated samples (lower is better), and FIDs of interpolations on augmented human motion capture and audio spectrogram data. Each interpolation column corresponds to a masking regime where the shown range of percentiles of the data is masked, e.g. 0.0-0.8 means the first 80% of time steps are masked.

Figure 5: Interpolations of human motion capture data. Red figures (covering 150 time steps) are generated by each model to interpolate between the black figures four times. We see that our SVAE-SLDS provides varied plausible imputations with corresponding segmentations (colors shared with Fig. 1). The biased SVAE-SLDS [24] only uses a single state and thus cannot produce diverse imputations. The autoregressive SRNN [14] produces varied sequences, but its inability to account for future observations prevents it from smoothly interpolating into the observed sequence end.
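For completeness, the FID-style scores in Table 2 are Fréchet distances between Gaussians fit to classifier features of real and generated sequences. The generic sketch below assumes feature arrays of shape (num_samples, feature_dim) produced by some domain-specific classifier (the paper's replacements for InceptionV3 are not reproduced here), and all names are illustrative.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_gen):
    """Squared Frechet distance between Gaussians fit to two feature sets."""
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):          # discard tiny imaginary numerical noise
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))
```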
The biased SVAE-SLDS collapses to a single discrete state in training (as does the SIN-SLDS), resulting in uniform imputations that lack diversity. The permutation-invariance of transformers is visible in their varied performance on these two tasks. A lack of sequential modeling can lead to discontinuities in the generated sequence, which are naturally present in speech. For MOCAP, joint locations are continuous across time, making transformer-generated samples unrealistic (for visualizations, see appendix). ## 9 Discussion The SVAE is uniquely situated at the intersection of flexible, high-dimensional modeling and interpretable data clustering, enabling models which both generate data and help us understand it. Our optimization innovations leverage automatic differentiation for broad applicability, and provide the foundation for learning SVAEs with rich, non-temporal graphical structure in other domains. ## Acknowledgements This research was supported in part by NSF Robust Intelligence Award No. IIS-1816365, and by the HPI Research Center in Machine Learning and Data Science at UC Irvine.
2304.08449
Ab-initio Simulations of Coherent Phonon-Induced Pumping of Carriers in Zirconium Pentatelluride
Laser-driven coherent phonons can act as modulated strain fields and modify the adiabatic ground state topology of quantum materials. Here we use time-dependent first-principles and effective model calculations to simulate the effect of the coherent phonon induced by strong terahertz electric field on electronic carriers in the topological insulator ZrTe$_5$. We show that a coherent $A_\text{1g}$ Raman mode modulation can effectively pump carriers across the band gap, even though the phonon energy is about an order of magnitude smaller than the equilibrium band gap. We reveal the microscopic mechanism of this effect which occurs via Landau-Zener-St\"uckelberg tunneling of Bloch electrons in a narrow region in the Brillouin zone center where the transient energy gap closes when the system switches from strong to weak topological insulator. The quantum dynamics simulation results are in excellent agreement with recent pump-probe experiments in ZrTe$_5$ at low temperature.
Tao Jiang, Peter P. Orth, Liang Luo, Lin-Lin Wang, Feng Zhang, Cai-Zhuang Wang, Jin Zhao, Kai-Ming Ho, Jigang Wang, Yong-Xin Yao
2023-04-17T17:17:59Z
http://arxiv.org/abs/2304.08449v2
# _Ab-initio_ Simulations of Coherent Phonon-Induced Pumping of Carriers in ZrTe\({}_{5}\) ###### Abstract Laser-driven coherent phonons can act as modulated strain fields and modify the adiabatic ground state topology of quantum materials. We use time-dependent first-principles and effective model calculations to simulate the effect of a strong terahertz electric field on electronic carriers in the topological insulator ZrTe\({}_{5}\). We show that a coherent \(A_{1g}\) Raman mode modulation can effectively pump carriers across the band gap, even though the phonon energy is about an order of magnitude smaller than the equilibrium band gap. We reveal the microscopic mechanism of this effect, which occurs via Landau-Zener-Stückelberg tunneling of Bloch electrons in a narrow region in the Brillouin zone center where the transient energy gap closes when the system switches from strong to weak topological insulator. The quantum dynamics simulation results are in excellent agreement with recent pump-probe experiments in ZrTe\({}_{5}\) at low temperature. ## I Introduction Coherent phonons that are excited by laser pulses in the THz or mid-infrared frequency range [1; 2; 3] can provide nonthermal pathways for the dynamical control of quantum phases of condensed matter [4; 5; 6; 7]. Recent experimental demonstrations include ultrafast phononic manipulation of magnetic orders [8; 9; 10], of insulator-to-metal phase transitions [11; 12; 13], and a transient enhancement of a martensitic phase [14] and superconducting correlations [15; 16; 17; 18]. In topological quantum materials, coherent phonon excitations were shown to induce dynamical switching between different topological phases by modifying the crystal symmetry and by tuning strain fields [19; 20; 21; 22; 23; 24; 25; 26; 27]. In particular, recent coherent phonon pumping work provides compelling evidence of light-induced Dirac points [25], Weyl nodes [26], and enhanced stability of topological systems [28; 29]. The theoretical understanding and first-principles simulations of light-excited electron-ion quantum systems are challenging, but significant progress has been achieved in recent years [30; 31; 32; 33; 34]. Specifically, to describe the ultrafast electronic and spin dynamics associated with phonon excitations, microscopic theories have been developed for the light-induced insulator-to-metal structural phase transition [35], the switching of magnetic orders [36], and the enhancement of superconducting correlations through symmetry-allowed electron-phonon coupling [37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. Here, we theoretically investigate the switching between strong and weak topological insulators (STI and WTI) induced by THz-driven coherent Raman phonon excitations in the model Dirac system ZrTe\({}_{5}\) [25]. Although a qualitative picture has previously been established using static density functional theory (DFT) calculations [25], the ultrafast dynamics of the laser-driven system, such as the observed continuous increase of the electronic carrier density after the THz pump, and the underlying mechanism call for more in-depth quantum dynamics simulations. Progress in this direction is highly appealing to the experimental community, who have been actively pursuing THz-driven quantum dynamics in various quantum materials recently [47; 48; 49; 50]. In this paper, we employ time-dependent DFT simulations to simulate the coherent phonon-induced carrier dynamics in ZrTe\({}_{5}\).
We complement the DFT calculations by dynamics simulations of an effective model that captures the essentials of the microscopic mechanism. Our detailed numerical analysis shows that the switching between STI and WTI, which necessarily involves the closing of the bulk gap, creates a small but finite volume in momentum space, where effective two-level systems (TLSs) undergo avoided level crossings. This results in Landau-Zener-Stückelberg (LZS) tunneling [51; 52; 53; 54] and leads to an increase of the carrier concentration during several cycles of the coherent phonon modulation. Our time-dependent DFT simulations predict the dynamics of the phonon-induced carrier concentration in quantitative agreement with experiment. ## II Summary of previous pump-probe experimental results To facilitate the presentation, we summarize the key results of the THz pump-THz probe experiment on ZrTe\({}_{5}\) at 4.2 K, which is described in detail in Ref. [25]. These results motivate the numerical simulations in this work. In the experiment, an intense THz-pump pulse with an \(E\)-field trace plotted in Fig. 1 is incident normally on the ZrTe\({}_{5}\) single crystal. The THz pump-induced coherent phonon emission from the sample is observed after the pump pulse between 2.5 ps \(\lesssim t\lesssim\) 5.8 ps, as highlighted in Fig. 1. The coherent phonon emission lasts for about five full cycles and its dominant spectral peak at \(f_{\rm ph}=1.2\) THz after Fourier transformation matches the \(A_{1g}\) Raman mode at the Brillouin zone center. By performing THz pump and THz probe measurements using the same pump pulse, the THz probe differential transmission \(\Delta E/E_{0}\), which is proportional to the change of carrier density \(\Delta n\), is obtained and plotted in Fig. 1. The carrier density continuously increases after the pump pulse as long as the coherent phonon emission is observed. It saturates for \(t\gtrsim 5.8\) ps, which coincides with the loss of phonon coherence. \(\Delta E/E_{0}\) decays back to zero after about 120 ps [25]. In this paper, we focus on simulating this intriguing carrier excitation dynamics for the time period 2.5 ps \(\lesssim t\lesssim\) 5.8 ps, where the coherent phonon excitation is present. The residual pump pulse is negligible during this time period, and the coherent phonon excitation can be treated as preexisting, i.e., without explicitly modelling the light-driven phonon generation process. The strong correlation between the carrier generation and the coherent phonon emission suggests a charge excitation mechanism assisted by a coherent Raman vibration. Indeed, by adiabatically following the \(A_{1g}\) phonon trajectory, static DFT calculations have revealed that the electronic state of the system undergoes a topological transition from STI through a critical Dirac point (DP) to WTI [25] (see also Appendix A). This suggests the importance of the associated closing of the bulk band gap and potentially further topological effects in the carrier pumping process [25]. This makes detailed quantum dynamics simulations of the physical process highly desirable. ## III Effective model description ### Model setup We first study a toy model that qualitatively captures the dynamical carrier generation observed experimentally in ZrTe\({}_{5}\). This model includes the essential physics underlying this phenomenon, which is the topological phase transition (driven by coherent phonon oscillations) and a resulting inter-band charge excitation.
For simplicity, we consider a Kitaev chain model [55], which in momentum space is represented by the following two-orbital spinless fermion Bogoliubov-de Gennes (BdG) Hamiltonian: \[H(k)=(-2\nu\cos(k)-\mu)\tau_{z}+2\Delta\sin(k)\tau_{y}\,. \tag{1}\] Here \(\tau_{i}\) are the Pauli matrices, and the Hamiltonian parameters include the onsite energy \(\mu\), the nearest-neighbor hopping \(\nu\), and the superconducting pairing amplitude \(\Delta\). The momentum lies in the range \(k\in[-\pi,\pi)\). The model obeys particle-hole symmetry \(\tau_{x}H^{*}(-k)\tau_{x}=-H(k)\). With the chemical potential fixed at zero, the model exhibits a topological phase transition from a gapped superconductor that is trivial (NI) to one that is topological (TI) by tuning \(\mu\). The corresponding BdG band structures together with the orbital (\(\tau_{z}\)) projections of the wavefunction are shown in Fig. 2. This evolution of the band energies is qualitatively similar to the phonon-induced topological phase transition in ZrTe\({}_{5}\) obtained from DFT calculations [25, 56]. The band structures in panels (a) to (d) are obtained for \(\mu/\nu\) equal to \(-2.02\), \(-2.00\), \(-1.97\), and \(-1.92\), respectively. The evolution of the band gap as a function of \(\mu/\nu\) is shown in panel (e). The bands in panels (a-d) are plotted together with the weight of the projection of the Bloch wave function onto the first basis orbital (\(\tau_{z}=1\)), as given by the size of the red circles. Clearly, a band inversion occurs when the system transforms from the trivial phase (\(\mu/\nu<-2\)) to the topological phase (\(\mu/\nu>-2\)). At \(\mu/\nu=-2\), the band gap closes and a nodal point forms at the \(\Gamma\) point. Consistently, the topological index, which can be defined as \(Q\equiv\text{sign}(\mu^{2}-4\nu^{2})\) [55], is 1 in the trivial and \(-1\) in the topological phase, as labelled in panels (a,c,d). For the numerical simulations, we set \(\nu=1\) eV and \(\Delta/\nu=0.1\), which results in a mode speed \(\partial\varepsilon_{k}/\partial k=\pm 2\Delta\) at the Dirac point (\(\mu/\nu=-2\)) that is in qualitative agreement with the Fermi velocity in ZrTe\({}_{5}\).

Figure 1: **THz pump-THz probe experimental results of ZrTe\({}_{5}\) at 4.2 K.** (a) Normalized THz pump \(E\)-field as a function of pump delay time \(t\), with the maximal value of \(E(t)\) being \(E_{\rm max}=736\) kV/cm. (b) Phonon emission as a function of \(t\). The time-period after the pump pulse, where a coherent phonon emission is observed, is highlighted in red. This region is the focus of this work. (c) Normalized THz probe differential transmission \(\Delta E(t)/E_{0}\) as a function of \(t\). \(\Delta E(t)\) is the differential transmission of the THz probe peak field \(E_{0}\) measured with and without the THz pump pulse. The right \(y\)-axis labels the corresponding change of the carrier density \(\Delta n\) in units of \(10^{16}\) cm\({}^{-3}\).

To model the effect of the coherent phonon excitation in ZrTe\({}_{5}\), we consider a periodic modulation of the onsite energy \[\mu(t)=\mu_{0}+\mu_{1}\sin(\omega(t-t_{0})) \tag{2}\] resulting in the time-dependent Hamiltonian \[H(k,t)=(-2\nu\cos(k)-\mu(t))\tau_{z}+2\Delta\sin(k)\tau_{y}\,. \tag{3}\] We choose \(\omega/2\pi=1.2\) THz (\(T=833\) fs) to match the experimental value of the \(A_{1\text{g}}\) Raman mode frequency. We set \(\mu_{0}/\nu=-1.97\) and \(\mu_{1}=0.05\), such that the gap variation at the \(\Gamma\) point is approximately the same as in ZrTe\({}_{5}\) [25], as shown in Fig. 2(c).
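A compact numerical sketch of Eqs. (1)-(3) with the parameter values quoted above (\(\nu=1\) eV, \(\Delta/\nu=0.1\), \(\mu_{0}/\nu=-1.97\), and a 1.2 THz drive) is given below. Time is measured in seconds and energies in eV, \(\mu_{1}=0.05\) is read as being in units of \(\nu\) (an assumption), and all variable names are illustrative.

```python
import numpy as np

tau_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
tau_z = np.array([[1.0, 0.0], [0.0, -1.0]])

nu, delta = 1.0, 0.1            # eV (Delta/nu = 0.1)
mu0, mu1 = -1.97, 0.05          # assumed to be in units of nu
f_ph = 1.2e12                   # Hz, A_1g Raman mode
omega_ph = 2.0 * np.pi * f_ph   # rad/s; T = 1/f_ph ~ 833 fs

def mu_of_t(t, t0=0.0):
    """Coherent-phonon modulation of the onsite energy, Eq. (2); t in seconds."""
    return nu * (mu0 + mu1 * np.sin(omega_ph * (t - t0)))

def hamiltonian(k, t):
    """Driven BdG Kitaev-chain Bloch Hamiltonian, Eq. (3)."""
    return (-2.0 * nu * np.cos(k) - mu_of_t(t)) * tau_z + 2.0 * delta * np.sin(k) * tau_y

def band_gap(k, t):
    e = np.linalg.eigvalsh(hamiltonian(k, t))
    return e[1] - e[0]

def topological_index(mu):
    """Q = sign(mu^2 - 4 nu^2): +1 trivial, -1 topological."""
    return np.sign(mu**2 - 4.0 * nu**2)

print(band_gap(0.0, 0.0), topological_index(mu_of_t(0.0)))
```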
Because the zone-center phonon carries zero momentum (\(q=0\)), it does not mix different Bloch momenta and the Hamiltonian \(H(k,t)\) thus remains block diagonal in momentum space. The quantum dynamics simulation can therefore be performed by solving the time-dependent Schrödinger equation separately at each \(k\)-point: \[i\hbar\frac{\partial}{\partial t}\ket{\psi(k,t)}=H(k,t)\ket{\psi(k,t)}\,. \tag{4}\] We adopt a discrete-time propagator based on a Trotter decomposition of the state evolution [57; 58] \[\ket{\psi(k,t+dt)}=e^{-i\frac{H(k,t)}{\hbar}dt}\ket{\psi(k,t)}, \tag{5}\] where the time step \(dt\ll T\) is chosen sufficiently small compared to the variation of the onsite energy. ### Dynamics simulation results The simulation starts at time \(t=t_{0}\) and evolves until time \(t=t_{0}+5T\) to agree with the experimental situation (see Fig. 1). We monitor the time-dependent excited state population \(n_{e}(t)\), which corresponds to the excited carrier density in the experiment and is tied to the differential transmission \(\Delta E/E_{0}\) that is measured experimentally. In our simulations we obtain \(n_{e}(t)=\sum_{k}w_{k}n_{e}(k,t)\) as a weighted sum of contributions \(n_{e}(k,t)\) at each \(k\)-point in the Brillouin zone \([-\pi,\pi)\). The weight \(w_{k}\) is obtained as \(1/N_{k}\) with \(N_{k}\) the total number of \(k\)-points uniformly sampled in the Brillouin zone. Here \(n_{e}(k,t)=|\langle\psi(k,t)|\psi_{c}(k,t)\rangle|^{2}\) is the size of the projection of the one-electron wavefunction \(\psi(k,t)\) on the adiabatic conduction band \(\psi_{c}(k,t)\), which is an eigenstate of \(H(k,t)\) with eigenvalue \(\epsilon_{c}(k,t)\). We use a step size \(dt=T/1000=0.833\) fs and a uniform \(k\)-mesh with \(N_{k}=560\) points, and find that \(n_{e}(t)\) converges to a precision of \(10^{-6}\). In Fig. 2(f) we plot the excited state population \(n_{e}(k,t)\) as a function of time \(t\). We consider five modulation cycles of the onsite energy \(\mu(t)\) starting with \(t_{0}=0\). The black line represents the data at a momentum point close to the zone center \(k=0.0033\pi\), and the orange line is for momentum \(k=0.033\pi\). The corresponding (instantaneous) energy gap \(E_{G}(k,t)\) at the respective \(k\)-points versus \(t\) is shown in Fig. 2(g).
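The curves in Fig. 2(f) follow from exactly this protocol. The sketch below is a self-contained (and unoptimized) numpy/scipy version of Eqs. (4)-(6): it prepares the valence-band state at \(t_{0}\), applies the discrete-time propagator of Eq. (5), and projects onto the instantaneous conduction band. Units are eV and seconds, \(\mu_{1}\) is again read in units of \(\nu\) (an assumption), and all names are illustrative.

```python
import numpy as np
from scipy.linalg import expm, eigh

HBAR = 6.582119569e-16          # eV * s
tau_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
tau_z = np.array([[1.0, 0.0], [0.0, -1.0]])
nu, delta, mu0, mu1 = 1.0, 0.1, -1.97, 0.05   # eV; mu0, mu1 in units of nu
omega_ph = 2.0 * np.pi * 1.2e12               # rad/s

def hamiltonian(k, t, t0=0.0):
    mu_t = nu * (mu0 + mu1 * np.sin(omega_ph * (t - t0)))
    return (-2.0 * nu * np.cos(k) - mu_t) * tau_z + 2.0 * delta * np.sin(k) * tau_y

def excited_population(k, t0=0.0, cycles=5, steps_per_cycle=1000):
    """Trotter propagation of Eq. (5) and projection n_e(k, t) on the conduction band."""
    T = 2.0 * np.pi / omega_ph
    dt = T / steps_per_cycle
    _, vecs = eigh(hamiltonian(k, t0, t0))
    psi = vecs[:, 0]                              # start in the valence band at t0
    pops = []
    for n in range(cycles * steps_per_cycle):
        t = t0 + n * dt
        psi = expm(-1.0j * hamiltonian(k, t, t0) * dt / HBAR) @ psi
        _, vecs = eigh(hamiltonian(k, t + dt, t0))
        pops.append(abs(np.vdot(vecs[:, 1], psi))**2)   # overlap with conduction band
    return np.array(pops)

# the two representative momenta shown in Fig. 2(f)
for k in (0.0033 * np.pi, 0.033 * np.pi):
    print(k / np.pi, excited_population(k)[-1])
# Eq. (6) corresponds to averaging such curves over a uniform k-mesh (N_k = 560 above).
```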
The above calculation is thus composed of a collection of independent and periodically-driven two-level systems (TLSs), which resembles the well-known problem of LZS tunneling of a driven TLS in the presence of an avoided crossing [59]. The behavior of the TLS is largely determined by the potential ramp speed (i.e. the oscillation frequency of the drive) and the minimal energy gap in the avoided crossing. While the ramp speed is set by the phonon frequency (1.2 THz), the minimal energy gap \(E_{G}^{\text{min}}(k)\equiv\min_{t\in[0,T]}E_{G}(k,t)\) and the nature of the diabatic level crossing is strongly \(k\)-point dependent. In Fig. 2(i), we show the minimal gap \(E_{G}^{\text{min}}\) versus \(k\) in the range \(k\in[-\pi/20,\pi/20]\). The minimal gap changes almost linearly from to zero in the zone center (\(k=0\)) to about 60 meV at \(k=\pm\pi/20\). For reference, we also plot the 1.2 THz phonon mode energy (\(E_{\text{ph}}=4.1\) meV) as a dotted horizontal line. These findings suggest that the contribution to the excited carrier density \(n_{e}(t)\) arises from a small part in momentum space around the zone center. To obtain a more quantitative analysis, we define the following time-averaged and \(k\)-resolved excited state density \[W_{n}(k)\equiv\frac{\int_{0}^{5T}n_{e}(k,t)dt}{\int_{0}^{5T}n_{e}(t)dt}\,. \tag{7}\] We plot \(W_{n}(k)\) in Fig. 2(j) for a simulation with \(t_{0}=0\), and find that it peaks in a \(\Gamma\)-centered narrow \(k\)-region. Interestingly, peaks occur in a wider range than naively expected by the condition that \(E_{G}^{\rm min}(k)\leq E_{\rm ph}\). This shows that higher order LZS resonances are important, where excitations occur across a minimal band gap that is a multiple of the driving frequency. Following the analysis of LZS tunneling, we define diabatic states \(|\psi(k)\rangle\equiv|\psi(k,t=0)\rangle\). We find that the \(k\)-range, where a diabatic level crossing occurs, \(\langle\tilde{\psi}_{0}(k)|\,H(k,t^{\prime})\,|\tilde{\psi}_{0}(k)\rangle= \langle\tilde{\psi}_{1}(k)|\,H(k,t^{\prime})\,|\tilde{\psi}_{1}(k)\rangle\), at some time \(0<t^{\prime}<5T\), matches well with the region of significant \(W_{n}(k)\) as highlighted in blue. We observe several distinct peaks of \(W_{n}\) in this region: the location of the peak closest to \(\Gamma\)-point (\(k=0.0033\pi\)) and that of the peak furthest away (\(k=0.033\pi\)) correspond to the \(k\)-points chosen for the presentation of \(n_{e}(k,t)\) in (f, g). We indicate these \(k\)-points by black and orange Figure 2: **Toy model results of phonon-induced topological phase transition and carrier excitation dynamics.** (a-d) Band structure of the BdG Kitaev chain model with \(\mu/\nu=-2.02\), \(-2.00\), \(-1.97\), and \(-1.92\). The red color encodes the projection weight of the band wavefunction on the first basis orbital (\(\tau_{z}=1\)). The topological index \(Q=\pm 1\) is also shown. (e) Band gap at the zone center \(\Gamma\) point, \(E_{G}(\Gamma)\), as a function of \(\mu/\nu\in[-2.02,-1.92]\). This behavior qualitatively mirrors the behavior of ZrTe\({}_{5}\) system under the \(A_{1\rm g}\) Raman phonon modulation [25]. Red circles indicate \(\mu/\nu\) values in panels (a-d). The topological region is highlighted in blue. The vertical dashed line indicates the equilibrium value \(\mu_{0}\). (f) Time evolution of the excited state population \(n_{e}(k,t)\) for a periodic modulation \(\mu(t)/\nu\in[-2.02,-1.92]\) starting at \(\mu(t_{0})=\mu_{0}\) for five full cycles. 
The black line is for \(k=0.0033\pi\) and the orange line for \(k=0.033\pi\). (g) Time evolution of the energy gap \(E_{G}(k,t)\) at \(k=0.0033\pi\) (black) and \(0.033\pi\) (orange). (h) Excited state population \(n_{e}(t)\). The black curve is for \(t_{0}=0\) and the orange one is averaged over 10 runs with \(t_{0}\) taken from 10 uniformly spaced points in the interval \([0,T)\). (i) The \(k\)-point dependence of the minimal energy gap \(E_{G}^{\rm min}(k)\equiv\min_{t\in[0,T]}E_{G}(k,t)\) over the simulation period. The horizontal dotted line in (e,h) indicates the energy of the \(A_{1\rm g}\) phonon \(E_{\rm ph}\). (j) The \(k\)-point resolved number density in the excited band, which is defined as \(W_{n}(k)\equiv\int_{0}^{5T}n_{e}(k,t)dt/\int_{0}^{5T}n_{e}(t)dt\). The dashed vertical lines located at \(k=m\times 0.0033\pi\) indicate the \(k\)-points where \(E_{G}^{\rm min}(k)=mE_{\rm ph}\) is a multiple of the phonon energy, with \(m=1,2,\ldots,10\). The momentum space region where a diabatic level crossing occurs is highlighted in blue, where the diabatic state is defined as \(|\psi(k,t=0)\rangle\). The black and orange vertical dashed lines denote the \(k\)-point location of the peak closest and furthest away from the \(\Gamma\)-point, respectively. These momenta are used in panels (f, g).

The peak at \(k=0.0033\pi\) can be explained by the minimal gap being resonant with the coherent phonon frequency: \(E_{G}^{\rm min}(k)=E_{\rm ph}\). The \(\Gamma\) point, where the band gap closes exactly, is a location of a local minimum (but the value of \(W_{n}\) is still substantial). As illustrated by the vertical dashed lines, the largest peak at \(k=5\times 0.0033\pi\) corresponds to \(E_{G}^{\rm min}(k)=5E_{\rm ph}\), while the far right peak at \(k=0.033\pi\) corresponds to \(E_{G}^{\rm min}(k)=10E_{\rm ph}\). To summarize, our analysis demonstrates the crucial role of the phonon-induced topological band closing for carrier excitation. This creates a finite momentum space volume where effective TLSs experience an avoided level crossing with a sufficiently small band gap such that carriers can be excited through LZS tunneling. ## IV First-principles quantum dynamics simulations ### Model and ab initio simulation method To gain a more material-specific understanding of the carrier excitation dynamics of the phonon-modulated ZrTe\({}_{5}\) system, we carry out first-principles simulations within the time-dependent Kohn-Sham (TDKS) DFT framework. The time-dependence of the KS Hamiltonian \(\hat{\mathcal{H}}_{0}(t)\) is encoded in the ionic trajectory \(\mathbf{R}(t)\) that is set by the \(A_{1g}\) coherent phonon. In the implementation of DFT for periodic systems one often adopts a basis set with large dimension, such as plane waves. This renders a direct manipulation of the TDKS Hamiltonian \(\hat{\mathcal{H}}_{0}(t)\) cumbersome. Note that \(\hat{\mathcal{H}}_{0}(t)\) generally covers higher-energy unoccupied states and deeper occupied states, which are likely irrelevant for the carrier excitation dynamics in phonon-modulated ZrTe\({}_{5}\), which we expect to be dominated by states close to the chemical potential.
Standard tight-binding downfolding approaches, such as maximally localized Wannier function [60] and quasi-atomic minimal basis-set orbitals method [61; 62], can be useful; but the downfolding calculation for many snapshots along the trajectory \(\mathbf{R}(t)\) in the simulation time period can be time-consuming, and the time-dependence of the downfolded orbitals introduces additional complexity. Here we adopt an alternative representation where the component of the dynamical electronic state \(\ket{\Psi(t)}=\otimes_{\mathbf{k}}\ket{\Psi(\mathbf{k},t)}\) is approximated as a linear combination of \(N_{\rm b}\) adiabatic states \(\{\ket{\Phi_{i}(\mathbf{k},t)}\}\) generalized to a generic \(\mathbf{k}\)-point from Ref. [63; 64]: \[\ket{\Psi(\mathbf{k},t)}=\sum_{i=1}^{N_{\rm b}}c_{i}(\mathbf{k},t)\ket{\Phi_{ i}(\mathbf{k},t)}, \tag{8}\] where \(\ket{\Phi_{i}(\mathbf{k},t)}\equiv\prod_{\mu\in S_{i}}\phi_{\mu}^{\dagger}( \mathbf{k},t)\ket{0}\) is a noninteracting single Slater determinant state defined by a set \(S_{i}\) of occupied KS orbitals \(\phi_{\mu}(\mathbf{k},t)\), which satisfies \(\hat{\mathcal{H}}_{0}(\mathbf{k},\mathbf{R}(t))\ket{\phi_{\mu}(\mathbf{k},t)} =\epsilon_{\mu}(\mathbf{k},t)\ket{\phi_{\mu}(\mathbf{k},t)}\). Here the crystal momentum \(\mathbf{k}\) is conjugate to the position vector of the simulation unit cell. The propagation of \(\ket{\Psi(\mathbf{k},t)}\) is encoded in the time-dependent complex amplitudes \(c_{i}(\mathbf{k},t)\) and the adiabatic states \(\ket{\Phi_{i}(\mathbf{k},t)}\). Substituting Eq. (8) into the time-dependent Schrodinger equation leads to the equation of motion (EOM) of the amplitudes \[i\hbar\frac{\partial c_{i}(\mathbf{k},t)}{\partial t}=\sum_{j=1}^{N_{\rm b}}H_ {ij}(\mathbf{k},t)c_{j}(\mathbf{k},t)\,. \tag{9}\] The vibronic Hamiltonian is given by \[H_{ij}(\mathbf{k},t)=\varepsilon_{i}(\mathbf{k},t)\delta_{ij}-i\hbar d_{ij}( \mathbf{k},t)\,. \tag{10}\] Here, we define \(\varepsilon_{i}(\mathbf{k},t)=\sum_{\mu\in S_{i}}\epsilon_{\mu}(\mathbf{k},t)\). The complex nonadiabatic coupling (NAC) coefficient between a pair of distinct states \(\{\ket{\Phi_{i}},\ket{\Phi_{j}}\}\) is given by \(d_{ij}=\bra{\Phi_{i}}\frac{\partial}{\partial t}\ket{\Phi_{j}}\), which is nonzero only if there is exactly one distinct occupied KS orbital between \(\ket{\Phi_{i}}\) and \(\ket{\Phi_{j}}\) due to the single Slater determinant nature [65; 66]. The NAC can be conveniently evaluated using the finite-difference method [65]: \[d_{ij}(\mathbf{k},t)\approx\frac{1}{2dt}(\bra{\Phi_{i}(\mathbf{k },t)}\Phi_{j}(\mathbf{k},t+dt))\] \[-\bra{\Phi_{i}(\mathbf{k},t+dt)}\Phi_{j}(\mathbf{k},t))\,, \tag{11}\] which is completely determined by the state overlap matrix between consecutive time steps. ### Technical details of the simulation The computational complexity of the simulation is tied to the number \(N_{\rm b}\) of adiabatic states that are used in Eq. (8). The Raman \(A_{1g}\) phonon in ZrTe\({}_{5}\) has a frequency of 1.2 THz, which is one order of magnitude smaller than the equilibrium band gap. We can thus truncate the expansion at the level of single-electron excitations between low energy bands. We include configurations with one electron excited from the top valence band to one of the four lowest conduction bands. We thus keep \(N_{\rm b}=5\) adiabatic states \(\Phi_{i}(\mathbf{k},t)\) in the expansion at each \(\mathbf{k}\)-point. 
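As a rough illustration of how Eqs. (9)-(12) can be integrated in practice, the minimal Python sketch below propagates the amplitudes with a short-time exponential propagator, evaluating the NAC from orbital overlaps at consecutive time steps; a generic two-level model Hamiltonian stands in for the Kohn-Sham problem here, and the helper names (`fix_phase`, `nac_matrix`, `propagate`) are ours, not part of the code used for this work.

```python
# Minimal sketch (illustrative only, hbar = 1): propagate c_i(t) of Eq. (9)
# with the vibronic Hamiltonian of Eq. (10), where the NAC d_ij is obtained
# from orbital overlaps at consecutive time steps as in Eqs. (11)-(12).
# A generic 2x2 model Hamiltonian replaces the Kohn-Sham problem.
import numpy as np
from scipy.linalg import expm

def fix_phase(phi_prev, phi_now):
    """Rotate each column orbital by a phase so that its overlap with the same
    orbital at the previous step is real and positive (orbital continuity)."""
    out = phi_now.copy()
    for i in range(phi_now.shape[1]):
        o = np.vdot(phi_prev[:, i], phi_now[:, i])
        out[:, i] *= np.conj(o) / max(abs(o), 1e-12)
    return out

def nac_matrix(phi_t, phi_tdt, dt):
    """Finite-difference NAC: d_ij ~ (<i(t)|j(t+dt)> - <i(t+dt)|j(t)>) / (2 dt)."""
    return (phi_t.conj().T @ phi_tdt - phi_tdt.conj().T @ phi_t) / (2.0 * dt)

def propagate(H_of_t, t_grid, c0):
    """Step the amplitudes with the short-time propagator exp(-i H_vib dt)."""
    dt = t_grid[1] - t_grid[0]
    eps, phi = np.linalg.eigh(H_of_t(t_grid[0]))
    c, populations = np.array(c0, dtype=complex), []
    for t in t_grid[:-1]:
        eps_next, phi_next = np.linalg.eigh(H_of_t(t + dt))
        phi_next = fix_phase(phi, phi_next)
        H_vib = np.diag(eps) - 1j * nac_matrix(phi, phi_next, dt)  # Eq. (10)
        c = expm(-1j * H_vib * dt) @ c                             # Eq. (9)
        eps, phi = eps_next, phi_next
        populations.append(np.abs(c) ** 2)
    return np.array(populations)

# toy avoided crossing swept through its minimal gap by a periodic "phonon"
H_of_t = lambda t: np.array([[0.05 * np.cos(t), 0.01], [0.01, -0.05 * np.cos(t)]])
pops = propagate(H_of_t, np.linspace(0.0, 30.0, 3000), c0=[1.0, 0.0])
print("final populations:", pops[-1])
```

In the actual simulations the orbitals are Kohn-Sham orbitals obtained from DFT snapshots along \(\mathbf{R}(t)\), and the propagation is carried out independently at every \(\mathbf{k}\)-point.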
Each \(\Phi_{i}\) differs by exactly one occupied KS orbital, which therefore can also be labelled by the same index \(i\), with \(i=1\) corresponding to the top valence band, and \(i=2\dots N_{\rm b}\) to the conduction bands in ascending order of energy. The vibronic Hamiltonian (10) can be simplified by setting \(\varepsilon_{i}(\mathbf{k},t)=\epsilon_{i}(\mathbf{k},t)\) as a rigid potential shift. The evaluation of the NAC (11) can also be reduced to \[d_{ij}(\mathbf{k},t)\approx\frac{1}{2dt}(\bra{\phi_{i}(\mathbf{ k},t)}\phi_{j}(\mathbf{k},t+dt))\] \[-\bra{\phi_{i}(\mathbf{k},t+dt)}\phi_{j}(\mathbf{k},t))\,\,, \tag{12}\] which depends only on the \(N_{\rm b}\) KS orbitals that are kept at each \(\mathbf{k}\)-point. The EOM (9) assumes the continuity of the time dependent basis \(\ket{\Phi_{i}(\mathbf{k},t)}\) with \(t\). Therefore, it is crucial to fix the phase degree of freedom of the KS orbitals \(\phi_{i}(\mathbf{k},t)\). This can be achieved by consecutively applying a phase factor to \(\phi_{i}(\mathbf{k},t+dt)\to e^{i\theta}\phi_{i}(\mathbf{k},t+dt)\), where the phase \(e^{i\theta}\equiv O_{i}^{*}/|O_{i}|\) with \(O_{i}=\left\langle\phi_{i}(\mathbf{k},t)\right|\phi_{i}(\mathbf{k},t+dt)\right\rangle\) is determined by the overlap with the same orbital at the previous time step. Additional complexity of the dynamics simulations originates from the presence of time-reversal and inversion symmetry, which renders every band doubly degenerate, and strong spin-orbit coupling in ZrTe\({}_{5}\). The ambiguity in the doubly degenerate bands can be partially fixed by choosing a \(S_{z}\)-gauge such that the \(2\times 2\) spin \(S_{z}\) matrix becomes diagonal in each doubly degenerate manifold via a unitary transformation. Numerically, we find that the \(S_{z}\) gauge transformation is not sufficient to guarantee the orbital continuity along the dynamical path. Therefore, we propose the following way to better address the band degeneracy problem. The simulation starts with orbitals in the \(S_{z}\)-gauge, and apply unitary transformation in each doubly degenerate manifold in all following time steps, such that each rotated orbital \(\phi_{i}(\mathbf{k},t+dt)\) has maximal overlap with the same one at the previous step. This is achieved by diagonalizing a series of \(2\times 2\) matrices \(P_{rs}^{(j)}=\left\langle\phi_{r}^{(j)}(\mathbf{k},t+dt)\right|\hat{P}_{j} \left|\phi_{s}^{(j)}(\mathbf{k},t+dt)\right\rangle\) with \(\hat{P}_{j}=\left|\phi_{1}^{(j)}(\mathbf{k},t)\right\rangle\left\langle\phi_{1 }^{(j)}(\mathbf{k},t)\right|\). Here \(r,s\in[1,2]\) run through the two orbitals in the \(j\)th degenerate doublet. The diagonalization gives two eigenvectors, where one has a nonzero eigenvalue and is assigned to the first orbital of the \(j\)th doublet. The other eigenvector has zero eigenvalue, and is assigned to the second orbital in the doublet. Following this procedure, we numerically find that the self-overlap of each wavefunction at consecutive time steps always remains above 99.99%. Figure 3: **Carrier excitation dynamics of phonon-modulated ZrTe\({}_{5}\) system from first-principles simulations.** (a) Occupancy of the two lowest-energy conduction bands, \(n_{e}(i,\mathbf{k},t)\) with \(i=2,3\), as a function of simulation time \(t\) for 5 phonon periods starting with \(t_{0}=0\) ( the equilibrium configuration with zero phonon displacement) at \(\mathbf{k}=\mathbf{b}_{1}/720+\mathbf{b}_{2}/720+\mathbf{b}_{3}/80\). 
Here \((\mathbf{b}_{1},\mathbf{b}_{2},\mathbf{b}_{3})\) are the reciprocal primitive vectors (see Appendix A for details). (b) Time-dependence of the complex NAC amplitude \(\hbar d_{13}(\mathbf{k},t)\) between the top valence band (\(i=1\)) and lowest conduction band (\(i=3\)) at the same \(\mathbf{k}\)-point as (a), with real part in yellow and imaginary part in red. The background dark blue to light blue indicates a band gap, \(E_{\mathrm{G}}=\varepsilon_{2}(\mathbf{k},t)-\varepsilon_{1}(\mathbf{k},t)\), varying from 27 meV to 132 meV. (c) The total excited states population \(n_{e}(t)\) integrated over the Brillouin zone as a function of \(t\) for the simulation starting with \(t_{0}=0\). (d) The time-evolution of the normalized \(\bar{n}_{e}(t)/\bar{n}_{e}(5T)\) (black line), where \(\bar{n}_{e}(t)\) is the total excited states population \(n_{e}(t)\) averaged over simulations starting with \(t_{0}\) uniformly sampled at 10 points within a phonon period. The normalized experimental carrier density change \(\Delta n(t)/\Delta n(5T)\) is also plotted as an orange line. We obtain \(\bar{n}_{e}(5T)=1.2\times 10^{16}\mathrm{cm}^{-3}\) from the theoretical simulation, and \(\Delta n(5T)=(0.28\pm 0.12)\times 10^{-18}\ \mathrm{cm}^{-3}\) estimated from the experiment. ### First-principles quantum dynamics simulation results The analysis of the one-dimensional (1D) toy model results in Sec. III.3 shows that the main contribution to the carriers in the excited band results from LZS tunneling in a narrow region of momentum space around the zone center \(k\in[-\pi/20,\pi/20]\). Expecting a similar behavior for the realistic 3D model of ZrTe\({}_{5}\), we use a dense \(360\times 360\times 120\) uniform \(\mathbf{k}\)-grid covering the full Brillouin zone for the following dynamics simulations and apply a shift of \(\Delta=\frac{0.5}{360}(\mathbf{b}_{1}+\mathbf{b}_{2})+\frac{0.5}{120}\mathbf{b}_{3}\) from the \(\Gamma\)-point to each \(\mathbf{k}\)-point. Here, \(\mathbf{b}_{i}\) are the reciprocal basis vectors. In Fig. 3 we present the excited state population \(n_{e}(i,\mathbf{k},t)\) as a function of simulation time \(t\) for five phonon cycles, starting at \(t_{0}=0\) with zero phonon displacement. Here we define \[n_{e}(i,\mathbf{k},t)\equiv\left|c_{i}(\mathbf{k},t)\right|^{2}, \tag{13}\] which is equivalent to the definition used in the toy model analysis in Sec. III.2. At momentum \(\mathbf{k}=\mathbf{b}_{3}/120+\Delta\) adjacent to the zone center, an electron is gradually excited from the top valence band (\(i=1\)) to the two lowest conduction bands (\(i=2,3\)) due to nonadiabatic effects. The complex NAC amplitude \(\hbar d_{13}(\mathbf{k},t)\) between the 1st and 3rd bands at the same \(\mathbf{k}\)-point is plotted in Fig. 3(b). The yellow line denotes the real part and the red line the imaginary part. The line width indicates the numerical noise, which is found to have negligible impact on the state population dynamics. The maximal difference in the dynamical state populations from the simulation using the (noisy) NAC amplitudes versus using smoothed data via application of a Savitzky-Golay filter is only about \(10^{-3}\). Clearly, Fig. 3(a,b) shows that a sharp transition of the state population \(n_{e}\) occurs at the peaks of the NAC. This also coincides with a minimum of the band gap, \(E_{\mathrm{G}}=\varepsilon_{2}(\mathbf{k},t)-\varepsilon_{1}(\mathbf{k},t)\), as indicated by the blue shading in the background of Fig. 3(b). 
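The orbital-continuity bookkeeping described in the previous subsection can be made concrete with a small sketch: within each doubly degenerate pair at \(t+dt\), the two orbitals are rotated so that the first one has maximal overlap with the first orbital of the pair at time \(t\). The snippet below is our illustration (the helper `rotate_doublet` is hypothetical), using random orthonormal vectors in place of actual Kohn-Sham orbitals.

```python
# Illustrative sketch of the doublet rotation: within a doubly degenerate pair
# at t+dt, form the 2x2 matrix P_rs = <phi_r(t+dt)|phi_1(t)><phi_1(t)|phi_s(t+dt)>,
# diagonalize it, and assign the nonzero-eigenvalue eigenvector to the first
# orbital of the pair (maximal overlap) and the other eigenvector to the second.
import numpy as np

def rotate_doublet(pair_prev, pair_now):
    """pair_prev, pair_now: (dim, 2) arrays holding the two degenerate orbitals
    at t and t+dt; returns the rotated pair at t+dt."""
    a = pair_now.conj().T @ pair_prev[:, 0]      # overlaps with phi_1(t)
    P = np.outer(a, a.conj())                    # rank-1 projector overlap matrix
    vals, vecs = np.linalg.eigh(P)
    U = vecs[:, np.argsort(vals)[::-1]]          # nonzero eigenvalue first
    return pair_now @ U

rng = np.random.default_rng(1)
prev = np.linalg.qr(rng.normal(size=(6, 2)) + 1j * rng.normal(size=(6, 2)))[0]
mix = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))[0]
now = prev @ mix                                 # same subspace, scrambled pair
rotated = rotate_doublet(prev, now)
print(abs(np.vdot(prev[:, 0], rotated[:, 0])))   # close to 1 after rotation
```

In the actual simulation this rotation is applied at each time step so that the overlaps entering Eq. (12) connect continuously evolving orbitals.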
We note that \(d_{12}(\mathbf{k},t)\) has a similar time-dependence to \(d_{13}(\mathbf{k},t)\) [both bands have degenerate energies \(\varepsilon_{2}(\mathbf{k},t)=\varepsilon_{3}(\mathbf{k},t)\)], yet with slightly larger amplitude. This difference in the NAC induces a larger electron occupancy in the \(i=2\) band compared to the one with \(i=3\), as shown in Fig. 3(a). In contrast, the NACs from \(i=1\) to \(i=4,5\) are smaller by more than one order of magnitude, resulting in negligibly small carrier excitations to these bands, \(n_{e}(i,\mathbf{k},t)<10^{-6}\) for \(i=4,5\). The first-principles dynamics simulations allow for a direct and quantitative comparison to experiment. First, in Fig. 3(c) we show the excited state carrier density, \(n_{e}(t)=\sum_{\mathbf{k}}w_{\mathbf{k}}\sum_{i=2}^{3}n_{e}(i,\mathbf{k},t)\), as a function of time \(t\). It increases from zero to about \(n_{e}(t=5T)\approx 2.4\times 10^{16}\mathrm{cm}^{-3}\) at the end of the simulation \(t=5T\) and exhibits qualitatively similar sharp transitions near the dynamical band gap minima as the momentum resolved quantity in Fig. 3(a). Within the \(360\times 360\times 120\) uniform \(\mathbf{k}\)-mesh of the Brillouin zone, we find that the dominant contributions come from \(\mathbf{k}=\frac{l}{360}\mathbf{b}_{1}+\frac{m}{360}\mathbf{b}_{2}+\frac{n}{120}\mathbf{b}_{3}+\Delta\) with \(l,m,n=0,\pm 1,\pm 2,\pm 3\). Next, we account for the fact that in pump-probe experiments, the time-trace of differential transmission is obtained as an average over multiple runs and that ZrTe\({}_{5}\) samples exhibit some degree of electronic heterogeneity and nanostrip features, as observed in the THz nanoimaging [67]. To capture these phenomena on average, we define \(\bar{n}_{e}(t)=\frac{1}{10}\sum_{i=0}^{9}n_{e}(t)\big{|}_{t_{0}=\frac{i}{10}T}\), which is an average over simulations at 10 different starting times. In Fig. 3(d) we directly compare the time-dependence of the normalized \(\bar{n}_{e}(t)/\bar{n}_{e}(t=5T)\) (black) to the experimental data \(\Delta n(t)/\Delta n(t=5T)\) (orange). Both curves exhibit a similar growth pattern of the carrier density over time, and we also find the carrier density at the end of the simulation, \(\bar{n}_{e}(t=5T)\approx 2.4\times 10^{16}\) cm\({}^{-3}\), to be in good agreement with the one estimated from experiment, \(\Delta n(5T)=(0.28\pm 0.12)\times 10^{-16}\) cm\({}^{-3}\), considering that there is electronic heterogeneity present in the experimental ZrTe\({}_{5}\) sample [67]. The numerical estimation of \(n_{e}(t=5T)\) and \(\bar{n}_{e}(t=5T)\) reported here also includes a factor of 2 to take into account the double degeneracy of the top valence band in the ZrTe\({}_{5}\) system. ## V Conclusion We report detailed first-principles and effective model simulations of the carrier excitation dynamics in coherent phonon-modulated ZrTe\({}_{5}\). Our results shed new light on recent pump-probe experiments [25] by providing a clear intuitive explanation of the experimental results. Both first-principles and effective model calculations reveal the importance of the phonon-induced topological phase transition in ZrTe\({}_{5}\) and the associated closing of the bulk gap for the observed excitation of carriers. We show that the excitations occur via Landau-Zener-Stuckelberg tunneling in a series of time-dependent avoided level crossings of Bloch states located in a narrow region of momentum space around the zone center. 
Our detailed time-dependent DFT simulations further show that the dominant tunneling occurs between the highest valence band and the lowest doubly degenerate conduction bands, while excitations to the next higher bands are negligible due to small transition matrix elements. We predict that the carrier density increases gradually with time and reaches a final value of \(1.2\times 10^{16}\mathrm{cm}^{-3}\) at \(t=5T\) when phonon coherence is lost in the experiment. These results are in good quantitative agreement with experiment. Our work thus demonstrates that the coherent charge excitation process in topological quantum materials such as ZrTe\({}_{5}\) can be understood and predicted quantitatively by first-principles quantum dynamics simulations. ## Acknowledgements This work was supported by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division. The research was performed at the Ames Laboratory, which is operated for the U.S. DOE by Iowa State University under Contract No. DE-AC02-07CH11358. ## Appendix A DFT electronic structure and phonon calculations First-principles total energy and electronic-structure calculations for ZrTe\({}_{5}\) are based on DFT with the exchange correlation functional in generalized gradient approximation parametrized by Perdew, Burke, and Ernzerhof (PBE) [68]. Van der Waals interaction is included by Grimme's damped atom-pairwise dispersion corrections (D2) [69]. The calculations are performed using the Vienna _Ab initio_ Simulation Package (VASP) [70]. We use a plane-wave cutoff energy of 230 eV and include spin-orbit coupling for all the calculations. The phonon modes are calculated using the finite displacement approach as implemented in Phonopy [71]. Specifically, we use the primitive unit cell of experimental structure(\(a=3.987\) A, \(b=14.502\) A, and \(c=13.727\) A) [72]. The primitive vectors are \(\mathbf{a}_{1}=(1.994,-7.251,0)\) A, \(\mathbf{a}_{2}=(1.994,7.251,0)\) A, and \(\mathbf{a}_{3}=(0,0,13.727)\) A. The corresponding reciprocal primitive vectors are \(\mathbf{b}_{1}=(0.251,-0.069,0)2\pi\) A\({}^{-1}\), \(\mathbf{b}_{2}=(0.251,0.069,0)2\pi\) A\({}^{-1}\), and \(\mathbf{b}_{3}=(0,0,0.073)2\pi\) A\({}^{-1}\). Highly accurate wavefunctions at specific \(\mathbf{k}\)-points are generated for NAC calculations by setting an energy convergence criterion to \(10^{-9}\) eV and requiring a minimum of 20 electronic steps. To be self-contained, we present the key DFT band structure calculation results for the description of the \(A_{\mathrm{1g}}\) phonon-induced adiabatic topological phase transition observed in ZrTe\({}_{5}\) system as reported in Ref. [25]. Under the modulation of the \(A_{\mathrm{1g}}\) eigenmode as plotted in Fig. 4(f), the band gap \(E_{\mathrm{G}}(\Gamma)\) at zone center closes at distortion parameter \(\lambda=-2.2\) as shown in Fig. 4(e), implying a topological phase transition along the dynamical path. This is confirmed by the band structure analysis and topological invariant index calculation [25]. In Fig. 4(a-d), we plot the band structure along high-symmetry \(\mathbf{k}\)-path \(Z-\Gamma-Y\) at phonon distortion parameter \(\lambda=-3.0,-2.2,0,3.0\), decorated with red color indicating the Te\({}_{d}\)\(5p\)-orbital weight. Band inversion clearly occurs when \(\lambda\) passes through \(\lambda=-2.2\) the Dirac point. For \(\lambda<-2.2\), the adiabatic state of the system is in WTI, and switches to STI for \(\lambda>-2.2\).
2305.07453
Distributed Twins in Edge Computing: Blockchain and IOTA
Blockchain (BC) and Information for Operational and Tactical Analysis (IOTA) are distributed ledgers that record a huge number of transactions in multiple places at the same time using decentralized databases. Both BC and IOTA facilitate Internet-of-Things (IoT) by overcoming the issues related to traditional centralized systems, such as privacy, security, resources cost, performance, and transparency. Still, IoT faces the potential challenges of real-time processing, resource management, and storage services. Edge computing (EC) has been introduced to tackle the underlying challenges of IoT by providing real-time processing, resource management, and storage services nearer to IoT devices on the network's edge. To make EC more efficient and effective, solutions using BC and IOTA have been devoted to this area. However, BC and IOTA came with their pitfalls. This survey outlines the pitfalls of BC and IOTA in EC and provides research directions to be investigated further.
Anwar Sadad, Muazzam A. Khan, Baraq Ghaleb, Fadia Ali Khan, Maha Driss, Wadii Boulila, Jawad Ahmad
2023-04-29T14:06:34Z
http://arxiv.org/abs/2305.07453v1
# Distributed Twins in Edge Computing: Blockchain and IOTA ###### Abstract Blockchain (BC) and Information for Operational and Tactical Analysis (IOTA) are distributed ledgers that record a huge number of transactions in multiple places at the same time using decentralized databases. Both BC and IOTA facilitate Internet-of-Things (IoT) by overcoming the issues related to traditional centralized systems, such as privacy, security, resources cost, performance, and transparency. Still, IoT faces the potential challenges of real-time processing, resource management, and storage services. Edge computing (EC) has been introduced to tackle the underlying challenges of IoT by providing real-time processing, resource management, and storage services nearer to IoT devices on the network's edge. To make EC more efficient and effective, solutions using BC and IOTA have been devoted to this area. However, BC and IOTA came with their pitfalls. This survey outlines the pitfalls of BC and IOTA in EC and provides research directions to be investigated further. Blockchain, IOTA, Distributed Ledger, Edge Computing, IoT, Bitcoin ## I Introduction IoTs is an emerging technology capable of connecting the real world with devices that can generate or transmit data, communicate with one another, and remotely control objects via the Internet in the absence of humans. With the help of Wireless Sensors Networks and Radio Frequency Identification [1], IoT is extensively evolved in healthcare, industries, education, smart homes, smart cities, surveillance, and smart agriculture [2, 3, 4]. For data processing of IoT, cloud computing (CC) serves as a backbone due to its scalability and flexibility. However, there are significant issues associated with CC, like real-time processing, resource allocation, and security of data [5]. To provide real-time data processing, EC has been introduced, which allows data processing on the edge of the network. However, issues associated with its security still exist, such as leakage of sensitive information, denial of service attacks, access control, and privacy of data. Distributed immutable ledgers, such as BCs and IOTA, have recently shown feasibility in overcoming the aforementioned security issues of IoT. BCs use a peer-to-peer (P2P) network for communication which allows one to directly initiate payment and send it to another party without the involvement of any trusted third parties [6]. In other words, it is a distributed database system linked in a P2P network through advanced cryptographic techniques that use a distributed ledger to ensure the security of messages or transactions exchanged over the network [7]. BC (e.g., bitcoin and ethereum) faces challenges when used in small day-to-day transactions (e.g., micropayments). Therefore, the research community proposed IOTA to tackle such problems [8]. IOTA differs significantly from bitcoin because it is not developed based on BC technology. To facilitate low-cost micropayments, the developers of IOTA have developed an entirely different architecture using directed acyclic graph (DAG) known as tangling [9]. To overcome the underlying challenges of EC, including security, privacy, latency, accessibility, data leakage, scalability, resource handling, throughput, reliability, control management, and energy efficiency, many contributions based on BC and IOTA have been introduced in the last few years. 
To the best of our knowledge, no survey or other work exists in the literature that highlights the contributions of BC and IOTA in EC. Therefore, the purpose of this study is to present a survey that overviews recently proposed methods based on BC and IOTA in EC and concludes which technology (BC or IOTA) serves EC best with respect to the chosen parameters. The main contributions of the proposed work are as follows. * To the best of our knowledge, no existing work discusses the contributions of BC and IOTA in EC. * We discuss how BC and IOTA cope with the underlying challenges of EC. * We comparatively analyze the role of BC and IOTA in EC. * Finally, we conclude with future directions for EC using BC and IOTA. The rest of the paper is organized as follows. Section 2 overviews both BC and IOTA and highlights contributions being made in EC using BC and IOTA. Section 3 comparatively analyzes the proposed methods in terms of the chosen parameters. Finally, Section 4 concludes the study with some recommendations. ## II Literature Survey This section briefly discusses BC and IOTA, and explores the contributions being made in EC using BC and IOTA. ### _Blockchain and IOTA_ This section introduces BC and IOTA along with their strengths and weaknesses. The BC technology, as roughly shown in Figure 1, is known for its security, which rests on three aspects: decentralization [10], advanced cryptographic algorithms [11], and consensus protocols [12]. As depicted in Figure 1, each system in a BC network can be represented as a node. Each node holds a ledger that contains information such as the block number, header, timestamp, hash of the previous block, the block's own hash, and the record of transactions made in the chain. BC uses a distributed ledger that facilitates P2P communication without a trusted third party. It is evident that BC is a secure technology based on the principles of hashing and digital signatures. However, due to the progress made in quantum computing, BC security is at risk, as it has been shown recently that some of its underlying cryptographic algorithms can be broken by Grover's and Shor's algorithms [13]. In addition, there are various shortcomings of using BC in IoT, including unfair/high transaction fees and the constrained nature of machine-to-machine communication. Therefore, a new distributed ledger called IOTA has emerged [14], which overcomes the issues of BC concerning IoT and provides secure communication based on a DAG. The underlying features provided by IOTA are scalability, zero-fee transactions, quantum immunity, security, and low resource requirements [15]. ### _Blockchain in Edge Computing_ This section presents the role and contributions of BC in EC, as depicted in Table I. The widespread success of CC is evident; however, it is not an all-in-one solution, with major issues including the centralization of resources and high latency and jitter in gaming, augmented reality, and e-health [16]. EC was introduced to overcome the above-mentioned issues by providing computing power and storage near the network edge [5]. However, EC alone cannot solve the issues regarding the privacy and security of data in IoT. Hence, the focus shifted to exploring how distributed ledger-based technologies, including BC and IOTA, can be used to address such issues. Considering such importance, the authors in [17] presented a survey on the integration of BC and EC showing how problems of data privacy, integrity, leakage, and access control can be solved with such integration. 
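As a toy illustration of the chained-ledger structure recalled above (block number, timestamp, transactions, previous hash, own hash), the following minimal Python sketch shows how tampering with any historical block breaks the hash chain; it is an illustration only, not a fragment of any production blockchain.

```python
# Toy hash-chained ledger (illustration only, not a real blockchain):
# each block stores its index, timestamp, transactions, the previous block's
# hash, and its own hash computed over those fields.
import hashlib, json, time

def block_hash(block):
    payload = {k: block[k] for k in ("index", "timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def new_block(index, transactions, prev_hash):
    block = {"index": index, "timestamp": time.time(),
             "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain):
    return all(b["hash"] == block_hash(b) and b["prev_hash"] == chain[i - 1]["hash"]
               for i, b in enumerate(chain) if i > 0)

chain = [new_block(0, [], "0" * 64)]                                  # genesis block
chain.append(new_block(1, [{"from": "A", "to": "B", "amount": 5}], chain[-1]["hash"]))
chain.append(new_block(2, [{"from": "B", "to": "C", "amount": 2}], chain[-1]["hash"]))
print(chain_is_valid(chain))                                          # True
chain[1]["transactions"][0]["amount"] = 500                           # tamper with history
print(chain_is_valid(chain))                                          # False
```

Because each block's hash covers the previous block's hash, altering any recorded transaction invalidates every later block; this tamper-evidence property is what the BC-based EC solutions surveyed below build on.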
For secured access control, the study in [10] proposed a distributed trusted authentication system using BC in EC. The study guarantees a secured access control and achieve activity traceability of terminals based on dynamic name resolution and elliptic curve cryptography. However, it stores authentication and logs data using a practical Byzantine fault tolerance consensus algorithm, compromise on authentication data may provide unintended access to the network by exploiting desired authentication or log data. To provide efficient authentication in IOTs, the authors in [18] proposed a method called SCAB-IoTA based on BC which employs a secure mechanism to make clusters based on angular distance. To become a member of cluster, each IoT device is required to authenticate itself if it is in the radius of the desired cluster. However, encryption and decryption of SCAB-IoTA consume more energy and the scalability has been set aside respectively. BC restricts adversaries from alteration of data but may compromise stakeholders' data. To preserve the privacy of parties involved in chains, the study in [19] presented a TrustChain solution by combining BC with trust concepts to exclude problems associated with traditional BC architectures. They defined trust as a qualitative or quantitative property of a trustee measured by a trustor for a given task in a specific context and in a specific time period. Fig. 1: Rough Presentation of BC Technology They have used a trust-based consensus management protocol to evaluate trust based on nodes' knowledge, experience, and reputation. One major issue related to such a technique is starvation because communication with the desired node depends on trust. Nodes with high trust can frequently communicate, whereas nodes with low trust will never be able to communicate with other nodes. To tackle resource allocation problem in EC, the study in [11] proposed a three-tier architecture based on BC technology. It consists of a group-agent strategy with trust computing, a stacked task sorting and ranking mechanism, and a secured and efficient content model. Fake edge devices can compromise a group's trustworthiness, and tasks with low ranks may face starvation. Uploading cipher text to the cloud and indexing it using the BC may result in overhead in the case of large-scale data. Multi-access EC [5] introduced an extended form of CC that allows storage services at the network edge to provide low-latency data retrieval. However, trust and incentive are two major problems in collaborative edge storage. To overcome such issues, the authors in [20] have proposed novel collaborative edge storage based on BC to address incentive and trust evaluation using the historical performance of edge servers. Edge server (data off-loader) can publish a task for which other edge servers contend for the task. Reliable edge servers with a good reputation and guaranteed response time can be selected. One major issue in such an approach is when a new edge server wants to enter the ecosystem with no previous performance record. BC can also help in the efficient utilization of resources in edge environments. For the utilization problem, the authors in [21] have proposed a system that can ensure fair and efficient utilization of resources on edge devices rendering it more scalable. They have proposed a data migration algorithm with consensus having low energy consumption in edge devices along with a new proof of stake mechanism. 
Due to high mobility in the edge environment, the nodes are moving in a small range, but the network topology remains the same. To adapt to topological changes, a migration algorithm has been designed to reallocate the data and block storage to devices dynamically. They have also used a cache mechanism to provide recent block allocation, which can reduce the overhead of missing blocks. One issue in such approach is the overflow of cache memory and other is the node having cache is down? Obviously, IoT devices are intended to share massive data with each other to impose quality-of-service [23]. However, data sharing may result in data leakage of providers' data. Such concern is realized in [24], which presented a data sharing model based on a secured data mechanism called "BP2P-FL". It is team-based data sharing with reward and punishment mechanisms that are used to ensure data sharing with high quality and reliability. Team-based data sharing provides a collaborative environment in which a team sponsor initiates a task \begin{table} \begin{tabular}{c|l|c|c|c|c|c|c|c|c|c|c|c} \hline No. & Authors & SEC & DP & LAT & ACC & DL & SCA & RH & THR & REL & CM & EE \\ \hline 1 & Guo, S., et al [10] & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ \\ 2 & Jayasinghe, U., et al [18] & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ \\ 3 & Zhang, L., et al [19] & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ \\ 4 & Yuan, L., et al [11] & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ \\ 5 & Huang, Y., et al [20] & ✗ & ✗ & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ \\ 6 & Miao, Q., et al [21] & ✗ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ \\ 7 & Tzenetopoulos, A., et al [22] & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ \\ 8 & Ahmad, A., et al [12] & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✗ & ✓ \\ 9 & Abdi, A.I., et al [5] & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} TABLE I: Proposed solutions based on BC to tackle underlying challenges of EC and assigns it to its members. Team-sponsor evaluates the capability of members based on their contribution in task completion, and members with high efforts get rewarded, whereas poor participation of members is punished by excluding them from the team. The experimental results show that the proposed method exhibited high accuracy and enhanced privacy in IoT. ### _IOTA in Edge Computing_ This section addresses the contributions of IOTA in EC as depicted in Table II. As illustrated in Figure 2, each system in IOTA can be presented as node. There are three phases namely, Tip, Unconfirmed, and Fully Confirmed through which each node passes. Initially, when the nodes are added it passes through Tip phase, then Confirmed and when all nodes confirm the authenticity of the added node, it goes to Fully Confirmed phase and become part of the network. There are various challenges in integrating of BC with EC, such as scalability, latency, high energy consumption, fairness, and sensitivity to quantum computing. Therefore, IOTA, which uses DAGs, comes in to tackle issues introduced exist in BC. To address scalability in IoT, the authors in [14] proposed a Scalable Distributed Intelligence Tangle-based approach to allow the integration of IoT devices across various applications. They also have presented a new proof-of-work (PoW) that enhanced energy efficiency. The experimental results show the achievement of scalability along with maintaining energy efficiency. For PoW, they have used an isolated server that performs heavy tasks to minimize energy consumption. 
However, this study suffers from the isolated server's single-point-of-failure problem and from the burden of data processing management. To address the energy efficiency problem, the authors in [25] proposed a novel Mobile-Agent Distributed Intelligence Tangle-Based approach to manage resources and deploy IoT applications that are both scalable and energy efficient. They performed PoW on IoT devices to reduce energy consumption on resource-intensive devices. The proposed approach facilitates local interaction, collection, and aggregation of transactional data, as well as efficient route planning. One potential problem with such an approach is performing PoW on dedicated servers, which may increase maintenance costs. IOTA classifies nodes into full and light nodes and can manually connect light nodes with full nodes using the IOTA client balancer. This overloads full nodes and degrades the performance of the IoT platform. To cope with this issue, the authors in [15] introduced a mechanism that fairly distributes tasks among all nodes. They developed an enhanced resource allocation algorithm, called weighted least connection, which improves the balancing of data traffic among full nodes based on their weights and active connections. For secure data sharing, a framework-based solution using masked authentication messaging (MAM) with the tangle has been presented in [26]. For the transfer of data to distributed ledgers, MAM provides an additional layer of security with cryptographic functionalities, which maintains the integrity, authenticity, and confidentiality of data. Since IoT devices are restricted in terms of memory and computing power, such a solution is quite expensive for IoT. However, the focus of security analysis in IoT is often limited to the node level, while the interactive nature of the devices has been ignored. To fill this gap, the authors in [27] proposed a framework to monitor and detect potential threats to IoT devices. IOTA has been used to generate attack graphs, potentially exposed resources, and attack traces that can be compromised. They also identified the dependencies between various devices by which adversaries can mount severe attacks. The proposed IOTA-based model has been tested against 37 synthetic smart home systems, showing that it is highly effective and efficient. However, using such a model in edge environments will result in high energy consumption, which needs to be minimized for deployment in highly scalable networks. Furthermore, there is an attack called the Parasite Chain Attack (PCA) [30], in which the attacker invisibly builds a sub-tangle that enables double-spending. The financial consequences of such an attack may be catastrophic because, if not adequately prevented, it can affect the entire network through double-spending. Therefore, the study in [28] presented a scheme to tackle PCA. The authors proposed a prevention algorithm based on price splitting that effectively slows the formation of the parasite chain. However, effective detection and prevention mechanisms that operate in a scalable environment while consuming less energy are still required. Sensors are primary elements of the IoT; for their security and privacy, the study in [29] proposed L2sec, a cryptographic protocol to secure source data exchange over IOTA. The obtained results showed better performance in terms of effectiveness and scalability. 
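Before turning to the comparative analysis, it may help to make the tangle structure of Fig. 2 concrete: in a DAG-based ledger, every new transaction attaches to and approves earlier transactions ("tips") instead of being appended to a single chain. The sketch below is a toy model only; real IOTA tip selection uses weighted random walks and confirmation is based on cumulative weight.

```python
# Toy DAG ledger sketch (not the real IOTA tangle): each new transaction
# approves up to two current "tips"; here a transaction is treated as
# approved once at least one later transaction references it directly.
import random

class Tangle:
    def __init__(self):
        self.approves = {0: set()}                 # genesis transaction

    def tips(self):
        approved = set().union(*self.approves.values())
        return [tx for tx in self.approves if tx not in approved]

    def add(self, tx_id):
        tips = self.tips()
        self.approves[tx_id] = set(random.sample(tips, k=min(2, len(tips))))

    def is_approved(self, tx_id):
        return any(tx_id in parents for parents in self.approves.values())

random.seed(0)
tangle = Tangle()
for tx_id in range(1, 9):
    tangle.add(tx_id)
print("current tips:", sorted(tangle.tips()))
print("genesis approved:", tangle.is_approved(0))
```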
## III Comparative Analysis This section comparatively analyzes both BC and IOTA in terms of security, privacy, latency, accessibility, data leakage, scalability, resource handling, throughput, reliability, control management, and energy efficiency. Fig. 2: Rough Presentation of IOTA Technology The power and flexibility of IoT enable the creation of smart environments and reduce human efforts across many areas; namely, smart health, smart homes, smart cities, and smart vehicles [28]. To overcome the underlying challenges of EC, the concept of distributed ledgers (BC and IOTA) has been deployed in EC [12, 27]. It is obvious that the integration of BC and IOTA in EC is itself a challenge that offers more issues related to scalability, energy efficiency, and security. This survey is conducted with the purpose of studying the integration effects of BC and IOTA with EC. Solutions based on BC in EC are depicted in Figure 3, which shows the distribution of defined parameters that BC tries to tackle. Challenges offered by EC have been intelligently overcome using BC. However, BC itself has various limitations such as scalability, latency, power consumption, fee fairness, and privacy of stakeholders [27]. The latest proposed papers on BC in edge networks from 2019 to 2020 have been scrutinized in which it is found that energy efficiency, scalability, and resource handling are still set aside, which are serious issues related to EC. Issues related to privacy, data leakage, accessibility, and latency have been considered, and plenty of work is devoted to scalability, energy efficiency, and scalability. BC is resource intensive and designed for large transactions, whereas IoT consists of devices with limited resources, which makes it difficult for IoT to incorporate BC efficiently. Since IoT is a scalable network that connects devices ranging from hundreds to thousands in number, the new node requires days of delay to be part of the ledger. To be integrated with EC, BC requires more intelligent and lightweight solutions that tackle scalability, energy efficiency, and security preservation. Challenges offered by BC can be resolved by using IOTA [26] which is based on DAG, it is more secure and scalable than BC. It tackles scalability, free fairness, micro-payments, and energy efficiency problems effectively [22]. It is more resistant to quantum attacks as compared to BC [7]. Fig 4 depicts the contributions deployed in EC in recent years, from 2020 to 2022. The use of IOTA in EC tackled scalability, accessibility, and energy efficiency problems [12]. Plenty of recent works deal with privacy, data leakage, accessibility, scalability, and energy efficiency, where resource handling and latency still require a prominent solution. However, IOTA is also prone to attacks, namely conflicting transactions, blowball, lazy tips, API, and social engineering attacks (detailed study can be found in [31]). Considering such attacks, more robust and resistive techniques are required to be introduced. Furthermore, to facilitate EC, IOTA based solutions are required to efficiently use energy and provide intelligent resource handling mechanisms for enhancing the utilization of resources and improving the over-all performance of the IoT. Integration of BC with EC requires prominent solutions to tackle resource handling, scalability, and energy efficiency problems. For tackling such issues, IOTA is a good option for deployment in EC. 
However, the latency problems need to be considered and addressed to improve the system's overall performance. Data accumulation, backup, monitoring of devices, and control management [22] are also significant issues in EC that need to be addressed using BC and IOTA. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline No. & Authors & SEC & DP & LAT & ACC & DL & SCA & RH & TIR & REL & CM & EE \\ \hline 1 & Albooui, T., et al [14] & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ \\ 2 & Albooui, T., et al [25] & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ \\ 3 & Hellani, H., et al [15] & ✓ & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ \\ 4 & Abdullah, S., et al [26] & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ \\ 5 & Fang, Z., et al [27] & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ \\ 6 & Chen, Y., et al [28] & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ \\ 7 & Carelli, A., et al [29] & ✓ & ✓ & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ \\ \hline \end{tabular} \end{table} TABLE II: Proposed solutions based on IOTA to tackle underlying problems Fig. 4: Distribution of proposed solutions using IOTA in EC ## IV Conclusion and Future Work In this survey, we have studied recently proposed solutions deployed in EC based on BC and IOTA. The aim was to inspect the solutions in terms of security, privacy, latency, accessibility, data leakage, scalability, resource handling, throughput, reliability, control management, and energy efficiency. It has been found that BC comes with issues such as scalability, high transaction cost, energy inefficiency, and fee fairness when it is used in IoT. It has also been found that IOTA has more potential to address those problems using its DAG consensus mechanism. Along with scalability problems, resource handling, latency, energy efficiency, data accumulation, backup, monitoring of devices, and control management are significant issues in EC that have been overlooked in recent years. Energy efficiency and resource handling in EC is a hot research area requiring intelligent solutions to use BC or IOTA. Similarly, there is no progress being made to cope with backup, monitoring, and control management problems in EC. These areas are also requiring considerable motivation toward the solutions to such problems. Another important direction for future research in EC while using distributed ledgers is to provide solutions for the detection and avoidance of potential attacks on IoT devices.
2308.09482
Atom-by-atom protein generation and beyond with language models
Protein language models learn powerful representations directly from sequences of amino acids. However, they are constrained to generate proteins with only the set of amino acids represented in their vocabulary. In contrast, chemical language models learn atom-level representations of smaller molecules that include every atom, bond, and ring. In this work, we show that chemical language models can learn atom-level representations of proteins enabling protein generation unconstrained to the standard genetic code and far beyond it. In doing so, we show that language models can generate entire proteins atom by atom -- effectively learning the multiple hierarchical layers of molecular information that define proteins from their primary sequence to their secondary, and tertiary structure. We demonstrate language models are able to explore beyond protein space -- generating proteins with modified sidechains that form unnatural amino acids. Even further, we find that language models can explore chemical space and protein space simultaneously and generate novel examples of protein-drug conjugates. The results demonstrate the potential for biomolecular design at the atom level using language models.
Daniel Flam-Shepherd, Kevin Zhu, Alán Aspuru-Guzik
2023-08-16T17:56:17Z
http://arxiv.org/abs/2308.09482v1
# Atom-by-atom protein generation and beyond with language models ###### Abstract Protein language models learn powerful representations directly from sequences of amino acids. However, they are constrained to generate proteins with only the set of amino acids represented in their vocabulary. In contrast, chemical language models learn atom-level representations of smaller molecules that include every atom, bond, and ring. In this work, we show that chemical language models can learn atom-level representations of proteins enabling protein generation unconstrained to the standard genetic code and far beyond it. In doing so, we show that language models can generate entire proteins atom by atom- effectively learning the multiple hierarchical layers of molecular information that define proteins from their primary sequence to their secondary, and tertiary structure. We demonstrate language models are able to explore beyond protein space- generating proteins with modified sidechains that form unnatural amino acids. Even further, we find that language models can explore chemical space and protein space simultaneously and generate novel examples of protein-drug conjugates. The results demonstrate the potential for biomolecular design at the atom level using language models. Proteins are essential components of all life on Earth and are involved in every cellular process. As a result, protein engineering is one of the most important areas of scientific discovery. Significant progress has been made and proteins have been engineered for therapies against viruses [1] and cancer [2], as well as to alleviate genetic diseases directly [3; 4; 5]. Artificial intelligence has enormous potential to accelerate scientific progress and automate protein engineering. Already it has led to a breakthrough in highly accurate protein structure prediction [6]. In particular, language models have already begun to have a major impact on protein design [7; 8; 9]. The important functions proteins carry out and the structure responsible for them originate in the patterns of amino acids in their primary sequence. Indeed, most language models represent proteins using sequences of amino acids [7; 8; 9]. However, this ignores atom-level interactions, precluding the model from representing any atom-level protein modification. Allowing for atom-level representations would enable protein generation outside of the genetic code and allow language models to explore an expanded space of biomolecules. Specifically, this would make it possible for the model to propose new unnatural side chains, attach small molecules, and generate linkers between residues that form large macrocycles. To learn atom-level representations we must turn to chemical language models, however, these models are typically used for smaller drug-like molecules. Similar to their protein variants- chemical language models are deep neural networks trained using masking or next-token prediction [10] but use atom-level linear sequences parsed from molecular graphs [10; 11]. These sequences completely represent the molecule including all atoms, bonds, rings, aromaticity, branching, and stereochemistry. The two most prominent sequence representations are SMILES strings [12] or SELFIES strings [13] which are completely robust and always valid. Recently, chemical language models [10] were found to have the ability to generate larger, complex molecules, relative to small drug-like molecules such as the largest molecules in PubChem. 
These molecules are still much smaller than proteins, but this indicates that atom-level protein generation with language models is feasible. In this work, we demonstrate that chemical language models are capable of generating entire proteins atom by atom, including biomolecules beyond protein space. Specifically, we train models on various biomolecules including proteins from the protein databank. We also create two other synthetic biomolecular datasets, first modified proteins with unnatural amino acids, and proteins with small molecule attachments- specifically single domain antibodies (sdAbs) from the antibody structural database [14] attached to molecules from the ZINC dataset [15]. ' We discover that chemical language models can learn the language of proteins entirely from scratch- by learning to generate atom-level sequences that define proteins with valid primary sequences that correspond to meaningful secondary, and tertiary structure, which we check using AlphaFold [6] structure predictions. Importantly, the language model learns valid protein backbones and natural amino acid structures as well as the primary sequence patterns in the training proteins. We further demonstrate that language models can generate beyond the standard genetic code- proteins with novel sidechains that are more complex than the set of standard amino acids. Additionally, we also show that chemical language models can generate novel proteins and small molecules together at the same time as protein-drug conjugates. In particular, we find that the model learns both the protein space of the single domain antibodies and the chemical space defined by the ZINC molecules- generating antibody-drug conjugates with valid and novel pro tein sequences and structures attached to novel drug-like molecules warheads similar to the structures in ZINC. ## Results In this study, the datasets are constructed by using small proteins from the Protein Data Bank (PDB), specifically between 50 and 150 residues. We use atom-level graph representations of each protein so that sidechain modifications can be made directly. For training, each protein can be parsed to a linear string representation, and random data augmentation can be used to increase the training data size. We describe the main details and results for each dataset in the following sections. ### Proteins For the first dataset, which consists of proteins with standard amino acids and no sidechain modifications, we test the ability of the language model to explore protein space while maintaining protein structure and constraints. After training, as shown in Figure 1(A))- we generate a thousand (1K) samples from the language model and evaluate their atom, residue, and protein-level properties. At the protein level, we determine if generated samples are proteins and attempt to determine their primary sequence. If we can ascertain their primary sequence, we can use AlphaFold2 [6] to further evaluate if the model has learned the amino acid sequences that correspond to good structure predictions. Additionally, we study model samples for their distribution of amino acids and other atom-level properties that can be computed using rdkit [16]. We check if molecules generated by the model are actually proteins by analyzing if they preserve the basic structure of the protein backbone and natural amino acids form. 
First, we perform a backbone structure search and then attempt to arrange the backbone from the N terminus to the C terminus while simultaneously classifying each sidechain using another substructure search for the standard set of amino acids. If this is successful and there are no discontinuities in the backbone or other sidechain errors, then we classify the sample as a protein and parse the amino acid sequence. By this process, we determine that roughly 68.2% of samples are proteins; furthermore, all the parsed amino acid sequences are unique (there are no duplicates and the model isn't repeating specific proteins) and novel (they are different from the training sequences). We compare the distribution of amino acids in the training sequences to the distribution learned by the model based on the generated samples. We plot histograms, in Fig. 1(B), displaying the frequency of occurrence of every amino acid in samples from both the model and the training data- both distributions are very similar and mostly overlap, but for some amino acids the language model slightly underestimates the training frequencies. Figure 1: **Proteins** (A) Dataset preparation. The training workflow for the model: training, generation, amino acid sequence determination, and AlphaFold visualization. (B) Comparison of amino acid distributions. (C) AlphaFold visualizations of model-generated proteins coloured by pLDDT, including the PDB ID of the closest protein and its % sequence overlap. Using AlphaFold [6], in Fig. 1(C), we visualize selected examples of proteins generated by the language model. In each sample, residues are color-coded according to pLDDT, which is a per-residue estimate of the model's confidence on a scale from 0 to 100. Regions with pLDDT \(>\) 90 are dark blue and have high accuracy. Regions with pLDDT from 90 down to 70 are still expected to be good predictions and are colored light blue transitioning to green (with decreasing confidence). Regions with pLDDT between 50 and 70 are lower confidence and are colored yellow to green. The regions with pLDDT \(<\) 50 are not confident and likely disordered- these are colored red. On this scale, in Fig. 1(C), we see that the proteins generated by the model result in good structure predictions- ranging between 70 and 90 pLDDT. This indicates that the model can generate proteins with well-defined structures that are not disordered. For a simple baseline comparison, we considered sequences of random amino acids; the structure predictions for these consistently result in disordered proteins with low pLDDT \(<\) 50. 
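To give a feel for how such an atom-level validity check can be set up, the snippet below is a heavily simplified sketch using RDKit (our illustration, not the exact parser used in this work): it only counts peptide backbone repeat units, whereas the full procedure also orders them from the N terminus to the C terminus and classifies every sidechain.

```python
# Simplified sketch of an atom-level backbone search with RDKit (illustrative
# only): count peptide repeat units N-CA-C(=O) in a molecular graph. The full
# parser described in the text also orders the matches from the N terminus to
# the C terminus and classifies each sidechain against the standard amino acids.
from rdkit import Chem

BACKBONE = Chem.MolFromSmarts("[NX3][CX4][CX3](=O)")   # one backbone repeat unit

def backbone_units(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return len(mol.GetSubstructMatches(BACKBONE)) if mol is not None else 0

# tiny example: the tripeptide Gly-Ala-Ser written as an atom-level SMILES string
tripeptide = "NCC(=O)NC(C)C(=O)NC(CO)C(=O)O"
print("backbone units found:", backbone_units(tripeptide))   # expected: 3
```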
Based on this, it is evident that the model draws heavily from the amino acid sequence patterns in its training data but does not memorize them. Also, in Supplementary Fig. S2, we plot histograms comparing atom-level properties of the samples generated from the model with the training data. The model roughly approximates the training distribution of atoms but slightly underestimates some properties. ### Proteins with unnatural amino acids The next dataset, whose construction is depicted in Fig 2(A), consists of proteins that have random sidechain modifications creating proteins with unnatural amino acids. For this dataset, we select a subset of smaller proteins with 50 to 80 residues from the previous protein dataset. We then modify the selected protein by attaching a randomly chosen small fragment using a random attachment point on every sidechain. This produces a dataset of proteins that are entirely comprised of "unnatural" amino acids. We train language models on these Figure 2: **Proteins with unnatural amino acids** (A) Dataset of unnatural proteins built by random side chain modification. (B) Histograms and density plots of atom-level measures or properties. Density plots use a Gaussian kernel density estimator (KDE) that is fit to LogP, MW, and PSA values of the training molecules by tuning the bandwidth parameter. (C) Boxplots measuring the number of occurrences of small fragments in sidechains of model and training proteins. (D) Histograms of backbone size and heavy atom number per protein for model and training proteins. unnatural proteins in order to test the ability of chemical language models to generate biomolecules beyond protein space. Further details about building the training data can be found in the Methods section. Additionally, an example of the unnatural amino acids from a single training protein and model protein can be found in Supplementary Fig. S3. After training, we again generate 1K samples from the language model for evaluation- the results are shown in Fig 2, where we test the model's ability to capture atom-level and sidechain-level properties of the unnatural proteins. First, in Fig 2 (B), we see that the model learns the continuous atom-level properties of the training proteins including octanol-water partition coefficient (LogP) [17], exact molecular weight (MW) and the topological polar surface area (PSA), in addition to learning the number of carbon, nitrogen, and oxygen. Then in Fig 2 (C), we see the model learns a similar sidechain structure to the training sidechains as determined by a structure search in each sidechain for a basic set of small fragments. Lastly, in Fig 2 (D), the model learns to generate unnatural proteins of similar atom number and backbone size to the training proteins- but does tend to slightly underestimate the size of both. ### Antibody Drug Conjugates Next, we test the ability of the language model to generate proteins attached to small molecules and simultaneously explore protein space and chemical space. One of the most promising examples of this kind of biomolecule with immense therapeutic potential are antibody-drug conjugates, which are a form of cancer therapy intended to target and kill cancer cells but spare healthy cells [18]. Structurally, they are composed of an antibody attached to single or multiple anticancer drugs typically using some linker molecule. 
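The elementary graph edit behind both the random sidechain modifications described above and the conjugation described next - merging two molecular graphs and adding a single bond at chosen attachment atoms - can be sketched with RDKit; the function below and its example inputs are illustrative assumptions, not the dataset-construction code used for this work.

```python
# Illustrative sketch (not the authors' pipeline): attach a fragment to a core
# molecule at chosen attachment atoms by merging the two graphs and adding a
# single bond. The same edit underlies random sidechain modification and
# linker/payload attachment.
from rdkit import Chem

def attach(core_smiles, frag_smiles, core_idx, frag_idx):
    core = Chem.MolFromSmiles(core_smiles)
    frag = Chem.MolFromSmiles(frag_smiles)
    combo = Chem.RWMol(Chem.CombineMols(core, frag))
    # fragment atom indices are shifted by the number of atoms in the core
    combo.AddBond(core_idx, core.GetNumAtoms() + frag_idx, Chem.BondType.SINGLE)
    product = combo.GetMol()
    Chem.SanitizeMol(product)
    return Chem.MolToSmiles(product)

# e.g. grafting a phenyl ring onto the alanine side chain gives a
# phenylalanine-like product such as NC(Cc1ccccc1)C(=O)O
print(attach("CC(N)C(=O)O", "c1ccccc1", core_idx=0, frag_idx=0))
```

In the actual datasets the core is a full atom-level protein graph, the attachment point is a sidechain atom (or a cysteine/lysine residue for the conjugates), and the fragment is a randomly chosen small fragment or a ZINC molecule fused to one of the four linkers.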
To construct a synthetic dataset of antibody-drug conjugates, as shown in Fig. 3(A), we attach a single drug-like molecule from the ZINC dataset [15] to single-domain antibodies (sdAbs) from the structural antibody dataset [14; 19] in order to test the ability of language models to generate antibody-drug conjugates. We use two possible linkers for cysteine attachments and two other linkers for lysine attachments. These linkers are selected from real antibody-drug conjugates described in [18]. The linker is randomly attached to the small molecule from ZINC, and the specific lysine or cysteine residue for attachment is also randomly chosen. Since there are only 1K sdAbs in the structural antibody dataset, we use data augmentation to expand the dataset size to 250K proteins that can be attached to every molecule in ZINC.

Figure 3: **Antibody Drug Conjugates – sdAbs** (A) Single domain Antibody-drug conjugates dataset creation overview. (B) Comparison of amino acid distributions for training and model sdAbs. (C) Histogram comparing the size of training and model sdAbs (by number of residues). (D) Example sdAbs (with warheads excluded) generated by the language model visualized by AlphaFold and coloured by pLDDT. Under each, we include the PDB ID of the closest protein and its % sequence overlap.

After training, we again generate 1K samples from the language model for evaluation. We first test the model's ability to explore protein space and learn the distribution of single-domain antibodies; the results are shown in Fig. 3(B-D). Similar to the standard protein data, we compare the distribution of amino acids in the training sequences to the distribution learned by the model. We plot histograms, in Fig. 3(B), displaying the frequency of occurrence of every amino acid in samples from both the model and the training data; from these, we can see the language model accurately learns the training distribution of amino acids. Similarly, in Fig. 3(C), the model accurately learns the size of the training sdAbs. Similar to the standard proteins, we can attempt to determine the amino acid sequences of the single-domain antibodies (ignoring the warheads). We determine roughly \(\sim 90.8\%\) of samples are proteins and their primary sequences are entirely unique and novel (there are no duplicates and all are different from training sequences). Even further, AlphaFold structure predictions [6], visualized in Fig. 3(D) and Supplementary Fig. S7, confidently show that the language model produces sequences that fold into the expected structure for single domain antibodies. Additionally, based on the primary sequence overlap of model samples with their nearest PDB training example in Fig. 3(D) and Supplementary Fig. S7, the model, without memorizing, learns the amino acid sequence structure that defines the training sdAbs. The primary sequence overlap with training examples ranges from 63% to 93% in the supplementary information. Investigating further, we see that the model draws heavily on the sdAb sequences, making new sequences by memorizing small snippets of amino acids and by reusing larger training snippets with a large number of single mutations randomly distributed throughout the snippet. From the training examples and model samples, we detach and collect "warheads", whose definition we expand to include the linker and sidechain in addition to the small molecule (warhead typically refers to just the small molecule).
In Fig. 4(B), two examples of train and model warheads are shown as graphs to clarify this. Additional model and training warheads are shown as graphs in Supplementary Figs. S9 and S8; as expected, the same four linkers repeat across samples, but the small molecules attached to them differ and are structurally similar to the ZINC molecules in the training warheads. We also evaluate the language model's warheads in terms of their atom-level properties. In Fig. 4(A), the model captures the atom-level properties of the training warheads; specifically, it learns the continuous atom-level properties of the training warheads including LogP [17], drug-likeness (QED) [20], Synthetic Accessibility Score (SA), and molecular graph complexity (BCT), as well as the number of atoms, bonds, rings, and atoms in rings. However, the model slightly underestimates the main modes for QED and SA as well as the number of rings per warhead. Additionally, we assess the model warheads and compare them with the training warheads; we find that model warheads are unique (there are no duplicates and the model is not repeating a few examples) as well as novel (the model does not make exact copies of warheads from the training data). Given that the linkers are memorized, this indicates that the model is learning to generate new small molecules similar to ZINC molecules and effectively exploring chemical space at the same time it learns to explore the protein space defined by the sdAbs. Also, in Supplementary Fig. S2, we see that the model does learn the atom-level properties of the training antibody drug-conjugates. Additionally, in Supplementary Fig. S4-S6, we show a single train antibody drug-conjugate and four model samples.

Figure 4: **Antibody Drug Conjugates – Warheads** (A) Histograms and density plots of atom-level measures or properties of warheads. Density plots use a Gaussian kernel density estimator fit to LogP, MW, and PSA values for the training molecules by tuning the bandwidth parameter. (B) Examples of model and train "warheads".

## Discussion

In this work, we show that chemical language models can generate, atom by atom, entire proteins, unnatural proteins, and protein drug conjugates. By analyzing generated samples, we find that language models learn multiple hierarchical layers of molecular information that define the training biomolecules. This includes atom-level molecular properties or residue-level constraints for backbone and amino acid structure, as well as primary sequence patterns and motifs that define meaningful secondary and tertiary structure. Indeed, chemical language models learn to generate protein structures as sequence representations of atom-level graphs that are similar to the training proteins in the PDB. Effectively, we demonstrate that chemical language models can also serve as biological language models, capable of learning the language of proteins atom by atom. Importantly, in contrast to protein language models that only learn representations of amino acid sequences, chemical language models generate entire molecular graphs, and because of this we are able to show that language models can be used to explore not just chemical space and the space in between chemical and protein space, but also protein space itself, beyond protein space, or even both chemical and protein space at the same time. Further work should be done to ensure the model generates valid protein structure, including correct backbone and amino acid form.
This will also assist the model in learning distributions consisting of larger biomolecules, including structures with more than 150 residues and multiple domains. Using memorizing Transformers [21] may help the model generate valid protein sequences. Also, other architectures built for longer sequence lengths [22] can increase the size and range of structures that the model can learn. Another limitation is that we do not consider the three-dimensional structure of the biomolecules and generate atom-level sequence representations. This problem cannot be easily rectified because no training data with 3D information exists for unnatural proteins and protein-drug conjugates. A potential solution that does not require training data would be to use reinforcement learning [23] or Bayesian optimization [24] and guide the model to generate 3D structure using energy. The goal of this work is to demonstrate the power of chemical language models and their ability to learn atom-level representations of biomolecules. We envision future language models will be able to explore any combinatorial space in chemistry or biology using any representation type the user wishes [25].

## Methods

### Datasets

From the PDB we successfully parse around \(\sim\)10K proteins between 50 and 150 residues. In all datasets, we only parse proteins that conform to atom-level graphs with no more than 2 macrocycles (created by residue-residue connections); this makes primary sequence determination more successful. Given this constraint, we parse around \(\sim\)10K and \(\sim\)5K proteins from the PDB for the first two training datasets. In order to increase the size of the training data, we randomize the atom orderings of each protein in rdkit to obtain multiple different random copies of each biomolecule as SMILES (and then SELFIES strings). Using this data augmentation, we expand all training datasets to around \(\sim\)250K sequences. We use rdkit [16] to represent each protein as an atom-level graph and to make side-chain modifications. We also made use of ColabFold [26] for quick visualization and NGLview [27] for figure construction.

### Tokenization

We use SELFIES [13] version one for the sequence representation of atom-level protein graphs. Other than special tokens like [BOS], [EOS], [PAD], [UNK], the vocabulary \(\mathcal{T}\) consists of standard SELFIES tokens, encoding all information in a molecular graph including: atom tokens {[C],[N],...}, bond tokens {[=C],[#N],...}, ring tokens {[Ring1],[Ring2],...}, and branching tokens {[Branch1_1],[Branch1_2],...}. In total, for all datasets, the vocabulary is around \(\sim\)30 tokens.

### Language Modeling for Molecular Design

In language modeling for molecular design, we want to estimate the unsupervised distribution of the training molecules \((\texttt{mol}_{1},\texttt{mol}_{2},\ldots,\texttt{mol}_{n})\), each composed of a variable-length sequence of tokens from a chemical language, \([\texttt{CT}]_{i}\) with \(\texttt{CT}\in\mathcal{T}\), such that \(\texttt{mol}=([\texttt{CT}]_{1},[\texttt{CT}]_{2},\ldots,[\texttt{CT}]_{n})\). The joint probability over a single molecule can be written as \[p(\texttt{mol})=\prod_{i=1}^{n}p([\texttt{CT}]_{i}\mid[\texttt{CT}]_{i-1},\ldots,[\texttt{CT}]_{1}) \tag{1}\] These conditional probabilities \(p([\texttt{CT}]_{i}\mid[\texttt{CT}]_{i-1},\ldots,[\texttt{CT}]_{1})\) are modeled using a Transformer [28] that is trained using stochastic gradient descent.
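A minimal sketch of this pipeline, tokenizing a molecule with the selfies package and taking one maximum-likelihood step of the factorization in Eq. (1) with a toy decoder-only model, is shown below; the hyperparameters and the tiny PyTorch module are placeholders and not the architecture used in this work.

```python
# Minimal sketch of SELFIES tokenization plus the next-token objective of Eq. (1).
# The selfies/PyTorch calls follow their public APIs; sizes are illustrative only.
import selfies as sf
import torch
import torch.nn as nn

smiles = ["NCC(=O)NC(C)C(=O)O", "CC(N)C(=O)O"]              # toy "molecules"
selfies_strings = [sf.encoder(s) for s in smiles]
alphabet = sorted(sf.get_alphabet_from_selfies(selfies_strings))
vocab = {tok: i for i, tok in enumerate(["[PAD]", "[BOS]", "[EOS]"] + alphabet)}

def encode(selfies_string):
    toks = ["[BOS]"] + list(sf.split_selfies(selfies_string)) + ["[EOS]"]
    return torch.tensor([vocab[t] for t in toks])

class TinyDecoder(nn.Module):
    """A small GPT-like decoder: embeddings + causal self-attention + LM head."""
    def __init__(self, vocab_size, d_model=64, nhead=4, nlayers=2, max_len=2048):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        pos = torch.arange(idx.size(1), device=idx.device)
        x = self.tok(idx) + self.pos(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(idx.size(1))
        return self.head(self.blocks(x, mask=mask))

seq = encode(selfies_strings[0]).unsqueeze(0)        # shape (1, L)
model = TinyDecoder(len(vocab))
logits = model(seq[:, :-1])                          # predict token i from tokens < i
loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)), seq[:, 1:].reshape(-1))
loss.backward()                                      # one maximum-likelihood step of Eq. (1)
```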
### Training

During training, we one-hot encode SELFIES sequences using a basic vocabulary that consists of 30 possible alphabet tokens. All language models are trained using next-token prediction conditioned on the entire sequence for context. The training data only uses sequences that have a maximum length of 1664 tokens. We trained language models with a decoder-only, GPT-like architecture [29] with 4 attention heads and between 1 and 10 million parameters. Language models are implemented in Python 3 with PyTorch [30]. Molecular properties are computed using rdkit [16].

## Acknowledgements

A.A.-G. acknowledges funding from Dr. Anders G. Froseth. A.A.-G. also acknowledges support from the Canada 150 Research Chairs Program, the Canada Industrial Research Chair Program, and from Google, Inc. Models were trained using the Canada Computing Systems [31]. This research was undertaken thanks in part to funding provided to the University of Toronto's Acceleration Consortium from the Canada First Research Excellence Fund.
2310.13690
Planet Gap-opening Feedback on Disk Thermal Structure and Composition
(Exo-)planets inherit their budget of chemical elements from a protoplanetary disk. The disk temperature determines the phase of each chemical species, which sets the composition of solids and gas available for planet formation. We investigate how gap structures, which are widely seen by recent disk observations, alter the thermal and chemical structure of a disk. Planet-disk interaction is a leading hypothesis of gap formation and so such changes could present a feedback that planets have on planet-forming material. Both the planet gap-opening process and the disk thermal structure are well studied individually, but how the gap-opening process affects disk thermal structure evolution remains an open question. We develop a new modelling method by iterating hydrodynamical and radiative transfer simulations to explore the gap-opening feedback on disk thermal structure. We carry out parameter studies by considering different planet locations rp and planet masses Mp. We find that for the same rp and Mp, our iteration method predicts a wider and deeper gap than the non-iteration method. We also find that the inner disk and gap temperature from the iteration method can vary strongly from the non-iteration or disk without planets, which can further influence dust-trap conditions, iceline locations, and distribution of various ices, such as H2O, CO2, and CO on large dust grains ("pebbles"). Through that, a gap-opening planet can complicate the canonical picture of the non-planet disk C/O ratio and influence the composition of the next generation of planetesimals and planets.
Kan Chen, Mihkel Kama, Paola Pinilla, Luke Keyte
2023-10-20T17:50:29Z
http://arxiv.org/abs/2310.13690v1
# Planet Gap-opening Feedback on Disk Thermal Structure and Composition ###### Abstract (Exo-)planets inherit their budget of chemical elements from a protoplanetary disk. The disk temperature determines the phase of each chemical species, which sets the composition of solids and gas available for planet formation. We investigate how gap structures, which are widely seen by recent disk observations, alter the thermal and chemical structure of a disk. Planet-disk interaction is a leading hypothesis of gap formation and so such changes could present a feedback that planets have on planet-forming material. Both the planet gap-opening process and the disk thermal structure are well studied individually, but how the gap-opening process affects disk thermal structure evolution remains an open question. We develop a new modelling method by iterating hydrodynamical and radiative transfer simulations to explore the gap-opening feedback on disk thermal structure. We carry out parameter studies by considering different planet locations \(r_{\rm p}\) and planet masses \(M_{\rm p}\). We find that for the same \(r_{\rm p}\) and \(M_{\rm p}\), our iteration method predicts a wider and deeper gap than the non-iteration method. We also find that the inner disk and gap temperature from the iteration method can vary strongly from the non-iteration or disk without planets, which can further influence dust-trap conditions, iceline locations, and distribution of various ices, such as H\({}_{2}\)O, CO\({}_{2}\), and CO on large dust grains ("pebbles"). Through that, a gap-opening planet can complicate the canonical picture of the non-planet disk C/O ratio and influence the composition of the next generation of planetesimals and planets. keywords: protoplanetary discs - planet-disc interactions - hydrodynamics - radiative transfer - planets and satellites: composition ## 1 Introduction Chemical element abundance ratios in planets, and in comets or asteroids, are determined by the chemical composition and physical-chemical evolution of the protoplanetary disk they form in. The study of chemical element abundance ratios such as C/O (Oberg et al., 2011; Madhusudhan et al., 2014) or N/S (Turrini et al., 2021) may allow to connect planetary bodies to their formation history, which is important for understanding how the chemical diversity of planetary systems arises. The distribution of volatile chemical elements in the solid (dust, ice) and gas phases is set by the location of their icelines, which depend on the disk temperature structure. In this work, we employ hydrodynamical and radiative transfer models to study the feedback of planet-induced gaps on the temperature structure and hence the location of icelines. ALMA observations have revealed that rings and gaps in the dust and gas components are common in protoplanetary disks (e.g., Andrews et al., 2018; Oberg et al., 2021). One possible and intriguing explanation for the formation of such substructures is embedded young planets in disks. Despite great efforts, very few protoplanets have been detected in disks by direct imaging (Keppler et al., 2018, 2019; Haffert et al., 2019; Benisty et al., 2021; Currie et al., 2022; Hammond et al., 2023). Direct imaging is, however, biased towards super-Jupiter mass protoplanets, whereas most gaps may be due to lower-mass giant planets. Their masses can be inferred from the gap structure or gas kinematics (e.g., Zhang et al., 2018; Teague et al., 2018). 
Alternative scenarios to explain gaps and rings without planets have also been proposed, such as secular gravitational instabilities (Takahashi and Inutsuka, 2014), dust evolution (Birnsti et al., 2015), zonal flows (Flock et al., 2015), and icelines (Zhang et al., 2015). A gap in the disk implies a reduced optical depth in a radially confined region. This allows shorter wavelength photons to penetrate deeper and heat the disk midplane, as well as the edges of the gap, so gaps potentially affect the disk temperature structure. An opposite, cooling effect may result from fewer photons being scattered by dust towards the midplane. The balance of these effects around a given dust gap can be studied with Monte Carlo radiative transfer (RT) models (Brooome et al., 2022). Previous studies of temperature changes around gaps used analytically prescribed surface density profiles: Cleeves et al. (2015) explored the spatial distribution of molecular abundances resulting from increased heating due to an accreting protoplanet in a gap, while Broome et al. (2022) used Monte Carlo radiative transfer to investigate the dust temperature structure around analytical gap profiles in a hydrostatic 1+1D disk model. The temperature change caused by a gap can also affect the structure of the gap itself. Hydrodynamical (HD) simulations of planet-disk interactions and gap-opening processes assuming a locally isothermal equation of state (EoS) provide empirical formulas of gap depth and width (Fung et al., 2014; Kanagawa et al., 2015, 2016; Zhang et al., 2018; Duffell, 2020). Recently, Miranda and Rafikov (2019, 2020) suggest that the assumptions of the equation of state, locally isothermal or adiabatic assumptions, can affect the gap properties by altering the propagation of density waves. Additionally, Zhang and Zhu (2020) used simulations to show that the cooling timescale can influence the gap profile. Disk thermodynamics plays an important role in setting the location of different icelines in disks. An iceline of a specific molecule is the location where the temperature is low enough so that such molecules freeze out from the gas phase onto dust grains. Though direct measurements of the location of molecular icelines are rare in observations (e.g., water iceline van 't Hoff et al. (2018), CO iceline Zhang et al. (2017); van 't Hoff et al. (2017), icelines can play an important role in planet formation. Across icelines, the gas composition and ice reservoirs for the planet and planetesimal formation are changed (e.g., Oberg et al., 2011), and the efficiency of planetesimal formation can increase at the water iceline (e.g., Stevenson and Lunine, 1988; Schoonenberg and Ormel, 2017). In addition, dust trapping is closely related to the planet gap-opening process, which in combination with the location of icelines determine the location of planetesimal formation and their composition. Dust trapping in local pressure maxima is proposed to overcome rapid dust loss due to radial drift by the drag between the gas and the dust in disks Whipple (1972). For example, Pinilla et al. (2012) demonstrated that the pressure bump outside the gaps opened by planets can trap dust and produced ring-like structures as observed Conventionally, previous studies on gap modeling or thermal structures in disks only conduct HD or RT simulations, or combine the final results from HD to RT simulations to compare with observations. 
However, as planets open gaps in disks, the temperature around gaps could deviate significantly from the temperature adopted for disks without planets. In the meantime, the temperature changes affect the disk gas scale height \(H\) and volume density \(\rho\) distribution. In this paper, we build a new model to investigate the planet gap-opening process and the gap-opening feedback on disk thermal structure. Because the temperature controls which species can exist as solid ices, our model allows us to investigate the question: what is the feedback effect of giant planets on the composition of material subsequently accreted by the planets themselves, or by a new generation of forming planetesimals? In order to improve previous models, we first feed HD simulations with a more physical energy field from RT models. During the planet gap-opening process, we combine the HD and RT simulations together and iterate them. We implement the new temperature calculated by RT to correct the energy field of HD simulations. This paper is organized as follows. In Section 2 we describe our modeling method of how we iterate the hydrodynamical and radiative transfer simulations to study the gap-opening process. In Section 3, we present and quantify our modeling results of gap properties, disk temperature structure, and ice distributions. Section 4 discusses the impact of our results on disk composition, disk substructure observation, and the limits of our models. Section 5 summarizes the main conclusions of this paper. ## 2 Methods In this section, we describe the codes and setup of our hydrodynamical and radiative transfer simulations, as well as the workflow of how we iterate these two simulations to study the temperature structure of a disk with a gap-opening planet. ### Hydrodynamical simulations We conduct 2D hydrodynamical simulations in polar coordinate (\(r\), \(\phi\)) to study surface density evolution with FARGO3D (Benitez-Llambay and Masset, 2016). The main parameters of FARGO3D simulations are shown in Table 1. For grid setup, we conduct global disk simulations of a transition disk which extend from \(r_{min}=1\) au to \(r_{max}=100\) au. The global disk simulations for FARGO3D avoid the radial extrapolation of the sound speed \(c_{\rm s}\) setups for global disk RADMC-3D simulations. Such consideration is necessary, as the extrapolation could be imprecise for a radially non-smooth \(c_{\rm s}\) field. We set up mesh grids linearly distributed in \(\phi\) direction, and logarithmically distributed in \(r\) direction. The grid numbers (\(N_{r,\rm HD}\), \(N_{\phi,\rm HD}\)) resolve the gas scale height at the location of the planet with at least 5 grid cells and make the grid cells square shape at the planet location. We also do convergence tests by doubling the resolution, finding that the gap depth variations are less than 20%. So we keep on using the resolution in Table 1 to minimize simulation time during each iteration step. Regarding the physical model setup, we only include gas in our simulations without dust and the radial initial gas density profile is \(\Sigma_{g}=\Sigma_{0}(r/r_{0})^{-1}\). We assume the whole disk mass is \(0.028M_{\odot}\), which is a normal choice for solar-mass star (e.g., see review in Manara et al., 2022). The EoS is assumed locally isothermal and the flared disk is built with aspect ratio \(H/r\propto r^{1/4}\). 
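As an illustration of the grid-resolution criterion above (at least 5 cells per gas scale height at the planet and roughly square cells there), the following sketch reproduces the grid numbers of Table 1 to within a few cells; the variable names and the rounding choices are ours.

```python
# Sketch of the grid-resolution criterion: a logarithmic radial grid that resolves
# the gas scale height at the planet with >= 5 cells and keeps cells roughly square
# (dr/r ~ dphi) at the planet location.
import numpy as np

def fargo_grid(r_min, r_max, aspect_ratio_p, cells_per_H=5):
    """Return (N_r, N_phi) for a log-spaced (r, phi) grid."""
    dlnr_max = aspect_ratio_p / cells_per_H          # dr/r <= (H/r)/cells_per_H at r_p
    N_r = int(np.ceil(np.log(r_max / r_min) / dlnr_max))
    N_phi = int(np.ceil(2.0 * np.pi / dlnr_max))     # square cells: dphi ~ dln r
    return N_r, N_phi

# The three planet locations of Table 1 (radii in units of r_0 = r_p)
for r_p, r_min, r_max, h in [(4, 0.25, 25, 0.04), (10, 0.1, 10, 0.05), (30, 1/30, 10/3, 0.066)]:
    print(r_p, fargo_grid(r_min, r_max, h))
# -> roughly (576, 786), (461, 629), (349, 476), close to Table 1's (580, 790), (460, 630), (350, 480)
```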
However, since we update \(c_{\rm s}\) for each iteration step as described in section 2.3, we only use aspect ratios and flaring index as the initial conditions but do not need to use them at any later step of evolution. We adopt a Shakura and Sunyaev (1973) viscosity parameter \(\alpha=10^{-3}\). We use the scale-free parameter setup in FARGO3D which means \(G\), \(M_{\star}\), \(r_{0}=1\). Here we set \(r_{0}=r_{p}\) and fix planets at circular orbits. The indirect term of potential is included in the simulations. The planets are introduced into disks from the beginning of the simulations without including any accretion onto the planets. We also examine introducing planets into a disk with a mass-taper function but find no significant difference in the results. At the radial boundaries, we adopt power-law extrapolation densities and Keplerian extrapolation azimuthal velocities at both \(r_{\rm min}\) and \(r_{\rm max}\). In terms of radial velocities, we adopt an outflow inner boundary and a symmetric outer boundary. Periodic boundaries are imposed in the azimuthal direction. \begin{table} \begin{tabular}{l l l l} \hline \hline parameters & \multicolumn{3}{c}{values} \\ \hline \(\mathbf{M_{p}}\) & \multicolumn{2}{c}{\(3M_{1},1M_{1},100M_{\oplus},10M_{\oplus}\)} \\ \(\alpha\) & \multicolumn{2}{c}{0.001} \\ \hline \(r_{0}=r_{p}\) [au] & 4 & 10 & 30 \\ \(r_{min}\) [\(r_{0}\)] & 0.25 & 0.1 & 0.033 \\ \(r_{max}\) [\(r_{0}\)] & 25 & 10 & 3.3 \\ AspectRatio & 0.04 & 0.05 & 0.066 \\ \(\Sigma_{0}[M_{\star}/r_{0}^{2}]\) & 1.8\({}^{-4}\) & 4.5\({}^{-4}\) & 1.34\({}^{-3}\) \\ \(N_{r,HD}\) & 580 & 460 & 350 \\ \(N_{\phi,HD}\) & 790 & 630 & 480 \\ \hline \end{tabular} \end{table} Table 1: FARGO3D main parameters. Parameters in each column below the second row in this table are corresponding to the cases of planet location \(r_{p}=4\), 10, or 30 au, respectively. ### Radiative transfer simulations After obtaining the 2D \(\Sigma_{\rm g}\) and \(c_{\rm s}\) fields from FARGO3D simulations, we perform 3D Monte Carlo radiative transfer with RADMC-3D (Dullemond et al., 2012) to obtain the temperature structure. The output gas temperature \(T_{\rm gas}\) is used to update the corresponding \(c_{\rm s}\) field for FARGO3D (see Section 2.3). Within RADMC-3D simulations, all parameters are in units of cgs and the main parameters are shown in Table 2. For grid cell setup, RADMC-3D keeps the same global transition disk simulation domain in (\(\tau,\phi\)) direction as FARGO3D. The vertical domain is \(\theta=[\pi/2-0.5,\pi/2]\) with mirror symmetry along the midplane. The azimuthal and vertical directions are sampled in linear space, while the radial direction is sampled in logarithmic space. We test different combinations of grid resolutions and decide (\(N_{\rm r,RT}\), \(N_{\phi,\rm RT}\), \(N_{\phi,\rm RT}\) = (256, 30, 53) in radial, vertical, and azimuthal direction is a proper resolution for using \(N_{\rm photon}=10^{8}\) photon packages. For small \(N_{r},N_{\phi}\), the asymmetry temperature feature in disks due to eccentric gaps is not recovered properly. For larger \(N_{r},N_{\phi}\), the \(T_{\rm mid}\) map gets bad photon statistics and it is noisy unless we adopt a larger number of photons \(N_{\rm photon}>10^{9}\), which takes more than 10 hours with paralleling 40 threads for just one iteration step. Also, \(N_{\rm photon}=10^{8}\) gets similar smooth temperature results as \(N_{\rm photon}>10^{9}\) with more grid cells. 
Hence, we keep \(N_{\rm photon}=10^{8}\) for all the simulations presented in this paper. After RADMC-3D, we interpolate the values in RADMC-3D grid cells to match the (r, \(\phi\)) grid cells in FARGO3D. For the stellar parameters, we adopt typical values for a T Tauri star, \(M_{\star}=1M_{\odot}\), \(R_{\star}=1.7R_{\odot}\), and \(T_{\star}=4730\)K. We only consider stellar radiation as the heating source and ignore viscous heating. We assume silicate dust particles with isotropic scattering and the intrinsic density is \(3.710\,\rm g\,cm^{-3}\). We also assume dust to gas mass ratio \(\epsilon=0.01\) and dust grain size of \(0.1\mu m\). As the small dust grains couple well to the gas, we do not assume any dust settling. Also, we do not consider any dust evolution process, such as dynamics, growth, or fragmentation of particles (Birnstiel et al., 2010). The disk density distribution in three dimensions is assumed to be \[\rho_{d}(r,z,\phi)=\frac{\Sigma_{d}(r,\phi)}{\sqrt{2\pi}H(r)}\exp\left(-\frac{ z^{2}}{2H(r)^{2}}\right) \tag{1}\] where \(\Sigma_{d}(r,\phi)\) is the dust surface density and \(\Sigma_{d}(r,\phi)=\epsilon\Sigma_{B}(r,\phi)\). \(H(r)\) is the gas pressure scale height and \(z=r\tan\theta\). ### Workflow Our iterative approach makes use of FARGO3D and RADMC-3D codes. The workflow of our iteration method is illustrated in Figure1. The green, blue, and yellow boxes represent the steps of FARGO3D, RADMC-3D, and post-processing from RADMC-3D to FARGO3D, respectively. Our methodology consists of the following steps: **Step 1:** We set up our initial physical disk models without planets by assuming azimuthal symmetric 1D gas surface density \(\Sigma_{\rm g,0}(r)\) and aspect ratio \(H/r\) of the disks (shown in Box(1)). Then, we output \begin{table} \begin{tabular}{l c} \hline \hline parameters & values \\ \hline \(M_{\star}\) [\(M_{\odot}\)] & 1 \\ \(R_{\star}\) [\(R_{\odot}\)] & 1.7 \\ \(T_{\star}\) [K] & 4730 \\ \(N_{photon}\) & \(10^{8}\) \\ \(\epsilon\) & 0.01 \\ \(N_{r,RT}\) & 256 \\ \(N_{\phi,RT}\) & 30 \\ \(N_{\phi,RT}\) & 53 \\ \hline \hline \end{tabular} \end{table} Table 2: RADMC-3D parameters. Figure 1: The workflow of the iteration method. The green, blue, and yellow boxes represent the steps of FARGO3D, RADMC-3D, and post-processing between RADMC-3D and FARGO3D, respectively. The iteration step is 100 planet orbits. FARGO3D results of orbit to obtain initial 2D \(\Sigma_{\rm g,0}(r,\phi)\) and sound speed \(c_{\rm s,0}(r,\phi)\) map. Note that the energy field outputs in FARGO3D simulations in this paper are actually the isothermal \(c_{\rm s}\). **Step 2:** 2D surface density field from FARGO3D are read by RADMC-3D and extend to 3D volume density by following Eq. 1, where the scale height \(H\) is calculated from FARGO3D \(c_{\rm s}\) field. Then the dust radiative transfer simulations are conducted (Box (2)), and the output of the dust temperature \(T_{\rm dust}(r,\theta,\phi)\) is obtained. As RADMC-3D does not include any photochemistry simulations, we assume \(T_{\rm gas}(r,\theta,\phi))=T_{\rm dust}(r,\theta,\phi)\). From \(T_{\rm dust}(r,\theta,\phi)\), the mid-plane temperature \(T_{\rm mid}(r,\phi)\) can be obtained (Box (3)). Using this RADMC-3D temperature as the non-planet disk temperature can help us to get rid of the initial temperature profile assumption in FARGO3D. In fact, such a step is also done in Fig. 6(a) in Bae et al. 
(2019) to get the first Monte Carlo radiative transfer (MCRT) temperature, which aims to get rid of the assumed stellar irradiation-dominated temperature T\({}_{\rm irr}\). We also test the iteration process (assuming vertical hydrostatic equilibrium ) described in Appendix A in Bae et al. (2019) to get the multiple iteration MCRT temperature but the differences between the first MCRT temperature and multi-time MCRT temperature in our disk model are negligible. This MCRT iteration process makes no difference in our case but at least doubles our MCRT workload and costs much more computation time. So we directly use our RADMC-3D temperature for later steps. **Step 3:** By using the \(T_{\rm mid}(r,\phi)\) from the last step, we could infer a new \(c_{\rm s}\) field by assuming a vertical isothermal approximation. Even though we still use the isothermal assumption here, because of the non-smooth \(T_{\rm mid}(r,\phi)\) reflecting the gap-opening process, such new \(c_{\rm s}\) does not equal the initial isothermal \(c_{\rm s,0}\) anymore. The new \(c_{\rm s}\) is treated as the new gasenergy.dat file for the next FARGO3D run (Box (4)). This is the important step that moves beyond the isothermal assumption in the conventional non-iteration method and shows the feedback effect of the gap-opening process. In Appendix B, we have a test to compare a vertical density weighted temperature with \(T_{\rm mid}\). We find that they are similar, especially in gap regions. For simplicity, we use \(T_{\rm mid}\) in this paper. Next, we restart the FARGO3D simulation and evolve it over 100 orbital times (we assume the iteration step is 100 orbit here) (Box(5)) and as a result, we get the output as Box(6). Again, during the FARGO3D step, the EoS is assumed isothermal. Malygin et al. (2017); Pfeil and Klahr (2019) demonstrate that the thermal relaxation time varies across the disk, and in some regions, there is large cooling time (\(>\) 100 local orbits), where our iteration time is a good approximation. However, in some outer disk regions, like a few tens of au, (the specific regions depend on the model conditions) have short cooling time, where our choice of 100 orbits can be too long. As a test, we performed simulations with iteration steps of 50 orbits in Appendix A, and found no difference with the 100 orbits case. We also test iteration step of 100 orbits against 500 orbits in Appendix A, which do not converge very well in gap regions. It means the iteration step of 500 orbits could not replace 100 orbits. For these reasons, we keep 100 orbits for all the main simulations of this work. **Step 4:** The result of 100 orbit FARGO3D is used as the input for RADMC-3D (Box(7)). During the RADMC-3D setup, the gas pressure scale height \(H\) is given by the FARGO3D \(c_{\rm s}\), \(H=c_{\rm s}/\Omega\). Therefore, the extension of 2D \(\Sigma_{\rm g}\) to 3D volume density \(\rho\) (shown as Equation 1) can be also modified by the gap-opening feedback. **Step 5:** Repeat Step 2 to Step 4 and iterate until reaching a quasi-steady state, which also means the iteration process is from Box(2) to Box(7). We iterate all the simulations over 2000 planet orbital time which corresponds to \(1.6\times 10^{4}\) yrs for \(r_{\rm p}\) = 4 au, \(6.4\times 10^{4}\) yrs for \(r_{\rm p}\) = 10 au, \(3.3\times 10^{5}\) yrs for \(r_{\rm p}\) = 30 au, respectively. 
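The two conversions that pass information between the codes at each iteration step can be sketched in a few lines: Eq. (1), which extends the FARGO3D surface density into the 3D dust density fed to RADMC-3D, and the mapping from the RADMC-3D midplane temperature back to an isothermal sound speed for the FARGO3D energy field. The snippet below is a self-contained illustration in cgs units; the mean molecular weight and the toy numbers are assumptions for the example, not values quoted in this paper.

```python
# Self-contained sketch of the per-iteration conversions between FARGO3D and RADMC-3D.
import numpy as np

K_B, M_H, MU = 1.380649e-16, 1.6726e-24, 2.3   # cgs constants; MU is an assumed value

def rho_3d(sigma_d, H, z):
    """Eq. (1): Gaussian vertical extension of the dust surface density."""
    return sigma_d / (np.sqrt(2.0 * np.pi) * H) * np.exp(-(z / H) ** 2 / 2.0)

def cs_from_tmid(t_mid):
    """Isothermal sound speed implied by the RADMC-3D midplane temperature."""
    return np.sqrt(K_B * t_mid / (MU * M_H))

# Toy demo: one radius, a vertical column, and a temperature update
au = 1.496e13                                   # cm
H = 0.05 * 10 * au                              # H/r = 0.05 at r = 10 au
z = np.linspace(0.0, 4 * H, 50)
column = rho_3d(sigma_d=1.0e-2, H=H, z=z)       # eps * Sigma_g = 0.01 * 1 g/cm^2
print("midplane dust density:", column[0], "g/cm^3")
print("new c_s for T_mid = 30 K:", cs_from_tmid(30.0), "cm/s")
```

In the actual workflow, this sound speed field is what is written back as the new FARGO3D energy file (the gasenergy.dat mentioned above) before the next block of 100 orbits is evolved.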
In summary, there is density and velocity evolution but no energy/\(c_{\rm s}\) evolution over time in HD simulations, while the energy/\(c_{\rm s}\) field is evolved by executing RT simulations. Meanwhile, the evolving \(c_{\rm s}\) field contains the information from the gap-opening feedback. As a comparison, in this paper, the conventional non-iteration method is running FARGO3D then RADMC-3D simulation once. To be more specific, non-iteration uses FARGO3D with the physical assumptions (initial isothermal \(c_{\rm s}\)) in Box(1) to obtain \(\Sigma_{\rm g}\). Then input this \(\Sigma_{\rm g}\) into RADMC-3D to get the temperature \(T_{\rm mid}\). The whole process is finished after doing this once. ## 3 Results In this section, we describe the results of our simulations, and compare the results between the iteration and non-iteration methods. ### Gas surface density Based on Step 3 in the iteration workflow described in section 2.3, we can obtain the surface density in disks. From left to right columns, Figure 2 shows the 2D gas surface density maps of gaps opened by planets at 2000 orbits in masses of \(3M_{\rm J}\), \(1M_{\rm J}\), \(100M_{\oplus}\), and \(10M_{\oplus}\) at orbital radii of 4 au. The iteration and non-iteration results are presented in upper and lower panels, respectively. The gaps from the iteration method are generally deeper and wider than their counterparts simulated by the non-iteration method. As gap structures are shown in most simulations, we quantify the gap width and depth from the data of surface density and compare iteration with the non-iteration models. In this work, we define the gap width \(\Delta_{\rm gap}\) with the method in Kanagawa et al. (2016) which is the radial region where \(\Sigma_{\rm gap}/\Sigma_{0}\leq 0.5\). Meanwhile, we define the gap depth \(\Sigma_{\rm gap}/\Sigma_{0}\) as that in Fung et al. (2014) which is the radial averaging value within \(2\times max(R_{\rm H},H)\) of the planet, where \(R_{\rm H}\) and \(H\) are hill radius and scale height at \(r_{\rm p}\). Both the gap width and gap depth are obtained by azimuthal averaging and the last 500-orbit averaging. Figure 3 displays the comparison of the normalized gap width \(\Delta_{\rm gap}/r_{\rm p}\) (green) and gap depth \(\Sigma_{\rm gap}/\Sigma_{0}\) (blue) as a function of \(M_{\rm p}\) of iteration (dashed lines) and non-iteration (solid lines) methods. From top to bottom panels, \(r_{\rm p}\) = 4, 10, and 30 au, respectively. Uncertainty of gap depths and widths are also shown, which come from time averaging of the last 500 orbits. The choice of final 500 orbits is because our simulations appear to reach quasi-steady states at around 1500 orbits though gap depth and gap eccentricity are still slightly fluctuating. Under the definition of gap width and gap depth here, \(M_{\rm p}\) = \(10M_{\oplus}\) cannot open gaps except for \(r_{\rm p}\) = 4 au of iteration. With regard to the \(\Sigma_{\rm gap}/\Sigma_{0}\) of non-iteration method, we find them consistent with the empirical formulas in Fung et al. (2014) except the eccentric case caused by \(M_{\rm p}\) = \(3M_{\rm J}\). In general, the iteration method infers a slightly wider gap width \(\Delta_{\rm gap}\) than the non-iteration method, whereas iteration predicts an order of magnitude deeper gap depth \(\Sigma_{\rm gap}/\Sigma_{0}\) than non-iteration. The reason for the deeper gap in iteration is the aspect ratio \(H/r\) at the gap region is smaller than the non-iteration. Based on equation 3 in Fung et al. 
(2014), a lower \(H/r\) lead to a smaller \(\Sigma_{\rm gap}\). As the iteration method predicts a deeper gap than the non-iteration method, which means a lower mass planet can possibly open a deep gap. For instance, in Figure 3(a), when \(r_{\rm p}\) = 4au, the iteration predicts that a Saturn mass planet can open a gap as deep as a Jupiter mass planet in the non-iteration method. This can help to explain why massive planets predicted by usual non-iteration simulations are supposed to be observable but have not actually been widely detected in real observations. Among the gap depth of the iteration method, as \(M_{\rm p}\) increases, \(\Sigma_{\rm gap}/\Sigma_{0}\) decreases, though this trend is invalidated to \(M_{\rm p}=3M_{\rm J}\) at \(r_{\rm p}=4\) or 10 au. In these two cases, planets open appreciable eccentric gaps and streamer structures appear, which increases gas density in gaps. Similar situations also happen in the non-iteration cases, though the streamers are less strong and the measured \(\Sigma_{\rm gap}/\Sigma_{0}\) are close for \(M_{\rm p}=1M_{\rm J}\) and \(3M_{\rm J}\). In terms of gap width \(\Delta_{\rm gap}\), as \(M_{\rm p}\) increases, \(\Delta_{\rm gap}\) increases. For \(M_{\rm p}=3M_{\rm J}\), it can open a gap roughly as wide as the planet orbit \(r_{\rm p}\) in our disk models. For a fixed \(M_{\rm p}\), if \(r_{\rm p}\) increases, the normalized gap width \(\Delta_{\rm gap}/r_{\rm p}\) is smaller and the gap depth \(\Sigma_{\rm gap}/\Sigma_{0}\) is shallower. This is because the higher disk scale height \(H/r\) in the outer disk makes pressure torque stronger to prevent the gap opening process. Besides the disk density profiles of planets at 4 au, Fig. 11 and 12 in Appendix show the 2D gas surface density map of planets at 10 au and 30 au. As planets move further away from the central stars, they open shallower gaps than their counterparts at 4 au. ### Midplane temperature After implementing dust radiative transfer (described in Step 2 in section 2.3) and assuming \(T_{\rm gas}=T_{\rm dust}\), we get the 3D \(T_{\rm gas}\) structure of disks. As we are concerned about icy-pebbles or planetesimals which mainly concentrate at the disk midplane, we focus on the midplane temperature \(T_{\rm mid}\) derived from both iteration and non-iteration methods. Figure 4 shows the comparisons between azimuthal averaged \(T_{\rm mid}\) (cyan lines) of planets at 4 au over 2000 orbits calculated by iteration method (upper panels) and non-iteration method (lower panels). The gray shading regions represent the \(T_{\rm mid}\) in different azimuthal angles. Three molecules and their pressure-dependent sublimation temperatures \(T_{\rm sub,H_{2}O}\), \(T_{\rm sub,CO_{2}}\), and \(T_{\rm sub,CO}\) (calculations follow the recipe in Hollenbach et al. (2009)) are marked as blue, green, and red dash-dotted lines, respectively. We use binding energy provided on KIDA1 database. The uncertainty of \(T_{\rm sub,CO}\) due to different binding energy choices is shown as a light red shading area. As for comparison, the constant sublimation temperatures \(T_{\rm sub,H_{2}O}=125\)K, \(T_{\rm sub,CO_{2}}=47\)K, \(T_{\rm sub,CO}=25\)K in Oberg et al. (2011) are marked as short horizontal arrows in corresponding colors. Overall, the iteration method predicts distinct \(T_{\rm mid}\) when compared with the non-iteration method at two regions. In particular, in the inner disk regions(\(r<10\)au) and the gap regions. 
Footnote 1: [https://kida.astrochem-tools.org](https://kida.astrochem-tools.org)

At the inner disk, the iterated \(T_{\rm mid}\) drops more rapidly than the non-iterated \(T_{\rm mid}\). For example, we can clearly see the differences between Panels d and h in these three figures. As \(M_{\rm p}=10M_{\oplus}\), such a low-mass planet has a negligible effect on the disk temperature, as it can hardly open a gap that would influence \(T_{\rm mid}\). Therefore, the difference between Panels d and h does not come from the planet opening gaps; instead, it comes from the method we adopt, iteration or non-iteration. The underlying physical explanation will be discussed in more detail in section 4.3. In short, the puff-up of the scale height at the inner dust rim causes a strong shadowing effect and lowers the temperature in these regions. At the gap regions, the iteration predicts more significant \(T_{\rm mid}\) contrasts between the inside of the gap and the outer gap edges than the non-iteration. The highest contrast of \(T_{\rm mid}\) can be up to 40 K (an increase from 30 K to 70 K) for \(3M_{\rm J}\) or \(1M_{\rm J}\) at 4 au with the iteration method (see Panels a and b). The underlying explanation is that the iteration tends to open deeper gaps than the non-iteration and allows more stellar photons to penetrate to the midplane and increase \(T_{\rm mid}\). However, the peak values of \(T_{\rm mid}\) from both methods are similar for the same \(M_{\rm p}\) and \(r_{\rm p}\) conditions.

Figure 2: 2D gas density map of planets at fixed radius of 4 au over 2000 orbits of iteration method (upper panels) and non-iteration method (lower panels). From left to right columns, gaps are opened by planets of \(3M_{\rm J}\), \(1M_{\rm J}\), \(100M_{\oplus}\), and \(10M_{\oplus}\), respectively.

Regarding the iteration results, as \(M_{\rm p}\) increases, the \(T_{\rm mid}\) at gaps increases more significantly. This is because more massive planets are able to open deeper and wider gaps, so more stellar photons can penetrate deeper in the gap region and heat up the midplane dust and gas. Such a trend is also seen in the non-iteration method. Furthermore, the midplane temperatures for \(r_{\rm p}=10\) and 30 au are shown in Figures D1 and D2, respectively. By combining the sublimation temperature and the disk midplane temperature, we can measure where the midplane icelines of different molecules are in section 3.5. The numbers of icelines primarily rely on the values of \(T_{\rm sub}\) and the disk \(T_{\rm mid}\). If we use the values of binding energy suggested in Oberg et al. (2011), the overall sublimation temperature profiles of all three volatiles will shift up or down. Figure 5 is a plot of \(T_{\rm mid}\) but with \(T_{\rm sub}\) calculated from the binding energy adopted by Oberg et al. (2011). Compared with Figure 4(a), the whole \(T_{\rm sub,CO}\) profile now shifts significantly lower, the CO iceline moves outward dramatically to around 90 au, and only one CO iceline exists. Therefore, in this case, particles or pebbles with CO ices only exist in the very outer disk.

### Eccentricity

When comparing the density results within either the iteration or non-iteration method, planets with masses \(M_{\rm p}\leq 1M_{\rm J}\) open quite circular gaps, and more massive planets open deeper and wider gaps. In terms of the most massive case of \(3M_{\rm J}\) in our modeling, the planets open eccentric gaps.
Our results agree with the results from Kley & Dirksen (2006), who found that planets with mass \(M_{\rm p}>3M_{\rm J}\) open eccentric gaps in disks with a viscosity of \(\nu=10^{-5}\) or \(\alpha\approx 0.004\). In this section, we quantify the eccentricity \(e\) of the gaps opened by \(M_{\rm p}=3M_{\rm J}\) or \(1M_{\rm J}\) with two kinds of methods. Because the inner and outer edges of a gap have different eccentricities, we measure them separately. The first method is obtaining \(e\) by fitting ellipses to the shape of the inner/outer edges of gaps. The second method is using equation 28 in Ju et al. (2016) \[e(r)=\frac{\left|\int d\phi\Sigma(r,\phi)v_{r}\exp(i\phi)\right|}{\int d\phi \Sigma(r,\phi)v_{\phi}} \tag{2}\] to calculate \(e\) at the location of the inner/outer edges of gaps. Figure 6 displays an example of using equation 2 to calculate \(e\) as a function of radius for the simulation of \(3M_{\rm J}\) at 4 au over 2000 orbits using the iteration method. In this case, \(e\sim 0.06\) at the gap outer edge, which is not very different from the value in Kley & Dirksen (2006), though the disk parameters (e.g. viscosity \(\alpha\), aspect ratio \(H/r\)) are not exactly the same. Table 3 summarizes gap eccentricities \(e\) from the different simulations measured by the different methods, fitting an ellipse in "graph" or calculating with "formula". Names of the different cases are written in abbreviations. For example, 3mj4au_it_graph means the case of \(3M_{\rm J}\) at 4 au of the iteration measured by the graph method, and so on. In both methods, we average the values of the last 500 orbits (shown as "mean" in Table 3) and calculate their standard deviations (shown as "std" in Table 3). In general, both the iteration and non-iteration methods give similar \(e\). Also, the values of \(e\) from the graph and formula measuring methods are close. \(M_{\rm p}=3M_{\rm J}\) induces a relatively high \(e\sim 0.07\) when \(r_{\rm p}=4\) or 10 au, which can also be seen from the eccentric gaps in Figure 2 and Fig. 11. \(M_{\rm p}=1M_{\rm J}\) only opens gaps of almost circular shape. The high eccentricity of a gap can have a non-negligible effect on the temperature. For the most eccentric case, for example, \(e=0.08\) measured by graph fitting of the 3mj4au case of the iteration method, we obtain \(r_{min}=6\) au and \(r_{max}=7.6\) au from fitting the ellipse of the outer gap edge. By plugging them into the corresponding temperature profile, we find that \(T_{\rm mid}\) varies from about 40 K to 27 K. In other words, CO ice might exist on the semi-major axis side but sublimate on the semi-minor axis side of the outer edge of gaps.

### Dust trap

Dust grains in a disk experience radial drift unless they get trapped in gas pressure bumps (e.g. Pinilla et al. 2012a,b; Dullemond et al. 2018). We can investigate pressure gradient profiles around the gaps to see how our model affects dust-trapping conditions. Pressure is given by \(P(r)=\Sigma(r)c_{\rm s}^{2}(r)\). Here both the gas surface density and the sound speed \(c_{\rm s}\) are azimuthally averaged after 2000 orbits.

Figure 3: Normalized gap widths \(\Delta_{\rm gap}/r_{\rm p}\) (green) and depths \(\Sigma_{\rm gap}/\Sigma_{\rm 0}\) (blue) and their uncertainties (error bars) as a function of \(M_{\rm p}\) at \(r_{\rm p}=4\) (panel (a)), 10 (panel (b)), and 30 au (panel (c)), respectively. Dashed and solid lines represent the iteration and non-iteration results.
Different pressure gradients from the iteration and non-iteration can lead to different dust-trapping efficiencies. Thus, dust of different sizes could be distributed differently. The location of the pressure maximum, also named the dust-trapping location \(r_{\rm dt}\) here, is where the pressure gradient is zero (\(\mathrm{d}\log P/\mathrm{d}\log r=0\)). We obtain \(r_{\rm dt}\) for the different cases of \(M_{\rm p}\) and \(r_{\rm p}\) of the iteration and non-iteration methods. We find that there are no dust traps in the cases of \(M_{\rm p}=10~{}M_{\oplus}\) when planets are at \(r_{\rm p}=10\) au with the non-iteration method, and at \(r_{\rm p}=30\) au with both methods. Figure 7 shows the normalized dust-trapping location \((r_{\rm dt}-r_{\rm p})/R_{\rm H}\) as a function of planet mass \(M_{\rm p}\). If, for a specific case, there is no radius where \(\mathrm{d}\log P/\mathrm{d}\log r=0\), we set \((r_{\rm dt}-r_{\rm p})/R_{\rm H}=0\). In general, if a planet can form a pressure maximum to trap dust around the outer gap edge, \(r_{\rm dt}-r_{\rm p}=7\sim 10R_{\rm H}\) regardless of the different \(M_{\rm p}\) and \(r_{\rm p}\). The dust-trapping locations \(r_{\rm dt}\) from both methods do not show a big difference, especially for the \(r_{\rm p}=30\) au cases. For smaller \(r_{\rm p}\), the iteration tends to trap grains at slightly more distant locations than the non-iteration for a given \(M_{\rm p}\) and \(r_{\rm p}\). Furthermore, \(r_{\rm dt}-r_{\rm p}\) is roughly equal to the gap width in each case. In other words, the outer gap edges are approximately the middle points between the planets and the dust-trapping locations.

Figure 4: Midplane temperature \(T_{\rm mid}\) as a function of disk radius from RADMC-3D simulations of planets at 4 au over 2000 orbits of iteration method (upper panels) and non-iteration method (lower panels). The cyan solid lines represent the azimuthally averaged \(T_{\rm mid}\), while the shading areas represent the deviation of the profiles along different azimuthal angles. From left to right, there are results of \(3M_{\rm J}\), \(1M_{\rm J}\), \(100M_{\oplus}\), and \(10M_{\oplus}\), respectively. The blue, green, and red dash-dotted lines near the horizontal direction represent the pressure-dependent sublimation temperatures \(T_{\rm sub,H_{2}O}\), \(T_{\rm sub,CO_{2}}\), and \(T_{\rm sub,CO}\), respectively. \(T_{\rm sub,CO}\) is shaded with a light red region to highlight the wide range of possible values calculated from different binding energies given by KIDA. As a comparison, constant \(T_{\rm sub}\) in Oberg et al. (2011) are marked with short arrows in these three colors on the right edge of each panel. The vertical cyan dashed lines and grey dotted lines mark the location of the planets and pressure maximum/dust trapping.

Figure 5: Similar to Figure 4(a) but \(T_{\rm sub}\) is calculated by using the molecule binding energy in Oberg et al. (2011).

Figure 6: Disk eccentricity calculated from Equation 2 for an example of \(3M_{\rm J}\) at 4 au of the iteration method. Planet location is marked with a vertical blue dashed line. The gap inner and outer edges are marked by red dashed lines.

As small dust particles can couple well with the gas and may flow through the dust trap, there should be a minimum grain size so that grains larger than this threshold can be trapped by the pressure bump. Thus, we can further infer what fraction of the dust is trapped by a pressure bump. The minimum particle size that can be trapped is described by (Pinilla et al. 2012b):
\[a_{\rm critical}\,=\,\frac{6\alpha\Sigma_{\rm g}}{\rho_{\rm s}\pi|(\mathrm{d}\log P/ \mathrm{d}\log r)|}\left|\left(\frac{3}{2}+\frac{\mathrm{d}\log\Sigma_{\rm g}}{ \mathrm{d}\log r}\right)\right| \tag{3}\] In our modeling, \(\alpha=10^{-3}\), \(\rho_{\rm s}=3.710g/cm^{3}\). We find that the iteration has similar \(a_{\rm critical}\) of about \(0.3\)cm as the non-iteration results at the location of pressure maximum \(r_{\rm dl}\). Furthermore, we assume grain size distribution follows Mathis et al. (1977), \(n(a)\propto a^{-P}\), where \(p=3.5\). The range of dust sizes is from \(0.1\mu m\) to \(a_{\rm frag}\), where \(a_{\rm frag}\) is the maximum particle size before they fragment due to turbulent relative velocities \(v_{f}\)(Birnstiel et al., 2012): \[a_{\rm frag}=\frac{2}{3\pi}\frac{\Sigma_{\rm g}}{\rho_{\rm s}\alpha}\frac{v_{f }^{2}}{c_{\rm s}^{2}} \tag{4}\] Follow Pinilla et al. (2012b), we set \(v_{f}=10\,\mathrm{m\,s^{-1}}\). We find that the iteration, \(a_{\rm frag}\) could be a few times higher than non-iteration. By using the number of particles at a size beam _nda_ times the mass of a particle(assume spherical particle) and integrating from \(a_{\rm critical}\) to \(a_{\rm frag}\), we can obtain the fraction of dust mass that gets trapped in the dust trapping regions discussed. The dust-trap fraction is: \(f_{trap}=(a_{frag}^{0.5}-a_{crit}^{0.5})/(a_{frag}^{0.5}-a_{\rm min}^{0.5})\). \(f_{\rm trap}\) represents the fraction of the dust mass that could be trapped in the pressure bump region compared to the total dust mass of the pressure bump region. If we take the \(3M_{\rm J}\) at \(4P_{\rm g}\) as an example, the region of the pressure bump (assume from outer gap edge \(r_{\rm gap,out}\) to \(2r_{\rm dl}-r_{\rm gap,out}\)), both iteration and non-iteration have similar averaged \(a_{\rm critical}=2\times 10^{-2}\) cm. If we assume \(a_{min}=10^{-5}\) cm, and plug in averaged \(a_{\rm frag}=10\) cm for iteration or averaged \(a_{\rm frag}=3\) cm for non-iteration. Therefore, we have \(f_{\rm trap}=0.96\) for iteration and \(f_{\rm trap}=0.92\) non-iteration, which means a slightly higher fraction of dust mass could be trapped in the pressure bump predicted by the iteration. Proper inclusion of dust evolution is needed to test this hypothesis in the future. ### Distribution of ice species Figure 4, D1, and D2 show \(T_{\rm gas}\) from the iteration method and how it deviates from the non-iteration approach. For this reason, the ages of H\({}_{2}\)O, CO\({}_{2}\), and CO are distributed in different locations when comparing the two methods. More ice means more solid masses could contribute to the pebble or planetesimal formation, while the available species of ice can affect the final planetesimal composition. Therefore, in this section, we use temperature and pressure information to show where different icelines are, where ice distributes throughout the whole disk, and what kinds of ice species form at dust trapping locations. For each specific molecular iceline, the number of radial icelines depends on the number of intersections between the disk midplane temperature \(T_{\rm mid}\) and the radial pressure-dependent sublimation temperature \(T_{\rm sub}\). We notice the bonding energy of CO has a wide range of values in KIDA and we show the uncertainty of the \(T_{\rm sub,CO}\) with light red shading regions in Figure 4, D1, and D2. Therefore, we need to keep in mind that the numbers and locations of CO iceline could vary due to adopting different bonding energy of CO. 
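As a quick numerical check of the trapped-mass estimate above, the fraction \(f_{\rm trap}\) can be evaluated directly from the quoted sizes; the short snippet below reproduces the 0.96 (iteration) and 0.92 (non-iteration) values for the \(3M_{\rm J}\) example.

```python
# Numerical check of f_trap = (a_frag^0.5 - a_crit^0.5) / (a_frag^0.5 - a_min^0.5),
# using the sizes quoted in the text (all in cm).
def f_trap(a_frag, a_crit, a_min=1.0e-5):
    return (a_frag**0.5 - a_crit**0.5) / (a_frag**0.5 - a_min**0.5)

a_crit = 2.0e-2                                        # similar for both methods
print(round(f_trap(a_frag=10.0, a_crit=a_crit), 2))    # iteration     -> 0.96
print(round(f_trap(a_frag=3.0,  a_crit=a_crit), 2))    # non-iteration -> 0.92
```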
Figure 8 summarizes the ice distribution of H\({}_{2}\)O, CO\({}_{2}\), and CO throughout the whole disk. The left and right columns show iteration and non-iteration results. In each column, from top to bottom panels, \(M_{\rm p}=3M_{\rm J},1M_{\rm J},100M_{\rm\oplus},\) and \(10M_{\rm\oplus}\) are shown. In each panel, from bottom to top, cases of \(r_{\rm p}=4\), 10, and 30 au are shown. The ices of H\({}_{2}\)O, CO\({}_{2}\), and CO are displayed as horizontal blue, green, and red bars, respectively. Planet and dust trap locations are marked as cyan and grey dashed lines. To simplify, here we only consider the azimuthal averaged \(T_{\rm mid}\) to obtain the radial midplane iceline locations for different volatiles. However, for some high eccentric cases caused by massive planets, the iceline locations can vary at different azimuthal angles. In addition, we define an iceline as the boundary where volatile freeze out and condense into solid, but do not count it when volatile sublimates. Without considering the time evolution and dust drift, our static ice distribution model gives the following results for the main species: H\({}_{2}\)O ice: all modeling results from the iteration or non-iteration methods for different \(M_{\rm p}\) and \(r_{\rm p}\) yield only one water iceline in the disk, and similar iceline locations at around 1.2 au in our disk model. Therefore, planetesimals formed outside 1.2 au can have H\({}_{2}\)O ice. CO\({}_{2}\) ice: Both iteration and non-iteration methods in the massive planet cases can have two obvious CO\({}_{2}\) icelines shown in Panels a, b, \begin{table} \begin{tabular}{l l l l l} \hline \hline & e\({}_{\rm in}\),\({}_{\rm mean}\) & e\({}_{\rm in}\),\({}_{\rm std}\) & e\({}_{\rm out}\),\({}_{\rm mean}\) & e\({}_{\rm out}\),\({}_{\rm std}\) \\ \hline 3mj4au\({}_{\rm i}\),\({}_{\rm g}\),graph & 0.01 & 0 & 0.08 & 0.03 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0.06 & 0 \\ 1mj4au\({}_{\rm i}\),\({}_{\rm g}\),graph & 0.01 & 0 & 0.01 & 0.01 \\ 1mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0.01 & 0 \\ 3mj4au\({}_{\rm noni}\),\({}_{\rm g}\),graph & 0 & 0 & 0.08 & 0.04 \\ 3mj4au\({}_{\rm noni}\),\({}_{\rm f}\),formula & 0 & 0 & 0.07 & 0 \\ 1mj4au\({}_{\rm noni}\),\({}_{\rm g}\),graph & 0.02 & 0 & 0.01 & 0 \\ 1mj4au\({}_{\rm noni}\),\({}_{\rm f}\),formula & 0 & 0 & 0.01 & 0 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm g}\),graph & 0.01 & 0 & 0.06 & 0.02 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm g}\),formula & 0 & 0 & 0.07 & 0 \\ 1mj4au\({}_{\rm i}\),\({}_{\rm f}\),graph & 0.01 & 0 & 0.07 & 0.01 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0.01 & 0 \\ 1mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0.01 & 0 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0.01 & 0 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0 & 0 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm f}\),formula & 0 & 0 & 0 & 0 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm g}\),formula & 0 & 0 & 0 & 0 \\ 3mj4au\({}_{\rm i}\),\({}_{\rm g}\),\({}_{\rm g}\),\({}_{\rm g}\) & 0 & 0 & 0 & 0 \\ e, and f (except \(r_{\rm p}=30\) au in Panels b and f) in Figure 8. Because the presence of a massive planet (\(M_{\rm p}\geq 1M_{\rm J}\)) opens deep enough gaps that increase the \(T_{\rm mid}\), which causes CO\({}_{2}\) ice to sublimate at the gap regions and freeze out again in the outer disks. Locations of outer CO\({}_{2}\) icelines are close to gap outer edges. 
In the inner disk, the CO\({}_{2}\) iceline is predicted to lie at around 1.5 au in the iteration, whereas it lies at about 3 au in the non-iteration. CO ice: one of the most distinct differences between the two kinds of models is that the iteration predicts more complicated CO iceline features than the non-iteration. Because of the \(T_{\rm mid}\) increase at the gap regions, all iteration models suggest that the CO ice would sublimate there, except in the \(10M_{\oplus}\) cases. For \(r_{\rm p}=4\) au, neglecting the short discontinuities in the red bars (due to noise in the radiative transfer temperature) in the iteration panels, we find three CO icelines in the iteration but only one iceline in the non-iteration. In this case, the CO ice can exist in three discrete radial regions in the iteration results. The first region extends from 1 au to close to the inner gap edge, where the inner-disk \(T_{\rm mid}\) drops. The second region, from near the outer gap edge to around 10 au, is ascribed to the combination of gap heating and the shadowing effect, which causes \(T_{\rm mid}\) to drop again beyond the gap. The third region, starting from about 25 au, is due to the outer-disk temperature decrease and is broadly similar to the non-iteration CO ice distribution outside 20 au. ## 4 Discussion Our coupled treatment of hydrodynamics and radiative transfer allows us to shed new light on the feedback of gap-opening planets on the temperature and pressure structure of the protoplanetary disk, which in turn may influence the composition of planetesimals and planets. We discuss this below, followed by a discussion of icelines and disk substructure, as well as caveats of this work and potential improvements for future models. Figure 8: Ice distribution in disks with different \(M_{\rm p}\) and \(r_{\rm p}\). The iteration and non-iteration results are shown in the left and right panels, respectively. High to low \(M_{\rm p}\) are listed from the top to the bottom panels. In each panel, for instance the \(3M_{\rm J}\) iteration case, there are three groups of data representing the cases of planets at 4 (bottom), 10 (middle), and 30 au (top), respectively. The locations of H\({}_{2}\)O, CO\({}_{2}\), and CO ice are shown as blue, green, and red bars. Vertical cyan dashed lines mark \(r_{\rm p}\) and grey dashed lines display the corresponding dust-trap location. Note that in some cases there is no grey dashed line because there is no dust trapping. ### The C/O ratio as a planet formation tracer The carbon-to-oxygen ratio is a potential signature of the history of planet formation (Oberg et al., 2011). Physical and chemical models of protoplanetary disks, with varying degrees of complexity, have been developed to understand the radial behaviour of the C/O ratio (e.g., Cleeves et al., 2018; Zhang et al., 2019; Miotello et al., 2019; Bosman et al., 2021). Recently, an azimuthal C/O ratio variation in a protoplanetary disk has also been reported and modelled (Keyte et al., 2023). As we have shown, planet-induced gaps introduce significant new complexity to this picture by creating feedback and altering the thermal structure. This in turn modifies various icelines and thus the C/O ratio imprinted on subsequently forming planetesimals and planets. To investigate how radial variations in the C/O ratio are affected by feedback from gap-opening planets, and by the presence of a puffed-up inner rim, we follow the prescription from Oberg et al. (2011). 
We assume the only C and O carriers are H\({}_{2}\)O, CO, CO\({}_{2}\), refractory carbon, and silicate minerals, using the same abundances as that study. The total abundance of each species summed over the gas and solid phase does not vary with radius. Figure 9 shows how the gas- and solid-phase C/O ratio varies as a function of location in a disk, and for different \(M_{\rm p}\) at \(r_{\rm p}=4\) au, for models using our iteration method. For comparison, the baseline model without a planet is also shown, analogous to the standard "Oberg model" for the C/O profile. We note that analogous changes can be observed for planets at larger orbits, but due to the relevance to most known planetary systems which are close-in, as well as the analogy with the solar system, we focus here on the 4 au case. Based on our iterative models, the introduction of a gap-opening planet significantly alters the radial profile of the C/O ratio in the gas and solid phase, compared to the baseline (no-planet) case. Firstly, the presence of a gap makes the disk temperature (and pressure) profile strongly non-monotonous, which can create multiple iceline locations for a single chemical species. A monotonous, smoothly decreasing temperature profile underlies the widely studied picture of well-defined, unique icelines. In that case, the more refractory species (silicates, organic carbon, water ice) each have their iceline closer to the star than the more volatile species (e.g., CO). Secondly, by comparing the results from disks hosting different mass planets, we can see the planet gap-opening effect on C/O is stronger as planet mass increases. The reduced optical depth within the gap leads to increased heating which causes \(T_{\rm mid}\) to rise above the CO\({}_{2}\) sublimation temperature. This returns proportionally more oxygen than carbon back to the gas phase, thereby decreasing the gas-phase C/O ratio locally. Our results show that the feedback from gap-opening planets can significantly affect the gas- and solid-phase C/O ratio at small spatial scales within a protoplanetary disk. Such variations have important implications for the composition of icy planetesimals, and the gas from which giant planets accrete their envelopes. Additionally, our findings demonstrate that radially distinct regions of the disk can be characterised by the same C/O ratio, which complicates the usage of C/O as a formation tracer. To construct more accurate models, it is essential that future observations focus in measuring the C/O ratio at planet-forming scales. We note, however, that some of the variations seen in the radial location of molecular icelines in our models are as little as \(\sim 1\) au, which can be difficult to resolve even with ALMA, though the larger shifts (\(\sim 10\) au or more) can be more easily measured. The largest-scale variations are evident for high-mass planets at large separations (\(M_{\rm P}=3M_{\rm J}\) at 30 au in our model, Figure 8a). The degree to which the gap-modified gas and solid composition will be reflected in the atmospheric composition of a forming planet will further depend on the degree of mixing between the core and atmosphere, and the amount of sublimation that takes place during accretion. The scenario is further complicated by considering the vertical layer in which planets accrete their envelopes. Meridional flows from the disk surface may favour the accretion of gas and small grains from the disk surface layers, for example (e.g. Teague et al., 2019). 
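A minimal sketch of this bookkeeping, in the spirit of the step-function model of Oberg et al. (2011), is given below. It is our own illustration: the elemental abundances and sublimation temperatures are representative placeholder values rather than necessarily those adopted in this work, and the toy temperature profile simply adds a Gaussian warm gap to a power law. It shows how a gap that locally re-opens the CO\({}_{2}\) reservoir lowers the gas-phase C/O ratio there.

```python
import numpy as np

# Placeholder abundances per H and sublimation temperatures (illustrative
# values in the style of Oberg et al. 2011, not necessarily those used here).
SPECIES = {
    #              C/H       O/H      T_sub [K]  (None = always solid)
    "CO":        (1.5e-4,   1.5e-4,   20.0),
    "CO2":       (0.3e-4,   0.6e-4,   47.0),
    "H2O":       (0.0,      0.9e-4,  135.0),
    "C-grains":  (0.6e-4,   0.0,     None),
    "silicates": (0.0,      1.4e-4,  None),
}

def c_to_o(T_mid):
    """Gas- and solid-phase C/O profiles for a midplane temperature array,
    treating each species as fully gaseous above its T_sub and fully
    frozen out below it (a step-function approximation)."""
    gas_C = np.zeros_like(T_mid); gas_O = np.zeros_like(T_mid)
    sol_C = np.zeros_like(T_mid); sol_O = np.zeros_like(T_mid)
    for C, O, T_sub in SPECIES.values():
        in_gas = np.zeros_like(T_mid, dtype=bool) if T_sub is None \
                 else (T_mid > T_sub)
        gas_C += C * in_gas; gas_O += O * in_gas
        sol_C += C * ~in_gas; sol_O += O * ~in_gas
    def ratio(C, O):
        return np.divide(C, O, out=np.full_like(C, np.nan), where=O > 0)
    return ratio(gas_C, gas_O), ratio(sol_C, sol_O)

# Toy T_mid: power law plus a warm Gaussian gap around 4 au (illustrative).
r = np.linspace(0.5, 60.0, 1200)
T_mid = 60.0 * r**-0.5 + 40.0 * np.exp(-0.5 * ((r - 4.0) / 1.0) ** 2)
co_gas, co_solid = c_to_o(T_mid)
print(round(float(co_gas[np.argmin(np.abs(r - 4.0))]), 2),   # ~0.86 in the heated gap
      round(float(co_gas[np.argmin(np.abs(r - 7.0))]), 2))   # 1.0 just outside it
```

In this toy model the gas-phase C/O drops from 1 to about 0.86 inside the heated gap because CO\({}_{2}\) returns proportionally more oxygen than carbon to the gas, mirroring the behaviour described above.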
### Ice lines and dust rings The altered thermal and iceline structure of a disk with a gap-opening planet has implications for the observational study of disk substructure, both for spectroscopy of the gas and for the dust rings that are widely observed in disks with ALMA. We illustrate this for the case outlined in Figure 4(a), a \(3M_{\rm J}\) planet at 4 au. Heating due to gap opening increases the local midplane temperature above the CO sublimation temperature, introducing a new CO condensation front at the outer edge of the gap, in a region of the disk where CO would otherwise be entirely frozen out. As also highlighted in Figure 4, a dust trap is located in the pressure maximum just outside the gap. Furthermore, results in the literature suggest that regions near icelines may be favorable for the pile-up of icy pebbles (e.g., Hyodo et al., 2019). As pebbles cross the iceline and sublimate, outward diffusion followed by recondensation may locally enhance the surface density outside of the iceline, triggering instabilities which can lead to rapid pebble and planetesimal growth (Drazkowska and Alibert, 2017). High dust-to-gas ratios and viscosity gradients produced by the density enhancement could further amplify the effect (e.g., Brauer et al., 2008; Ros and Johansen, 2013; Bitsch et al., 2014; Drazkowska and Dullemond, 2014; Flock et al., 2015). This rapid growth of pebbles around condensation fronts is tentatively supported by observations of disks such as HL Tau, where the location of millimeter dust rings has been linked to the icelines of water and other key volatiles (Zhang et al., 2015). Similarly, grain size distributions inferred from ALMA observations of HD 163296 are consistent with the enhanced production of large grains at the CO iceline (Guidi et al., 2016). However, no unambiguous correlation between dust rings and icelines has been found for a standard, monotonically decreasing radial temperature profile. Results using empirical temperature estimates seem to disfavor such a correlation (Long et al., 2018). Although icelines have been invoked to explain the rings and gaps observed in a handful of disks, such as HL Tau (Zhang et al., 2015), icelines are not a preferred explanation when looking at large surveys of protoplanetary disks (Huang et al., 2018; Long et al., 2018; Van Der Marel et al., 2019). This is because most of the locations of substructures do not coincide with the sublimation temperatures of the main disk volatiles when assuming that the disk temperature is set by stellar irradiation. Under this hypothesis, a correlation between the location of substructures and the stellar luminosity is expected. However, as we show in this work, this potential correlation may become much more complicated when a planet is embedded in the disk. The planet alters the temperature-pressure profile of the disk, moving the icelines to different radii and even creating multiple, radially widely separated icelines for a single species. Therefore, our current results suggest that the expected correlation is not necessarily with the stellar luminosity, as is usually assumed, but that embedded gap-opening planets need to be accounted for to fully assess the locations of icelines and their correlation with the locations of dust (pebble) rings. In addition, Pinilla et al. (2017) demonstrated that, due to the variations of dust sticking properties, ice-covered dust particles can create "traffic jams", which result in rings and gaps when observed at different wavelengths. 
The inclusion of dust evolution models in the framework of our models is needed to test whether multiple substructures are expected in disks as a result of a single embedded planet and the multiple iceline locations of different volatiles. ### Inner rim midplane temperature drops In this section, we discuss the temperature decrease in the inner few au of the disks presented in section 3.2. As Figure 4, D1, and D2 show, even for the lowest-mass case, \(M_{\rm p}=10M_{\oplus}\), which represents a minor or negligible planet effect on the disk, the iteration method predicts strong \(T_{\rm mid}\) drops in this inner disk region. In contrast, such an effect is not seen in the non-iteration method. The physical explanation is that the puff-up of the scale height can cause a strong shadowing effect that decreases the temperature within 10 au in the RADMC-3D simulation. The reason why there is a puffed-up scale height at the inner rim at 1 au in our transition disk models is that stellar photons hit a dust wall there, increasing the dust temperature. As we assume that the dust and gas temperatures are well coupled, the gas temperature is also high, causing the gas scale height to puff up strongly. This phenomenon is also suggested in Figure 3 of Dullemond et al. (2001), as well as in Jang-Condell and Turner (2012, 2013); Siebenmorgen and Heymann (2012); Zhang et al. (2021). In our RADMC-3D setups, we input the time-evolving scale height for the extension of the surface density to the volume density. As the scale height indicated by the \(T_{\rm mid}\) from the last RADMC-3D run is puffed up at the inner rim, our iteration models can naturally capture this effect. In the non-iteration method, however, the input scale height for RADMC-3D is just the smooth flaring scale height used in the FARGO3D setup. Therefore, the iteration has the advantage of making use of the physical temperature obtained by radiative transfer for a specific disk model, rather than using the initially assumed temperature as the non-iteration does. In addition, we measure that the aspect ratio \(H/r\) of the puffed-up inner rim at 1 au is about 0.035 and that it then decreases to a minimum of about 0.015 at about 1.5 au. For the region beyond 1.5 au, \(H/r\) increases as a power law with a flaring index of 0.25, which is similar to the power-law profile of the non-iteration \(H/r\). The change in the inner disk temperature structure between the commonly used isothermal (non-iteration) method and our iteration method also impacts the behaviour of elemental ratios like C/O. In our models using the iteration method, shadowing by the puffed-up inner rim causes \(T_{\rm mid}\) to drop off more quickly within the inner disk, moving the H\({}_{2}\)O and CO\({}_{2}\) icelines inwards. This translates into a steep rise in the gas-phase C/O ratio, as a large proportion of the total atomic oxygen is frozen out into solids. In this scenario, C/O reaches unity within \(\sim 1.5\) au, compared to \(\sim 3\) au in the non-iteration and classical models. ### Assumptions and limitations There are a number of simplifications in our hydrodynamical and radiative transfer simulations that can be improved in future work. First, we only consider 2D hydrodynamical simulations in the radial and azimuthal directions instead of full 3D hydrodynamical simulations, which speeds up the whole iteration process. However, 3D hydrodynamical simulations would allow one to drop the vertically isothermal assumption, which will be useful for addressing vertically stratified problems (e.g. the ice-surface distribution, or the emission layers of gas molecules). In this paper, we only focus our discussion on the midplane temperature and its effect on the midplane ice distribution.

Figure 9: C/O ratio as a function of location in a disk, for different masses of a gap-opening planet at 4 au. From the upper middle to the lower right panels, we show a planet mass of \(3M_{\rm J}\), \(1M_{\rm J}\), \(100M_{\oplus}\), and \(10M_{\oplus}\) at 4 au using the iteration method, and a disk without planets (upper left panel) for comparison. The blue and red solid lines show the gas- and solid-phase C/O ratios, whereas the solar C/O ratio is marked by a green dashed line. The gap regions are shaded. The planet location and dust-trap location are marked as vertical cyan and grey lines.

Second, we make some simplifications about the dust in our models. In the hydrodynamical simulations, we do not include dust species in order to speed up the simulation process. In addition, only one small grain size, \(0.1\,\mu m\), is included in the radiative transfer simulations. If we were to consider a grain size distribution or dust evolution processes, like grain growth or fragmentation, it is still unclear how these factors would change the dust distribution and hence the disk temperature. Because we do not have a dust density distribution in our models, we also neglect dust settling. As a consequence, it is possible that when dust settling is included, less dust remains on the disk surface, allowing stellar radiation to penetrate deeper into the disk and increase \(T_{\rm mid}\). Our models also neglect dynamical effects such as radial drift and mass accretion, which add considerable complexity. For example, studies have shown that radial drift can produce multiple icelines (Cleeves, 2016) or make icelines thermally unstable under typical disk conditions (Owen, 2020). Icy volatiles drift faster than those in the gas phase, which results in the iceline progressively moving inwards, condensing more volatiles. The iceline then recedes as volatiles sublimate, on timescales much shorter than the disk lifetime (1000-10,000 years). Similarly, the mass accretion rate plays an important role in iceline evolution, with icelines moving inwards when accretion rates are high and migrating back out in the later stages of disk evolution when the accretion rate decreases (Oka et al., 2011). The combined effects of radial drift and mass accretion can cause molecular icelines to move inwards by as much as 60% (Piso et al., 2015). Third, we do not consider viscous heating, which can be dominant in the midplane of the inner disk (e.g. Broome et al., 2022). Viscous heating may therefore have a significant effect on increasing the very inner disk \(T_{\rm mid}\), and this may be particularly important for our 4 au cases. Also, our models do not capture shock heating from the planet, which can be significant for massive-planet cases. Finally, we choose 100 orbits as our iteration step to implement the feedback from RADMC-3D into FARGO3D. However, we note that the thermal relaxation time can vary from about 100 to 0.1 dynamical timescales from 1 au to a few tens of au (Malygin et al., 2017; Pfeil and Klahr, 2019), and our model cannot capture this. The iteration step we chose is a balance between the total simulation time and properly capturing the thermal feedback of gap opening. 
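For orientation, the coupling cadence described in this section can be summarized by the schematic loop below. This is a structural sketch only: the helper functions are no-op placeholders standing in for the actual FARGO3D and RADMC-3D runs, which are external codes driven through their own input and output files, and the step count is an assumed value.

```python
# Schematic of the FARGO3D <-> RADMC-3D coupling cadence described above.
# All helper functions are trivial stand-ins; in the real pipeline each
# call drives an external code and exchanges data through files.

N_COUPLING_STEPS = 10        # number of feedback steps (assumed value)
ORBITS_PER_STEP = 100        # planetary orbits of hydro per coupling step

def run_fargo3d(surface_density, temperature, n_orbits):
    """Stand-in: evolve the 2D (r, phi) gas surface density for n_orbits
    with a locally isothermal sound speed set by `temperature`."""
    return surface_density

def extend_to_3d(surface_density, temperature):
    """Stand-in: expand Sigma(r, phi) to rho(r, phi, z) using the scale
    height implied by the current midplane temperature, so that a
    puffed-up inner rim is carried into the radiative transfer step."""
    return surface_density

def run_radmc3d(volume_density):
    """Stand-in: Monte Carlo radiative transfer; returns the new dust
    (= gas) temperature, including gap heating and shadowing."""
    return {"T_mid": None}

surface_density, temperature = "Sigma_0", {"T_mid": None}   # initial state
for step in range(N_COUPLING_STEPS):
    surface_density = run_fargo3d(surface_density, temperature, ORBITS_PER_STEP)
    volume_density = extend_to_3d(surface_density, temperature)
    temperature = run_radmc3d(volume_density)    # fed back into the hydro
```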
One possible way to improve the approach, if we are only concerned with the radial temperature structure and ignore the azimuthal variations, is to use fewer photon packages and fewer azimuthal grid cells in the RADMC-3D simulations, which would speed up each iteration step and allow more iterations. In future work, we will focus on improving some of these limitations, in particular the effect of including dust in the models. ## 5 Conclusions In this paper, we present a new method to study the effect of gap opening on the protoplanetary disk temperature structure by iterating hydrodynamical and radiative transfer simulations. We quantify the planet-opened gap profiles, including gap width, depth, and eccentricity, and explore the dust-trapping conditions at the outer gap edges. By obtaining the temperature profiles in the disks, we study the volatile iceline locations and ultimately provide new C/O ratios for disks with embedded planets. During the modeling, we compare our iteration models with the conventional non-iteration models and conduct parameter studies for different planet masses \(M_{\rm p}\) and planet locations \(r_{\rm p}\). Our main conclusions are as follows: (i) Gap profiles: the iteration method predicts deeper and more eccentric gaps than the non-iteration. The most significant difference in gap depth between these two methods is seen for \(1M_{\rm J}\) at 4 au or 10 au, where the iteration gap depth is about an order of magnitude deeper than the non-iteration. (ii) Dust trap: both iteration and non-iteration indicate similar locations of the pressure maximum for dust trapping \(r_{\rm d}\), which is about 7-10 \(R_{\rm H}\) further away from \(r_{\rm p}\). However, the iteration predicts a larger fragmentation grain size across the pressure bump and, as a consequence, a slightly higher fraction of dust could be trapped in the pressure bump. (iii) Midplane temperature: our iteration models can capture the gap-opening process by a planet and its effect on the time evolution of the disk temperature structure, whereas the conventional non-iteration models cannot. By implementing the iteration method, we show that the midplane temperature drops strongly in the inner few au of the disks because of the shadowing effect caused by the puffed-up disk inner rim. Meanwhile, the midplane temperature \(T_{\rm mid}\) increases significantly in the gap regions because more photons can penetrate to the midplane. The maximum \(T_{\rm mid}\) contrast between the gap edges and the gap center is about 40 K when a \(3M_{\rm J}\) or \(1M_{\rm J}\) planet is present at 4 au. (iv) Icelines: because of the \(T_{\rm mid}\) drops, CO\({}_{2}\) and CO ice may exist in the inner disk region (at a few au) in the iteration model. At the gap regions, both iteration and non-iteration predict that CO\({}_{2}\) or CO ice may sublimate. As a result, the non-iteration suggests that more than one CO\({}_{2}\) or CO iceline can appear in a disk, whereas the iteration models propose that even more CO icelines might exist in giant planet-forming disks. This result suggests that the combination of an embedded planet and different locations of the iceline of the same volatile can still explain the observed substructures in protoplanetary disks. (v) C/O ratio (Fig. 9): the radial C/O ratio profile across the disk is significantly more complex due to the presence of gaps opened by giant planets, in comparison to disk models without embedded planets. 
As a consequence, the feedback of the planet-opened gap on the disk thermal structure can significantly influence the composition of the material available for the giant planet's atmosphere or for the next generation of planet formation. ## Acknowledgments We are thankful to the referee for the constructive report. We thank Jaehan Bae, Jeffrey Fung, Min-Kai Lin, and Alexandros Ziampras for help and useful discussions. K.C. acknowledges support by a UCL Dean's Prize and the China Scholarship Council. L.K. acknowledges funding via a Science and Technology Facilities Council (STFC) studentship. M.K. has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No. 101079231 (EXOHOST), and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee (grant number 10051045). ## Data Availability Data from our numerical models are available on reasonable request to the corresponding author. The FARGO3D code is publicly available from [https://fargo3d.bitbucket.io/download.html](https://fargo3d.bitbucket.io/download.html). The RADMC-3D code is available from [https://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/](https://www.ita.uni-heidelberg.de/~dullemond/software/radmc-3d/).
2305.06995
Bounds for theta sums in higher rank II
In the first paper of this series we established new upper bounds for multi-variable exponential sums associated with a quadratic form. The present study shows that if one adds a linear term in the exponent, the estimates can be further improved for almost all parameter values. Our results extend the bound for one-variable theta sums obtained by Fedotov and Klopp in 2012.
Jens Marklof, Matthew Welsh
2023-05-11T17:32:26Z
http://arxiv.org/abs/2305.06995v1
# Bounds for theta sums in higher rank II+ ###### Abstract In the first paper of this series we established new upper bounds for multi-variable exponential sums associated with a quadratic form. The present study shows that if one adds a linear term in the exponent, the estimates can be further improved for almost all parameter values. Our results extend the bound for one-variable theta sums obtained by Fedotov and Klopp in 2012. ## 1 Introduction For \(M>0\), a real \(n\times n\) symmetric matrix \(X\), and \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{R}^{n}\), we define a _theta sum_ as the exponential sum \[\theta_{f}(M,X,\boldsymbol{x},\boldsymbol{y})=\sum_{\boldsymbol{m}\in \mathbb{Z}^{n}}f\left(M^{-1}(\boldsymbol{m}+\boldsymbol{x})\right)\mathrm{e} \left(\tfrac{1}{2}\boldsymbol{m}X^{\,t}\boldsymbol{m}+\boldsymbol{m}^{\,t} \boldsymbol{y}\right), \tag{1.1}\] where \(f:\mathbb{R}^{n}\to\mathbb{C}\) is a rapidly decaying cut-off and \(\mathrm{e}(z)=\mathrm{e}^{2\pi\mathrm{i}z}\) for any complex \(z\). If \(f=\chi_{\mathcal{B}}\) is the characteristic function of a bounded set \(\mathcal{B}\subset\mathbb{R}^{n}\) we have the finite sum \[\theta_{f}(M,X,\boldsymbol{x},\boldsymbol{y})=\sum_{\boldsymbol{m}\in \mathbb{Z}^{n}\cap(\mathcal{M}\mathcal{B}-\boldsymbol{x})}\mathrm{e}(\tfrac{1} {2}\boldsymbol{m}X^{\,t}\boldsymbol{m}+\boldsymbol{m}^{\,t}\boldsymbol{y}). \tag{1.2}\] In this case we will also use the notation \(\theta_{f}=\theta_{\mathcal{B}}\). In this paper we will focus on the case when \(\mathcal{B}\) is the open rectangular box \((0,b_{1})\times\cdots\times(0,b_{n})\subset\mathbb{R}^{n}\). The theorems below remain valid if \(f=\chi_{\mathcal{B}}\) is replaced by any function \(f\) in the Schwartz class \(\mathcal{S}(\mathbb{R}^{n})\) (infinitely differentiable, with rapid decay of all derivatives). The results in the latter case follow from a simpler version of the argument for the sharp truncation, so we do not discuss them here. The principal result of part I [10] in this series is the following. **Theorem 1.1**.: _Fix a compact subset \(\mathcal{K}\subset\mathbb{R}^{n}_{>0}\), and let \(\psi:[0,\infty)\to[1,\infty)\) be an increasing function such that_ \[\int_{0}^{\infty}\psi(t)^{-2n-2}dt<\infty. \tag{1.3}\] _Then there exists a subset \(\mathcal{X}(\psi)\subset\mathbb{R}^{n\times n}_{\rm sym}\) of full Lebesgue measure such that_ \[\theta_{\mathcal{B}}(M,X,\mathbf{x},\mathbf{y})=O_{X}\big{(}M^{\frac{n}{2}}\psi(\log M) \big{)} \tag{1.4}\] _for all \(M\geq 1\), \(\mathbf{b}=(b_{1},\ldots,b_{n})\in\mathcal{K}\), \(X\in\mathcal{X}(\psi)\), \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\). The implied constants are independent of \(M\), \(\mathbf{b}\), \(\mathbf{x}\) and \(\mathbf{y}\)._ For example, for any \(\epsilon>0\), the function \(\psi(x)=(x+1)^{\frac{1}{2n+2}+\epsilon}\) satisfies the condition (1.3), which produces the bound \(M^{\frac{n}{2}}(\log M)^{\frac{1}{2n+2}+\epsilon}\) for almost every \(X\) and any \(\mathbf{x}\) and \(\mathbf{y}\). This improved the previously best bound due to Cosentino and Flaminio [3] by a factor of \((\log M)^{n}\). Moreover, in the case \(n=1\), theorem 1.1 recovers the optimal result obtained by Fiedler, Jurkat and Korner [5]. In what follows we establish a stronger bound than (1.4), for example \(M^{\frac{n}{2}}(\log M)^{\frac{1}{2n+4}+\epsilon}\), but now only valid for almost every \(\mathbf{y}\). In the case \(n=1\), theorem 1.2 recovers theorem 0.1 of Fedotov and Klopp [4]. 
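Before stating the theorem, it may help to fix ideas with a direct numerical evaluation of the finite sum (1.2). The sketch below is our own illustration (NumPy, with an arbitrarily chosen irrational symmetric \(X\) and random \(\boldsymbol{x},\boldsymbol{y}\), none of which come from the paper); it sums \(\mathrm{e}(\tfrac{1}{2}\boldsymbol{m}X\,^{t}\boldsymbol{m}+\boldsymbol{m}\,^{t}\boldsymbol{y})\) over the lattice points of \(\mathbb{Z}^{n}\) lying in \(M\mathcal{B}-\boldsymbol{x}\), so that \(|\theta_{\mathcal{B}}|\) can be compared against the \(M^{n/2}\) scale appearing in the bounds.

```python
import itertools
import numpy as np

def theta_box(M, X, x, y, b):
    """Direct evaluation of the finite theta sum (1.2): the sum runs over
    lattice points m with M^{-1}(m + x) inside the open box
    (0, b_1) x ... x (0, b_n)."""
    e = lambda z: np.exp(2j * np.pi * z)
    ranges = [range(int(np.floor(-xi)), int(np.ceil(M * bi - xi)) + 1)
              for xi, bi in zip(x, b)]
    total = 0.0 + 0.0j
    for m in itertools.product(*ranges):
        m = np.array(m, dtype=float)
        u = (m + x) / M
        if np.all(u > 0) and np.all(u < b):
            total += e(0.5 * m @ X @ m + m @ y)
    return total

# Illustrative parameters for n = 2; X is a "generic" irrational matrix.
rng = np.random.default_rng(0)
X = np.array([[np.sqrt(2), 1.0 / np.pi], [1.0 / np.pi, np.sqrt(3)]])
x, y = rng.random(2), rng.random(2)
b = np.array([1.0, 1.0])
for M in (10, 20, 40, 80):
    val = theta_box(M, X, x, y, b)
    print(M, round(abs(val), 3), round(abs(val) / M, 3))  # M^{n/2} = M here
```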
**Theorem 1.2**.: _Fix a compact subset \(\mathcal{K}\subset\mathbb{R}^{n}_{>0}\times\mathbb{R}^{n}\), and let \(\psi:[0,\infty)\to[1,\infty)\) be an increasing function such that_ \[\int_{0}^{\infty}\psi(t)^{-2n-4}dt<\infty. \tag{1.5}\] _Then there exists a subset \(\tilde{\mathcal{X}}(\psi)\subset\mathbb{R}^{n\times n}_{\rm sym}\times\mathbb{ R}^{n}\) of full Lebesgue measure such that_ \[\theta_{\mathcal{B}}(M,X,\mathbf{x},\mathbf{y})=O_{X,\mathbf{y}}\big{(}M^{\frac{n}{2}}\psi( \log M)\big{)} \tag{1.6}\] _for all \(M\geq 1\), \((\mathbf{b},\mathbf{x})\in\mathcal{K}\), and \((X,\mathbf{y})\in\tilde{\mathcal{X}}(\psi)\). The implied constants are independent of \(M\), \(\mathbf{b}\) and \(\mathbf{x}\)._ The paper is organized as follows. In section 2 we review some basic properties of theta functions and the Jacobi group. The Jacobi group is defined as the semi-direct product \(H\rtimes G\) of the Heisenberg group \(H\) and the symplectic group \(G={\rm Sp}(n,\mathbb{R})\), and, following a construction due to Lion and Vergne [8], the theta function associated to a Schwartz function \(f\in\mathcal{S}(\mathbb{R}^{n})\) is a function \(\Theta_{f}:H\rtimes G\to\mathbb{C}\) that, for appropriate \(g\in G\) and \(h\in H\), is a simple rescaling of the theta sums \(\theta_{f}\). The theta functions \(\Theta_{f}\) satisfy an automorphy equation, theorem 3.1, under a certain subgroup \(\tilde{\Gamma}\subset H\rtimes G\). This subgroup, defined in section 3, projects to the discrete subgroup \(\Gamma={\rm Sp}(n,\mathbb{Z})\subset G\). In order to exploit additional savings from the linear term parameterized by \(\mathbf{y}\), we found it necessary to have a better understanding of the shape of the cusp of \(\Gamma\backslash G\) than in the first paper in this series [10]. For this reason we define in section 3.1 a new fundamental domain for \(\Gamma\backslash G\) which has "box-shape" cusps, as explicated in section 3.2. Section 4 contains the proof of theorem 1.2, which is based on a Borel-Cantelli type argument together with a multi-dimensional dyadic decomposition of the characteristic function of the open unit cube \((0,1)^{n}\) that is naturally realized as an action of the diagonal subgroup of \(G\). The execution of the Borel-Cantelli argument rests on a kind of "uniform continuity" property of a certain height function on \(H\rtimes G\) that controls the theta function \(\Theta_{f}\), see corollary 4.1. The required property is proved in section 4.1, see lemma 4.4, whose proof is the motivation for the creation of the fundamental domain and the study of its cuspidal regions in sections 3.1 and 3.2. We remark that the interaction of the dyadic decomposition with the \(H\) coordinate in the Jacobi group leads to additional complications not seen in [10], see section 4.2. ## 2 Theta functions and the Jacobi group The theta function \(\Theta_{f}\) associated to a Schwartz function \(f\in\mathcal{S}(\mathbb{R}^{n})\) is a complex-valued function defined on the Jacobi group \(H\rtimes G\), the semi-direct product of the Heisenberg group \(H\) with the rank \(n\) symplectic group \(G=\operatorname{Sp}(n,\mathbb{R})\). 
Here \(H\) is the set \(\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}\) with multiplication given by \[(\boldsymbol{x}_{1},\boldsymbol{y}_{1},t_{1})(\boldsymbol{x}_{2}, \boldsymbol{y}_{2},t_{2})=(\boldsymbol{x}_{1}+\boldsymbol{x}_{2},\boldsymbol {y}_{1}+\boldsymbol{y}_{2},t_{1}+t_{2}+\tfrac{1}{2}(\boldsymbol{y}_{1}\,^{t} \boldsymbol{x}_{2}-\boldsymbol{x}_{1}\,^{t}\boldsymbol{y}_{2})), \tag{2.1}\] and \(G\) is the group of \(2n\times 2n\) real matrices \(g\) preserving the standard symplectic form: \[g\begin{pmatrix}0&-I\\ I&0\end{pmatrix}^{t}g=\begin{pmatrix}0&-I\\ I&0\end{pmatrix} \tag{2.2}\] with \(I\) the \(n\times n\) identity. Alternatively, writing \(g\) in \(n\times n\) blocks, \[G=\left\{\begin{pmatrix}A&B\\ C&D\end{pmatrix}:A\,^{t}B=B\,^{t}A,\ C\,^{t}D=D\,^{t}C,\ A\,^{t}D-B\,^{t}C=I \right\}. \tag{2.3}\] We note that \(G\) acts on \(H\) by automorphisms via \[h^{g}=(\boldsymbol{x}A+\boldsymbol{y}C,\boldsymbol{x}B+\boldsymbol{y}D,t),\ \text{where}\ h=(\boldsymbol{x},\boldsymbol{y},t),\ g=\begin{pmatrix}A&B\\ C&D\end{pmatrix}, \tag{2.4}\] so we may define the semidirect product \(H\rtimes G\), the Jacobi group, with multiplication \[(h_{1},g_{1})(h_{2},g_{2})=(h_{1}h_{2}^{g_{1}^{-1}},g_{1}g_{2}). \tag{2.5}\] The theta function is defined by \[\Theta_{f}(h,g)=\sum_{\boldsymbol{m}\in\mathbb{Z}^{n}}(W(h)R(g)f)(\boldsymbol {m}), \tag{2.6}\] where \(W\) is the Schrodinger representation of \(H\) and \(R\) is the Segal-Shale-Weil (projective) representation of \(G\). We refer the reader to [10] for details regarding these representations, including the slightly non-standard definition of \(W\) and the unitary cocycle \(\rho:G\times G\to\mathbb{C}\) satisfying \(R(g_{1}g_{2})=\rho(g_{1},g_{2})R(g_{1})R(g_{2})\). We recall here that for \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&{}^{t}Y^{-\frac{1}{2}}\end{pmatrix}\in G, \tag{2.7}\] we have \[\Theta_{f}((\mathbf{x},\mathbf{y},t),g)\\ =(\det Y)^{\frac{1}{4}}\mathrm{e}(-t+\tfrac{1}{2}\mathbf{x}\,^{t}\mathbf{y}) \sum_{\mathbf{m}\in\mathbb{Z}^{n}}f((\mathbf{m}+\mathbf{x})Y^{\frac{1}{2}})\mathrm{e}( \tfrac{1}{2}(\mathbf{m}+\mathbf{x})X^{\,t}(\mathbf{m}+\mathbf{x})+\mathbf{m}\,^{t}\mathbf{y}). \tag{2.8}\] For \(f(\mathbf{x})=\exp(-\pi\mathbf{x}\,^{t}\mathbf{x})\) and \(h=(0,0,0)\), we recover \((\det Y)^{\frac{1}{4}}\) times the classical Siegel theta series that is holomorphic in the complex symmetric matrix \(Z=X+\mathrm{i}Y\). Here we choose \(Y^{\frac{1}{2}}\) to be the upper-triangular matrix with positive diagonal entries such that \(Y^{\frac{1}{2}\,t}Y^{\frac{1}{2}}=Y\), and we emphasize that \(Y^{-\frac{1}{2}}\) is always interpreted as \((Y^{\frac{1}{2}})^{-1}\) and not \((Y^{-1})^{\frac{1}{2}}\). For general \(g\in G\) we have the Iwasawa decomposition, \[g=\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&tY^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\mathrm{Re}(Q)&-\mathrm{Im}(Q) \\ \mathrm{Im}(Q)&\mathrm{Re}(Q)\end{pmatrix}, \tag{2.9}\] where \(X,Y\) are symmetric and \(Q\) is unitary. Explicitly, we have \[Y =(C\,^{t}C+D\,^{t}D)^{-1}\] \[X =(A\,^{t}C+B\,^{t}D)(C\,^{t}C+D\,^{t}D)^{-1}\] \[Q ={}^{t}Y^{\frac{1}{2}}(D+\mathrm{i}C). \tag{2.10}\] We often further decompose \(Y=UV\,^{t}U\) with \(U\) upper-triangular unipotent and \(V\) positive diagonal, so \(Y^{\frac{1}{2}}=UV^{\frac{1}{2}}\). 
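As a quick numerical sanity check of these formulas, the sketch below (our own illustration; the helper names are not notation from the paper) assembles a rank-2 symplectic matrix from the coordinates in (2.7), verifies the defining relation (2.2), and recovers \(X\), \(Y\), and the unitary \(Q\) from the blocks via (2.10), using an upper-triangular square root \(Y^{\frac{1}{2}}\) with \(Y^{\frac{1}{2}}\,^{t}Y^{\frac{1}{2}}=Y\) as in the text.

```python
import numpy as np

def upper_sqrt(Y):
    """Upper-triangular R with positive diagonal and R @ R.T = Y,
    matching the convention for Y^(1/2) used in the text."""
    P = np.eye(len(Y))[::-1]              # exchange (anti-diagonal) matrix
    L = np.linalg.cholesky(P @ Y @ P)     # lower-triangular Cholesky factor
    return P @ L @ P

def iwasawa(g, n):
    """Recover (X, Y, Q) of the decomposition (2.9) from the n x n blocks
    of g = [[A, B], [C, D]] via the formulas (2.10)."""
    A, B = g[:n, :n], g[:n, n:]
    C, D = g[n:, :n], g[n:, n:]
    Y = np.linalg.inv(C @ C.T + D @ D.T)
    X = (A @ C.T + B @ D.T) @ Y
    Q = upper_sqrt(Y).T @ (D + 1j * C)    # unitary when g is symplectic
    return X, Y, Q

def is_symplectic(g, n):
    J = np.block([[np.zeros((n, n)), -np.eye(n)],
                  [np.eye(n), np.zeros((n, n))]])
    return np.allclose(g @ J @ g.T, J)

# A rank-2 example assembled exactly as in (2.7).
n = 2
X0 = np.array([[0.3, 0.1], [0.1, -0.7]])
Y0 = np.array([[2.0, 0.4], [0.4, 1.5]])
R0 = upper_sqrt(Y0)
g = np.block([[np.eye(n), X0], [np.zeros((n, n)), np.eye(n)]]) @ \
    np.block([[R0, np.zeros((n, n))], [np.zeros((n, n)), np.linalg.inv(R0).T]])
X, Y, Q = iwasawa(g, n)
print(is_symplectic(g, n),
      np.allclose(X, X0), np.allclose(Y, Y0),
      np.allclose(Q.conj().T @ Q, np.eye(n)))   # all True
```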
It is easy to express the Haar measure \(\mu\) on \(G\) in these coordinates, \[\mathrm{d}\mu(g)=\mathrm{d}Q\prod_{1\leq i\leq j\leq n}\mathrm{d}x_{ij}\prod_{ 1\leq i<j\leq n}\mathrm{d}u_{ij}\prod_{1\leq j\leq n}v_{j}^{-n+j-2}\mathrm{d}v _{jj}, \tag{2.11}\] where \(\mathrm{d}Q\) is Haar measure on \(\mathrm{U}(n)\) and \(\mathrm{d}x_{ij}\), \(\mathrm{d}u_{ij}\), \(\mathrm{d}v_{jj}\) are respectively the Lebesgue measures on the entries of \(X\), \(U\), \(V\). We can also express the Haar measure on the open, dense set of \(g\) which can be written as \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}A&0\\ 0&{}^{t}A^{-1}\end{pmatrix}\begin{pmatrix}I&0\\ T&I\end{pmatrix} \tag{2.12}\] with \(A\in\mathrm{GL}(n,\mathbb{R})\) and \(X\) and \(T\) symmetric. In these coordinates we have \[\mathrm{d}\mu(g)=c(\det A)^{-2n-1}\prod_{1\leq i\leq j\leq n}\mathrm{d}x_{ij} \prod_{1\leq i,j\leq n}\mathrm{d}a_{ij}\prod_{1\leq i\leq j\leq n}\mathrm{d}t _{ij} \tag{2.13}\] where \(c\) is a positive constant and \(\mathrm{d}x_{ij}\), \(\mathrm{d}a_{ij}\), \(\mathrm{d}t_{ij}\) are respectively the Lebesgue measure on the entries of \(X\), \(A\), \(T\), see [10]. We note that the Haar measure \(\tilde{\mu}\) on the Jacobi group is simply \[\mathrm{d}\tilde{\mu}(h,g)=\mathrm{d}\mathbf{x}\ \mathrm{d}\mathbf{y}\,\mathrm{d}t \,\mathrm{d}\mu(g), \tag{2.14}\] with \(h=(\mathbf{x},\mathbf{y},t)\) and \(\mathrm{d}\mathbf{x}\), \(\mathrm{d}\mathbf{y}\), and \(\mathrm{d}t\) the Lebesgue measures. We often make use of the following refinements of the Iwasawa decomposition. For \(1\leq l\leq n\) and the same \(Q\) as in (2.9), we write \(g\in G\) as \[\begin{pmatrix}I&R_{l}&T_{l}-S_{l}\,^{t}R_{l}&S_{l}\\ 0&I&&{}^{t}S_{l}&0\\ 0&0&&I&&0\\ 0&0&&-\,^{t}R_{l}&I\end{pmatrix}\begin{pmatrix}U_{l}V_{l}^{\frac{1}{2}}&0&0&0 \\ 0&Y_{l}^{\frac{1}{2}}&0&X_{l}\,^{t}Y_{2}^{-\frac{1}{2}}\\ 0&0&{}^{t}U_{l}^{-1}V_{l}^{-\frac{1}{2}}&0\\ 0&0&0&{}^{t}Y_{l}^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\mathrm{Re}(Q)&- \mathrm{Im}(Q)\\ \mathrm{Im}(Q)&\mathrm{Re}(Q)\end{pmatrix}, \tag{2.15}\] where \(R_{l}\) and \(S_{l}\) are \(l\times(n-l)\) matrices, \(T_{l}\) is \(l\times l\) symmetric, \(U_{l}\) is \(l\times l\) upper-triangular unipotent, \(V_{l}\) is \(l\times l\) positive diagonal, \(X_{l}\) is \((n-l)\times(n-l)\) symmetric, and \(Y_{l}\) is \((n-l)\times(n-l)\) positive definite symmetric. We note that for \(l=n\) we recover \(X=T_{l}\) and the factorization \(Y=U_{l}V_{l}\,^{t}U_{l}\). In what follows we use \(g_{l}=g_{l}(g)\in\mathrm{Sp}(n-l,\mathbb{R})\) to denote the matrix \[g_{l}=\begin{pmatrix}I&X_{l}\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{l}^{\frac{1}{2}}&0\\ 0&{}^{t}Y_{l}^{-\frac{1}{2}}\end{pmatrix}. \tag{2.16}\] These decompositions are closely related to the Langlands decompositions of the maximal parabolic subgroups \(P_{l}\) of \(G\). For \(1\leq l<n\), \(P_{l}\) is the subgroup of \(g\in G\) which can be written in the form \[\begin{pmatrix}I&R_{l}&T_{l}-S_{l}\,^{t}R_{l}&S_{l}\\ 0&I&&{}^{t}S_{l}&0\\ 0&0&&I&0\\ 0&0&&-\,^{t}R_{l}&I\end{pmatrix}\begin{pmatrix}a_{l}I&0&0&0\\ 0&I&0&0\\ 0&0&a_{l}^{-1}I&0\\ 0&0&0&I\end{pmatrix}\begin{pmatrix}U_{l}&0&0&0\\ 0&A_{l}&0&B_{l}\\ 0&0&{}^{t}U_{l}^{-1}&0\\ 0&C_{l}&0&D_{l}\end{pmatrix} \tag{2.17}\] where \(R_{l}\) and \(S_{l}\) are \(l\times(n-l)\) matrices, \(T_{l}\) is \(l\times l\) symmetric, \(a_{l}>0\), \(U_{l}\in\mathrm{GL}(l,\mathbb{R})\) with \(\det U_{l}=\pm 1\), and \(g_{l}=\begin{pmatrix}A_{l}&B_{l}\\ C_{l}&D_{l}\end{pmatrix}\in\mathrm{Sp}(n-l,\mathbb{R})\). 
The maximal parabolic \(P_{n}\) is the subgroup of \(g\in G\) that can be written as \[\begin{pmatrix}I&T_{n}\\ 0&I\end{pmatrix}\begin{pmatrix}a_{n}I&0\\ 0&a_{n}^{-1}I\end{pmatrix}\begin{pmatrix}U_{n}&0\\ 0&{}^{t}U_{n}{}^{-1}\end{pmatrix} \tag{2.18}\] where \(T_{n}\) is \(n\times n\) symmetric, \(a_{n}>0\), and \(U_{n}\in\mathrm{GL}(n,\mathbb{R})\) with \(\det U_{n}=\pm 1\). The factorizations (2.17), (2.18) are in fact the Langlands decompositions of \(P_{l}\), \(P_{n}\). The first paper in this series [10] contains more details on parabolic subgroups and their Langlands decompositions, and we refer the readers to [12], particularly sections 4.5.3 and 5.1, [7], particularly section 7.7, and the authors' lecture notes [9] for further details. ## 3 The subgroups \(\Gamma\) and \(\tilde{\Gamma}\) We denote by \(\Gamma\) the discrete subgroup \(\Gamma=\mathrm{Sp}(n,\mathbb{Z})\subset G\). Recalling the notation of [10], for \[\gamma=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in\Gamma, \tag{3.1}\] we set \(h_{\gamma}=(\boldsymbol{r},\boldsymbol{s},0)\in H\) where the entries or \(\boldsymbol{r}\) are \(0\) or \(\frac{1}{2}\) depending on whether the corresponding diagonal entry of \(C\,^{t}D\) is even or odd, and the entries of \(\boldsymbol{s}\) are \(0\) or \(\frac{1}{2}\) depending on whether the corresponding diagonal entry of \(A\,^{t}B\) is even or odd. As in [10], we now define the group \(\tilde{\Gamma}\subset H\rtimes G\) by \[\tilde{\Gamma}=\{((\boldsymbol{m},\boldsymbol{n},t)h_{\gamma},\gamma)\in H \rtimes G:\gamma\in\Gamma,\boldsymbol{m}\in\mathbb{Z}^{n},\boldsymbol{n}\in \mathbb{Z}^{n},t\in\mathbb{R}\}. \tag{3.2}\] The relevance of the subgroup \(\tilde{\Gamma}\) is made apparent by the following theorem, see theorem 4.1 in [10]. **Theorem 3.1**.: _For any \((uh_{\gamma},\gamma)\in\tilde{\Gamma}\) and \((h,g)\in H\rtimes G\), there is a complex number \(\varepsilon(\gamma)\) with \(|\varepsilon(\gamma)|=1\) such that_ \[\Theta_{f}((uh_{\gamma},\gamma)(h,g))=\varepsilon(\gamma)\rho(\gamma,g){\rm e }\left(-t+\tfrac{1}{2}\boldsymbol{m}\,^{t}\boldsymbol{n}\right)\Theta_{f}(h,g), \tag{3.3}\] _where \(u=(\boldsymbol{m},\boldsymbol{n},t)\)._ A proof of this theorem is found in [8] but with \(\Gamma\) replaced by the finite index subgroup for which \(h_{\gamma}=(0,0,0)\). The automorphy under the full \(\tilde{\Gamma}\) is proved in [11], but only for the special function \(f(\boldsymbol{x})=\exp(-\pi\boldsymbol{x}\,^{t}\boldsymbol{x})\). It is shown in [8] that this \(f\) is an eigenfunction for all the operators \(R(k(Q))\), with \(R\) the Segal-Shale-Weil representation and \(Q\in\mathrm{U}(n)\), and it can be seen from the theory built in [8] that the automorphy for any Schwartz function follows from that for \(\exp(-\pi\boldsymbol{x}\,^{t}\boldsymbol{x})\). A self-contained proof along the lines of [8] is presented in the authors' lecture notes [9]. ### Fundamental domains We say that a closed set \(\mathcal{D}\subset G\) is a fundamental domain for \(\Gamma\backslash G\) if * for all \(g\in G\) there exists \(\gamma\in\Gamma\) such that \(\gamma g\in\mathcal{D}\) and * if for \(g\in\mathcal{D}\) there is a non-identity \(\gamma\in\Gamma\) such that \(\gamma g\in\mathcal{D}\), then \(g\) is contained in the boundary of \(\mathcal{D}\). 
Similarly a closed set \(\tilde{\mathcal{D}}\subset H\rtimes G\) is a fundamental domain for \(\tilde{\Gamma}\backslash(H\rtimes G)\) if * for all \((h,g)\in H\rtimes G\) there exists \(\tilde{\gamma}\in\tilde{\Gamma}\) such that \(\tilde{\gamma}(h,g)\in\tilde{\mathcal{D}}\) and * if for \((h,g)\in\tilde{\mathcal{D}}\) there is a non-identity \(\tilde{\gamma}\in\tilde{\Gamma}\) such that \(\tilde{\gamma}(h,g)\in\tilde{\mathcal{D}}\), then \((h,g)\) is contained in the boundary of \(\tilde{\mathcal{D}}\). We note that if \(\mathcal{D}\) is a fundamental domain for \(\Gamma\backslash G\), then \[\tilde{\mathcal{D}}=\left\{(\boldsymbol{x},\boldsymbol{y},0)\in H:|x_{j}|,|y_{ j}|\leq\frac{1}{2}\right\}\times\mathcal{D} \tag{3.4}\] is a fundamental domain for \(\tilde{\Gamma}\backslash(H\rtimes G)\). In contrast to our previous paper [10], here we need to make careful use of the shape of our fundamental domain \(\mathcal{D}\) in the cuspidal regions. Drawing inspiration for the fundamental domain for \(\mathrm{GL}(n,\mathbb{Z})\backslash\mathrm{GL}(n,\mathbb{R})\) constructed in [6] as well as from the reduction theory developed in [2] (see also [1]), we construct in this section a new fundamental domain \(\mathcal{D}=\mathcal{D}_{n}\) for \(\Gamma\backslash G\). In the following section we study the cuspidal region of \(\mathcal{D}_{n}\). For \(n=1\), we let \(\mathcal{D}_{1}\subset G\) denote the standard fundamental domain for \(\Gamma\backslash G=\mathrm{SL}(2,\mathbb{Z})\backslash\mathrm{SL}(2,\mathbb{R})\). That is, \[\mathcal{D}_{1}=\left\{\begin{pmatrix}1&x\\ 0&1\end{pmatrix}\begin{pmatrix}y^{\frac{1}{2}}&0\\ 0&y^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{pmatrix}:|x|\leq\frac{1}{2},x^{2}+y^{2}\geq 1,0\leq \phi<2\pi\right\}. \tag{3.5}\] We now define fundamental domains \(\mathcal{D}_{n}\) inductively using the decomposition (2.15) for \(l=1\). Writing \(g\in G\) as \[g=\begin{pmatrix}1&\boldsymbol{r}_{1}&t_{1}-\boldsymbol{s}_{1}\,^{t} \boldsymbol{r}_{1}&\boldsymbol{s}_{1}\\ 0&I&{}^{t}\boldsymbol{s}_{1}&0\\ 0&0&1&0\\ 0&0&-\,^{t}\boldsymbol{r}_{1}&I\end{pmatrix}\begin{pmatrix}1&0&0&0\\ 0&I&0&X_{1}\\ 0&0&1&0\\ 0&0&0&I\end{pmatrix}\begin{pmatrix}v_{1}^{\frac{1}{2}}&0&0&0\\ 0&Y_{1}^{\frac{1}{2}}&0&0\\ 0&0&v_{1}^{-\frac{1}{2}}&0\\ 0&0&0&{}^{t}Y_{1}{}^{-\frac{1}{2}}\end{pmatrix}k(Q), \tag{3.6}\] where \(\boldsymbol{r}=\boldsymbol{r}(g)\in\mathbb{R}^{n-1}\), \(\boldsymbol{s}=\boldsymbol{s}(g)\in\mathbb{R}^{n-1}\), \(t_{1}=t_{1}(g)\in\mathbb{R}\), \(X_{1}=X_{1}(g)\) is symmetric, \(v_{1}=v_{1}(g)>0\), \(Y_{1}=Y_{1}(g)\) is positive definite symmetric, and \(Q\in\mathrm{U}(n)\), we define \(\mathcal{D}_{n}\) as the set of all \(g\in G\) satisfying * \(v_{1}(g)\geq v_{1}(\gamma g)\) for all \(\gamma\in\Gamma\), * \(g_{1}(g)\in\mathcal{D}_{n-1}\), see (2.16), and * the entries of \(\boldsymbol{r}_{1}(g)\), \(\boldsymbol{s}_{1}(g)\), and \(t_{1}(g)\) are all less than or equal to \(\frac{1}{2}\) in absolute value with the first entry of \(\boldsymbol{r}_{1}\) greater than or equal to \(0\). **Proposition 3.2**.: \(\mathcal{D}_{n}\) _is a fundamental domain for \(\Gamma\backslash G\)._ Proof.: We begin by showing that for \(g\in G\), \(\sup_{\gamma\in\Gamma}v_{1}(\gamma g)\) is indeed obtained by some \(\gamma\in\Gamma\). 
From (2.10), we have for \[\gamma=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in\Gamma \tag{3.7}\] that \[v_{1}(\gamma g)^{-1}=\boldsymbol{c}Y\,^{t}\boldsymbol{c}+(\boldsymbol{c}X+ \boldsymbol{d})Y^{-1\,t}(\boldsymbol{c}X+\boldsymbol{d}) \tag{3.8}\] where \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&{}^{t}Y^{-\frac{1}{2}}\end{pmatrix}k(Q) \tag{3.9}\] and \(\boldsymbol{c}\), \(\boldsymbol{d}\) are the first rows of \(C\), \(D\). Since \(Y\) is positive definite, there are only finitely many \(\boldsymbol{c}\) such that \(\boldsymbol{c}Y\,^{t}\boldsymbol{c}\), and hence \(v_{1}(\gamma g)^{-1}\), is below a given bound. Similarly, for a fixed \(\boldsymbol{c}\), the positive definiteness of \(Y^{-1}\) implies that there are only finitely many \(\boldsymbol{d}\) such that \(v_{1}(\gamma g)^{-1}\) is below a given bound. It follows that there are only finitely many \(\gamma\in\Gamma_{1}\backslash\Gamma\) such that \(v_{1}(\gamma g)\) is larger than a given bound, where \(\Gamma_{1}=\Gamma\cap P_{1}\) and we recall \(P_{1}\) is given by (2.17). As \(v_{1}(\gamma g)=v_{1}(g)\) for \(\gamma\in\Gamma_{1}\) it follows that \(v_{1}(\gamma g)\) is maximized for some \(\gamma\in\Gamma\). Let \(\gamma_{0}\) be so that \(v_{1}(\gamma_{0}g)\) is maximal. We now decompose an arbitrary \(\gamma\in\Gamma_{1}\) as in (2.17), \[\gamma=\begin{pmatrix}1&\boldsymbol{r}_{1}&t_{1}-\boldsymbol{s}_{1}\,^{t} \boldsymbol{r}_{1}&\boldsymbol{s}_{1}\\ 0&I&{}^{t}\boldsymbol{s}_{1}&0\\ 0&0&1&0\\ 0&0&-\,^{t}\boldsymbol{r}_{1}&I\end{pmatrix}\begin{pmatrix}\pm 1&0&0&0\\ 0&A_{1}&0&B_{1}\\ 0&0&\pm 1&0\\ 0&C_{1}&0&D_{1}\end{pmatrix} \tag{3.10}\] with \[\gamma_{1}=\begin{pmatrix}A_{1}&B_{1}\\ C_{1}&D_{1}\end{pmatrix}\in\operatorname{Sp}(n-1,\mathbb{Z}). \tag{3.11}\] Proceeding inductively, there exists \(\gamma_{1}\) such that \(\gamma_{1}g_{1}(\gamma_{0}g)=g_{1}(\gamma\gamma_{0}g)\in\mathcal{D}_{n-1}\). Now, we can change \(\boldsymbol{r}_{1}(\gamma)\), \(\boldsymbol{s}_{1}(\gamma)\), \(t_{1}(\gamma)\), and the \(\pm\), noting that this does not change \(g_{1}(\gamma\gamma_{0}g)\), so that the entries of \(\boldsymbol{r}_{1}(\gamma\gamma_{0}g)\), \(\boldsymbol{s}_{1}(\gamma\gamma_{0}g)\) and \(t_{1}(\gamma\gamma_{0}g)\) are all \(\leq\frac{1}{2}\) in absolute value and the first entry of \(\boldsymbol{r}_{1}(\gamma\gamma_{0}g)\) is nonnegative. Therefore \(\gamma\gamma_{1}g\in\mathcal{D}_{n}\) as required. We now suppose that \(g\in\mathcal{D}_{n}\) and there is a non-identity \(\gamma\in\Gamma\) such that \(\gamma g\in\mathcal{D}_{n}\). We set \[\gamma=\begin{pmatrix}A&B\\ C&D\end{pmatrix},\quad g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&{}^{t}Y^{-\frac{1}{2}}\end{pmatrix}k(Q). \tag{3.12}\] By the maximality, we have \(v_{1}(g)=v_{1}(\gamma g)\) and therefore \[v_{1}^{-1}=\boldsymbol{c}Y\,^{t}\boldsymbol{c}+(\boldsymbol{c}X+\boldsymbol{ d})Y^{-1}\,{}^{t}(\boldsymbol{c}X+\boldsymbol{d}) \tag{3.13}\] where \(\boldsymbol{c}\) and \(\boldsymbol{d}\) are the first rows of \(C\) and \(D\). Let us first consider the case when \(\boldsymbol{c}\neq 0\). To show that \(g\) is on the boundary of \(\mathcal{D}_{n}\) in this case, we consider \[g_{\epsilon}=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}(1-\epsilon)^{\frac{1}{2}}Y^{\frac{1}{2}}&0\\ 0&(1-\epsilon)^{-\frac{1}{2}\,t}Y^{-\frac{1}{2}}\end{pmatrix}k(Q) \tag{3.14}\] for \(0<\epsilon<1\). 
We have \(v_{1}(g_{\epsilon})=(1-\epsilon)v_{1}(g)\) and \[v_{1}(\gamma g_{\epsilon})^{-1}=(1-\epsilon)\boldsymbol{c}Y\,^{ t}\boldsymbol{c}+(1-\epsilon)^{-1}(\boldsymbol{c}X+\boldsymbol{d})Y^{-1}\,{}^{t}( \boldsymbol{c}X+\boldsymbol{d})\\ =\left((1-\epsilon)-(1-\epsilon)^{-1}\right)\boldsymbol{c}Y\,^{t} \boldsymbol{c}+v_{1}(g_{\epsilon})^{-1} \tag{3.15}\] by (3.13). Since \(v_{1}(\gamma g_{\epsilon})>v_{1}(g_{\epsilon})\), we have that \(g_{\epsilon}\not\in\mathcal{D}_{n}\). As \(g_{\epsilon}\) can be made arbitrarily close to \(g\), we conclude that \(g\) is on the boundary of \(\mathcal{D}_{n}\). If \(\boldsymbol{c}=0\), then from (3.13) we have \[v_{1}(g)^{-1}=(d^{(1)}-\boldsymbol{d}^{(2)\,\,t}\boldsymbol{r}_{1})^{2}v_{1}(g )^{-1}+\boldsymbol{d}^{(2)}Y_{1}^{-1\,t}\boldsymbol{d}^{(2)} \tag{3.16}\] where \(\boldsymbol{d}=\begin{pmatrix}d^{(1)}&\boldsymbol{d}^{(2)}\end{pmatrix}\) are as above \[Y=\begin{pmatrix}1&\boldsymbol{r}_{1}\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}&0\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\ -{}^{t}\boldsymbol{r}_{1}&I\end{pmatrix}. \tag{3.17}\] This time we consider \[g_{\epsilon}=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{\epsilon}^{\frac{1}{2}}&0\\ 0&{}^{t}Y_{\epsilon}^{-\frac{1}{2}}\end{pmatrix}k(Q) \tag{3.18}\] with \[Y_{\epsilon}=\begin{pmatrix}1&\boldsymbol{r}_{1}\\ 0&I\end{pmatrix}\begin{pmatrix}(1-\epsilon)v_{1}&0\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\ -{}^{t}\boldsymbol{r}_{1}&I\end{pmatrix}. \tag{3.19}\] We have \(v_{1}(g_{\epsilon})=(1-\epsilon)v_{1}(g)\) and \[v_{1}(\gamma g_{\epsilon})^{-1}=(1-\epsilon)^{-1}(d^{(1)}- \boldsymbol{d}^{(2)\;t}\boldsymbol{r}_{1})^{2}v_{1}(g)^{-1}+\boldsymbol{d}^{( 2)}Y_{1}^{-1\;t}\boldsymbol{d}^{(2)}\\ =v_{1}(g_{\epsilon})^{-1}+\left(1-(1-\epsilon)^{-1}\right) \boldsymbol{d}^{(2)}Y_{1}^{\;t}\boldsymbol{d}^{(2)} \tag{3.20}\] from (3.16). If \(\boldsymbol{d}^{(2)}\neq 0\), then \(v_{1}(\gamma g_{\epsilon})>v_{1}(g_{\epsilon})\) and we conclude that \(g\) is on the boundary of \(\mathcal{D}_{n}\) as before. When \(\boldsymbol{c}=0\) and \(\boldsymbol{d}^{(2)}=0\) we have \(d^{(1)}=\pm 1\), and so \(\gamma\in\Gamma_{1}\). We decompose \(\gamma\) as in (3.10) and define \(\gamma_{1}\) as in (3.11). By the construction of \(\mathcal{D}_{n}\), we have \(g_{1}(g)\in\mathcal{D}_{n-1}\) and \(g_{1}(\gamma g)=\gamma_{1}g_{1}(g)\in\mathcal{D}_{n-1}\). By induction, we have that either \(\gamma_{1}\) is the identity or \(g_{1}(g)\) is on the boundary of \(\mathcal{D}_{n-1}\). In the latter case we have that \(g\) is on the boundary of \(\mathcal{D}_{n}\), and so it remains to consider \[\gamma=\begin{pmatrix}\pm 1&\boldsymbol{r}_{1}&\pm t_{1}\mp\boldsymbol{r}_{1} {}^{t}\boldsymbol{s}_{1}&\boldsymbol{s}_{1}\\ 0&I&\pm\,{}^{t}\boldsymbol{s}_{1}&0\\ 0&0&\pm 1&0\\ 0&0&\mp\boldsymbol{r}_{1}&I\end{pmatrix}. \tag{3.21}\] If any of the entries of \(\boldsymbol{r}_{1}(\gamma)\) or \(\boldsymbol{s}_{1}(\gamma)\) is not zero, then the corresponding entry of \(\boldsymbol{r}_{1}(g)\) or \(\boldsymbol{s}_{1}(g)\) is \(\pm\frac{1}{2}\) and so \(g\) is on the boundary of \(\mathcal{D}_{n}\). Similarly if \(t_{1}(\gamma)\neq 0\), we have \(t_{1}(g)=\pm\frac{1}{2}\) and again \(g\) is on the boundary of \(\mathcal{D}_{n}\). If all of \(\boldsymbol{r}_{1},\boldsymbol{s}_{1},t_{1}\) are \(0\), the sign must be \(-\) as \(\gamma\) is not the identity, and it follows that the first entry of \(\boldsymbol{r}_{1}(g)\) is \(0\) and \(g\) is again on the boundary of \(\mathcal{D}_{n}\). 
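For orientation, in the base case \(n=1\) the reduction into \(\mathcal{D}_{1}\) used above is the classical reduction of \(z=x+\mathrm{i}y\) to the standard fundamental domain of \(\mathrm{SL}(2,\mathbb{Z})\), in which the imaginary part \(y=v_{1}\) is maximized over the orbit by alternating translations and inversions. A minimal sketch (our illustration, not code from the paper):

```python
def reduce_to_D1(x, y, max_steps=10_000):
    """Move z = x + iy into the region |x| <= 1/2, x^2 + y^2 >= 1 by
    translations and inversions, which maximizes the imaginary part
    y = v_1 over the SL(2, Z) orbit of z."""
    for _ in range(max_steps):
        x -= round(x)                       # T^k : z -> z + k
        r2 = x * x + y * y
        if r2 >= 1.0:
            return x, y
        x, y = -x / r2, y / r2              # S : z -> -1/z
    raise RuntimeError("reduction did not terminate")

print(reduce_to_D1(0.7, 0.01))   # lands in the fundamental domain
```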
The following proposition records some useful properties of \(\mathcal{D}_{n}\). It and its proof are very similar to the analogous statement for the different fundamental domain used in [10], see proposition 3.1 there. **Proposition 3.3**.: _Let \(g\in\mathcal{D}_{n}\) and write_ \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&Y^{-\frac{1}{2}}\end{pmatrix}k(Q),\quad Y=UV\,{}^{t}U, \tag{3.22}\] _where \(X\) is symmetric, \(Y\) is positive definite symmetric, \(U\) upper triangular unipotent, \(V\) positive diagonal, and \(Q\in\mathrm{U}(n)\), and_ \[V=\begin{pmatrix}v_{1}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&v_{n}\end{pmatrix},\quad Y=\begin{pmatrix}1&\mathbf{r}_{1}\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}&0\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\ {}^{t}\mathbf{r}_{1}&I\end{pmatrix}. \tag{3.23}\] _Then we have_ 1. \(v_{n}\geq\frac{\sqrt{3}}{2}\) _and_ \(v_{j}\geq\frac{3}{4}v_{j+1}\) _for_ \(1\leq j\leq n-1\)_,_ 2. _for all_ \(\mathbf{x}=\begin{pmatrix}x^{(1)}&\mathbf{x}^{(2)}\end{pmatrix}\in\mathbb{R}^{n}\)__ \[\mathbf{x}Y\,^{t}\mathbf{x}\asymp_{n}v_{1}(x^{(1)})^{2}+\mathbf{x}^{(2)}Y_{1}\,^{t}\mathbf{x}^{ (2)}.\] (3.24) Proof.: For the first, we observe that by the inductive construction of \(\mathcal{D}_{n}\), we have that \[g_{n-1}(g)=\begin{pmatrix}1&x_{n-1}(g)\\ 0&1\end{pmatrix}\begin{pmatrix}v_{n}^{\frac{1}{2}}&0\\ 0&v_{n}^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{D}_{1}. \tag{3.25}\] As \(\mathcal{D}_{1}\) is the standard fundamental domain for \(\mathrm{SL}(2,\mathbb{Z})\backslash\mathrm{SL}(2,\mathbb{R})\), we conclude that \(v_{n}\geq\frac{\sqrt{3}}{2}\). To demonstrate that \(v_{j}\geq\frac{3}{4}v_{j+1}\), we note that by the construction of \(\mathcal{D}_{n}\), it suffices to consider only \(j=1\). We start with \[v_{1}^{-1}\leq\mathbf{c}Y\,^{t}\mathbf{c}+(\mathbf{c}X+\mathbf{d})Y^{-1}\,^{t}(\mathbf{c}X+\mathbf{d}) \tag{3.26}\] for any \(\begin{pmatrix}\mathbf{c}&\mathbf{d}\end{pmatrix}\in\mathbb{Z}^{2n}\) nonzero and primitive. Choosing \(\mathbf{c}=0\) and \(\mathbf{d}=\begin{pmatrix}0&1&0\cdots&0\end{pmatrix}\), we have \[v_{1}^{-1}\leq v_{1}^{-1}(r_{1}^{(1)})^{2}+v_{2}^{-1}, \tag{3.27}\] where \(r_{1}^{(1)}\) is the first entry of \(\mathbf{r}_{1}\). Since \(0\leq r_{1}^{(1)}\leq\frac{1}{2}\), we conclude that \(v_{1}\geq\frac{3}{4}v_{2}\). To demonstrate the second part of the proposition, we let \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) denote the rows of \[Y^{\frac{1}{2}}=\begin{pmatrix}1&\mathbf{r}_{1}\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}^{\frac{1}{2}}&0\\ 0&Y_{1}^{\frac{1}{2}}\end{pmatrix}. \tag{3.28}\] Setting \(\mathbf{y}=x_{2}\mathbf{y}_{2}+\cdots+x_{n}\mathbf{y}_{n}\), where the \(x_{j}\) are the entries of \(\mathbf{x}\), our aim is to prove that for some constants \(0<c_{1}<1<c_{2}\) depending only on \(n\), \[c_{1}\left(||\mathbf{y}_{1}||^{2}x_{1}^{2}+||\mathbf{y}||^{2}\right)\leq||x_{1}\mathbf{y}_ {1}+\mathbf{y}||^{2}\leq c_{2}\left(||\mathbf{y}_{1}||^{2}x_{1}^{2}+||\mathbf{y}||^{2} \right), \tag{3.29}\] from which the lower bound in (3.24) follows as \(||\mathbf{y}_{1}||^{2}\geq v_{1}\). The upper bound in (3.24) follows from (3.29) and \(v_{1}\gg||\mathbf{y}_{1}||^{2}\), which is verified below, see (3.35). 
Expanding the expression in the middle of (3.29), we find that it is enough to show that \[2|x_{1}\mathbf{y}_{1}\,^{t}\mathbf{y}|\leq(1-c_{1})\left(||\mathbf{y}_{1}||^{2}x_{1}^{2}+|| \mathbf{y}||^{2}\right), \tag{3.30}\] \[2|x_{1}\boldsymbol{y}_{1}\,^{t}\boldsymbol{y}|\leq(c_{2}-1)\left(||\boldsymbol{y}_{ 1}||^{2}x_{1}^{2}+||\boldsymbol{y}||^{2}\right). \tag{3.31}\] The upper bound (3.31) is trivial if \(c_{2}=2\), and the upper bound (3.30) would follow from \[|\boldsymbol{y}_{1}\,^{t}\boldsymbol{y}|\leq(1-c_{1})||\boldsymbol{y}_{1}||\ || \boldsymbol{y}||. \tag{3.32}\] We let \(0<\phi_{1}<\pi\) denote the angle between \(\boldsymbol{y}_{1}\) and \(\boldsymbol{y}\) and \(0<\phi_{2}<\frac{\pi}{2}\) denote the angle between \(\boldsymbol{y}_{1}\) and the hyperplane \(\operatorname{span}(\boldsymbol{y}_{2},\dots,\boldsymbol{y}_{n})\). We have \(\phi_{2}\leq\min(\phi_{1},\pi-\phi_{1})\), and so \(|\cos\phi_{1}|\leq|\cos\phi_{2}|\). We bound \(\cos\phi_{2}\) away from \(1\) by bounding \(\sin\phi_{2}\) away from \(0\). We have \[|\sin\phi_{2}|=\frac{||\boldsymbol{y}_{1}\wedge\dots\wedge\boldsymbol{y}_{n}|| }{||\boldsymbol{y}_{1}||\ ||\boldsymbol{y}_{2}\wedge\dots\wedge\boldsymbol{y}_{n}||}=\frac{v_{1}^{ \frac{1}{2}}}{||\boldsymbol{y}_{1}||}, \tag{3.33}\] so it suffices to show that \(v_{1}^{\frac{1}{2}}\gg||\boldsymbol{y}_{1}||\). Here \(\wedge\) denotes the usual wedge product on \(\mathbb{R}^{n}\) and the norm on \(\bigwedge^{k}\mathbb{R}^{n}\) is given by \[||\boldsymbol{a}_{1}\wedge\dots\wedge\boldsymbol{a}_{k}||^{2}=\det\begin{pmatrix} \boldsymbol{a}_{1}\\ \vdots\\ \boldsymbol{a}_{k}\end{pmatrix}\begin{pmatrix}^{t}\boldsymbol{a}_{1}&\dots&^{t }\boldsymbol{a}_{k}\end{pmatrix}. \tag{3.34}\] Using the inductive construction of \(\mathcal{D}_{n}\) and the fact that the entries of \(\boldsymbol{r}_{1}(Y),\boldsymbol{r}_{1}(Y_{1}),\dots\) are at most \(\frac{1}{2}\) in absolute value, we observe that \(U\) has entries bounded by a constant depending only on \(n\). We find that \[||\boldsymbol{y}_{1}||^{2}\ll v_{1}+\dots+v_{n}\ll v_{1} \tag{3.35}\] with the implied constant depending on \(n\). ### Shape of the cusp As explicated in [1] and [2], the cusp of \(\Gamma\backslash G\) can be partitioned into \(2^{n}-1\) box-shaped regions. These regions are in correspondence with the conjugacy classes of proper parabolic subgroups of \(G\) and are formed as \(K\) times the product of three subsets, one for each of the components - nilpotent, diagonal, and semisimple - of the Langlands decomposition of \(P\). In what follows we use the fundamental domain \(\mathcal{D}_{n}\) constructed in section 3.1 to prove a variation of this fact, although only for the maximal parabolic subgroups (2.17), (2.18). Our main result for this section is proposition 3.6, which roughly states that if \(g\in G\) is close enough the boundary in a precise sense, then \(g\) can be brought into \(\mathcal{D}_{n}\) by an element \(\gamma\) in some maximal parabolic subgroup which depends on the way \(g\) approaches the boundary. For \(1\leq l<n\) we denote by \(\Gamma_{l,1}\) and \(\Gamma_{l,2}\) the subgroups of \(\Gamma_{l}=\Gamma\cap P_{l}\) given by \[\Gamma_{l,1}=\left\{\begin{pmatrix}A&0&0&0\\ 0&I&0&0\\ 0&0&{}^{t}A^{-1}&0\\ 0&0&0&I\end{pmatrix}:A\in\operatorname{GL}(l,\mathbb{Z})\right\} \tag{3.36}\] and \[\Gamma_{l,2}=\left\{\begin{pmatrix}I&0&0&0\\ 0&A&0&B\\ 0&0&I&0\\ 0&C&0&D\end{pmatrix}:\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in\operatorname{Sp}(n-l,\mathbb{Z})\right\}. 
\tag{3.37}\] For \(l=n\), we set \[\Gamma_{n,1}=\left\{\begin{pmatrix}A&0\\ 0&{}^{t}A^{-1}\end{pmatrix}:A\in\operatorname{GL}(n,\mathbb{Z})\right\}, \tag{3.38}\] and we let \(\Gamma_{n,2}\) be trivial. We now define for \(g\in G\) and \(1\leq l\leq n\), \[v_{l}(\Gamma_{l}g)=\min_{\gamma\in\Gamma_{l}}v_{l}(\gamma g)=\min_{\gamma\in \Gamma_{l,1}}v_{l}(\gamma g) \tag{3.39}\] and, for \(1\leq l<n\), \[v_{l+1}(\Gamma_{l}g)=\max_{\gamma\in\Gamma_{l}}v_{l+1}(\gamma g)=\max_{\gamma \in\Gamma_{l,2}}v_{l+1}(\gamma g). \tag{3.40}\] We note that in the proof of proposition 3.2, we saw that the maximum in (3.40) does exist. As for the minimum in (3.39), we simply note that \[v_{l}(AU_{l}V_{l}{}^{t}U_{l}{}^{t}A)=\boldsymbol{a}U_{l}V_{l}{}^{t}U_{l}{}^{t} \boldsymbol{a} \tag{3.41}\] where \(\boldsymbol{a}\) is the last row of \(A\in\operatorname{GL}(l,\mathbb{Z})\), so the positive definiteness of \(U_{l}V_{l}{}^{t}U_{l}\) implies that there are only finitely many values of \(v_{l}(AU_{l}V_{l}{}^{t}U_{l}{}^{t}A)\) below a given bound. We now define a fundamental domain \(\mathcal{D}^{\prime}_{l}\) for the action of \(\operatorname{GL}(l,\mathbb{Z})\) on \(l\times l\) positive definite symmetric matrices. We set \(\mathcal{D}^{\prime}_{1}=\{y>0\}\) and \[\mathcal{D}^{\prime}_{2}=\left\{\begin{pmatrix}1&r\\ 0&1\end{pmatrix}\begin{pmatrix}v_{1}&0\\ 0&v_{2}\end{pmatrix}\begin{pmatrix}1&0\\ r&1\end{pmatrix}:0\leq r\leq\frac{1}{2},\ r^{2}+\frac{v_{1}}{v_{2}}\geq 1 \right\}, \tag{3.42}\] the standard fundamental domain for \(\operatorname{GL}(2,\mathbb{Z})\) acting on \(2\times 2\) positive definite symmetric matrices. The domain \(\mathcal{D}^{\prime}_{l}\) for \(l>2\) is then defined inductively as the set of all \[Y=\begin{pmatrix}1&\boldsymbol{r}\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}&0\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\ \boldsymbol{r}&1\end{pmatrix} \tag{3.43}\] such that 1. \(v_{1}(Y)\geq v_{1}(AY^{\,t}A)\) for all \(A\in\operatorname{GL}(l,\mathbb{Z})\), 2. \(Y_{1}\in\mathcal{D}^{\prime}_{l-1}\), and 3. \(|r_{j}|\leq\frac{1}{2}\) and \(0\leq r_{1}\leq\frac{1}{2}\) where \(r_{j}\) are the entries of \(\boldsymbol{r}\). This is in fact the set of \(Y\) such that \(Y^{-1}\) is in Grenier's fundamental domain, see [6] and [12], so we do not prove that \(\mathcal{D}^{\prime}_{l}\) is a fundamental domain here. We do however record the following properties of \(\mathcal{D}^{\prime}_{l}\). **Lemma 3.4**.: _Let \(UV\,^{t}U\in\mathcal{D}^{\prime}_{l}\) with_ \[V=\begin{pmatrix}v_{1}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&v_{l}\end{pmatrix} \tag{3.44}\] _positive diagonal and \(U\) upper triangular unipotent. Then we have_ 1. \(v_{j}\geq\frac{3}{4}v_{j+1}\) _for_ \(1\leq j<l\)_,_ 2. _for any_ \(\boldsymbol{x}\in\mathbb{R}^{l}\)_,_ \[\boldsymbol{x}UV\,^{t}U\,^{t}\boldsymbol{x}\asymp\boldsymbol{x}V\,^{t} \boldsymbol{x}\] (3.45) _with implied constant depending only on_ \(l\)_, and_ 3. \[\min_{A\in\operatorname{GL}(l,\mathbb{Z})}v_{l}(AUV\,^{t}U\,^{t}A)\asymp v_{l }(UV\,^{t}U)\] (3.46) _with implied constant depending only on_ \(l\)_._ Proof.: The first and second parts are proved in proposition 3.1 of [10]. To prove the third part, we note that with \(\boldsymbol{a}\) the last row of \(A\), \[v_{l}(AUV\,^{t}U\,^{t}A)=\boldsymbol{a}UV\,^{t}U\,^{t}\boldsymbol{a}\gg \boldsymbol{a}V\,^{t}\boldsymbol{a}, \tag{3.47}\] by the second part of the lemma. Applying the first part of the lemma we have \(\boldsymbol{a}V\,^{t}\boldsymbol{a}\gg v_{l}||\boldsymbol{a}||^{2}\geq v_{l}\), and (3.46) follows. 
As the proof is almost identical to the proof of the third part of lemma 3.4, we record the following lemma for later use. **Lemma 3.5**.: _If \(g\in\mathcal{D}_{n}\), then for all \(1\leq l<n\),_ \[v_{l}(\Gamma_{l}g)\asymp v_{l}(g) \tag{3.48}\] _with the implied constant depending only on \(n\)._ Proof.: We recall from the second part of proposition 3.3 that for \(\boldsymbol{x}\in\mathbb{R}^{l}\), \[\boldsymbol{x}U_{l}V_{l}\,^{t}U_{l}\,^{t}\boldsymbol{x}\gg\boldsymbol{x}V_{l} \,^{t}\boldsymbol{x}. \tag{3.49}\] We have \[v_{l}(\Gamma_{l}g)=\min_{\begin{subarray}{c}\boldsymbol{c}\in\mathbb{Z}^{l}\\ \boldsymbol{c}\neq 0\end{subarray}}\boldsymbol{c}U_{l}V_{l}\,^{t}U_{l}\,^{t} \boldsymbol{c}\gg\min_{\begin{subarray}{c}\boldsymbol{c}\in\mathbb{Z}^{l}\\ \boldsymbol{c}\neq 0\end{subarray}}\boldsymbol{c}V_{l}\,^{t}\boldsymbol{c}. \tag{3.50}\] Now as \(\boldsymbol{c}\neq 0\), we have \(c_{j}^{2}\geq 1\) for some \(1\leq j\leq l\), and so \[v_{l}(\Gamma_{l}g)\gg v_{j}(g)\gg v_{l}(g) \tag{3.51}\] by the first part of proposition 3.3. We are now ready to prove the main result for this section. **Proposition 3.6**.: _For \(1\leq l\leq n\), there are constants \(a_{l}>0\) such that for \(l<n\), if \(g\in G\) satisfies \(v_{l}(\Gamma_{l}g)\geq a_{l}v_{l+1}(\Gamma_{l}g)\), and for \(l=n\) if \(g\in G\) satisfies \(v_{n}(\Gamma_{n}g)\geq a_{n}\), then there exists \(\gamma\in\Gamma_{l}\) so that \(\gamma g\in\mathcal{D}_{n}\). Moreover, for this \(\gamma\) we have \(v_{l}(\Gamma_{l}g)\asymp v_{l}(\gamma g)\) and, for \(l<n\), \(v_{l+1}(\Gamma_{l}g)=v_{l+1}(\gamma g)\)._ We remark that this proposition can be extended to any of the parabolic subgroups \(P_{L}\) of \(G\) by taking intersections of the maximal parabolics. However some care needs to be taken regarding the possible non-uniqueness of the \(\gamma\) bringing \(g\) into \(\mathcal{D}_{n}\). Since it is unnecessary for our goals, we do not discuss this here. Proof.: By multiplying \(g\) by \[\gamma_{1}=\begin{pmatrix}A^{\prime}&0&0&0\\ 0&A&0&B\\ 0&0&{}^{t}(A^{\prime})^{-1}&0\\ 0&C&0&D\end{pmatrix}\in\Gamma_{l}, \tag{3.52}\] we may assume that \(U_{l}V_{l}{}^{t}U_{l}\in\mathcal{D}_{l}^{\prime}\) and \[\begin{pmatrix}I&X_{l}\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{l}^{\frac{1}{2}}&0\\ 0&{}^{t}Y_{l}{}^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{D}_{n-l}. \tag{3.53}\] We recall that for \(\gamma=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\), \[v_{1}(\gamma g)^{-1}=\boldsymbol{c}Y^{\,t}\boldsymbol{c}+(\boldsymbol{c}X+ \boldsymbol{d})Y^{-1\,t}(\boldsymbol{c}X+\boldsymbol{d}) \tag{3.54}\] where \(\boldsymbol{c}\), \(\boldsymbol{d}\) are the first rows of \(C\), \(D\). 
Now, writing \(\boldsymbol{c}=\begin{pmatrix}\boldsymbol{c}^{(1)}&\boldsymbol{c}^{(2)}\end{pmatrix}\), \(\boldsymbol{d}=\begin{pmatrix}\boldsymbol{d}^{(1)}&\boldsymbol{d}^{(2)}\end{pmatrix}\) and \[X =\begin{pmatrix}T_{l}+R_{l}X_{l}{}^{t}R_{l}&S_{l}+R_{l}X_{l}\\ {}^{t}S_{l}+X_{l}{}^{t}R_{l}&X_{l}\end{pmatrix}, \tag{3.55}\] \[Y =\begin{pmatrix}U_{l}&R_{l}\\ 0&I\end{pmatrix}\begin{pmatrix}V_{l}&0\\ 0&Y_{l}\end{pmatrix}\begin{pmatrix}{}^{t}U_{l}&0\\ {}^{t}R_{l}&I\end{pmatrix}, \tag{3.56}\] see (2.15), we obtain \[v_{1}(\gamma g)^{-1}= \boldsymbol{c}^{(1)}U_{l}V_{l}{}^{t}U_{l}{}^{t}\boldsymbol{c}^{( 1)}+(\boldsymbol{c}^{(1)}R_{l}+\boldsymbol{c}^{(2)})Y_{l}{}^{t}(\boldsymbol{c} ^{(1)}R_{l}+\boldsymbol{c}^{(2)})\] \[+\left(\boldsymbol{c}^{(1)}(T_{l}-S_{l}{}^{t}R_{l})+\boldsymbol{ c}^{(2)}{}^{t}S_{l}+\boldsymbol{d}^{(1)}-\boldsymbol{d}^{(2)\;t}R_{l}\right){}^{t}U_{l} ^{-1}V_{l}^{-1}U_{l}^{-1}\] \[\qquad{}^{t}\big{(}\boldsymbol{c}^{(1)}(T_{l}-S_{l}{}^{t}R_{l})+ \boldsymbol{c}^{(2)}{}^{t}S_{l}+\boldsymbol{d}^{(1)}-\boldsymbol{d}^{(2)\;t} R_{l}\big{)}\] \[+\left(\boldsymbol{c}^{(1)}(S_{l}+R_{l}X_{l})+\boldsymbol{c}^{(2) }X_{l}+\boldsymbol{d}^{(2)}\right)Y_{l}^{-1}\] \[\qquad{}^{t}\big{(}\boldsymbol{c}^{(1)}(S_{l}+R_{l}X_{l})+ \boldsymbol{c}^{(2)}X_{l}+\boldsymbol{d}^{(2)}\big{)}. \tag{3.57}\] If \(\boldsymbol{c}^{(1)}\neq 0\), then, since \(U_{l}V_{l}\,^{t}U_{l}\in\mathcal{D}^{\prime}_{l}\), we have \[v_{1}(\gamma g)^{-1}\geq\boldsymbol{c}^{(1)}U_{l}V_{l}\,^{t}U_{l}\,^{t} \boldsymbol{c}^{(1)}\gg\boldsymbol{c}^{(1)}V_{l}\,^{t}\boldsymbol{c}^{(1)} \gg v_{l} \tag{3.58}\] by the second part of lemma 3.4. Since, for \(l<n\), \[\begin{pmatrix}I&X_{l}\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{l}^{\frac{1}{2}}&0\\ 0&\,^{t}Y_{l}^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{D}_{n-l}, \tag{3.59}\] we have \(v_{l+1}\gg 1\), see proposition 3.3, and so \(v_{l}\gg a_{l}\) by the hypothesis. For \(l=n\), we directly have \(v_{n}\gg a_{n}\) by hypothesis. Since also \(v_{1}\gg v_{l}\) by lemma 3.4, we have \(v_{1}v_{l}\gg a_{l}^{2}\), so by taking \(a_{l}\) to be a sufficiently large constant, it follows that \(v_{1}\geq v_{1}(\gamma g)\). For \(l<n\), if \(\boldsymbol{c}^{(1)}=0\) but \(\left(\boldsymbol{c}^{(2)}\quad\boldsymbol{d}^{(2)}\right)\neq 0\), then we have \[v_{1}(\gamma g)^{-1}\geq\boldsymbol{c}^{(2)}Y_{l}\,^{t}\boldsymbol{c}^{(2)}+ (\boldsymbol{c}^{(2)}X_{l}+\boldsymbol{d}^{(2)})Y_{l}^{-1}\,^{t}(\boldsymbol{ c}^{(2)}X_{l}+\boldsymbol{d}^{(2)})\geq v_{l+1}(g)^{-1} \tag{3.60}\] since \(g_{l}(g)\in\mathcal{D}_{n-l}\). We have \(v_{l+1}^{-1}\geq a_{l}v_{l}^{-1}\gg a_{l}v_{1}^{-1}\), so \(v_{l+1}^{-1}\geq v_{1}^{-1}\) for \(a_{l}\) sufficiently large, and it follows that \(v_{1}\geq v_{1}(\gamma g)\). Now, if \(l=n\) or if \(\boldsymbol{c}^{(1)}\), \(\boldsymbol{c}^{(2)}\), and \(\boldsymbol{d}^{(2)}\) are all \(0\), then we have \(\boldsymbol{d}^{(1)}\neq 0\) and \[v_{1}(\gamma g)^{-1}=\boldsymbol{d}^{(1)}\,^{t}U_{l}^{-1}V_{l}^{-1}U_{l}^{-1} \,^{t}\boldsymbol{d}^{(1)}\geq v_{1}^{-1} \tag{3.61}\] as \(U_{l}V_{l}\,^{t}U_{l}\in\mathcal{D}^{\prime}_{l}\). We have verified that for any \(\gamma\in\Gamma\), \(v_{1}\leq v_{1}(\gamma g)\), which is the first condition defining the fundamental domain \(\mathcal{D}_{n}\). Restricting to \(\gamma\in\Gamma_{1}\), which fixes \(v_{1}(g)\), the same argument as above shows that \(v_{2}(g)\geq v_{2}(\gamma g)\) for all \(\gamma\in\Gamma_{1}\). 
Continuing this way, we find that the \(v_{j}\), \(1\leq j\leq l\) are all maximal (over \(\Gamma_{j,2}\)), and so, by the construction of \(\mathcal{D}_{n}\), there is a \(\gamma\in\Gamma_{l}\) with the form \[\gamma=\begin{pmatrix}A&B\\ 0&\,^{t}A^{-1}\end{pmatrix}, \tag{3.62}\] where \(A\) is upper-triangular unipotent (so \(\gamma\in\Gamma_{l}\) for all \(l\)) such that \(\gamma g\in\mathcal{D}_{n}\). ## 4 Proof of the main theorem In the following subsection we gather some technical lemmas regarding the height function needed in the proof of theorem 1.2, see section 4.2. This height function is motivated by the following corollary from [10]. **Corollary 4.1**.: _For a Schwartz function \(f\in\mathcal{S}(\mathbb{R}^{n})\) and \((h,g)\in\tilde{\mathcal{D}}\), and \(A>0\), we have_ \[\Theta_{f}(h,g)\ll_{f,A}(\det Y)^{\frac{1}{4}}(1+\boldsymbol{x}Y\,^{t} \!\boldsymbol{x})^{-A} \tag{4.1}\] _where_ \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&\,^{t}Y^{-\frac{1}{2}}\end{pmatrix}k(Q). \tag{4.2}\] We remark that in [10] this is obtained as a consequence of full asymptotics of the theta function in the various cuspidal regions. We also remark that in [10] we use a slightly different fundamental domain, however an examination of the proof there shows that the fundamental domain can be replaced by any set satisfying the conclusions of proposition 3.3. ### Heights and volumes For a fixed \(A>0\) sufficiently large depending only on \(n\), we define the function \(D:\tilde{\Gamma}\backslash(H\rtimes G)\to\mathbb{R}_{>0}\) by \[D\left(\tilde{\Gamma}(h,g)\right)=\det Y(\gamma g)\left(1+\mathbf{x}(uh_{\gamma}h^{ \gamma^{-1}})Y(\gamma g)\,^{t}\mathbf{x}(uh_{\gamma}h^{\gamma^{-1}})\right)^{-A} \tag{4.3}\] where \((uh_{\gamma},\gamma)\in\tilde{\Gamma}\) is so that \((uh_{\gamma},\gamma)(h,g)\in\tilde{\mathcal{D}}\). Here we write \(h\in H\) as \(h=(\mathbf{x}(h),\mathbf{y}(h),t(h))\). For completeness, in case there are more than one \((uh_{\gamma},\gamma)\in\tilde{\Gamma}\) such that \((uh_{\gamma},\gamma)(h,g)\in\tilde{\mathcal{D}}\), then we define \(D\left(\tilde{\Gamma}(h,g)\right)\) to be the largest of the finite number of values (4.3). This point is not essential as these values are within constant multiples of each other; see the argument in lemma 4.4 for how this can be proved. We begin by analyzing the growth of the height function. We let \(\tilde{\mu}\) denote the Haar probability measure on \(\tilde{\Gamma}\backslash(H\rtimes G)\), which is \(\mu\), the Haar probability measure on \(\Gamma\backslash G\), times the Lebesgue measure on the entries of \(h=(\mathbf{x},\mathbf{y},t)\). **Lemma 4.2**.: _For \(R\geq 1\) we have_ \[\tilde{\mu}(\{\tilde{\Gamma}(h,g)\in\tilde{\Gamma}\backslash(H\rtimes G):D( \tilde{\Gamma}(h,g))\geq R\})\ll R^{-\frac{n+2}{2}} \tag{4.4}\] _with the implied constant depending only on \(n\)._ Proof.: We recall that \(g\in\mathcal{D}_{n}\) is written as \[g=\begin{pmatrix}U&X\,^{t}U^{-1}\\ 0&\,^{t}U^{-1}\end{pmatrix}\begin{pmatrix}V^{\frac{1}{2}}&0\\ 0&V^{-\frac{1}{2}}\end{pmatrix}k(Q) \tag{4.5}\] for \(U\) upper-triangular unipotent, \(X\) symmetric, \(Q\in\mathrm{U}(n)\), and \[V=V(g)=\begin{pmatrix}v_{1}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&v_{n}\end{pmatrix} \tag{4.6}\] positive diagonal. 
The Haar measure \(\mu\) on \(G\) is then proportional to Lebesgue measure with respect to the entries of \(X\) and the off-diagonal entries of \(U\), \(\mathrm{U}(n)\)-Haar measure on \(Q\), and the measure given by \[v_{1}^{-n-1}v_{2}^{-n}\cdots v_{n}^{-2}\mathrm{d}v_{1}\mathrm{d}v_{2}\cdots \mathrm{d}v_{n} \tag{4.7}\] on \(V\). By proposition 3.3, we observe that the set in (4.4) is contained in the set of \((h,g)\) satisfying \(v_{j}\geq cv_{j+1}\) for all \(1\leq j<n\) and some \(c>0\) in addition to \(\det Y\geq R\) and \(\mathbf{x}Y\,^{t}\mathbf{x}\leq R^{-\frac{1}{A}}(\det Y)^{\frac{1}{A}}\). Moreover, the variables \(\mathbf{x},\mathbf{y},t\) as well as \(U\), \(X\) are constrained to compact sets, and so the measure of the set (4.4) is \[\ll R^{-\epsilon}\int\hskip-10.0pt\cdots\int\limits_{\begin{subarray}{c}v_{j}\geq cv_{j+1}\\ v_{1}\cdots v_{n}\geq R\end{subarray}}v_{1}^{-n-\frac{3}{2}+\epsilon}v_{2}^{-n-\frac{1}{2}+\epsilon}\cdots v_{n}^{-\frac{5}{2}+\epsilon}\,\mathrm{d}v_{1}\mathrm{d}v_{2}\cdots\mathrm{d}v_{n} \tag{4.8}\] where \(\epsilon=\frac{n}{2A}\). Changing variables \(v_{j}=\exp(u_{j})\), the integral in (4.8) is \[R^{-\epsilon}\int\hskip-10.0pt\cdots\int\limits_{\begin{subarray}{c}u_{j}-u_{j+1}\geq\log c\\ u_{1}+\cdots+u_{n}\geq\log R\end{subarray}}\exp\big{(}-(n+\tfrac{1}{2}-\epsilon)u_{1}-(n-\tfrac{1}{2}-\epsilon)u_{2}\\ -\cdots-(\tfrac{3}{2}-\epsilon)u_{n}\big{)}\mathrm{d}u_{1}\mathrm{d}u_{2}\cdots\mathrm{d}u_{n}. \tag{4.9}\] We now make the linear change of variables \(s_{j}=u_{j}-u_{j+1}\) for \(j<n\) and \(s_{n}=u_{1}+\cdots+u_{n}\). This transformation has determinant \(n\) and its inverse is given by \[u_{j}=-\frac{1}{n}\sum_{1\leq i<j}is_{i}+\frac{1}{n}\sum_{j\leq i<n}(n-i)s_{i}+\frac{1}{n}s_{n}. \tag{4.10}\] We find that the exponent in (4.9) is then \[-\sum_{1\leq j\leq n}(n-j+\tfrac{3}{2}-\epsilon)u_{j}=-\left(\frac{n+2}{2}-\epsilon\right)s_{n}-\sum_{1\leq j<n}\frac{j(n-j)}{2}s_{j}. \tag{4.11}\] As \(\frac{j(n-j)}{2}>0\) for \(j<n\), the bound (4.4) follows. Lemma 4.4 below contains a key estimate, establishing a kind of 'uniform continuity' for \(\log D\). The proof of this lemma is the primary motivation for defining our new fundamental domain and studying the shape of its cusp in sections 3.1 and 3.2. For the proof, we first establish a similar kind of 'uniform continuity' for the functions \(v_{l}(\Gamma_{l}g)\) and \(v_{l+1}(\Gamma_{l}g)\) that are essential to section 3.2. **Lemma 4.3**.: _Let \(g,g_{0}\in G\) with \(||g_{0}-I||\leq 1\), then_ \[v_{l}(g)\asymp v_{l}(gg_{0}),\ v_{l}(\Gamma_{l}g)\asymp v_{l}(\Gamma_{l}gg_{0}),\ v_{l+1}(\Gamma_{l}g)\asymp v_{l+1}(\Gamma_{l}gg_{0}) \tag{4.12}\] _for all \(1\leq l\leq n\) with implied constants depending only on \(n\)._ Proof.: We first note that we may in fact work with \(||I-g_{0}||\leq\epsilon\) as then the statement would follow by repeated application of the estimates. In fact, we may assume \(||I-g_{0}^{-1}||\leq\epsilon\) as well.
Now write \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\ 0&{}^{t}Y^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}R&-S\\ S&R\end{pmatrix}, \tag{4.13}\] with \(R+\mathrm{i}S\in\mathrm{U}(n)\), so in particular \(R\,{}^{t}R+S\,{}^{t}S=I\). With \(g_{0}=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\), we have from (2.10) that \[Y(gg_{0})^{-1}={}^{t}Y^{-\frac{1}{2}}\big{(}SA\,{}^{t}A\,{}^{t}S +RC\,{}^{t}A\,{}^{t}S+SA\,{}^{t}C\,{}^{t}R+RC\,{}^{t}CR\\ +SB\,{}^{t}B\,{}^{t}S+RD\,{}^{t}B\,{}^{t}S+SB\,{}^{t}D\,{}^{t}R+RD\,{} ^{t}D\,{}^{t}R\big{)}Y^{-\frac{1}{2}}. \tag{4.14}\] As \(||g_{0}-I||\leq\epsilon\), we have \[{}^{t}Y(gg_{0})^{-\frac{1}{2}}={}^{t}Y^{-\frac{1}{2}}(I+O(\epsilon)). \tag{4.15}\] On the other hand, letting \(\boldsymbol{y}_{j}\) and \(\boldsymbol{y}_{J}^{\prime}\) denote the rows of \({}^{t}Y^{-\frac{1}{2}}\) and \({}^{t}Y(gg_{0})^{-\frac{1}{2}}\), we have \[v_{1}(g)^{-\frac{1}{2}}=||\boldsymbol{y}_{1}||,\quad v_{1}(gg_{0})^{-\frac{1}{ 2}}=||\boldsymbol{y}_{1}^{\prime}|| \tag{4.16}\] and for \(2\leq l\leq n\), \[v_{l}(g)^{-\frac{1}{2}}=\frac{||\boldsymbol{y}_{1}\wedge\cdots\wedge \boldsymbol{y}_{l}||}{||\boldsymbol{y}_{1}\wedge\cdots\wedge\boldsymbol{y}_{l -1}||},\quad v_{l}(gg_{0})^{-\frac{1}{2}}=\frac{||\boldsymbol{y}_{1}^{\prime} \wedge\cdots\wedge\boldsymbol{y}_{l}^{\prime}||}{||\boldsymbol{y}_{1}^{ \prime}\wedge\cdots\wedge\boldsymbol{y}_{l-1}^{\prime}||}, \tag{4.17}\] and so \(v_{l}(g)\asymp v_{l}(gg_{0})\) follows. Now let \(\gamma\in\Gamma_{l}\) be so that \(v_{l}(\Gamma_{l}g)=v_{l}(\gamma g)\). We have \[v_{l}(\Gamma_{l}gg_{0})\leq v_{l}(\gamma gg_{0})\ll v_{l}(\gamma g)=v_{l}( \Gamma_{l}g), \tag{4.18}\] and the reverse bound follows by switching the roles of \(g\) and \(gg_{0}\), and using \(||g_{0}^{-1}-I||\leq\epsilon\). The final estimate in (4.12) is proved in the same way. **Lemma 4.4**.: _If \((h,g),(h_{0},g_{0})\in G\) with \(||g_{0}-I||\leq 1\) and \(h_{0}=(\boldsymbol{x}_{0},\boldsymbol{y}_{0},t_{0})\) satisfies \(||\boldsymbol{x}_{0}||,||\boldsymbol{y}_{0}||\leq 1\), then_ \[D(\tilde{\Gamma}(h,g))\asymp D(\tilde{\Gamma}(h,g)(h_{0},g_{0})). \tag{4.19}\] Proof.: We observe as in lemma 4.3, we may in fact assume \[||g_{0}-I||\leq\epsilon,\ ||\boldsymbol{x}_{0}||\leq\epsilon,\ \text{and}\ || \boldsymbol{y}_{0}||\leq\epsilon. \tag{4.20}\] Moreover, it suffices to show that \(D(\tilde{\Gamma}(h,g)(h_{0},g_{0}))\gg D(\tilde{\Gamma}(h,g))\) as the other inequality follows from switching \((h,g)\) and \((h,g)(h_{0},g_{0})\) as we may assume in addition that \((h_{0},g_{0})^{-1}=(h_{0}^{-g_{0}},g_{0}^{-1})\) also satisfies (4.20). Now let us suppose that \((h,g)\in\tilde{\mathcal{D}}\) so that \[D(\tilde{\Gamma}(h,g))=(\det Y(g))(1+\boldsymbol{x}(h)Y(g)\,{}^{t}\boldsymbol {x}(h))^{-A}. \tag{4.21}\] Let \(1\leq l\leq n\) be the largest index such that \(v_{l}(g)\geq av_{l+1}(g)\) (or \(v_{n}(g)\geq a\) when \(l=n\)) where \(a\) is a constant determined by the constants in proposition 3.6 and lemma 4.3. If no such \(l\) exists, then we have \(v_{j}(g)\asymp 1\) for all \(j\), and lemma 4.3 implies that \(v_{j}(gg_{0})\asymp 1\) as well. The bounds \[D(\tilde{\Gamma}(h,g)(h_{0},g_{0}))\gg 1\gg D(\tilde{\Gamma}(h,g)) \tag{4.22}\] then follow immediately. Now assuming that such a maximal \(l\) exists, we have that \(v_{j}(g)\asymp 1\) for all \(j>l\). 
For these \(j\), lemma 4.3 then implies that \(v_{j}(gg_{0})\asymp 1\), and it follows that \(v_{j}(\gamma gg_{0})\asymp 1\) for \(\gamma\in\Gamma_{l}\) such that \(g_{l}(\gamma gg_{0})\in\mathcal{D}_{n-l}\), see (2.16). By lemma 3.5, we have \(v_{l}(\Gamma_{l}g)\gg v_{l}(g)\), and so \[v_{l}(\Gamma_{l}g)\gg av_{l+1}(g)=av_{l+1}(\Gamma_{l}g) \tag{4.23}\] since \(g_{l}(g)\in\mathcal{D}_{n-l}\). Via lemma 4.3, this implies that \(v_{l}(\Gamma_{l}gg_{0})\gg av_{l+1}(\Gamma_{l}gg_{0})\), so \(a\) can be chosen large enough so that \(gg_{0}\) satisfies the hypotheses of proposition 3.6, and we let \(\gamma\in\Gamma_{l}\) be so that \(\gamma gg_{0}\in\mathcal{D}\). We write \[\gamma=\begin{pmatrix}A_{1}&*&*&*\\ 0&*&*&*\\ 0&0&*&0\\ 0&*&*&*\end{pmatrix}, \tag{4.24}\] where \(A_{1}\in\operatorname{GL}(l,\mathbb{Z})\). From the estimates above, we have \[\det Y(\gamma gg_{0})\asymp\det U_{l}(\gamma gg_{0})V_{l}(\gamma gg _{0})\,^{t}U_{l}(\gamma gg_{0})=\det U_{l}(gg_{0})V_{l}(gg_{0})\,^{t}U_{l}(gg_ {0})\\ \asymp\det U_{l}(g)V_{l}(g)\,^{t}U_{l}(g)\asymp\det Y(g), \tag{4.25}\] where the equality follows from the fact that \(\gamma\in\Gamma_{l}\) normalizes the first matrix in (2.15) and \(\det A_{1}=\pm 1\). It now remains to consider the factors \(1+\boldsymbol{x}(*)Y(*)\,^{t}\boldsymbol{x}(*)\) in the definition of the height function \(D\). Let \(u=(\boldsymbol{m},\boldsymbol{n},0)\) with \(\boldsymbol{m},\boldsymbol{n}\in\mathbb{Z}^{n}\) be so that \((uh_{\gamma},\gamma)(h,g)(h_{0},g_{0})\in\tilde{\mathcal{D}}\). Recalling the definition of \(h_{\gamma}=(\boldsymbol{r},\boldsymbol{s},0)\) following (3.1), we have that \(\boldsymbol{r}^{(1)}=0\) where \(\boldsymbol{r}=\begin{pmatrix}\boldsymbol{r}^{(1)}&\boldsymbol{r}^{(2)}\end{pmatrix}\). Moreover, writing \(\boldsymbol{x}=\begin{pmatrix}\boldsymbol{x}^{(1)}&\boldsymbol{x}^{(2)}\end{pmatrix}\), we have \(\boldsymbol{x}^{(1)}((hh_{0}^{g^{-1}})^{\gamma^{-1}})=\boldsymbol{x}^{(1)}(hh _{0}^{g^{-1}})A_{1}^{-1}\). Using proposition 3.3 together with the fact that \(u\) minimizes the absolute values of the entries of \(\boldsymbol{x}(uh_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})\), we have \[1+\boldsymbol{x}(uh_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})Y( \gamma gg_{0})\,^{t}\boldsymbol{x}(uh_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}}) \\ \ll 1+\boldsymbol{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}} )Y(\gamma gg_{0})\,^{t}\boldsymbol{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1 }}), \tag{4.26}\] and from the estimates above on the \(v_{j}(\gamma gg_{0})\) for \(j>l\), we have \[1+\boldsymbol{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})Y( \gamma gg_{0})\,^{t}\boldsymbol{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}}) \\ \asymp 1+\boldsymbol{x}^{(1)}(h_{\gamma}(hh_{0}^{g^{-1}})^{ \gamma^{-1}})U_{l}(\gamma gg_{0})V_{l}(\gamma gg_{0})\,^{t}U_{l}(\gamma gg_{0 })\,^{t}\boldsymbol{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}}). \tag{4.27}\] Using the expressions for \(h_{\gamma}\), \((hh_{0}^{g^{-1}})^{\gamma^{-1}}\), and that \[U_{l}(\gamma gg_{0})V_{l}(\gamma gg_{0})\,^{t}U_{l}(\gamma gg_{0})=A_{1}U_{l} (gg_{0})V_{l}(gg_{0})\,^{t}U_{l}(gg_{0})\,^{t}A_{1}, \tag{4.28}\] the right side of (4.27) is equal to \[1+\boldsymbol{x}^{(1)}(hh_{0}^{g^{-1}})U_{l}(gg_{0})V_{l}(gg_{0})\,^{t}U_{l}( gg_{0})\,^{t}\boldsymbol{x}^{(1)}(hh_{0}^{g^{-1}})\asymp 1+\boldsymbol{x}(hh_{0}^{g^{-1}})Y( gg_{0})\,^{t}\boldsymbol{x}(hh_{0}^{g^{-1}}) \tag{4.29}\] by the above bounds on \(v_{j}(gg_{0})\) for \(j>l\). 
Recalling that \[g=\begin{pmatrix}I&X(g)\\ 0&I\end{pmatrix}\begin{pmatrix}Y(g)^{\frac{1}{2}}&0\\ 0&{}^{t}\!Y(g)^{-\frac{1}{2}}\end{pmatrix}k(g) \tag{4.30}\] with \(k(g)\in K=G\cap\operatorname{SO}(2n,\mathbb{R})\), we set \(h^{\prime}_{0}=h_{0}^{k(g)^{-1}}\) and note that \[||\mathbf{x}(h^{\prime}_{0})||^{2}+||\mathbf{y}(h^{\prime}_{0})||^{2}=||\mathbf{x}(h_{0})|| ^{2}+||\mathbf{y}(h_{0})||^{2}. \tag{4.31}\] Since \(Y(gg_{0})=Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\,^{t}Y(g)^{\frac{1}{2}}\) and \(\mathbf{x}(hh_{0}^{g^{-1}})=\mathbf{x}(h)+\mathbf{x}(h^{\prime}_{0})Y(g)^{-\frac{1}{2}}\), the right side of (4.29) is equal to \[1+\mathbf{x}(h)Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\,^{t}Y(g)^{-\frac{1}{2} \,t}\mathbf{x}(h)\\ +2\mathbf{x}(h)Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\,^{t}\mathbf{x}(h^{\prime}_ {0})+\mathbf{x}(h^{\prime}_{0})Y(k(g)g_{0})\,^{t}\mathbf{x}(h^{\prime}_{0}). \tag{4.32}\] We have that \(||g_{0}-I||\leq\epsilon\) implies \(Y(k(g)g_{0})=I+O(\epsilon)\) as in (4.14), so if (4.31) is at most \(\epsilon^{2}\) as well, with \(\epsilon\) sufficiently small, then (4.32) is \[\asymp 1+\mathbf{x}(h)Y(g)\,^{t}\mathbf{x}(h), \tag{4.33}\] where we have used \[2|\mathbf{x}(h)Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\,^{t}\mathbf{x}(h^{\prime} _{0})|\\ \leq\sqrt{\mathbf{x}(h^{\prime}_{0})Y(k(g)g_{0})^{2\,\,t}\mathbf{x}(h^{ \prime}_{0})}\left(\mathbf{x}(h)Y(g)\,^{t}\mathbf{x}(h)+1\right)\ll\epsilon\left(\bm {x}(h)Y(g)\,^{t}\mathbf{x}(h)+1\right) \tag{4.34}\] to bound the third term in (4.32). The bound \(D(\tilde{\Gamma}(h,g)(h_{0},g_{0}))\gg D(\tilde{\Gamma}(h,g)\) now follows. ### Proof of theorem 1.2 We recall the following lemma from [10]. **Lemma 4.5**.: _There exists a smooth, compactly supported function \(f_{1}:\mathbb{R}\to\mathbb{R}_{\geq 0}\) such that_ \[\chi_{1}(x)=\sum_{j\geq 0}\left(f_{1}\left(2^{j}x\right)+f_{1}\left(2^{j}(1-x) \right)\right), \tag{4.35}\] _where \(\chi_{1}\) is the indicator function of the open unit interval \((0,1)\)._ Now, following the method of [10], we define for a subset \(S\subset\{1,\dots,n\}\) and \(\mathbf{j}=(j_{1},\dots,j_{n})\in\mathbb{Z}^{n}\) with \(j_{i}\geq 0\), \[g_{\mathbf{j},S}=\begin{pmatrix}A_{\mathbf{j}}E_{S}&0\\ 0&A_{\mathbf{j}}^{-1}E_{S}\end{pmatrix}\in G \tag{4.36}\] where \(E_{S}\) is diagonal with \((i,i)\) entry \(-1\) if \(i\in S\), \(+1\) if \(i\not\in S\), and \[A_{\mathbf{j}}=\begin{pmatrix}2^{j_{1}}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&2^{j_{n}}\end{pmatrix}. \tag{4.37}\] We also set \(h_{S}=(\mathbf{x}_{S},0,0)\in H\) where \(\mathbf{x}_{S}\) has \(i\)th entry \(-1\) if \(i\in S\) and \(0\) if \(i\not\in S\). As in [10], we have \[\chi_{\mathcal{B}}(\mathbf{x})=\sum_{\mathbf{j}\geq 0}\sum_{S\subset\{1,\ldots,n\}}f_{n} \left((\mathbf{x}B^{-1}+\mathbf{x}_{S})A_{\mathbf{j}}E_{S}\right), \tag{4.38}\] where \(\chi_{\mathcal{B}}\) is the indicator function of the rectangular box \(\mathcal{B}=(0,b_{1})\times\cdots\times(0,b_{n})\), \(B\) is the diagonal matrix with entries \(b_{1},\ldots,b_{n}\), \[f_{n}(x_{1},\ldots,x_{n})=\prod_{1\leq j\leq n}f_{1}(x_{j}), \tag{4.39}\] and the sums are over \(\mathbf{j}\in\mathbb{Z}^{n}\) with nonnegative entries. Let \(\psi:[0,\infty)\to[1,\infty)\) be an increasing function. 
Then for \(C>0\) we define \(\mathcal{G}_{\mathbf{j}}(\psi,C)\) to be the set of \(\tilde{\Gamma}(h,g)\in\tilde{\Gamma}\backslash(H\rtimes G)\) such that \[D\big{(}\tilde{\Gamma}(h,g)(1,\begin{pmatrix}\mathrm{e}^{-s}I&0\\ 0&\mathrm{e}^{s}I\end{pmatrix})(h_{S},g_{\mathbf{j},S})\big{)}^{\frac{1}{4}}\leq C\psi (s) \tag{4.40}\] for all \(S\subset\{1,\ldots,n\}\) and \(s\geq 1\). **Lemma 4.6**.: _Suppose that \(\psi\) satisfies_ \[\int_{0}^{\infty}\psi(x)^{-(2n+4)}\mathrm{d}x\leq C_{\psi} \tag{4.41}\] _for some \(C_{\psi}\geq 1\). Then_ \[\tilde{\mu}\left(\tilde{\Gamma}\backslash(H\rtimes G)-\mathcal{G}_{\mathbf{j}}( \psi,C)\right)\ll C_{\psi}C^{-(2n+4)}2^{j_{1}+\cdots+j_{n}}. \tag{4.42}\] Proof.: Suppose that \(\tilde{\Gamma}(h,g)\not\in\mathcal{G}_{\mathbf{j}}(\psi,C)\), so there exists \(S\subset\{1,\ldots,n\}\) and \(s\geq 1\) such that \[D\big{(}\tilde{\Gamma}(h,g)(1,\begin{pmatrix}\mathrm{e}^{-s}I&0\\ 0&\mathrm{e}^{s}I\end{pmatrix})(h_{S},g_{\mathbf{j},S})\big{)}^{\frac{1}{4}}\geq C \psi(s). \tag{4.43}\] We let \(k\) be a nonnegative integer such that \[\frac{k}{K_{\mathbf{j}}}\leq s<\frac{k+1}{K_{\mathbf{j}}}, \tag{4.44}\] where \(K_{\mathbf{j}}=K2^{j_{1}+\cdots+j_{n}}\) with \(K\) a constant to be determined. We have \[(1,\begin{pmatrix}\mathrm{e}^{-s}I&0\\ 0&\mathrm{e}^{s}I\end{pmatrix})(h_{S},g_{\mathbf{j},S})=(1,\begin{pmatrix}\mathrm{e }^{-\frac{k}{K_{\mathbf{j}}}I}&0\\ 0&\mathrm{e}^{\frac{k}{K_{\mathbf{j}}}I}\end{pmatrix})(h_{S},g_{\mathbf{j},S})(h_{1},g_ {1}), \tag{4.45}\] where, with \(s^{\prime}=s-\frac{k}{K_{\mathbf{j}}}\), \[h_{1}=((\mathrm{e}^{s^{\prime}}-1)\mathbf{x}_{S}A_{\mathbf{j}}E_{S},0,0),\quad g_{1}= \begin{pmatrix}\mathrm{e}^{-s^{\prime}}I&0\\ 0&\mathrm{e}^{s^{\prime}}I\end{pmatrix}. \tag{4.46}\] As \(|s^{\prime}|\leq K_{\mathbf{j}}^{-1}\), we can make \(K\) sufficiently large so that \((h_{1},g_{1})\) satisfies the conditions of lemma 4.4. From this and the fact that \(\psi\) is increasing, we have that \[D\big{(}\tilde{\Gamma}(h,g)(1,\begin{pmatrix}\mathrm{e}^{-\frac{k}{K_{\mathbf{j}}} }I&0\\ 0&\mathrm{e}^{\frac{k}{K_{\mathbf{j}}}}I\end{pmatrix})(h_{S},g_{\mathbf{j},S})\big{)}^{ \frac{1}{4}}\gg C\psi\left(\frac{k}{K_{\mathbf{j}}}\right). \tag{4.47}\] By lemma 4.2 and the fact that right multiplication is volume preserving, we have that the set of \(\tilde{\Gamma}(h,g)\) satisfying (4.47) has \(\tilde{\mu}\)-volume bounded by a constant times \[C^{-2n-4}\psi\left(\frac{k}{K_{\mathbf{j}}}\right)^{-2n-4}. \tag{4.48}\] Bounding the volume of the set \(\tilde{\Gamma}\backslash(H\rtimes G)-\mathcal{G}_{\mathbf{j}}(\psi,C)\) by summing (4.48) over \(S\subset\{1,\ldots,n\}\) and nonnegative \(k\in\mathbb{Z}\), we obtain the bound \[C^{-(2n+4)}\sum_{k\geq 0}\psi\left(\frac{k}{K_{\mathbf{j}}}\right)^{-(2n+4)}\ll C ^{-(2n+4)}\left(\psi(0)+\int_{0}^{\infty}\psi\left(\frac{x}{K_{\mathbf{j}}}\right) ^{-(2n+4)}\mathrm{d}x\right) \tag{4.49}\] as \(\psi(x)\) is increasing. The bound (4.42) follows by changing variables. We now proceed to the proof of theorem 1.2. Proof of theorem 1.2.: From (4.38) we express \(\theta_{\mathcal{B}}(M,X,\mathbf{x},\mathbf{y})\) as \[\sum_{S\subset\{1,\ldots,n\}}\sum_{\mathbf{j}\geq 0}\sum_{\mathbf{m}\in\mathbb{Z}^{n}}f_{ n}\left(\frac{1}{M}(\mathbf{m}+\mathbf{x}+M\mathbf{x}_{S}B)B^{-1}E_{S}A_{\mathbf{j}}\right) \mathrm{e}\left(\frac{1}{2}\mathbf{m}X^{\,\mathbf{t}}\mathbf{m}+\mathbf{m}^{\,\mathbf{t}}\mathbf{y} \right). 
\tag{4.50}\] We break the sum in (4.50) into terms \(\mathbf{j}\) such that \(2^{j_{i}}b_{j_{i}}^{-1}\leq M\) for all \(i\) and terms \(\mathbf{j}\) such that \(2^{j_{i}}b_{j_{i}}^{-1}>M\) for some \(i\). Using (2.8), we write the first part as \[\mathrm{e}(\tfrac{1}{2}\mathbf{x}X^{\,t}\mathbf{x})M^{\frac{n}{2}}(\det B)^{\frac{1}{2 }}\sum_{\begin{subarray}{c}\mathbf{j}\geq 0\\ 2^{j_{i}}b_{j_{i}}^{-1}\leq M\end{subarray}}2^{-\frac{1}{2}(j_{1}+\cdots+j_{n}) }\Theta_{f_{n}}\left((h,g(MB,X))(h_{S},g_{\mathbf{j},S})\right), \tag{4.51}\] where \(h=(\mathbf{x},\mathbf{y}-\mathbf{x}X,0)\) and \[g(MB,X)=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}\frac{1}{M}B^{-1}&0\\ 0&MB\end{pmatrix}. \tag{4.52}\] Bounding this is the main work of the proof, but we first bound the contribution of the terms \(\boldsymbol{j}\) with a large index. Suppose that \(L\subset\{1,\ldots,n\}\) is not empty and that \(2^{j_{l}}>b_{j_{l}}M\) for all \(l\in L\). Then the compact support of \(f_{1}\) implies that the sum over \(\boldsymbol{m}^{(L)}\), the vector of entries of \(\boldsymbol{m}\) with index in \(L\), has a bounded number of terms. We write \[\boldsymbol{m}X^{\,\,t}\boldsymbol{m}=\boldsymbol{m}^{(L)}X^{(L,L)}\,^{t} \boldsymbol{m}^{(L)}+2\boldsymbol{m}^{(L)}X^{(L,L^{\prime})}\,^{t}\boldsymbol {m}^{(L^{\prime})}+\boldsymbol{m}^{(L^{\prime})}X^{(L^{\prime},L^{\prime})} \,^{t}\boldsymbol{m}^{(L^{\prime})}, \tag{4.53}\] where \(L^{\prime}\) is the complement of \(L\), and \(X^{(L_{1},L_{2})}\) is the matrix of entries of \(X\) with row and column indices in \(L_{1}\) and \(L_{2}\) respectively. We have (4.39) that \(f_{n}\left(\frac{1}{M}(\boldsymbol{m}+\boldsymbol{x}+M\boldsymbol{x}_{S}B)B^ {-1}E_{S}A_{\boldsymbol{j}}\right)\) factors as \[f_{\#L}\left(\frac{1}{M}(\boldsymbol{m}^{(L)}+\boldsymbol{x}^{(L )}+M\boldsymbol{x}_{S}^{(L)})(B^{(L,L)})^{-1}E_{S}^{(L,L)}A_{\boldsymbol{j}}^{ (L,L)}\right)\\ \times f_{\#L^{\prime}}\left(\frac{1}{M}(\boldsymbol{m}^{(L^{ \prime})}+\boldsymbol{x}^{(L^{\prime})}+M\boldsymbol{x}_{S}^{(L^{\prime})})(B^ {(L^{\prime},L^{\prime})})^{-1}E_{S}^{(L^{\prime},L^{\prime})}A_{\boldsymbol{ j}}^{(L^{\prime},L^{\prime})}\right), \tag{4.54}\] and so, by inclusion-exclusion and the boundedness of \(f_{\#L}\), the terms \(\boldsymbol{j}\) of (4.50) with \(\boldsymbol{j}_{l}>b_{j_{l}}M\) for some \(i\) is at most a constant times \[\sum_{\begin{subarray}{c}L\subset\{1,\ldots,n\}\\ L\neq\emptyset\end{subarray}}\sum_{S\subset L}\sum_{\boldsymbol{m}^{(L)}} \big{|}\theta_{\mathcal{B}^{(L^{\prime})}}(M,X^{L^{\prime},L^{\prime}}, \boldsymbol{x}^{(L^{\prime})},\boldsymbol{y}^{(L^{\prime})}+\boldsymbol{m}^{ (L)}X^{(L,L^{\prime})})\big{|}, \tag{4.55}\] where the sum over \(\boldsymbol{m}^{(L)}\) has a bounded number of terms, \(\mathcal{B}^{(L^{\prime})}\) is the edge of \(\mathcal{B}\) associated to \(L^{\prime}\), and we have used the decomposition (4.38) to express \(\theta_{\mathcal{B}^{(L^{\prime})}}(M,X^{L^{\prime},L^{\prime}},\boldsymbol{ x}^{(L^{\prime})},\boldsymbol{y}^{(L^{\prime})}+\boldsymbol{m}^{(L)}X^{(L,L^{ \prime})})\) as \[\sum_{S^{\prime}\subset L^{\prime}}\sum_{\boldsymbol{j}_{L^{\prime }}}\sum_{\boldsymbol{m}_{L^{\prime}}}f_{\#L^{\prime}}\left(\frac{1}{M}( \boldsymbol{m}^{(L^{\prime})}+\boldsymbol{x}^{(L^{\prime})}+M\boldsymbol{x} _{S}^{(L^{\prime})})(B^{(L^{\prime},L^{\prime})})^{-1}E_{S}^{(L^{\prime},L^{ \prime})}A_{\boldsymbol{j}}^{(L^{\prime},L^{\prime})}\right)\\ \times\mathrm{e}\left(\tfrac{1}{2}\boldsymbol{m}^{(L^{\prime})}X 
^{(L^{\prime},L^{\prime})}\,^{t}\boldsymbol{m}^{(L^{\prime})}+\boldsymbol{m}^ {(L^{\prime})}\,^{t}(\boldsymbol{y}^{(L^{\prime})}+\boldsymbol{m}^{(L)}X^{(L,L^{\prime})})\right). \tag{4.56}\] When \(L=\{1,\ldots,n\}\), the corresponding part of (4.55) is clearly bounded. For any other \(L\), we may apply theorem 1.1 (emphasizing the importance of the uniformity in \(\boldsymbol{y}\)) to conclude for any \(\epsilon>0\), there are full measure sets \(\mathcal{X}^{(n-\#L)}=\mathcal{X}^{(n-\#L)}(\epsilon)\) such that if \(X^{(L^{\prime},L^{\prime})}\in\mathcal{X}^{(n-\#L)}\), the corresponding part of (4.55) is \(\ll M^{\frac{n-\#L}{2}+\epsilon}\) for any \(\epsilon>0\). It follows that (4.55) is \(\ll M^{\frac{n}{2}}\) assuming that \(X\) is such that \(X^{(L^{\prime},L^{\prime})}\in\mathcal{X}^{(n-\#L)}\) for all nonempty \(L\subset\{1,\ldots,n\}\). We now return to (4.51). We let \(\mathcal{X}_{\boldsymbol{j}}(\psi,C)\) to be the set of \((X,\boldsymbol{y})\) with all entries in the interval \((-\frac{1}{2},\frac{1}{2}]\) such that there exist \(\boldsymbol{u}\in(-\frac{1}{2},\frac{1}{2})^{n}\), \(A\in\mathrm{GL}(n,\mathbb{R})\) and \(T\in\mathbb{R}_{\mathrm{sym}}^{n\times n}\) satisfying \[\sup_{B\in\mathcal{K}}||\left((BA)^{-1}-I\right)A_{\boldsymbol{j}}||\leq\epsilon, \tag{4.57}\] \(||T||\leq\epsilon\), and \[\tilde{\Gamma}\bigg{(}(\mathbf{u},\mathbf{y}-\mathbf{u}X,0),\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}A&0\\ 0&{}^{t}A^{-1}\end{pmatrix}\begin{pmatrix}I&0\\ T&I\end{pmatrix}\bigg{)}(h_{S},g_{\mathbf{j},S})\in\mathcal{G}_{\mathbf{j}}(\psi,C). \tag{4.58}\] Here we let \(\epsilon>0\) be a sufficiently small constant, \(\mathcal{G}_{\mathbf{j}}(\psi,C)\) is defined in (4.40), and \(\mathcal{K}\) is the compact subset from the statement of theorem 1.2 identified with the compact subset of positive diagonal matrices \(B\) in the obvious way. We then set \(\mathcal{X}(\psi)\) to be the set of \((X,\mathbf{y})\in\mathbb{R}^{n\times n}_{\text{sym}}\times\mathbb{R}^{n}\) such that \[(X+R,\mathbf{y}R+\mathbf{s}_{R}+\mathbf{s})\in\bigcup_{C>0}\bigcap\mathcal{X} _{\mathbf{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})\\ \cap\bigcap_{\begin{subarray}{c}L\subset\{1,\ldots,n\}\\ L\neq\emptyset\end{subarray}}\{(X_{1},\mathbf{y}_{1})\in\mathbb{R}^{n\times n} \times\mathbb{R}^{n}:X_{1}^{(L^{\prime},L^{\prime})}\in\mathcal{X}^{(n-\#L)}\} \tag{4.59}\] for some \((R,\mathbf{s})\in\mathbb{Z}^{n\times n}\times\mathbb{Z}^{n}\), where \(\mathbf{s}_{R}\in\mathbb{R}^{n}\) has entries \(0\) or \(\frac{1}{2}\) depending on whether the corresponding diagonal entry of \(R\) is even or odd, and \(a>0\) is a constant to be determined. We first verify that \(\mathcal{X}(\psi)\) has full measure, noting that it is enough to show that \[\bigcup_{C>0}\bigcap_{\mathbf{j}\geq 0}\mathcal{X}_{\mathbf{j}}(\psi,C2^{a(j_{1}+ \cdots+j_{n})}) \tag{4.60}\] has full measure in the subset \(\mathcal{X}_{0}\) of \(\mathbb{R}^{n\times n}_{\text{sym}}\times\mathbb{R}^{n}\) having all entries in the interval \((-\frac{1}{2},\frac{1}{2}]\). Let us suppose that the Lebesgue measure of the complement of \(\mathcal{X}_{\mathbf{j}}(\psi,C)\) in \(\mathcal{X}_{0}\) is greater than some \(\delta>0\), which we assume is small. Now, with respect to the measure \((\det A)^{-2n-1}\prod_{i,j}\mathrm{d}a_{ij}\) on \(\mathrm{GL}(n,\mathbb{R})\), the volume of the set of \(A\in\mathrm{GL}(n,\mathbb{R})\) satisfying (4.57) is within a constant multiple (depending on \(\mathcal{K}\)) of \(2^{-n(j_{1}+\cdots+j_{n})}\). 
Then, using the expression (2.13), (2.14) for the Haar measure on \(H\rtimes G\), we have \[\tilde{\mu}\left(\tilde{\Gamma}\backslash(H\rtimes G)-\mathcal{G}_{\mathbf{j}}( \psi,C)\right)\gg\delta 2^{-n(j_{1}+\cdots+j_{n})}, \tag{4.61}\] with implied constant depending on \(\mathcal{K}\). From lemma 4.6 it follows that \[\text{meas}\left(\mathcal{X}_{0}-\mathcal{X}_{\mathbf{j}}(\psi,C)\right)\ll C_{ \psi}C^{-2n-4}2^{(n+1)(j_{1}+\cdots+j_{n})}, \tag{4.62}\] and we find that \[\text{meas}\left(\mathcal{X}_{0}-\bigcup_{C>0}\bigcap_{\mathbf{j} \geq 0}\mathcal{X}_{\mathbf{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})\right)\\ \ll\lim_{C\to\infty}C_{\psi}C^{-2n-4}\sum_{\mathbf{j}\geq 0}2^{((n+1 )-a(2n+4))(j_{1}+\cdots+j_{n})}=0 \tag{4.63}\] as long as \(a>\frac{n+1}{2n+4}\). Now let us suppose that \((X,\mathbf{y})\in\mathcal{X}(\psi)\). By theorem 3.1, the size of the theta functions in (4.51) is invariant under the transformation on the left of (4.59), so we may assume that \(X\in\mathcal{X}_{0}\) as well. In particular, we have that \((X,\mathbf{y})\) is in \(\mathcal{X}_{\mathbf{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})\) for some \(C>0\) (independent of \(\mathbf{j}\)) and all \(\mathbf{j}\geq 0\). We have from corollary 4.1 and the definition of the height function \(D\) that \[\ll M^{\frac{n}{2}}\sum_{S\subset\{1,\ldots,n\}}\sum_{\begin{subarray}{c}\mathbf{ j}\geq 0\\ 2^{j_{i}}b_{j_{i}}^{-1}\leq M\end{subarray}}2^{-\frac{1}{2}(j_{1}+\cdots+j_{n}) }D\left(\tilde{\Gamma}(h,g(MB,X))(h_{S},g_{\mathbf{j},S})\right)^{\frac{1}{4}} \tag{4.64}\] bounds (4.51). Now for all \(\mathbf{j}\geq 0\) there is a \(\tilde{\Gamma}(h^{\prime},g)\in\mathcal{G}_{\mathbf{j}}(\psi,C2^{a(j_{1}+\cdots+j _{n})})\) with \(g\) of the form \[g=\begin{pmatrix}I&X\\ 0&I\end{pmatrix}\begin{pmatrix}A&0\\ 0&{}^{t}\!A^{-1}\end{pmatrix}\begin{pmatrix}I&0\\ T&I\end{pmatrix} \tag{4.65}\] satisfying (4.57) and \(||T||\leq\epsilon\) and \(h^{\prime}\) having the for \((\mathbf{u},\mathbf{y}-\mathbf{u}X,0)\) for some \(\mathbf{u}\in(-\frac{1}{2},\frac{1}{2})^{n}\). We have \[(h^{\prime},g)(1,\begin{pmatrix}\frac{1}{M}I&0\\ 0&MI\end{pmatrix})(h_{S},g_{\mathbf{j},S})=(h,g(MB,X))(h_{S},g_{\mathbf{j},S})(h_{1},g _{1}), \tag{4.66}\] where \[h_{1}=\left(-\mathbf{x}_{S}A_{\mathbf{j}}E_{S}+\mathbf{x}_{S}(BA)^{-1}A_{\mathbf{j}}E_{S}+ \frac{1}{M}(\mathbf{u}-\mathbf{x})B^{-1}A_{\mathbf{j}}E_{\mathbf{j}},0,0\right) \tag{4.67}\] and \[g_{1}=g_{\mathbf{j},S}^{-1}\begin{pmatrix}BA&0\\ 0&{}^{t}(BA)^{-1}\end{pmatrix}\begin{pmatrix}I&0\\ \frac{1}{M^{2}}T&I\end{pmatrix}g_{\mathbf{j},S}. \tag{4.68}\] Recalling that \(2^{j_{i}}\leq M\), the conditions (4.57) and \(||T||\leq\epsilon\) implies that \((h_{1},g_{1})\) satisfies the conditions of lemma 4.4 for all \(M\), which then implies \[D(\tilde{\Gamma}(h,g(MB,X)(h_{S},g_{\mathbf{j},S}))^{\frac{1}{4}} \asymp D\left(\tilde{\Gamma}(h^{\prime},g)(1,\begin{pmatrix}\frac{1}{M}I&0\\ 0&MI\end{pmatrix})(h_{S},g_{\mathbf{j},S})\right)^{\frac{1}{4}}\\ \ll C2^{a(j_{1}+\cdots+j_{n})}\psi(\log M) \tag{4.69}\] since \((h^{\prime},g)\in\mathcal{G}_{\mathbf{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})\). Taking \(a=\frac{2n+3}{4n+8}\) so that \(\frac{n+1}{2n+4}<a<\frac{1}{2}\), it follows that (4.64) is bounded by \[\ll CM^{\frac{n}{2}}\psi(\log M)\sum_{\mathbf{j}\geq 0}2^{-(\frac{1}{2}-a)(j_{1}+ \cdots+j_{n})}\ll CM^{\frac{n}{2}}\psi(\log M), \tag{4.70}\] and theorem 1.2 follows.
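As a small check on the exponent bookkeeping at the end of the proof, the following sympy sketch (the variable names and the numerical instance are ours) confirms that \(a=\frac{2n+3}{4n+8}\) lies strictly between \(\frac{n+1}{2n+4}\) and \(\frac{1}{2}\), with both gaps equal to \(\frac{1}{4(n+2)}\), so that the geometric series appearing in (4.63) and (4.70) indeed converge.

```python
from sympy import symbols, Rational, simplify

n = symbols('n', positive=True)
a = (2*n + 3) / (4*n + 8)

# Both gaps equal 1/(4*(n+2)) > 0, so (n+1)/(2n+4) < a < 1/2 for every n >= 1.
print(simplify(a - (n + 1) / (2*n + 4)))   # equals 1/(4*n + 8)
print(simplify(Rational(1, 2) - a))        # equals 1/(4*n + 8)

# Hence 1/2 - a > 0 and each factor of the sum over j_1, ..., j_n in (4.70) is a
# convergent geometric series; for instance with n = 3:
delta = 0.5 - (2*3 + 3) / (4*3 + 8)        # = 0.05
print(1.0 / (1.0 - 2.0 ** (-delta)))       # finite value of sum_{j >= 0} 2^{-delta j}
```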
2306.17304
Characters in p-adic Heisenberg and Lattice Vertex Operator Algebras
We study characters of states in $p$-adic vertex operator algebras. In particular, we show that the image of the character map for both the $p$-adic Heisenberg and $p$-adic lattice vertex operator algebras contains infinitely-many non-classical $p$-adic modular forms which are not contained in the image of the algebraic character map.
Daniel Barake, Cameron Franc
2023-06-29T21:08:50Z
http://arxiv.org/abs/2306.17304v3
# Characters in \(p\)-adic Heisenberg and lattice vertex operator algebras ###### Abstract. We study characters of states in \(p\)-adic vertex operator algebras. In particular, we show that the image of the character map for both the \(p\)-adic Heisenberg and \(p\)-adic lattice vertex operator algebras contains infinitely-many non-classical \(p\)-adic modular forms which are not contained in the image of the algebraic character map. We obtain also new expressions for square-bracket modes in the Heisenberg VOA which are used in the study of such characters. _E-mail addresses_: [email protected], [email protected] ## 1. Introduction We study \(p\)-adic properties of certain vertex operator algebras. Motivated by both physical and number-theoretical methods, the authors of [10] introduce the study of \(p\)-adic VOAs which arise from a completion of the axioms for usual (algebraic) VOAs. The existence of \(p\)-adic variants of known VOAs such as the Virasoro, Monster and Heisenberg was central to this work; however, the precise image of the character map on such VOAs described in Sections 9 and 10 remains undetermined. Here, we expand on the techniques of [10] to provide results which we hope will assist in resolving this problem. Our focus is first on the \(p\)-adic Heisenberg VOA which we denote by \(S_{1}\), following [10]. The character map of the Heisenberg algebra \(S_{\mathrm{alg}}\) is extended to a surjective \(\mathbb{Q}_{p}\)-linear map \[\boldsymbol{f}:S_{\mathrm{alg}} \to\mathbb{Q}_{p}[E_{2},E_{4},E_{6}]\] \[a \mapsto\eta(q)Z(a,q) \tag{1.1}\] where \(E_{2}\), \(E_{4}\) and \(E_{6}\) denote the Eisenstein series of weights \(2\), \(4\) and \(6\) respectively, normalized such that they have constant term equal to \(1\) (cf. [11], [12]). Note above that \(\boldsymbol{f}\) is a rescaling of the character map \(Z(a,q)\) which ensures the \(\eta(q)^{-1}\) factor is removed from the image. The \(p\)-adic completion of \(\mathbb{Q}[E_{4},E_{6}]\) by the sup-norm yields Serre's ring of \(p\)-adic modular forms \(\mathfrak{M}_{p}\) (cf. [11]), and so motivated by this construction, states in \(S_{1}\) are formed as \(p\)-adic limits of convergent sequences of states in \(S_{\mathrm{alg}}\). Theorem 9.7 of [10] then establishes that \(\boldsymbol{f}\) extends further to a map into \(\mathfrak{M}_{p}\); thus, obtaining \(p\)-adic modular forms as characters of states in \(S_{1}\) amounts to making the following diagram commute: \[\begin{array}{ccc}S_{1}&\overset{\boldsymbol{f}}{\longrightarrow}&\mathfrak{M}_{p}\\ \uparrow&&\uparrow\\ S_{\mathrm{alg}}&\overset{\boldsymbol{f}}{\longrightarrow}&\mathbb{Q}_{p}[E_{2},E_{4},E_{6}]\end{array}\] Here, the upward arrows denote the respective completion functors. Unlike the algebraic case in which it is known that \(\boldsymbol{f}\) is surjective on the ring of quasi-modular forms, it is yet undetermined whether every \(p\)-adic modular form arises as the character of some state from a \(p\)-adic VOA. It was established in Section 10 of [10] that the image of \(\boldsymbol{f}\) on \(S_{1}\) contains the \(p\)-adic Eisenstein series of weight \(2\) whose \(q\)-expansion is given by \[G_{2}^{\star}(q)=\frac{p-1}{24}+\sum_{n\geq 1}\sigma^{\star}(n)q^{n} \tag{1.2}\] where \(\sigma^{\star}(n)\) denotes the sum of all divisors of \(n\) which are coprime to a prime \(p\). In Section 3 of this discussion, we expand further on this fact; we prove **Theorem 1.1**.: _The image of the character map on the \(p\)-adic Heisenberg VOA \(S_{1}\) contains infinitely-many non-classical \(p\)-adic modular forms.
In particular, the image contains_ \[2^{(t+1)/2}t(t-2)!!G_{2}^{(t-1)/2}(q)G_{l+1}^{\star}(q)\] _where \(t,l\geq 1\) are odd and \(l\not=-1\mod p-1\)._ The appearance of the factors \(G_{2}^{(t-1)/2}\) in the image above is new and provides some clarity on the image of \(\boldsymbol{f}\) on \(S_{1}\). Note that setting \(t=1\) above and normalizing yields the classical \(p\)-adic Eisenstein series \(G_{k}^{\star}\) where \(p-1\nmid k\). From the results of [10], one may also deduce the existence of \(p\)-adic variants of VOAs based on lattice theories \(V_{\Lambda}\) (cf. [11], [12]), formed from the \(p\)-adic completion with respect to the sup-norm. Such VOAs will be denoted by \(\widehat{V}_{\Lambda}\). We have an analogue of the character map seen in eq. (1.1): \[\boldsymbol{f}:V_{\Lambda} \to\mathbb{Q}_{p}[E_{2},E_{4},E_{6}]\] \[a \mapsto\eta(q)^{d}Z(a,q) \tag{1.3}\] where \(d\) is the rank of the lattice \(\Lambda\). It will be clear in our discussion when we are working over \(S_{\mathrm{alg}}\) or \(V_{\Lambda}\), and so the use of \(\boldsymbol{f}\) for both characters should cause no confusion. In this case, no concrete examples of \(p\)-adic characters had been constructed prior to this work. Since the theory is comparable to that of \(S_{\mathrm{alg}}\), albeit with the emergence of theta series (cf. [13]), an analogous version of Theorem 1.1 in \(\widehat{V}_{\Lambda}\) seems plausible. In fact, in Section 4 we construct infinite families of states in \(\widehat{V}_{\Lambda}\) which give rise to \(p\)-adic modular forms in the sense of Katz [14]. Our second theorem is then: **Theorem 1.2**.: _Let \(\Lambda\) be an even unimodular lattice. The image of the character map on the \(p\)-adic lattice VOA \(\widehat{V}_{\Lambda}\) contains infinitely-many non-classical Katz \(p\)-adic modular forms. In particular, the image contains the sum_ \[2tG_{l+1}^{\star}(q)\sum_{k\geq 0}\binom{t-1}{2k}\frac{(2k)!}{k!}G_{2}(q)^{k} \Theta_{\Lambda,t,k}(q)\] _where \(t,l\geq 1\) are odd with \(l\not=-1\mod p-1\), and each \(\Theta_{\Lambda,t,k}\) is quasi-modular._ The precise definition of \(\Theta_{\Lambda,t,k}\) is given in the statement of Proposition 4.3. In this case, by setting \(t=1\) and normalizing, we obtain the product \(G_{k}^{\star}\Theta_{\Lambda}\) where \(\Theta_{\Lambda}\) is the theta series for \(\Lambda\), which is well-known to be modular since \(\Lambda\) is even and unimodular. Instrumental to establishing the connection between VOAs and modular forms is the notion of transforming a VOA by way of a map \(z\mapsto\phi(z)\), introduced first in [15]. An overview of this technique can be found in detail in [10] and in Chapter 5 of [11]. In particular, setting \(\phi(z)=e^{z}-1\) formulates the vertex operators as living on a torus rather than a sphere, and these are given by \[Y(a,e^{z}-1)e^{kz}=Y[a,z]=\sum_{n\in\mathbb{Z}}a[n]z^{-n-1} \tag{1.4}\] where \(k\in\mathbb{Z}\) denotes the weight of the state \(a\). This resulting "square-bracket" VOA is equipped with a new grading which respects the grading on the space of quasi-modular forms. Expressing sequences of square-bracket states in terms of the "usual" round-bracket formalism will be crucial for establishing their \(p\)-adic convergence in both \(S_{1}\) and \(\widehat{V}_{\Lambda}\). Thus in Section 2.3, we provide an overview of the square-bracket formalism for \(S_{\mathrm{alg}}\) as well as possibly-novel expressions for the Heisenberg square-bracket modes of positive and negative weight (eqs.
(2.10) and (2.12) respectively). These expressions are combinatorial in nature, as they are given and derived by use of integer compositions (cf. [10]). ## 2. Background ### Vertex Operator Algebras For elementary notions in VOA theory, the reader is invited to consult [11]. A vertex operator algebra consists of a \(\mathbb{Z}\)-graded vector space \(V\) \[V=\coprod_{n\in\mathbb{Z}}V_{(n)}\] where \(\dim V_{(n)}<\infty\) for all \(n\) and \(V_{(n)}=0\) for \(n\ll 0\). We say the state \(a\in V\) is homogeneous of weight \(k\) if \(a\in V_{(k)}\). The vertex operator linear map \[Y: V \to\mathrm{End}(V)[[z,z^{-1}]]\] \[a \mapsto Y(a,z)=\sum_{n\in\mathbb{Z}}a(n)z^{-n-1}\] packages the modes \(a(n)\in\mathrm{End}(V)\) of the state \(a\) into a formal Laurent series, and these satisfy the truncation condition given by \[a(n)b=0,\ \ \ n\gg 0 \tag{2.1}\] for any \(a,b\in V\). The vacuum state \(\mathbf{1}\in V_{(0)}\) satisfies the vaccum and creation properties, respectively, \[Y(\mathbf{1},z)=1 \tag{2.3}\] \[Y(a,z)\mathbf{1}\in V[[z]]\ \ \ \text{and}\ \ \ \lim_{z\to 0}Y(a,z)\mathbf{1}=a \tag{2.2}\] and the conformal state \(\omega\in V_{(2)}\) has vertex operator \[Y(\omega,z)=\sum_{n\in\mathbb{Z}}L(n)z^{-n-2}\ \left(\sum_{n\in\mathbb{Z}} \omega(n)z^{-n-1}\right)\] where the \(L(i)\in\mathrm{End}(V)\) generate a copy of the Virasoro algebra with central charge \(c_{V}\in\mathbb{C}\): \[[L(m),L(n)]=(m-n)L(m+n)+\frac{c_{V}}{12}(m^{3}-m)\delta_{m+n,0}. \tag{2.4}\] The \(L(0)\)-eigenspace decomposition of \(V\) coincides with the grading of \(V\), that is, \(L(0)a=ka\) where \(a\in V_{(k)}\), and the \(L(-1)\)-derivative property holds: \[Y(L(-1)a,z)=\partial_{z}Y(a,z), \tag{2.5}\] or equivalently, \(L(-1)\mathbf{1}=0\). Finally, the Jacobi identity is satisfied for any \(a,b\in V\) and formal commuting variables \(z_{0},z_{1},z_{2}\): \[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y(a,z_{1}) Y(b,z_{2})-z_{0}^{-1}\delta\left(\frac{z_{2}-z_{1}}{-z_{0}}\right)Y(b,z_{2})Y(a,z_{1})\] \[=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y(Y(a,z_{0 })b,z_{2}) \tag{2.6}\] where \(\delta(z)=\sum_{n\in\mathbb{Z}}z^{n}\) is the formal delta series. We shall also require the notion of a \(V\)-module in arguments involving lattice VOAs. A \(V\)-module \(M\) is a \(\mathbb{C}\)-graded vector space \[M=\coprod_{h\in\mathbb{C}}M_{(h)}\] such that \(\dim M_{(h)}<\infty\) for all \(h\) and \(M_{(h)}=0\) for \(h\ll 0\). Here also, the \(L(0)\)-eigenspace decomposition coincides with the grading above. We call \(a\in M\) homogeneous of weight \(h\) if \(a\in M_{(h)}\). We have an analogous vertex operator linear map \[Y_{M}:V \to\operatorname{End}(M)[[z,z^{-1}]]\] \[a \mapsto Y_{M}(a,z)=\sum_{n\in\mathbb{Z}}a(n)z^{-n-1}\] where the truncation and vacuum conditions (eqs. (2.1) and (2.2)) as well as the Jacobi identity (eq. (2.6)) hold true for the above modes. Given \(a\in V_{(k)}\) and \(m\in\mathbb{Z}\), one shows that \(a(n):V_{(m)}\to V_{(m+k-n-1)}\) (cf. [10]). Denote then the zero mode \(o(a)\) of \(a\) as \(a(k-1)\) such that \(o(a):V_{(m)}\to V_{(m)}\), and extend the definition to all of \(V\) additively. By tracing the zero mode over each subspace \(V_{(n)}\) (recall \(\dim V_{(n)}<\infty\) for all \(n\in\mathbb{Z}\)) and packaging these into a generating function, we obtain the character \[Z(a,q)=q^{-c_{V}/24}\sum_{n\in\mathbb{Z}}\operatorname{Tr}_{V_{(n)}}o(a)q^{n}\] where \(q=e^{2\pi iz}\) denotes the nome. 
In the case of \(V\)-modules, the character is defined in a natural way wherein the zero mode \(o(a)=a(k-1)\in\operatorname{End}(M)\) for \(a\in V_{(k)}\) preserves each homogeneous space \(M_{(h)}\). ### The Heisenberg VOA We briefly outline the construction of the Heisenberg VOA \(S_{\operatorname{alg}}\). A rigorous treatment can be found in many texts such as [10], [11], [12], [13]. For generalities on Lie algebras alongside their representations, we follow Chapter 9 of [1]. Let \(\mathfrak{h}\) be a finite-dimensional vector space viewed as an abelian Lie algebra, equipped with a nondegenerate symmetric bilinear form \(\langle\cdot,\cdot\rangle\). To \(\mathfrak{h}\) we then associate the (untwisted) affine Lie algebra \[\widehat{\mathfrak{h}}=\big{(}\mathfrak{h}\otimes\mathbb{C}[t,t^{-1}]\big{)} \oplus\mathbb{C}\mathbf{k}\] where \(\mathbf{k}\) is central, and with bracket \[[h\otimes t^{m},h^{\prime}\otimes t^{n}]=m\langle h,h^{\prime}\rangle\delta_{m+n,0}\mathbf{k}\] for \(h,h^{\prime}\in\mathfrak{h}\) and \(m,n\in\mathbb{Z}\). There are two graded subalgebras which we write with respect to the notation given in [11] (by weights) as: \[\widehat{\mathfrak{h}}_{+} =\mathfrak{h}\otimes t^{-1}\mathbb{C}[t^{-1}]\] \[\widehat{\mathfrak{h}}_{-} =\mathfrak{h}\otimes t\mathbb{C}[t].\] Define \(\widehat{\mathfrak{h}}_{\leq 0}=\widehat{\mathfrak{h}}_{-}\oplus( \mathfrak{h}\oplus\mathbb{C}\mathbf{k})\), and denote by \(\mathcal{U}\) the universal enveloping algebra. Let \(\mathbb{C}\mathbf{1}\) be the \(1\)-dimensional \(\widehat{\mathfrak{h}}_{\leq 0}\)-module where \(\widehat{\mathfrak{h}}_{-}\) acts trivially on \(\mathbb{C}\) and \(\mathbf{k}\) acts as the identity. Construct the induced module \[S_{\operatorname{alg}}=\operatorname{Ind}_{\mathcal{U}(\widehat{\mathfrak{h} }_{\leq 0})}^{\mathcal{U}(\widehat{\mathfrak{h}})}\mathbb{C}\mathbf{1}=\mathcal{ U}(\widehat{\mathfrak{h}})\otimes_{\mathcal{U}(\widehat{\mathfrak{h}}_{\leq 0})} \mathbb{C}\mathbf{1}\] where we write the action of \(h\otimes t^{n}\) on \(S_{\operatorname{alg}}\) as \(h(n)\) for \(n\in\mathbb{Z}\). For the Heisenberg VOA considered here one takes \(\mathfrak{h}=\mathbb{C}h\) to be one-dimensional with \(\langle h,h\rangle=1\). Note that \(S_{\operatorname{alg}}\cong S(\widehat{\mathfrak{h}}_{+})\), the symmetric algebra. That is, \(\{h(-1),h(-2),h(-3),\dots\}\) form a basis for \(S_{\operatorname{alg}}\). Also note the \(h(n)\) for \(n\geq 1\) act on \(S_{\operatorname{alg}}\) as \(n\partial_{h(-n)}\), and \(h(0)\) acts as zero. The Heisenberg vertex operator (or Heisenberg field) is defined as \[h(z)=\sum_{n\in\mathbb{Z}}h(n)z^{-n-1}\in\operatorname{End}(S_{\operatorname{ alg}})[[z,z^{-1}]].\] This is no more than a generating function for the operators \(h(n)\in\operatorname{End}(S_{\operatorname{alg}})\). Hence we may view \(S_{\operatorname{alg}}\) as being generated by the Fourier coefficients of the state \(h(-1)1\) (see [10]).
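Before passing to the square-bracket formalism, it may be helpful to record the character computation this grading already gives. Since \(S_{\operatorname{alg}}\cong S(\widehat{\mathfrak{h}}_{+})\) has, in the rank-one case, one free generator \(h(-n)\) in each weight \(n\geq 1\), the weight-\(n\) graded piece has dimension \(p(n)\), the number of partitions of \(n\), so that \(Z(\mathbf{1},q)=q^{-1/24}\prod_{n\geq 1}(1-q^{n})^{-1}=\eta(q)^{-1}\) and hence \(\boldsymbol{f}(\mathbf{1})=1\) under the normalization (1.1). The short sympy sketch below (the truncation order \(N\) and the variable names are ours) simply compares the product expansion with partition counts.

```python
from sympy import symbols, series, Poly, npartitions

q = symbols('q')
N = 12   # truncation order of the q-expansion

# Graded dimension of S_alg: one free generator h(-n) in each weight n >= 1,
# so the generating function is prod_{n >= 1} (1 - q^n)^{-1}.
gen_fun = 1
for n in range(1, N + 1):
    gen_fun *= 1 / (1 - q**n)

coeffs = Poly(series(gen_fun, q, 0, N + 1).removeO(), q).all_coeffs()[::-1]

# The coefficient of q^n should equal the partition number p(n).
print(coeffs[:N + 1])                      # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77]
print([npartitions(k) for k in range(N + 1)])
# Multiplying by q^{-1/24} gives Z(1, q) = eta(q)^{-1}, so f(1) = eta(q) Z(1, q) = 1.
```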
(2.7), we proceed to derive explicit expressions for the square-bracket modes first by writing \[Y_{\phi}(h(-1)1,z)=Y(h(-1)1,e^{z}-1)e^{z}=\sum_{n\in\mathbb{Z}}h[n]z^{-n-1},\] then multiplying by \(z^{n}\) to obtain \[h[n]=\operatorname{Res}_{z}Y(h(-1)1,e^{z}-1)(e^{z})(z^{n}).\] Note that \(Y(h(-1)1,e^{z}-1)\) truncates when applied to any state in \(S_{\operatorname{alg}}\), and that we can use a change of variables theorem (cf. eq. (1.1.3) of [11]) with \(w=e^{z}-1\) to show \[h[n]=\operatorname{Res}_{w}h(w)\left(\log(w+1)\right)^{n} \tag{2.8}\] Observe that \(h[0]=h(0)\). Consider first the case where \(n\geq 1\). Integrating the geometric series and raising to the \(n\)-th power yields the expression \[\left(\log(w+1)\right)^{n}=\left(\sum_{k\geq 1}\frac{(-1)^{k-1}w^{k}}{k}\right) ^{n} \tag{2.9}\] and this consists only of non-negative powers of \(w\). For our fixed \(n\geq 1\), extracting the coefficient of \(w^{k}\) for some \(k\geq 0\) amounts to summing all possible ways of multiplying \(n\) summands in eq. (2.9) to produce a term of degree \(k\). All such possible ways are in correspondence with the set \(\mathcal{C}(k,n)\) of all \(n\)-compositions \((k_{1},k_{2},\ldots,k_{n})\) of \(k\) (cf. [12]) where \(k_{i}\geq 1\) for all \(1\leq i\leq n\). So for \(n\geq 1\) we have \[\operatorname{Coeff}_{w^{k}}\left(\log(w+1)\right)^{n}=\sum_{(k_{1},k_{2}, \ldots,k_{n})\in\mathcal{C}(k,n)}\left(\prod_{i=1}^{n}\frac{(-1)^{k_{i}-1}}{k _{i}}\right),\] which gives the following expression for the square-bracket modes of negative weight: \[h[n]=\sum_{k\geq 1}\left(\sum_{(k_{1},\ldots,k_{n})\in\mathcal{C}(k,n)} \left(\prod_{i=1}^{n}\frac{(-1)^{k_{i}-1}}{k_{i}}\right)\right)h(k),\qquad n \geq 1. \tag{2.10}\] Note that when \(n>k\), \(|\mathcal{C}(k,n)|=0\) here. Using eq. (2.10), the first such modes are \[h[1] =h(1)-\frac{1}{2}h(2)+\frac{1}{3}h(3)-\frac{1}{4}h(4)+\cdots\] \[h[2] =h(2)-h(3)+\frac{11}{12}h(4)-\frac{1}{6}h(5)+\cdots\] \[h[3] =h(3)-\frac{3}{2}h(4)+\frac{7}{4}h(5)-\frac{15}{8}h(6)+\cdots\] \[\vdots\] We now derive an expression for the square-bracket modes of positive weight. Once again by eq. (2.8), we require a formulation for the coefficients of \(\left(\log(w+1)\right)^{n}\) when \(n\leq 1\). Power series long division gives the following recursive definition: Let \(c_{0}=1\), \(c_{i}=0\) for \(i<0\) and \[c_{k}=\sum_{i\geq 1}\frac{(-1)^{i+1}}{i+1}c_{k-i}. \tag{2.11}\] so that \[c_{1}=\frac{1}{2},\quad c_{2}=-\frac{1}{12},\quad c_{3}=\frac{1}{24},\quad c_{4 }=-\frac{19}{720},\cdots\] That is, \(c_{k}=\operatorname{Coeff}_{w^{k-1}}\left(\log(w+1)\right)^{-1}\). The integer composition argument used for the square-bracket modes of negative weight will not work in the same manner here, since the Laurent expansion of \(\left(\log(w+1)\right)^{-1}\) consists of terms \(w^{-1}\) and \(w^{0}\). Suppose however, that we have an \(n\)-composition of \(k\) where we allow for parts equal to \(-1\) and \(0\). For integers \(k_{1},\ldots,k_{n}\geq 1\), we may write this composition as \[(k_{1}-2)+(k_{2}-2)+\cdots+(k_{n}-2)=k\] \[k_{1}+k_{2}+\cdots+k_{n}=k+2n\] and the above re-arrangement shows that by adding \(2\) to each part, this is in fact the same as an ordinary \(n\)-composition of \(k+2n\). Putting all of this together, we obtain the following expression for the square-bracket modes of positive weight: \[h[-n]=\sum_{k\leq-n}\left(\sum_{(k_{1},\ldots,k_{n})\in\mathcal{C}(k+2n,n)} \prod_{i=1}^{n}c_{k_{i}-1}\right)h(k),\qquad n\geq 1. 
\tag{2.12}\] The coefficients \(c_{i}\) are defined recursively in eq. (2.11). Here, the first few modes are \[h[-1]=h(-1)+\frac{1}{2}h(0)-\frac{1}{12}h(1)+\frac{1}{24}h(2)- \frac{19}{720}h(3)+\cdots\] \[h[-2]=h(-2)+h(-1)+\frac{1}{12}h(0)-\frac{1}{240}h(2)+\frac{1}{24 0}h(3)-\cdots\] \[h[-3]=h(-3)+\frac{3}{2}h(-2)+\frac{1}{2}h(-1)+\frac{1}{240}h(1)+.\] \[\vdots\] ### Some Square-Bracket States We construct as in [11] a family of square-bracket states in \(S_{\mathrm{alg}}\) which map under \(\boldsymbol{f}\) (recall eq. (1.1)) to an appropriate subspace of the ring of quasi-modular forms. In order to assess \(p\)-adic convergence in the subsequent Lemmas 3.6 and 3.7, we must write these square-bracket states in the round-bracket formalism, that is, in terms of the basis \(\{h(-1),h(-2),h(-3)\ldots\}\) of \(S_{\mathrm{alg}}\). **Lemma 3.1**.: _For an odd integer \(t\geq 1\), we have_ \[h[-1]^{t}\,\boldsymbol{l}=\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}h (-1)^{t-2k}\,\boldsymbol{l}. \tag{3.1}\] Proof.: We proceed by induction. Recall that in \(S_{\mathrm{alg}}\) we have \[h[-1]=h(-1)-\frac{1}{12}h(1)+\frac{1}{24}h(2)-\frac{19}{720}h(3)+\cdots\] so it is clear that \(h[-1]\boldsymbol{1}=h(-1)\boldsymbol{1}\) which agrees with the statement. Suppose the claim holds up to some \(t>1\). Then applying \(h[-1]\) to the right-hand side of eq. (3.1) gives \[\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}h(-1)^{t-2k+1 }\boldsymbol{1}+\sum_{k\geq 0}\binom{t}{2k}\frac{2(2k)!(t-2k)}{k!(-24)^{k+1}}h(- 1)^{t-2k-1}\boldsymbol{1}\] \[=\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}h(-1)^{t-2k +1}\boldsymbol{1}+\sum_{k\geq 0}\binom{t}{2k+1}\frac{(2k+2)!}{(k+1)!(-24)^{k+1}}h(- 1)^{t-2k-1}\boldsymbol{1}.\] Re-indexing the second sum, we get \[=h(-1)^{t+1}+\sum_{k\geq 1}\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}h(-1)^{t-2k+1} \mathbf{1}+\sum_{k\geq 1}\binom{t}{2k-1}\frac{(2k)!}{k!(-24)^{k}}h(-1)^{t-2k+1} \mathbf{1},\] and by using Pascal's rule this equals \(h[-1]^{t+1}\mathbf{1}\) which completes the induction. _Remark_.: It is interesting to note that the expression given in Lemma 3.1 is related to generalized Hermite polynomials \[\operatorname{He}_{n}^{\alpha}(x)=(x-\alpha\partial_{x})^{n}\cdot 1=e^{-\frac{ \alpha(\partial_{x})^{2}}{2}}x^{n} \tag{3.2}\] for \(n\geq 0\) and where \(\alpha>0\) (cf. [12, AAR99]). Expanding the rightmost side of eq. (3.2): \[\operatorname{He}_{n}^{\alpha}(x)=\sum_{k\geq 0}\frac{(-\alpha)^{k}}{2^{k}(k)!} \partial_{x}^{2k}x^{n}=\sum_{k\geq 0}\frac{(n)!(-\alpha)^{k}}{2^{k}(k)!(n-2k)!}x ^{n-2k}=\sum_{k\geq 0}\binom{n}{2k}\frac{(-\alpha)^{k}(2k)!}{2^{k}(k)!}x^{n-2k}.\] By comparing eq. (3.2) to repeated application of \(h[-1]\) on \(\mathbf{1}\), setting \(x=h(-1)\), \(\alpha=1/12\) and \(1=\mathbf{1}\) above gives the expression of Lemma 3.1. **Lemma 3.2**.: _For an odd integer \(t\geq 1\), we have_ \[h(1)h[-1]^{t}\,\boldsymbol{1}=\sum_{k\geq 0}\binom{t}{2k+1}\frac{(2k+1)!}{k!(- 24)^{k}}h(-1)^{t-2k-1}\,\boldsymbol{1}.\] Proof.: The proof is similar to the computation of the second sum in Lemma 3.1. The following proposition is a slight generalization of Lemma 10.2 of [13] which outlines the family of square-bracket states which we will be considering throughout. 
**Proposition 3.3**.: _For odd integers \(r,t\geq 1\), we have_ \[(r-1)!h[-r]h[-1]^{t}\,\boldsymbol{1}=\sum_{n\geq 0}\sum_{k \geq 0}\binom{t}{2k}\frac{n!S_{r}^{(n+1)}(2k)!}{k!(-24)^{k}}h(-n-1)h(-1)^{t-2k} \,\boldsymbol{1}\] \[-\sum_{k\geq 0}\binom{t}{2k+1}\frac{B_{r+1}(2k+1)!}{k!(r+1)(-24)^{k }}h(-1)^{t-2k-1}\,\boldsymbol{1}\] _where_ \[S_{r}^{(n+1)}=\frac{1}{n!}\sum_{j\geq 0}\binom{n}{j}(-1)^{n+j}(j+1)^{r-1}\] _are the Stirling numbers of the second kind._ Proof.: First, using Lemma 3.1 we compute \[(r-1)!h[-r]h[-1]^{t}\boldsymbol{1}\\ =(r-1)!\operatorname{Res}_{z}\,z^{-r}e^{z}\sum_{n\in\mathbb{Z}}h( n)\left(\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}h(-1)^{t-2k} \mathbf{1}\right)(e^{z}-1)^{-n-1}\,.\] Applying Lemma 3.2 gives \[=(r-1)!\operatorname{Res}_{z}\,z^{-r}e^{z}\left(\sum_{n\geq 0 }\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}h(-n-1)h(-1)^{t-2k} \mathbf{1}\left(e^{z}-1\right)^{n}\right)\\ +(r-1)!\operatorname{Res}_{z}\,z^{-r}e^{z}\sum_{k\geq 0}\binom{t}{2k+ 1}\frac{(2k+1)!}{k!(-24)^{k}}h(-1)^{t-2k-1}\mathbf{1}\left(e^{z}-1\right)^{-2}.\] It was shown in [13] that \[(r-1)!\operatorname{Res}_{z}\,z^{-r}e^{z}(e^{z}-1)^{n}=n!S_{r}^{(n+1)}\] \[(r-1)!\operatorname{Res}_{z}\,z^{-r}e^{z}(e^{z}-1)^{-2}=-\frac{B_{ r+1}}{r+1}\] and by comparing with the claim, this concludes the proposition. ### Proof of Theorem 1.1 We may now take the character of the family of states considered in Proposition 3.3 by using the following reformulation of equation (44) of [10] which gives an explicit description for the character of the state \(a=h[-k_{1}]h[-k_{2}]\cdots h[-k_{r}]\mathbf{1}\) with \(k_{i}\geq 1\) under the map \(\boldsymbol{f}\) encountered in eq. (1.1): \[\boldsymbol{f}(a)=\sum_{(\ldots\{s,t\},\ldots)\in\mathcal{P}(\Phi,2)}\prod_{ \{s,t\}}\frac{2(-1)^{s+1}}{(s-1)!(t-1)!}G_{s+t}(q). \tag{3.3}\] Here, \(\mathcal{P}(\Phi,2)\) denotes all possible partitions of the set \(\Phi\) into parts \(\{s,t\}\) of size \(2\). Then for each such partition, the product is taken over each part. _Remark_.: Here and throughout, we employ the normalization \[G_{k}(q)=-\frac{B_{k}}{2k}+\sum_{n\geq 1}\sigma_{k-1}(n)q^{n} \tag{3.4}\] seen in [11] and so the results of the papers cited here have been adjusted to this change. The following combinatorial fact will be of use in proving the subsequent proposition. **Lemma 3.4**.: _Let \(n\geq 0\) be an integer. The number of ways of partitioning a set of \(2n\) elements into exactly \(n\) pairs is \((2n-1)!!\). This is the product of all odd integers no greater than \(2n-1\), and if \(n=0\) then \((-1)!!=1\)._ Proof.: Let \(n\geq 0\). Without regard for order, the number of ways of choosing \(2\) elements from \(2n\), then \(2\) elements from \(2n-2\) and so on, is \[\frac{1}{n!}\binom{2n}{n}\binom{2n-2}{2}\cdots\binom{2}{2}=\frac{(2n)!}{n!2^{n }}=\frac{(n)(2n-1)(n-1)(2n-3)\cdots(1)}{n!}=(2n-1)!!.\] Note that setting \(n=0\) in the second term above results in \((-1)!!=1\) which agrees with the fact that there is only one way of partitioning the empty set into zero pairs. **Proposition 3.5**.: _For odd integers \(r,t\geq 1\), we have_ \[\boldsymbol{f}(h[-r]h[-1]^{t}\boldsymbol{1})=\frac{2^{(t+1)/2}t(t-2)!!}{(r-1 )!}G_{2}^{(t-1)/2}(q)G_{r+1}(q).\] Proof.: We make use of eq. (3.3). In this case, \(\Phi=\{r,1_{1},1_{2},\ldots,1_{t}\}\) where we have labelled the \(1\)s for clarity. Since \(t\) is odd, \(|\Phi|\) is even and so \(\mathcal{P}(\Phi,2)\neq 0\). 
Suppose first that \(r\) is paired with \(1_{1}\), and the remaining \(1_{2},\ldots,1_{t}\) are paired amongst themselves. The \(t-1\) remaining \(1\)s then get put in \((t-1)/2\) parts of size \(2\), hence \[\left(\frac{2}{(r-1)!}G_{r+1}(q)\right)\left(\frac{2}{(1-1)!}G_{2}(q)\right)^{ (t-1)/2}=\frac{2^{(t+1)/2}}{(r-1)!}G_{2}^{(t-1)/2}(q)G_{r+1}(q)\] where we recall that \(r\) is odd here. Using Lemma 3.4, there are then \((t-2)!!\) distinct ways of partitioning the remaining \(1_{2},\ldots,1_{t}\) into \((t-1)/2\) parts of size \(2\). Each way yields the same expression as above and so we multiply by \((t-2)!!\). Finally, we may repeat this process \(t\) times, since \(r\) can be paired with \(1_{2}\) then with \(1_{3}\) and so on, until \(r\) is paired with \(1_{t}\). This yields the expression \[\frac{2^{(t+1)/2}t(t-2)!!}{(r-1)!}G_{2}^{(t-1)/2}(q)G_{r+1}(q)\] which establishes the result. With this, we denote (in the notation of [12]) the rescaled state \[u_{r,t}=(1-p^{r})(r-1)!h[-r]h[-1]^{t}\mathbf{1}\] for some prime \(p>3\). By Proposition 3.5, \[\boldsymbol{f}\big{(}u_{r,t}\big{)}=(1-p^{r})(2^{(t+1)/2}t(t-2)!!)G_{2}^{(t-1)/ 2}(q)G_{r+1}(q). \tag{3.5}\] For each fixed odd \(t,l\geq 1\) where \(l\not=-1\mod p-1\), we will assess the convergence of the sequence \[(u_{p^{a}(p-1)+l,t})_{a\geq 0}=\Big{(}(1-p^{p^{a}(p-1)+l})(p^{a}(p-1))!h[-p^{a}(p -1)-l]h[-1]^{t}\mathbf{1}\Big{)}_{a\geq 0} \tag{3.6}\] in the \(p\)-adic topology. In order to do this, we require that the square-bracket states above be written in the round-bracket formalism. This was done in Proposition 3.3. The following is then an extension of Lemma 10.4 of [12] and establishes the \(p\)-adic convergence of the terms involving Stirling numbers. **Lemma 3.6**.: _Fix \(t,l\geq 1\) odd, and let \(p\) be an odd prime with \(r=p^{a}(p-1)+l\), \(s=p^{b}(p-1)+l\) and \(a\leq b\). Then for any fixed \(k\) within the range \(0\leq k\leq\lfloor t/2\rfloor\) and any \(n\geq 0\) we have_ \[(1-p^{r})\binom{t}{2k}\frac{n!S_{r}^{(n+1)}(2k)!}{k!(-24)^{k}}\equiv(1-p^{s} )\binom{t}{2k}\frac{n!S_{s}^{(n+1)}(2k)!}{k!(-24)^{k}}\mod p^{a+x+1}.\] _for some fixed integer \(x\)._ Proof.: Let \(|\cdot|_{p}\) denote the \(p\)-adic absolute value. We have \[\left|\binom{t}{2k}\frac{n!S_{r}^{(n+1)}(2k)!}{k!(-24)^{k}}-\binom{t}{2k} \frac{n!S_{s}^{(n+1)}(2k)!}{k!(-24)^{k}}\right|_{p}=\left|n!S_{r}^{(n+1)}-n!S_ {s}^{(n+1)}\right|_{p}\left|\binom{t}{2k}\frac{(2k)!}{k!(-24)^{k}}\right|_{p}.\] The rightmost term is dependent only on \(t\) and \(k\) which are fixed, and so denote by \(x\) the \(p\)-adic valuation of this term, which is also fixed. We show now that \(n!S_{r}^{(n+1)}\equiv n!S_{s}^{(n+1)}\mod p^{a+1}\). From the statement of Proposition 3.3, we have the formula \[n!S_{r}^{(n+1)}=\sum_{j=0}^{n}\binom{n}{j}(-1)^{n+j}(j+1)^{r-1}.\] If \(p\mid(j+1)\) then of course \((j+1)^{r-1}\equiv 0\mod p^{a+1}\). Suppose then, that \(p\nmid(j+1)\). This means that \(p\) and \((j+1)\) are coprime, and subsequently that \(p^{a+1}\) and \((j+1)\) are coprime. Recall that \(\varphi(p^{a+1})=p^{a}(p-1)\) where \(\varphi\) denotes the Euler totient function. So by Euler's theorem, \[(j+1)^{p^{a}(p-1)}\equiv 1\mod p^{a+1}.\] Note also that \((j+1)^{l-1}\) is an integer independent of \(r\) and so by multiplying both sides of the above congruence by this factor we get \((j+1)^{r-1}\equiv(j+1)^{l-1}\mod p^{a+1}\). 
Thus \[n!S_{r}^{(n+1)}\equiv\sum_{\begin{subarray}{c}j=0\\ p\nmid(j+1)\end{subarray}}^{n}\binom{n}{j}(-1)^{n+j}(j+1)^{l-1}\mod p^{a+1}\] and since the right hand side above does not depend on \(r\), this establishes that \(n!S_{r}^{(n+1)}\equiv n!S_{s}^{(n+1)}\mod p^{a+1}\). Putting everything together, we get \[\binom{t}{2k}\frac{n!S_{r}^{(n+1)}(2k)!}{k!(-24)^{k}}\equiv\binom{t}{2k}\frac {n!S_{s}^{(n+1)}(2k)!}{k!(-24)^{k}}\mod p^{a+x+1}.\] Finally since \(x\) is fixed, for sufficently large \(a\) it is clear that \[1-p^{p^{a}(p-1)+l}\equiv 1-p^{p^{b}(p-1)+l}\mod p^{a+x+1},\] and so by combining the above two congruences together, we obtain the desired result. Next we establish the \(p\)-adic convergence of the terms involving the Bernoulli numbers. This is done in the following lemma by use of the classical Kummer congruences (cf. [10] for the statement and proof). **Lemma 3.7**.: _Fix \(t,l\geq 1\) odd where \(l\not\equiv-1\mod p-1\), and let \(p\geq 5\) be a prime with \(r=p^{a}(p-1)+l\), \(s=p^{b}(p-1)+l\) and \(a\leq b\). Then for \(k\) in the range \(0\leq k\leq\lfloor(t-1)/2\rfloor\), we have_ \[(1-p^{r})\binom{t}{2k+1}\frac{B_{r+1}(2k+1)!}{k!(r+1)(-24)^{k}}\equiv(1-p^{s}) \binom{t}{2k+1}\frac{B_{s+1}(2k+1)!}{k!(s+1)(-24)^{k}}\mod p^{a+y+1}\] _for some fixed integer \(y\)._ Proof.: First write \[\left|(1-p^{r})\frac{B_{r+1}}{r+1}-(1-p^{s})\frac{B_{s+1}}{s+1}\right|_{p} \left|\binom{t}{2k+1}\frac{(2k+1)!}{k(-24)^{k}}\right|_{p}.\] Once again the rightmost term is dependent only on \(t\) and \(k\) which are fixed, and so denote by \(y\) the \(p\)-adic valuation of this term, which is also fixed. Notice that \[p^{a}(p-1)+l+1\equiv p^{b}(p-1)+l+1\mod p^{a}(p-1)\] and \(p^{a}(p-1)=\varphi(p^{a+1})\). Since \(l\) is odd, both \(r+1\) and \(s+1\) are even. Since \(l\not\equiv-1\mod p-1\), \(p-1\) does not divide \(l+1\) and so both \(r+1\) and \(s+1\) are not divisible by \(p-1\) (since \(p>3\)). Thus by Kummer's congruence we obtain \[\left|(1-p^{r})\frac{B_{r+1}}{r+1}-(1-p^{s})\frac{B_{s+1}}{s+1}\right|_{p} \left|\binom{t}{2k+1}\frac{(2k+1)!}{k(-24)^{k}}\right|_{p}=\frac{1}{p^{a+y+1}}\] which gives us the right congruence. We are now in a position to prove Theorem 1.1. The congruences established in both Lemmas 3.6 and 3.7 imply that \[\lim_{a\to\infty}u_{p^{a}(p-1)+l,t}=u_{l,t}\] for some state \(u_{l,t}\) in the \(p\)-adic Heisenberg VOA \(S_{1}\). Then in the \(p\)-adic topology and from eq. (3.5), we have \[\boldsymbol{f}(u_{l,t}) =\lim_{a\to\infty}(1-p^{p^{a}(p-1)+l})2^{(t+1)/2}t(t-2)!!G_{2}^{( t-1)/2}(q)G_{p^{a}(p-1)+l+1}(q)\] \[=2^{(t+1)/2}t(t-2)!!G_{2}^{(t-1)/2}(q)\lim_{a\to\infty}(1-p^{p^{a }(p-1)+l+1})G_{p^{a}(p-1)+l+1}(q).\] Since \(\lim_{a\to\infty}1-p^{p^{a}(p-1)+l+1}=1\) and \(\lim_{a\to\infty}p^{a}(p-1)+l+1=l+1\), \[\boldsymbol{f}(u_{l,t})=2^{(t+1)/2}t(t-2)!!G_{2}^{(t-1)/2}(q)G_{l+1}^{*}(q)\] where \(G_{l+1}^{*}(q)\) is a \(p\)-adic Eisenstein modular form of weight \(l+1\). This proves Theorem 1.1. ## 4. P-adic Characters in Lattice VOAs ### Lattice VOAs We recount relevant facts about lattice VOAs \(V_{\Lambda}\) which are necessary in the computation of characters in the subsection below. 
In the context of [13] or [12], let \(\Lambda\) be an even (integral) unimodular lattice of rank \(d\) with associated symmetric \(\mathbb{Z}\)-bilinear form \((\cdot,\cdot)\), and let \[\mathfrak{h}=\Lambda\otimes_{\mathbb{Z}}\mathbb{C}.\] Following the process outlined in Section 2.2, construct \(S_{\mathrm{alg}}\) where we denote by \(a(n)\) the action of \(a\otimes t^{n}\) on \(S_{\mathrm{alg}}\) for \(a\in\mathfrak{h}\), and where \[[a(m),b(n)]=\delta_{m+n,0}(a,b)m.\] for integers \(m,n\in\mathbb{Z}\). Denote by \(\mathbb{C}\{\Lambda\}\) the group algebra with basis \(\{e^{\alpha}\mid\alpha\in\Lambda\}\) and form the space \[V_{\Lambda}=S_{\mathrm{alg}}\otimes\mathbb{C}\{\Lambda\}.\] Vertex operators of states in \(V_{\Lambda}\) with trivial \(\mathbb{C}\{\Lambda\}\) tensor factor are defined in the same way as in \(S_{\mathrm{alg}}\). We identify \(S_{\mathrm{alg}}\) with \(S_{\mathrm{alg}}\otimes e^{0}\) hence the vacuum of \(V_{\Lambda}\) is the same as that of \(S_{\mathrm{alg}}\). On \(V_{\Lambda}\), states \(a\in\mathfrak{h}\) with \(n\neq 0\) act on \(S_{\mathrm{alg}}\) as before: \[a(n)(b\otimes e^{\alpha})=a(n)b\otimes e^{\alpha},\] whereas the operator \(a(0)\) acts on \(\mathbb{C}\{\Lambda\}\) as \[a(0)(b\otimes e^{\alpha})=\langle a,\alpha\rangle(b\otimes e^{\alpha}). \tag{4.1}\] Define the dual lattice \(\Lambda^{\circ}=\{\,\alpha\in\mathfrak{h}\mid\langle\alpha,\Lambda\rangle \subset\mathbb{Z}\,\}\). Since \(\Lambda\) is integral, \(\Lambda\subset\Lambda^{\circ}\) and so we may consider the coset decomposition \[\Lambda^{\circ}\bigcup_{i\in\Lambda^{\circ}/\Lambda}(\Lambda+\lambda_{i})\] From [10], each space \[V_{\Lambda+\lambda_{i}}=S_{\mathrm{alg}}\otimes\mathbb{C}\{\Lambda+\lambda_{i}\}\] constitute the complete set of non-isomorphic irreducible \(V_{\Lambda}\)-modules, and furthermore, a theorem of Zhu shows that each \(Z_{\Lambda+\lambda_{i}}(a,q)\) is modular. It is easy to show ([11], Lemma 4.2.1) that \(Z_{\Lambda+\lambda_{i}}(a,q)=0\) for any \(a=S_{\mathrm{alg}}\otimes e^{\alpha}\in V_{\Lambda}\) for some non-zero \(\alpha\in\Lambda\). The proof follows from the fact that any state in \(V_{\Lambda}\) with non-zero \(\mathbb{C}\{\Lambda\}\) tensor factor has no mode which preserves grading. As such we will consider only characters of states of the form \(S_{\mathrm{alg}}\otimes e^{0}\in V_{\Lambda}\), and we henceforth omit writing the trivial \(\mathbb{C}\{\Lambda\}\) tensor factor for such states. For each \(\alpha\in\Lambda+\lambda_{i}\), we have \[V_{\Lambda+\lambda_{i}}=\bigoplus_{\alpha\in\Lambda+\lambda_{i}}S_{\mathrm{ alg}}\otimes e^{\alpha} \tag{4.2}\] and so following [11], denote for \(a\in V_{\Lambda}\) the functions \[Z_{\alpha}(a,q) =Z_{S_{\mathrm{alg}}\otimes t(e_{\alpha})}(a,q)\] \[Z_{i}(a,q) =Z_{\Lambda+\lambda_{i}}(a,q).\] It is then clear from eq. (4.2) that \[Z_{i}(a,q)=\sum_{\alpha\in\Lambda+\lambda_{i}}Z_{\alpha}(a,q). \tag{4.3}\] Computing characters in \(V_{\Lambda}\) is similar to that in \(S_{\mathrm{alg}}\), however one must be cautious due to discrepancies caused by the behaviour of \(a(0)\) exposed in eq. (4.1) above. In particular, equation eq. (3.3) of [12] does not apply for most states in \(V_{\Lambda}\). We resort then to using the following general recursive expression given first in [14] from which eq. 
(3.3) is based: Given any \(V\)-module \(M\) and for \(a,b\in V\), we have \[Z_{M}(a[-n]b,q)=\delta_{n,1}\mathrm{Tr}_{M}\left(o(a)o(b)q^{L_{0}-c_{V}/24} \right)+\sum_{m\geq 1}\frac{2(-1)^{m+1}}{m!(n-1)!}G_{n+m}(q)Z_{M}(a[m]b,q) \tag{4.4}\] where we have again applied the normalization seen in eq. (3.4). ### Proof of Theorem 1.2 We take \(a\in V_{\Lambda}\) such that \(\langle a,a\rangle=1\) and \(\alpha\in\Lambda^{\circ}\) throughout. We will consider as in Section 3.1 the states \[v_{r,t}=(r-1)!a[-r]a[-1]^{t}\mathbf{1}\in S_{\mathrm{alg}}\otimes e^{0}\] where \(r,t\) are odd integers, and where \(t\geq 1\) and \(r\geq 3\). In order to assess convergence later, we must re-write \(v_{r,t}\) in the round-bracket formalism. This will be similar to what was done in Proposition 3.3, however from eq. (4.1), we must be careful with the product \(a(0)\). Recall that from eq. (2.12) we have \[a[-1]=a(-1)+\frac{1}{2}a(0)-\frac{1}{12}a(-1)+\cdots.\] We wish to find expressions for \(a[-1]^{t}\mathbf{1}\) and \(a(1)a[-1]^{t}\mathbf{1}\). Since both are elements of \(S_{\mathrm{alg}}\otimes e^{0}\) however, the product \(a(0)\) in the expansion above acts as zero since \(\langle a,0\rangle=0\). We will thus obtain the same expressions for \(a[-1]^{t}\mathbf{1}\) and \(a(1)a[-1]^{t}\mathbf{1}\) as the ones in the Heisenberg case given in Lemmas 3.1 and 3.2 with \(h=a\). For the same reason, the statement of Proposition 3.3 holds in in this case as well: **Proposition 4.1**.: _Let \(a\in V_{\Lambda}\) where \(\langle a,a\rangle=1\) and \(\alpha\in\Lambda^{\circ}\). For \(r,t\) odd integers and \(t\geq 1\) and \(r\geq 3\), we have_ \[v_{r,t}=\sum_{n\geq 0}\sum_{k\geq 0}\binom{t}{2k}\frac{n!S_{r}^{ (n+1)}(2k)!}{k!(-24)^{k}}a(-n-1)a(-1)^{t-2k}\textbf{1}\] \[\qquad\qquad\qquad-\sum_{k\geq 0}\binom{t}{2k+1}\frac{B_{r+1}(2k+1)! }{k!(r+1)(-24)^{k}}a(-1)^{t-2k-1}\textbf{1}\] We now take the character of \(v_{r,t}\) over \(S_{\mathrm{alg}}\otimes e^{\alpha}\). The following is an extension of Lemma 4.2.2 of [1] and is proven similarly. **Lemma 4.2**.: _Let \(a\in V_{\Lambda}\) where \(\langle a,a\rangle=1\) and \(\alpha\in\Lambda^{\circ}\). For \(t\geq 0\) we have_ \[Z_{\alpha}(a[-1]^{t}\textbf{1},q)=\left(\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!} {k!}(a,\alpha)^{t-2k}G_{2}(q)^{k}\right)q^{\frac{1}{2}(\alpha,\alpha)}/\eta(q) ^{d}.\] Proof.: We proceed by induction on \(t\). The case \(t=0\) is simply the graded dimension over \(S_{\mathrm{alg}}\otimes e^{0}\), and so let \(t=1\). Then using eq. (4.4), \[Z_{\alpha}(a[-1]\textbf{1},q)=\langle a,\alpha\rangle Z_{\alpha}(\textbf{1},q) =\langle a,\alpha\rangle q^{\frac{1}{2}}/\eta(q)^{d}\] where we used the fact that \(o(a)=\langle a,\alpha\rangle\) since we work over \(S_{\mathrm{alg}}\otimes e^{\alpha}\). Suppose the claim holds up to \(t-1\). Then using eq. (4.4) again, we obtain \[Z_{\alpha}(a[-1]^{t}\textbf{1},q) =Z_{\alpha}(a[-1]a[-1]^{t-1}\textbf{1},q)\] \[=\langle a,\alpha\rangle Z_{\alpha}(a[-1]^{t-1}\textbf{1},q)+2(t -1)G_{2}(q)Z_{\alpha}(a[-1]^{t-2}\textbf{1},q). \tag{4.5}\] By induction, we write the first term of eq. (4.5) as \[\langle a,\alpha\rangle^{t}+\left(\sum_{k\geq 1}\binom{t-1}{2k}\frac{(2k)!}{k! }(a,\alpha)^{t-2k}G_{2}^{k}(q)\right)q^{\frac{1}{2}(\alpha,\alpha)}/\eta(q)^{d}, \tag{4.6}\] and similarly the second term of eq. 
(4.5) equates to \[\left(\sum_{k\geq 0}\binom{t-2}{2k}\frac{2(t-1)(2k)!}{k!}(a,\alpha)^ {t-2k-2}G_{2}(q)^{k+1}\right)q^{\frac{1}{2}(\alpha,\alpha)}/\eta(q)^{d}\] \[=\left(\sum_{k\geq 0}\frac{(k+1)(2k+1)!(2)(t-1)!}{(k+1)(2k+1)!(t-2k -2)!(k)!}(a,\alpha)^{t-2k-2}G_{2}(q)^{k+1}\right)q^{\frac{1}{2}(\alpha,\alpha)} /\eta(q)^{d}\] \[=\left(\sum_{k\geq 0}\binom{t-1}{2k+1}\frac{(2k+2)!}{(k+1)!} \langle a,\alpha\rangle^{t-2k-2}G_{2}(q)^{k+1}\right)q^{\frac{1}{2}(\alpha, \alpha)}/\eta(q)^{d} \tag{4.7}\] \[=\left(\sum_{k\geq 1}\binom{t-1}{2k-1}\frac{(2k)!}{k!}\langle a, \alpha\rangle^{t-2k}G_{2}(q)^{k}\right)q^{\frac{1}{2}(\alpha,\alpha)}/\eta(q)^ {d}.\] Summing eqs. (4.6) and (4.7) and using Pascal's rule as in Lemma 3.1, we obtain \[Z_{\alpha}(a[-1]^{t}\mathbf{1},q)=\left(\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k! }\langle a,\alpha\rangle^{t-2k}G_{2}(q)^{k}\right)q^{\frac{1}{2}(\alpha,\alpha )}/\eta(q)^{d}\] which proves the result. Following [1], we will be denoting \[f_{\alpha,t}(q)=\sum_{k\geq 0}\binom{t}{2k}\frac{(2k)!}{k!}G_{2}(q)^{k}\langle a,\alpha\rangle^{t-2k}\] in the following analogue of Proposition 3.5: **Proposition 4.3**.: _Let \(a\in V_{\Lambda}\) where \((a,a)=1\). For \(r,t\) odd integers and \(t\geq 1\) and \(r\geq 3\) we have_ \[Z(v_{r,t},q)=2tG_{r+1}(q)\left(\sum_{k\geq 0}\binom{t-1}{2k}\frac{(2k)!}{k!}G_{ 2}(q)^{k}\Theta_{\Lambda,t-2k-1}(q)\right)/\eta(q)^{d}\] _where_ \[\Theta_{\Lambda,t-2k-1}(q)=\sum_{\alpha\in\Lambda}\langle a,\alpha\rangle^{t- 2k-1}q^{\frac{1}{2}(\alpha,\alpha)}.\] Proof.: We use eq. (4.4). This gives \[Z_{\alpha}(v_{r,t},q) =(r-1)!\sum_{m\geq 1}\frac{2(-1)^{m+1}}{m!(r-1)!}G_{r+m}(q)Z_{ \alpha}(a[m]a[-1]^{t}\mathbf{1},q)\] \[=2G_{r+1}(q)Z_{\alpha}(a[1]a[-1]^{t}\mathbf{1},q)\] \[=2tG_{r+1}(q)f_{\alpha,t-1}(q)q^{\frac{1}{2}(\alpha,\alpha)}/\eta (q)^{d}.\] where we have made use of Lemma 4.2 in the last equality. Using eq. (4.3), \[Z_{i}(v_{r,t},q) =2tG_{r+1}(q)\left(\sum_{\alpha\in\Lambda+\lambda_{i}}f_{\alpha,t -1}(q)q^{\frac{1}{2}(\alpha,\alpha)}\right)/\eta(q)^{d}\] \[=2tG_{r+1}(q)\left(\sum_{k\geq 0}\binom{t-1}{2k}\frac{(2k)!}{k! }G_{2}(q)^{k}\sum_{\alpha\in\Lambda+\lambda_{i}}\langle a,\alpha\rangle^{t-2k -1}q^{\frac{1}{2}(\alpha,\alpha)}\right)/\eta(q)^{d}\] \[=2tG_{r+1}(q)\left(\sum_{k\geq 0}\binom{t-1}{2k}\frac{(2k)!}{k! }G_{2}(q)^{k}\Theta_{\Lambda+\lambda_{i},t-2k-1}(q)\right)/\eta(q)^{d}.\] The proof follows from the fact that specializing \(i=0\) gives \(Z_{0}(v_{r,t},q)=Z(v_{r,t},q)\). The theta series \(\Theta_{\Lambda,t,k}(q)\) in the statement of Proposition 4.3 appears troublesome since it is not quite the regular theta series \(\Theta_{\Lambda}(q)\) of \(\Lambda\). It was shown in [1] and exposed in Theorem 4.2.6 of [1] that \(\Theta_{\Lambda,t,k}(q)\) in fact lies in the space of quasi-modular forms on the congruence subgroup \(\Gamma_{0}(N)\) for some \(N\geq 1\). Hence the image of the character map for \(V_{\Lambda}\) is indeed quasi-modular, and we may consider \(p\)-adic limits. With the assumptions of Proposition 4.3 in place, we proceed as in Section 3.2 by denoting \[u_{r,t}=(1-p^{r})v_{r,t}=(1-p^{r})(r-1)!a[-r]a[-1]^{t}\mathbf{1}\] and considering the sequence of states \(\big{(}u_{p^{b}(p-1)+l,l}\big{)}_{b\geq 0}\) for fixed odd \(t\geq 1\) where \(p\geq 3\) is a prime and \(l\not\equiv-1\mod p-1\). 
Since the round-bracket states obtained in Proposition 4.1 are identical to those in the Heisenberg case, both Lemmas 3.6 and 3.7 establish the \(p\)-adic convergence of this sequence to a state \(u_{t,l}\) in the \(p\)-adic lattice VOA \(\widehat{V}_{\Lambda}\). Then by taking the limit in the \(p\)-adic topology as in Section 3.2 we obtain \[\boldsymbol{f}(u_{t,l})=2tG_{l+1}^{\star}(q)\left(\sum_{k\geq 0}\binom{t-1}{2k} \frac{(2k)!}{k!}G_{2}(q)^{k}\Theta_{\Lambda,t-2k-1}(q)\right).\] where \(\boldsymbol{f}\) is the character map in eq. (1.3). This proves Theorem 1.2. _Remark_.: Though Theorems 1.1 and 1.2 both elaborate on the image of the \(p\)-adic character map, such computations are rather tedious since the character is not a ring homomorphism. It is then natural to speculate whether there exists an ameliorated theory which is better suited to the study of the \(p\)-adic character. One such possibility is to develop a theory of Hecke operators arising from \(S_{\mathrm{alg}}\) or \(V_{\Lambda}\). For example, a realization of the operator \(U_{p}\) as in [10] could serve as a useful tool in assessing the surjectivity of the \(p\)-adic character on \(\mathfrak{M}_{p}\). Though this is only hypothetical, the authors of [11], [11], and [12] have shown that such operators do arise algebraically in the context of conformal field theory, namely out of the Fock space of the Fermionic algebra. It is then an open question whether similar structures exist in \(S_{\mathrm{alg}}\), \(V_{\Lambda}\), or in any of their various incarnations.
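As an informal numerical illustration of the congruences that drive these \(p\)-adic limits (Lemma 3.7 in particular), the following short Python sketch checks one instance of the Kummer congruence for the regularized Bernoulli quotients \((1-p^{r})B_{r+1}/(r+1)\) along the sequence \(r=p^{a}(p-1)+l\). This sketch is an illustrative addition, not part of the argument above; the helper functions are our own names and exact rational arithmetic via sympy is assumed.

```python
# Informal check of the Kummer congruence behind Lemma 3.7 (illustration only):
#   (1 - p^r) B_{r+1}/(r+1)  ==  (1 - p^s) B_{s+1}/(s+1)   (mod p^{a+1}),
# for r = p^a(p-1) + l,  s = p^b(p-1) + l,  a <= b,  l odd,  l != -1 (mod p-1).
from sympy import bernoulli

def vp(x, p):
    """p-adic valuation of a nonzero rational number x."""
    num, den = x.as_numer_denom()
    v = 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def regularized_quotient(r, p):
    """The p-regularized Bernoulli quotient (1 - p^r) B_{r+1} / (r+1)."""
    return (1 - p**r) * bernoulli(r + 1) / (r + 1)

p, l, a, b = 5, 1, 1, 2                 # l = 1 is odd and l != -1 (mod p-1)
r = p**a * (p - 1) + l                  # r = 21
s = p**b * (p - 1) + l                  # s = 101
diff = regularized_quotient(r, p) - regularized_quotient(s, p)
print(vp(diff, p) >= a + 1)             # True: the two terms agree mod p^{a+1}
```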
2306.00678
Automated mixing of maximally localized Wannier functions into target manifolds
Maximally localized Wannier functions (MLWFs) are widely used to construct first-principles tight-binding models that accurately reproduce the electronic structure of materials. Recently, robust and automated approaches to generate these MLWFs have emerged, leading to natural sets of atomic-like orbitals that describe both the occupied states and the lowest-lying unoccupied ones (when the latter can be meaningfully described by bonding/anti-bonding combinations of localized orbitals). For many applications, it is important to instead have MLWFs that describe only certain target manifolds separated in energy between them -- the occupied states, the empty states, or certain groups of bands. Here, we start from the full set of MLWFs describing simultaneously all the target manifolds, and then mix them using a combination of parallel transport and maximal localization to construct orthogonal sets of MLWFs that fully and only span the desired target submanifolds. The algorithm is simple and robust, and it is applied to some paradigmatic but non-trivial cases (the valence and conduction bands of silicon, the top valence band of MoS$_2$, the $3d$ and $t_{2g}$/$e_g$ bands of SrVO$_3$) and to a mid-throughput study of 77 insulators.
Junfeng Qiao, Giovanni Pizzi, Nicola Marzari
2023-06-01T13:52:54Z
http://arxiv.org/abs/2306.00678v1
# Automated mixing of maximally localized Wannier functions into target manifolds ###### Abstract Maximally localized Wannier functions (MLWFs) are widely used to construct first-principles tight-binding models that accurately reproduce the electronic structure of materials. Recently, robust and automated approaches to generate these MLWFs have emerged, leading to natural sets of atomic-like orbitals that describe both the occupied states and the lowest-lying unoccupied ones (when the latter can be meaningfully described by bonding/anti-bonding combinations of localized orbitals). For many applications, it is important to instead have MLWFs that describe only certain target manifolds separated in energy between them--the occupied states, the empty states, or certain groups of bands. Here, we start from the full set of MLWFs describing simultaneously all the target manifolds, and then mix them using a combination of parallel transport and maximal localization to construct orthogonal sets of MLWFs that fully and only span the desired target submanifolds. The algorithm is simple and robust, and it is applied to some paradigmatic but non-trivial cases (the valence and conduction bands of silicon, the top valence band of MoS\({}_{2}\), the \(3d\) and \(t_{2g}/e_{g}\) bands of SrVO\({}_{3}\)) and to a mid-throughput study of 77 insulators. ## I Introduction Maximally localized Wannier functions (MLWFs) [1; 2; 3; 4] are accurate reduced-order models [5] for the electronic structures of periodic crystals. The generation of MLWFs from Bloch wavefunctions typically requires a choice of initial projectors, which are often conjectured from chemical intuition with trial and error. For metals, or when considering both the valence and conduction bands (VCB) of insulators and semiconductors, one typically deals with bands that overlap (i.e., are entangled) [2] with higher-energy bands. In such cases, since the low-energy electronic structure can often be well described by a tight-binding model of atomic-like orbitals, the initial projectors are usually chosen from the hydrogenic \(s,p,d,f\) orbitals. However, when it comes to the case of identifying the optimal projectors for valence bands (VB) alone, or especially conduction bands (CB) which are mixed with higher-energy bands, it might become difficult to find good initial guesses. Indeed, the VB/CB often consists of bonding/anti-bonding orbitals, or combination of atomic orbitals which are more challenging to guess or to describe, unless the crystal offers a very simple chemical picture. Achieving separate Wannierization of target manifolds is also advantageous for many applications. Some physical properties (such as the electric polarization) depend only on the Wannier functions (WFs) of the occupied manifold (sum of Wannier centers of all the valence WFs). Moreover, using dedicated MLWFs means that one can obtain smaller tight-binding models that are thus more efficient when computing, e.g., transport properties of large systems. Koopmans spectral functionals also require separate occupied and unoccupied manifolds [6]. Last, low-energy models, such as those used in correlated-electrons calculations [7; 8; 9; 10], require a description of the correlated manifold. Several approaches have been developed in the past few years to simplify the construction of MLWFs. 
The selected columns of the density matrix (SCDM) algorithm [11] uses QR decomposition with column pivoting on the density matrix to automatically generate initial projection orbitals, and a sensible choice of the density matrix can be obtained from the projectability of Bloch states onto pseudo-atomic orbitals from pseudopotentials [12]. The optimal projection functions method [13] starts with a larger manifold and generates the MLWFs of the valence manifold by a single rotation matrix, which is computed by a product of a series of Givens rotations. The dually localized Wannier functions method [14] adds an additional term to spread functional, to localize the WFs in both space and energy, achieving a separation of VB and CB. Here, we propose a different approach to automatically mix optimal MLWFs spanning valence and conduction into several submanifolds, provided that these submanifolds are gapped in their energy spectrum. This naturally applies to the case of separate Wannierizations of valence and conduction manifolds, but more generally extends to arbitrary groups of bands separated in energy. We start from the Wannierization of a larger manifold (e.g., the VCB manifold), that we do not discuss here since robust methods already exist: in addition to hydrogenic \(s,p,d,f\) initial projections, partly occupied WF method [15; 16], the fully automated SCDM method [11; 12], or the projectability-disentangled Wannier function (PDWF) that we recently introduced are available (in particular, the latter appears as a very general and remarkably robust approach allowing, e.g., to construct \(\sim 1.3\) million PDWFs for \(\sim 22\) thousands materials [17]). Once these MLWFs are obtained, we then diagonalize the Wannier Hamiltonian at every \(k\)-point and partition the states into submanifolds (e.g., valence, conduction): they are grouped together if they fall inside the desired energy interval. Next, we fix the gauge randomness of the submanifolds using parallel transport [18]. Finally, the MLWFs for each submanifold are generated by maximally localizing their spread functionals, independently. Since the submanifolds are already isolated in energy (i.e., no disentanglement is needed), and parallel transport provides a continuous gauge, the final maximal localization converges effortlessly. In the case of separating VB and CB, the final two groups of MLWFs span the fully occupied valence and the fully unoccupied conduction manifolds, and their shapes closely resemble bonding and anti-bonding orbitals, respectively. Compared with SCDM, the present method works fully in reciprocal space, reducing the computer memory requirements and also being computationally faster. Compared with the optimal projection functions method [13] or the dually localized Wannier functions method [14], we do not change the spread functional but use the original one in Marzari and Vanderbilt [1], thus the resulting WFs are maximally-localized in their original definition; moreover, the parallel transport step is non-iterative and always quickly provides a good starting point for the final maximally-localization step, avoiding potential convergence issues that might occur in an iterative method. In the following, we first discuss and validate the present method, which we name manifold-remixed Wannier function (MRWF), on the VCB of a 3D material (silicon), the VCB of a 2D material (MoS\({}_{2}\)), the top VB of MoS\({}_{2}\), and the \(3d\) manifold of SrVO\({}_{3}\). 
We also discuss the bonding/anti-bonding character of the resulting MLWFs, as well as band interpolation accuracy. To analyze statistics of band interpolation quality and demonstrate the robustness of the present approach, we Wannierize the VB and CB of a diverse set of 77 insulators, with the number of atoms between 1 and 45. ## II Results ### The manifold separation algorithm While obtaining the starting WFs is not the focus of this paper, we remind here that the standard Wannierization algorithm [1; 2] requires initial projection orbitals \(\ket{g_{n}}\) to guide the spread minimization and find the most meaningful minimum and the related unitary transformation matrices \(U_{\mathbf{k}}\) at each \(k\)-point \(\mathbf{k}\). The projectors \(\ket{g_{n}}\) are used to rotate the original Bloch wavefunctions \(\ket{\psi_{m\mathbf{k}}}\) into \[\ket{\tilde{\psi}_{n\mathbf{k}}}=\sum_{m=1}^{M}\ket{\psi_{m\mathbf{k}}}\bra{ \psi_{m\mathbf{k}}}g_{n}, \tag{1}\] where \(n\) and \(m\) are the indices for WFs and Bloch bands, respectively; \(M\) is the total number of Bloch bands; and \(\mathbf{k}\) is the Bloch quasi-momentum. Note that \(\ket{\tilde{\psi}}\) are independent of any arbitrary rotation gauge for the \(\ket{\psi_{n\mathbf{k}}}\). For metals or for VCB of insulators, one typically starts with hydrogenic \(s,p,d,f\) orbitals [1] as the initial guesses for all the corresponding valence electrons. Then, the MLWFs can be generated using either the standard disentanglement [2] and maximal localization algorithms [1] or minimizing directly the total spread, such as the partly occupied WF method [15] or a variational formulation [19]. Instead of hydrogenic orbitals, one can use SCDM [11] or the recently introduced projectability disentanglement [17] for a fully automated Wannierization. Irrespective of the approach taken to obtain MLWFs describing the VCB, these MLWFs will be the starting point of the present algorithm, with the next step to separate e.g. the VB and CB manifolds from the disentangled MLWFs that span both simultaneously. Note that while in the following we use the separation of VB and CB as an example to illustrate the method for clarity and simplicity, the present approach is not limited to the case of two submanifolds, but can be applied to any groups of bands separated in energy. Since the disentanglement procedure aims at obtaining the lowest-possible spreads, it typically achieves this goal by mixing states originating from all the submanifolds (e.g., VB and CB) of interest. To decompose the manifold into two orthogonal submanifolds, we diagonalize the Wannier-gauge Hamiltonian \(H_{\mathbf{k}}^{W}\) (the superscript \(W\) indicates the Wannier gauge), \[H_{\mathbf{k}}^{W}=V_{\mathbf{k}}\mathcal{E}_{\mathbf{k}}V_{\mathbf{k}}^{*}, \tag{2}\] where \(\mathcal{E}_{\mathbf{k}}\) and \(V_{\mathbf{k}}\) are the eigenvalues and the eigenvectors, respectively; \(*\) denotes conjugate transpose. 
Usually the eigenvalues and eigenvectors returned from linear algebra computer programs are already sorted in ascending order of eigenvalues; if not, we sort them in ascending order, so that the matrices are partitioned into two blocks, \[\begin{split}\mathcal{E}_{\mathbf{k}}&=\begin{bmatrix} \mathcal{E}_{\mathbf{k}}^{1}&0\\ 0&\mathcal{E}_{\mathbf{k}}^{2}\end{bmatrix},\\ V_{\mathbf{k}}&=\begin{bmatrix}V_{\mathbf{k}}^{1}&V_{\mathbf{k}}^{2} \end{bmatrix},\end{split} \tag{3}\] where \(V_{\mathbf{k}}^{1}\in\mathbb{C}^{N\times P}\) (\(V_{\mathbf{k}}^{2}\in\mathbb{C}^{N\times Q}\)) corresponds to states whose eigenvalues \(\mathcal{E}_{\mathbf{k}}^{1}\) (\(\mathcal{E}_{\mathbf{k}}^{2}\)) are below (above) the band gap, and 0 represents a zero matrix. Here, \(N\) is the number of WFs of the VCB manifold, \(P\) and \(Q\) are the number of WFs in the valence (below band gap) and the conduction (above band gap) submanifolds, respectively, such that \(N=P+Q\). Next, all the Wannier-gauge operators are rotated according to \(V_{\mathbf{k}}^{1}\) for the valence submanifold: for instance, the overlap matrices \(M_{\mathbf{k},\mathbf{b}}^{W}\) (for computing the spread functional) is rotated by \[M_{\mathbf{k},\mathbf{b}}^{W,1}=V_{\mathbf{k}}^{1*}M_{\mathbf{k},\mathbf{b}}^{W} V_{\mathbf{k}+\mathbf{b}}^{1}, \tag{4}\] where \[\begin{split} M_{\mathbf{k},\mathbf{b}}^{W}&=U_{ \mathbf{k}}^{*}M_{\mathbf{k},\mathbf{b}}U_{\mathbf{k}+\mathbf{b}},\\ M_{\mathbf{k}\mathbf{b}}&=\left\langle u_{m, \mathbf{k}}|u_{n,\mathbf{k}+\mathbf{b}}\right\rangle,\end{split} \tag{5}\] \(U_{\mathbf{k}}\) are the unitary transformations from the VCB manifold Wannierization, and \(|u_{m,\mathbf{k}}\rangle\) is the periodic part of Bloch wavefunction \(|\psi_{m,\mathbf{k}}\rangle=e^{i\mathbf{k}\mathbf{r}}\,|u_{m,\mathbf{k}}\rangle\). For more details on the notations of \(M_{\mathbf{k},\mathbf{b}}\) and \(\mathbf{b}\)-vectors, see Marzari and Vanderbilt [1]. Consistently, the \(\mathcal{E}_{\mathbf{k}}^{1}\) is used as the new eigenvalues. Now the problem is reformulated into a Wannierization of an isolated submanifold with \(P\) WFs for VB. Similarly, the conduction manifold operators are rotated by \(V_{\mathbf{k}}^{2}\), and an analogous Wannierization of an isolated submanifold with \(Q\) WFs. Indeed, the first-step Wannierization of VCB has already disentangled the MLWFs from all the remaining higher-energy bands, so that these MLWFs span all the target submanifolds we are interested in. To achieve our goal, we are left with Wannierizations of two isolated submanifolds and thus the subsequent steps do not need any disentanglement. Such a two-step procedure makes the whole algorithm quite robust, especially when Wannierizing the CB, for which it is difficult to provide good initial projections of the corresponding anti-bonding orbitals. The remaining difficulty of the Wannierization of the two isolated submanifolds is caused by the diagonalization in Eq. (2). Indeed, since the Hamiltonians \(H_{\mathbf{k}}^{W}\) are independently diagonalized at each \(\mathbf{k}\), the resulting eigenvectors will have different gauges at different \(k\)-points, requiring additional Wannierizations in each submanifold. Since these Wannierizations are carried out on submanifolds that have isolated bands, the minimization algorithm is typically more robust to the choice of initial projections compared to the case of disentanglement. 
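To make the bookkeeping of Eqs. (2)-(5) concrete, a minimal NumPy sketch of this separation step could look as follows. The array shapes and the neighbor-index map `kpb` are assumptions made for illustration only; this is not the actual Wannier90/Wannier.jl implementation.

```python
import numpy as np

def split_manifolds(H_W, M_W, kpb, e_gap):
    """Split MLWFs spanning VB+CB into two gapped submanifolds (Eqs. (2)-(5)).

    H_W   : (nk, N, N)      Wannier-gauge Hamiltonians H^W_k
    M_W   : (nk, nb, N, N)  Wannier-gauge overlap matrices M^W_{k,b}
    kpb   : (nk, nb)        integer index of the neighboring k-point k+b
    e_gap : an energy lying inside the gap separating the two submanifolds
    """
    eps, V = np.linalg.eigh(H_W)               # Eq. (2), batched; eigenvalues ascending
    P = int(np.count_nonzero(eps[0] < e_gap))  # states below the gap (same count at every k)
    V1, V2 = V[:, :, :P], V[:, :, P:]          # Eq. (3): valence / conduction blocks
    E1, E2 = eps[:, :P], eps[:, P:]            # new eigenvalues of the two submanifolds
    # Eq. (4): rotate the overlaps, M^{W,1}_{k,b} = V1_k^dagger M^W_{k,b} V1_{k+b}
    M1 = np.einsum('kmi,kbmn,kbnj->kbij', V1.conj(), M_W, V1[kpb])
    M2 = np.einsum('kmi,kbmn,kbnj->kbij', V2.conj(), M_W, V2[kpb])
    return (E1, V1, M1), (E2, V2, M2)
```

Each returned triple then defines an isolated-manifold Wannierization problem, whose gauge still has to be smoothed (parallel transport followed by maximal localization), as described next.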
One could simply resort to random Gaussian initial projections followed by maximal localization to reach the MLWFs for the two submanifolds, respectively; or even brute-force maximal localization starting from the random gauge after the Wannier Hamiltonian diagonalization. However, a direct maximal localization starting from a random gauge is not robust--we observe that, in many cases, the maximal localization fails due to zeros in the diagonal of the overlap matrices \(M_{\mathbf{k},\mathbf{b}}\); and, even if it converges, it displays the same issues of random Gaussian projections: a large number of iterations, and oscillatory evolution of spread and the sum of MLWF centers during the minimization process (see Fig. S1 in supplementary information (SI) for discussions on the convergences of these choices). Moreover, when the number of \(k\)-points \(N_{\mathbf{k}}\) is large, the maximal localization is much harder to converge. A better solution is finding good starting gauge for the two submanifolds in an automated fashion. To tackle this challenge, we adopt the parallel transport algorithm [18] to construct smooth gauges for the two submanifolds. For an isolated manifold, the existence of a smooth gauge is determined by its topological obstructions, which are characterized by the Chern numbers (one in 2D and three in 3D). If the Chern numbers are \(0\) (as it is the case for systems with time-reversal symmetry), then it can be proven [20] that it is possible to construct a continuous gauge explicitly by the following procedure [18]: (a) Suppose \(k_{i}\in[0,1]\) (in fractional coordinates) for \(i=x,y,z\): propagate (using singular value decomposition of overlap matrices \(M_{\mathbf{k},\mathbf{b}}^{W}\) to maximally align the gauge between neighboring \(k\)-points) the Bloch wavefunctions \(|u_{n\mathbf{0}}\rangle\) at \(\Gamma\) along \(k_{x}\) from \(k\)-point \(\mathbf{0}=(0,0,0)\) to \(\mathbf{1}=(1,0,0)\), to construct a continuous gauge across these \(k\)-points. The new gauge is not necessarily quasi-periodic, i.e., satisfying the Bloch theorem imposed on \(|u_{n\mathbf{k}}\rangle\) by \(|u_{n\mathbf{k}+\mathbf{K}}\rangle=\tau_{\mathbf{K}}\,|u_{n\mathbf{k}}\rangle\) where \(\tau_{\mathbf{K}}=e^{-i\mathbf{K}\cdot\mathbf{r}}\) is the translation operator in reciprocal space, and \(\mathbf{K}\) is a reciprocal lattice vector. In general, instead, the two states are related by \(|u_{n\mathbf{1}}\rangle=(\tau_{\mathbf{1}}\,|u_{n\mathbf{0}}\rangle)V_{\text{ obs}}\), where this expression defines the obstruction matrix \(V_{\text{obs}}\) quantifying the misalignment of the propagated gauge and the gauge required by Bloch theorem at \((1,0,0)\). To fulfill the quasi-periodic boundary condition, we can therefore multiply each \(|u_{n\mathbf{k}}\rangle\) by \(e^{-k_{x}L}\) (note \(k_{x}\in[0,1]\)) where \(V_{\text{obs}}=\exp(L)\): in this way, we obtain a continuous gauge that also satisfies Bloch theorem, i.e., the new obstruction matrix in this modified gauge is the identity matrix. (b) For each \(k_{x}\), propagate along \(k_{y}\) from \((k_{x},0,0)\) to \((k_{x},1,0)\). Now we obtain a series of obstruction matrices \(V_{\text{obs}}(k_{x})\) along \(k_{x}\). If the winding number [18; 21; 22] of the determinants of \(V_{\text{obs}}(k_{x})\) vanishes (i.e., the Chern number is \(0\)), then there is a continuous function that maps \(V_{\text{obs}}(k_{x})\) to identity [18]. 
We then multiply the gauge by this mapping, so that the new gauge satisfies the quasi-periodic boundary condition in the \(k_{x}-k_{y}\) plane. Ref. [18] explicitly constructs the continuous mapping by their "column interpolation" method for the Kane-Mele model, which is a 2D fermionic time-reversal-symmetric model (i.e., having a vanishing Chern number) but can present a non-zero \(\mathbb{Z}_{2}\) number; as a comparison, previous methods had difficulties in handling \(\mathbb{Z}_{2}\) systems [18; 23; 24], sometimes requiring model-specific information [25; 26]. (c) For each \((k_{x},k_{y})\), propagate along \(k_{z}\) from \((k_{x},k_{y},0)\) to \((k_{x},k_{y},1)\). Now the obstruction matrices \(V_{\text{obs}}(k_{x},k_{y})\) depend both on \(k_{x}\) and \(k_{y}\). Similar to point (b), if the two winding numbers of the determinants of \(V_{\text{obs}}(k_{x},0)\) and \(V_{\text{obs}}(0,k_{y})\) vanish, then there is a continuous function that maps \(V_{\text{obs}}(k_{x},k_{y})\) to identity. We then multiply the gauge with this mapping and obtain the final gauge satisfying the quasi-periodic boundary condition in 3D. Ref. [18] demonstrates this constructive algorithm to obtain a continuous gauge for a 3D system (silicon). The results also show that the continuous gauge can be further smoothened by the standard maximal localization procedure [1] to construct MLWFs. We stress that the algorithm is non-iterative and fast, thus solv ing the problem of finding good initial WFs for isolated manifolds in an efficient and robust way. As shown in Fig. S1, parallel transport generates a much better starting point than random Gaussian projections or random gauges: the convergence of maximal localization is much faster, and the spread and the sum of MLWF centers smoothly evolve during minimization. We note that since the propagation of gauge requires overlap matrices between a particular set of nearest-neighboring \(k\)-points, in the SI Section S2 we present a procedure so that parallel transport can be applied to any arbitrary crystal structure. In summary, the sequential parallel transports move the obstructions to the Brillouin zone edges, and the "column interpolation" method fixes the quasi-periodicity. Our tests on a set of 77 insulators (see discussion later in Section II.5) show that this algorithm is able to construct a good initial gauge, and maximal localization is able to construct MLWFs without issue. We now mention that since we propagate the gauge starting from the first \(k\)-point \((0,0,0)\), there is still one gauge arbitrariness at this \(\Gamma\) point. Here, we suggest to insert an additional step that first minimizes the spread functional w.r.t. a single rotation matrix \(W\) for the first \(k\)-point, before performing the standard maximal localization w.r.t. all \(k\)-points to obtain MLWFs. Indeed, thanks to the small size of \(W\), this first preliminary step is computationally efficient, and can help in further improving the overall robustness of the full algorithm that we are presenting here. To achieve this, we derive the expression of the gradient of the spread \(\Omega\) w.r.t. the rotation matrix \(W\) in SI Section S1. We then use this gradient with a manifold optimization algorithm [27] to minimize \(\Omega\) w.r.t. \(W\), where \(W\) is constrained on the unitary matrix manifold \(\left\{W\in\mathbb{C}^{K\times K}|W^{*}W=I\right\}\), where \(K=P\) for the valence manifold, or \(K=Q\) for the conduction manifold. 
This minimization provides us with a single rotation matrix \(W\) that further improves the localization, while still preserving the parallel transport gauge: we stress that, in addition to increasing the robustness of the algorithm as mentioned earlier, this additional step can thus be beneficial for cases where the parallel transport gauge is implicitly assumed during the derivation of equations (for instance, Wannier interpolation of Berry curvature [28], or Wannier interpolation of nonlinear optical responses [29; 30]). After the parallel transport and the single rotation, the resulting WFs are close to the ideal MLWFs. However, since parallel transport only generates a continuous quasi-periodic gauge, it typically does not provide the smallest possible spread. It is therefore helpful to perform a final smoothing of the gauge [18] by running a final maximal localization step (see examples in Sections II.2 and II.3). This can be achieved using either the original Marzari-Vanderbilt localization [1] or a matrix manifold optimization w.r.t. gauge matrices at all the \(k\)-points, i.e., optimization on a product manifold of a series of unitary matrices \(\prod_{\mathbf{k}}\left\{U_{\mathbf{k}}\in\mathbb{C}^{K\times K}|U_{\mathbf{k }}^{*}U_{\mathbf{k}}=I\right\}\), where \(K=P\) for valence and \(K=Q\) for conduction manifolds. As already mentioned, the multi-step procedure that we propose here aims at making the whole algorithm more robust, since every step produce a better starting point for the final iterative localization algorithm. In summary, we start from an initial manifold that has been already singled out from the remaining high-energy states using standard procedure such as disentanglement and maximal localization (e.g., very accurately using projectability disentanglement [17] to extract as much as possible the bonding and anti-bonding characters from all the bands). The subsequent diagonalizations of Wannier-gauge Hamiltonians separate the manifold into (two) orthogonal submanifolds (for VB & CB, respectively). The (two) parallel-transport steps (for the relevant submanifolds) construct continuous gauges, fixing the randomness caused by the independent Hamiltonian diagonalization at each \(k\)-point. The rotation w.r.t. a single unitary matrix removes the gauge arbitrariness of parallel transport at the first \(k\)-point. The final maximal localizations ultimately smoothen the gauge, leading to two sets of MLWFs, each of which spans the submanifold for VB or CB. In SI Section S3, we prove that the final gauge transformation has block diagonal structure, i.e., the MRWFs are transformed according to \[U(\mathbf{k})=\begin{bmatrix}U_{\mathrm{VB}}(\mathbf{k})&0\\ 0&U_{\mathrm{CB}}(\mathbf{k})\end{bmatrix}, \tag{6}\] where \(U_{\mathrm{VB}}(\mathbf{k})\) and \(U_{\mathrm{CB}}(\mathbf{k})\) are unitary matrices for VB and semi-unitary matrices for CB, respectively. ### Silicon To test the validity of the present method, we first disentangle and maximally localize the VCB of silicon into 8 WFs, using the standard hydrogenic \(s\) and \(p\) projections with energy window disentanglement (we use hydrogenic projectors and energy window disentanglement here to demonstrate that the present approach works well as long as the entire VCB are accurately described; one can also use PDWF to construct MLWFs spanning the entire VCB). The resulting WFs, two of which have \(s\)-character and six of which have \(p\)-character, are shown in the VCB column of Fig. 1a. 
For the valence manifold, after running the Hamiltonian diagonalization and parallel transport, we obtain four WFs with spreads around \(4\,\mathrm{\SIUnitSymbolAngree}^{2}\), but their shapes do not have clear physical meaning, since the gauge of the first \(k\)-point is still arbitrary (see PT column inside the VB column of Fig. 1a); after running the single rotation of the \(W\) matrix, their spreads are further minimized to around \(3\,\mathrm{\SIUnitSymbolAngree}^{2}\), and the shapes now resemble the bonding orbitals between neighboring silicon atoms (PT+SR column inside the VB column of Fig. 1a); a final maximal localization further reduces the spreads to around \(2\,\mathrm{\SIUnitSymbolAngree}^{2}\) and the four spreads end up becoming iden tical (PT(+SR)+ML column inside the VB column of Fig. 1a) thus respecting the symmetry of the full system. For the conduction manifold (CB columns of Fig. 1a), again the shapes of WFs after parallel transport have no clear meaning. However, even after the single rotation, the shapes of WFs still do not resemble the expected anti-bonding orbitals, and only after the final maximal localization the anti-bonding shape is recovered. Note that, in this simple case of silicon, for both valence and conduction manifolds we reach the same set of WFs whether we run a maximal localization directly after parallel transport, or a maximal localization after parallel transport + single rotation, so the two cases are merged into one column in Fig. 1a under the header PT(+SR)+ML. The total spreads for the VB and the CB manifolds after parallel transport + maximal localization are \(8.584\,\mathrm{\SIUnitSymbolAngstrom}^{2}\) and \(31.899\,\mathrm{\SIUnitSymbolAngstrom}^{2}\), respectively. As expected, their sum is larger than the value for the VCB manifold (\(29.619\,\mathrm{\SIUnitSymbolAngstrom}^{2}\)) after disentanglement and maximal localization, since in the VCB case there is additional freedom to further minimize the spread by remixing bonding and anti-bonding WFs into pure \(s\) and \(p\) orbitals (we highlight that using atom-centered \(s,p\) projections does not lead to the most localized orbitals for VCB in silicon; with a choice of atom-centered \(sp^{3}\) projections, the total spreads can further decrease to \(26.761\,\mathrm{\SIUnitSymbolAngstrom}^{2}\), where four WFs have spreads \(3.522\,\mathrm{\SIUnitSymbolAngstrom}^{2}\) and another four \(3.168\,\mathrm{\SIUnitSymbolAngstrom}^{2}\)). In addition, we note that since the Hamiltonian diagonalization returns a random gauge, the spreads for parallel transport and parallel transport + single rotation are different in each run, but the spreads of PT(+SR)+ML should always be the same, since the algorithm should always manage to find the maximally-localized gauge in this simple case. To quantify how our multi-step procedure increases the overall robustness of the algorithm while at the same time reducing its computational cost, we show in SI Fig. S1 the evolution of WF spreads and centers during maximal localizations. Starting from the random gauge directly after Hamiltonian diagonalization (Fig. S1a), it takes 28,430 iterations to converge; using random Gaussians as initial guesses (Fig. S1b), the number of iterations decreases significantly to 812; with the parallel transport gauge (Fig. S1c), the number of iterations further decreases to 228, and the evolution of spreads and centers is much smoother; the best starting gauge is the one after single rotation (Fig. 
S1d), which only takes 40 iterations to converge, without any oscillations in the evolution of spreads and centers. Note that the spreads of valence MRWFs from PT(+SR)+ML are the same as MLWFs obtained from a direct Wannierization of the valence bands, i.e., the valence MRWFs after separation span the original DFT valence manifold, thus the initial VCB Wannierization does not cause any delocalization of the valence MRWFs (see SI Section S3 for a proof). We now discuss the quality of the band interpolation. The WFs for VB & CB are constructed from the initial VCB manifold obtained from a preliminary disentanglement and maximal localization. Therefore, if VB/CB are properly Wannierized, their band interpolation quality should be similar to that of VCB MLWFs. Thus, in the following paragraphs, we compare the band interpolations of VB/CB MLWFs, VCB MLWFs, and DFT bands. Figure 1: **WF shapes and band interpolation of silicon.** (a) WF shape of various Wannierization methods. In the header, the left, center, and right columns correspond to the valence plus conduction bands (VCB), valence bands (VB), and conduction bands (CB), respectively. The PT column shows WFs after running parallel transport (PT); The PT+SR column shows WFs after running PT and single rotation (SR) of \(W\) matrix; the PT(+SR)+ML column shows WFs after running PT (and optionally SR) and maximal localization (ML), since PT+SR+ML and PT+ML fall to the same minimum. The numbers under each WF are the spread and its multiplicity: e.g., 1.668 (2) means there are two WFs having spread \(1.668\,\mathrm{\SIUnitSymbolAngstrom}^{2}\). The last row \(\sum\Omega_{\mathrm{WF}}\) is the total spread of each calculation. The blue spheres are the silicon atoms. (b) Band structure comparison of DFT, VCB, VB, and CB. The horizontal dashed blue line is the upper limit of the frozen window for disentanglement (DIS). The VCB are computed by Wannier interpolation from DIS+ML; the VB and CB are computed by Wannier interpolation from PT+ML. Once the starting VCB manifold was properly disentangled and could well reproduce the DFT band structure, as shown in Fig. 1b, the WFs after parallel transport + maximal localization for the VB and the CB manifolds can reproduce the corresponding part of the VCB Wannier interpolated bands with high accuracy, being visually indistinguishable. To quantitatively evaluate the band interpolation quality, we compute the average band distance, \(\eta_{\text{isolated}}\), between the VCB and VB/CB bands [12; 17; 31]. The \(\eta_{\text{isolated}}\) is defined as \[\eta_{\text{isolated}}^{\text{A-B}}=\sqrt{\frac{\sum_{\mathbf{n}\mathbf{k} }(\epsilon_{\mathbf{n}\mathbf{k}}^{\text{A}}-\epsilon_{\mathbf{n}\mathbf{k}} ^{\text{B}})^{2}}{N_{b}N_{\mathbf{k}}}}, \tag{7}\] where \(\epsilon_{\mathbf{n}\mathbf{k}}\) are the eigenvalues of a band structure, and its superscript A or B refers to the eigenvalues of two different bands, A or B, which can be DFT bands, or Wannier interpolated bands of VCB, VB, or CB; \(N_{b}\) and \(N_{\mathbf{k}}\) are the number of bands and \(k\)-points, respectively. For silicon, we obtain: \(\eta_{\text{isolated}}^{\text{VCB}-\text{VB}}=6.6\,\text{meV}\) and \(\eta_{\text{isolated}}^{\text{VCB}-\text{CB}}=15.5\,\text{meV}\). In general, the VB interpolation is more accurate than CB since the VB MLWFs usually have smaller spreads. To improve the CB interpolation quality, one might need to increase \(k\)-point sampling, as we discuss in Section II.3.2. 
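In practice, Eq. (7) is simply a root-mean-square difference of band energies tabulated on a common set of \(k\)-points; a minimal sketch (assuming the interpolated eigenvalues have already been collected into \((N_{\mathbf{k}},N_{b})\) arrays, e.g. in eV) is:

```python
import numpy as np

def eta_isolated(eps_a, eps_b):
    """Average band distance of Eq. (7) between two band structures.

    eps_a, eps_b : (N_k, N_b) arrays of band energies on the same k-points.
    """
    eps_a, eps_b = np.asarray(eps_a), np.asarray(eps_b)
    return np.sqrt(np.mean((eps_a - eps_b) ** 2))

# e.g., for silicon one would compare the 4 valence bands interpolated from the
# VCB model with those interpolated from the VB-only model:
# eta_vb = eta_isolated(eps_vcb[:, :4], eps_vb)   # hypothetical precomputed arrays
```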
### MoS\({}_{\mathbf{2}}\) Next, we test the method on a two-dimensional (2D) MoS\({}_{2}\) monolayer. For VCB Wannierization, we use the standard hydrogenic Mo \(d\) and S \(s,p\) projections (the semicore states are excluded, so in total 9 VBs and the lowest 4 CBs are Wannierized). Since the VB and the lowest four CBs of MoS\({}_{2}\) are isolated, 13 WFs are maximally localized from 13 bands without disentanglement. The 13 MLWFs can be well characterized into 4 groups by their angular momentum: as shown in the VCB column of Fig. 2b, from top to bottom, 3 resemble \(d_{z^{2}}\), 2 resemble \(d_{xy}\), and the remaining 8 resembles \(sp^{3}\) hybridized orbitals. #### ii.3.1 Valence and conduction bands For the valence manifold, both after PT and after PT+SR, the WFs still do not have a clear resemblance to bonding orbitals; after PT+ML or PT+SR+ML, the WFs can be well grouped into six hybrids of Mo \(d_{z^{2}}\) + S \(p\), two \(s\)-like WFs near sulfur atoms, and one WF floating inside the hexagonal cage and having \(C_{3h}\) symmetry, originating from the hybridization of three properly oriented Mo \(d_{z^{2}}\) orbitals from the three nearest Mo atom. For the conduction manifold, WFs after PT are already close to the anti-bonding hybrid orbitals, and the further SR or ML steps help to slightly reduce the spreads and result in more symmetrized WF shapes. We notice that in contrast to intuition, in this case it took more iterations to converge starting from PT+SR gauge than directly from the PT gauge, as shown in the SI Fig. S2. Although the PT+SR cases start from a smaller total spread, the maximal localizations got stuck longer in plateaus in the final stages, leading to longer iterations. However, in all cases, the evolutions are smooth and converge in less than 500 iterations, since both valence and conduction are isolated manifolds themselves, and PT is able to construct good starting gauges. In terms of band interpolation, again the Wannier in Figure 2: **WF shapes and band interpolation of MoS\({}_{\mathbf{2}}\).** (a) WF shape of various Wannierization methods. The notations are the same as Fig. 1a, except that in some cases the additional star sign (*) indicates that the WFs are grouped together if their spreads are roughly similar, and only one of the shapes is shown: e.g., 4.560 (3)* means there are three WFs having similar spreads, and their average spread is 4.560 \(\text{\AA}^{2}\). The yellow and the silver spheres are the S and Mo atoms, respectively. (b) Band structure comparison of DFT, VCB, VB, and CB. The VCB are computed by Wannier interpolation after maximal localization (ML); the VB and CB are computed by Wannier interpolation from PT+ML. The DFT bands are almost indistinguishable since the VCB, VB, and CB Wannier-interpolated bands overlap essentially exactly with the DFT bands. terpolated VB and CB overlap essentially exactly with the DFT bands as well as the Wannier interpolated VCB, as shown in Fig. 2b, and demonstrated quantitatively by the excellent band-distance values: \(\eta_{\mathrm{isolated}}^{\mathrm{VCB-VB}}=0.19\,\mathrm{meV}\) and \(\eta_{\mathrm{isolated}}^{\mathrm{VCB-CB}}=0.51\,\mathrm{meV}\). #### iii.3.2 Single top valence band In practical applications, the highest valence and lowest conduction bands are of high interest since they are critical for electronic transport properties. 
However, the Wannierization of a single band remains elusive since it is difficult to write down a proper initial projection, resulting from a complete hybridization of many different atomic orbitals. However, in the MoS\({}_{2}\) case, since the top valence band is isolated with respect to all other bands, we can use our algorithm to construct a smooth gauge for that single band, demonstrating the more general applicability of our method, beyond the separation of VB & CB. Figure 3a shows the band interpolation of this single-band WF, and the inset shows the shape of this highly symmetric WF in real space. As usual, since the separate Wannierizations in each submanifold have less degrees of freedom compared with the Wannierization of the initial manifold, the WF spreads for separate Wannierization are usually larger. Indeed, the single WF has a relatively large spread (\(9.288\,\mathrm{\AA}^{2}\)). For such a large spread, artificial interactions between periodic copies of the same WF in different supercells (where the supercell size is determined by the \(k\)-point sampling) may start to become non-negligible. Indeed, we observe some small oscillations at the minimum of the band along \(\Gamma\) to M and along M to K, whose zoom-ins are shown in Figs. 3b and 3c. By increasing the \(k\)-point sampling from the \(12\times 12\times 1\) (\(0.2\,\mathrm{\AA}^{-1}\) density, same as the VCB Wannierization) to \(18\times 18\times 1\), the interpolation quality improves significantly (see Figs. 3b and 3c). This means that the band interpolation error is not caused by our separation method, but by the insufficient \(k\)-point sampling. Therefore, if one targets a very high band interpolation quality, the \(k\)-point sampling might need to be increased when considering a submanifold only. ### SrVO\({}_{\mathbf{3}}\) Here we test the present method on the metallic perovskite SrVO\({}_{3}\), which is a correlated material on the \(t_{2g}\) Hubbard manifold. At the DFT level, there is a small gap (\(37.8\,\mathrm{meV}\) at R point) separating the \(3d\) manifold from bands above, thus satisfying the requirement of isolated manifold for the present method. Starting from 24 MLWFs for VCB, we generate two sets of MLWFs: 5 for \(3d\) and 19 for the remaining manifold. For brevity, we only show the comparison of DFT and Wannier-interpolated bands in Fig. 4a, while the respective WF shapes and spreads are shown in SI Figs. S3 and S4. For the \(3d\) manifold, the PT gauge is already quite close to the maximally localized gauge: maximal localization only slightly decreases the total spread from \(9.815\,\mathrm{\AA}^{2}\) of PT to \(9.629\,\mathrm{\AA}^{2}\), and symmetrizes the shapes of WFs (the \(3d\) columns of Fig. S3). For the remaining manifold, it is quite hard to converge: only after 13,027 iterations (Fig. S4) the maximal localization can converge to real-valued, spatially-symmetrized MLWFs (Fig. S3). In this case, the single rotation greatly helps in improving the convergence: only 1544 iterations are needed to converge to the same MLWFs starting from PT+SR gauge, also removing the oscillations in spreads and centers during Figure 3: **Band structure comparison of the single MoS\({}_{2}\) top valence band.** (a) Comparison of DFT and Wannier-interpolated top VB. The bottom-left inset shows the shape of the resulting single MLWF. 
(b) and (c) are zoom-in comparisons of the Wannier-interpolated bands using different \(k\)-point samplings: red dashed line for \(12\times 12\times 1\), blue dotted dashed line for \(14\times 14\times 1\), and green dotted line for \(18\times 18\times 1\). The \(16\times 16\times 1\) is not shown since it has similar quality to the \(14\times 14\times 1\). maximal localization (Fig. S4). For the band interpolation, again the respective manifolds are accurately reproduced, as demonstrated by the bands in Fig. 4 and the band distances: \(\eta_{\rm isolated}^{\rm VCB-3\it d}=6.74\,\rm meV\) and \(\eta_{\rm isolated}^{\rm VCB-others}=3.00\,\rm meV\). Furthermore, since the \(t_{2g}\) and \(e_{g}\) manifolds are gapped in energy, we can also separate them into two submanifolds. As shown in Fig. 4b, the \(t_{2g}\) and \(e_{g}\) bands are again reproduced very well. ### Results on 77 insulators Finally, we test our method on a set of 77 insulators with number of atoms in the unit cell ranging from 1 to 45. This is the same as the insulator set of Ref. [12], except that 4 (He, Ne, Ar\({}_{2}\), Kr\({}_{2}\)) of the 81 materials are excluded since they consist of closed-shell noble-gas atoms, where the valence electrons are fully occupied (and there is thus no need for separate Wannierizations). This comprehensive test set not only validates the correctness of the present method, but also helps improve its generality to cover edge cases (e.g., the additional treatment of \(\mathbf{b}\)-vectors in the SI Section S2) that would be difficult to discover with only a few test cases. The separate Wannierization is implemented as a fully automated AiiDA [32, 33, 34] workflow, which first runs the Wannierization of VCB using PDWF [17], then splits the VCB manifold with the method discussed here (see Section VI for the Wannier.jl code implementation), and then runs two separate maximal localizations using Wannier90 for the VB and the CB manifolds, respectively. All the Wannierizations finish successfully and have excellent band interpolation quality, which we measure by the band distance [12, 17, 31]\(\eta_{\rm isolated}\) for comparisons between isolated bands (VB of separate Wannierization w.r.t. VB of DFT, VB of separate Wannierization w.r.t. VB of VCB Wannierization, CB of separate Wannierization w.r.t. CB of VCB Wannierization), and \(\eta_{2}\) for comparisons involving CB of DFT (VCB Wannierization w.r.t. VCB of DFT, CB of separate Wannierization w.r.t. CB of DFT), \[\eta_{2}^{\rm A-B}=\sqrt{\frac{\sum_{\pi\mathbf{k}}\tilde{f}_{\pi\mathbf{k}}( \epsilon_{n\mathbf{k}}^{\rm A}-\epsilon_{n\mathbf{k}}^{\rm B})^{2}}{\sum_{n \mathbf{k}}\tilde{f}_{n\mathbf{k}}}}, \tag{8}\] where \(\tilde{f}_{n\mathbf{k}}=\sqrt{f_{n\mathbf{k}}^{\rm A}(E_{F}+2,\sigma)f_{n \mathbf{k}}^{\rm B}(E_{F}+2,\sigma)}\) and \(f(E_{F}+2,\sigma)\) is the Fermi-Dirac distribution with the Fermi energy set to \(2\,\rm eV\) above the real Fermi energy \(E_{F}\) to compare also part of the conduction bands; the smearing width is set to \(\sigma=0.1\,\rm eV\). The statistics of \(\eta\) are shown in Fig. 5. It is worth noting that the VB interpolation (\(\eta_{\rm isolated}^{\rm VB-DFT}=0.859\,\rm meV\)) is even more accurate than the VCB interpolation (\(\eta_{2}^{\rm VCB-DFT}=2.609\,\rm meV\)): this is partly because \(\eta_{2}^{\rm VCB-DFT}\) is averaged over all bands, including the larger error of CB interpolation; to exclude the effect of averaging, we also compute the band distance of the VB of VCB Wannierization w.r.t. 
the VB of DFT, \(\eta_{\rm isolated}^{\rm VB(VCB)-DFT}=1.721\,\rm meV\), which is larger than \(\eta_{\rm isolated}^{\rm VB-DFT}=0.859\,\rm meV\), showing that the VB interpolation is indeed more accurate than the VCB interpolation--this can be explained by two facts: (1) the valence MRWFs are constructed by unitary transformations of Bloch states, thus the valence manifold is exactly preserved (see SI Section S3 for a proof); (2) the valence MRWFs are more localized than the VCB MLWFs (will be discussed in the next paragraph), leading to a better Fourier interpolation quality. The CB interpolation (\(\eta_{2}^{\rm CB-DFT}=7.619\,\rm meV\)) is slightly worse than the CB of VCB Wannierization (\(\eta_{2}^{\rm CB(VCB)-DFT}=6.616\,\rm meV\)), since the CB MLWFs are more delocalized than the VCB MLWFs (will be discussed in the next Figure 4: **Comparison of DFT and Wannier-interpolated bands for SrVO\({}_{\mathbf{3}}\)**. (a) The VCB are computed by Wannier interpolation using PDWF; the \(3d\) bands (\(3d\)) and the remaining bands (others) are computed by Wannier interpolations from PT+ML. At DFT level, the \(3d\) bands have a small gap (\(37.8\,\rm meV\) at R point) separated from the bands above; the present algorithm successfully separates the \(3d\) submanifold from the remaining manifold. (b) The \(t_{2g}\) and \(e_{g}\) bands are further separated starting from the \(3d\) bands of (a). paragraph); moreover, it appears much larger than the \(\eta_{2}^{\rm VCB-DFT}=2.609\,\)mev since \(\eta\) is defined as an average over all bands--the (accurate) VB interpolations are not taken into account in the computation of \(\eta_{2}^{\rm CB-DFT}\). In addition, and most importantly, the VB/CB to VCB distances are \(\eta_{\rm isolated}^{\rm VB-VCB}=2.219\,\)mev and \(\eta_{\rm isolated}^{\rm CB-VCB}=3.835\,\)mev, showing that the submanifolds are well separated with little loss of interpolation accuracy compared with the starting-point VCB Wannierization. For completeness, we also show the statistics of max band distance, which is a stricter measure of band interpolation quality, in the SI Fig. S5. Now we discuss the localization of MLWFs by comparing the average spread from the 77 materials. For VCB Wannierization, the average spread \(\Omega^{\rm VCB}=1.178\,\)A\({}^{2}\); after separation (followed by maximal localization of VB and CB, respectively), the average spread of VB MLWFs is slightly more localized (\(\Omega^{\rm VB}=1.079\,\)A\({}^{2}\)); while that of CB MLWFs are more delocalized (\(\Omega^{\rm CB}=2.919\,\)A\({}^{2}\)). This is consistent with the intuition that the VB MLWFs are the more localized bonding orbitals whereas the CB MLWFs are the more delocalized anti-bonding orbitals. Finally, as discussed in the previous section, the separated MLWFs have less degrees of freedom compared with the VCB MLWFs, thus the sum of the spreads of VB and CB (\(\Omega^{\rm VB}+\Omega^{\rm CB}\)) is in general larger than that of VCB (\(\Omega^{\rm VCB}\)). Figure 6b shows the percentage increase of \(\Omega^{\rm VB}+\Omega^{\rm CB}\) over \(\Omega^{\rm VCB}\). On average, there is a 52.9% increase of the spread. Note, however, that in three cases (Na\({}_{2}\)Se, Ca\({}_{4}\)O\({}_{14}\)V\({}_{4}\), and HK) there are a 39.2%, 56.0%, and 5.1% decreases of spreads, respectively. 
For the first two cases, there are a few large-spread VCB MLWFs, probably because the VCB Wannierizations are trapped in local minima; during the separate Wannierizations, the parallel transport algorithm is able to find a smoother gauge, thus reaching more localized MLWFs for both VB and CB and leading to a smaller \(\Omega^{\rm VB}+\Omega^{\rm CB}\) than \(\Omega^{\rm VCB}\). For completeness, we show the band-structure comparisons, the band distances, the evolution of spreads, and the evolution of the sum of WF centers during maximal localization in the SI [35], for each of the 77 materials. The smooth evolution of the spreads and of the sum of WF centers during maximal localization for CB and VB demonstrates that parallel transport is able to construct a continuous gauge, so that maximal localization has no difficulty in further smoothing it. Figure 5: **Band distances of 77 insulators.** From top to bottom: histograms of band distances for valence plus conduction bands (VCB) w.r.t. DFT, valence bands (VB) of VCB Wannierization w.r.t. DFT, VB Wannierization w.r.t. VCB Wannierization, conduction bands (CB) of VCB Wannierization w.r.t. DFT, CB Wannierization w.r.t. DFT, and CB Wannierization w.r.t. VCB Wannierization. The red and blue bars are the histograms and cumulative histograms, respectively. The vertical lines indicate the mean and median values. ## III Conclusions We introduce an automated method (manifold-remixed Wannier functions (MRWF)) to separate band manifolds by constructing MLWFs for the respective submanifolds that have finite energy gaps between them. The method naturally extends to the case of valence and conduction manifolds, but also to any other case of isolated groups of bands. First, we start with a properly Wannierized valence plus conduction manifold, obtained using any manual or automated method (the recently introduced projectability-disentangled WFs [17] are particularly suitable for this application since they robustly and reliably provide MLWFs that preserve as much as possible the anti-bonding characters). Then, we split the manifold into submanifolds for the target energy ranges by diagonalizing the Wannier-gauge Hamiltonian. Next, using parallel transport, we construct smooth gauges for each submanifold to fix the randomness caused by the independent Hamiltonian diagonalization at every \(k\)-point. Finally, we maximally localize the parallel-transport gauge to obtain smooth MLWFs for the desired manifolds. Before the final maximal localization, we can optionally run a preliminary rotation w.r.t. a single unitary matrix to fix the remaining gauge randomness intrinsic to parallel transport. Often the final maximal localization is able to find the maximally-localized gauge directly; however, the single-rotation step helps to improve the robustness of the final maximal localization, and has the additional benefit of improving localization while still preserving the parallel-transport gauge, which might be relevant in some applications. Results on silicon and MoS\({}_{2}\) show that the final valence (conduction) MLWFs faithfully restore the chemical intuition for bonding/anti-bonding orbitals, and accurately reproduce the valence/conduction part of the band structure of the valence plus conduction manifold. 
Moreover, we demonstrate that the method is not limited to the separation of valence and conduction manifolds, but is also applicable to any system with band groups separated by a finite gap: for instance, the single top valence band of MoS\({}_{2}\); or the \(3d\), \(t_{2g}\), and \(e_{g}\) manifolds of SrVO\({}_{3}\). Furthermore, we implement fully automated AiiDA [32, 33, 34] workflows to carry out the whole separate Wannierization process, and test the present method on a set of 77 insulators. Statistics show that the band interpolation achieves excellent accuracy at the meV scale, and that on average the sum of the VB and CB MLWF spreads increases by around 50% w.r.t. the VCB MLWF spreads. Thus, we highlight that to ensure accurate band interpolation quality, the \(k\)-point sampling density for a separated Wannierization might need to be increased, as demonstrated in the Wannierization of the MoS\({}_{2}\) top valence band. As an outlook, we envision several applications of the present approach: the analysis of bonding/anti-bonding orbitals based on MLWFs; material properties that rely solely on the occupied manifold, such as the electric polarization; spectral theories that require separate sets of localized orbitals for both occupied and unoccupied states (for instance, the Koopmans functionals to predict accurately the electronic band gap [6]); and the dynamical mean-field theory for correlated electrons. Figure 6: **WF spreads of 77 insulators.** (a) From top to bottom: histograms of VCB, VB, and CB MLWF spreads. The upper limit of the right y-axis of each panel is the total number of WFs. (b) Histogram of the percentage increase of the sum of VB and CB spreads w.r.t. VCB spreads. ## IV Methods The DFT calculations are carried out with Quantum ESPRESSO [36], using the SSSP efficiency (version 1.1, PBE functional) library [31] for pseudopotentials and its recommended energy cutoffs. The high-throughput calculations for the 77 insulators are managed by the AiiDA [32, 33, 34] infrastructure, which submits Quantum ESPRESSO and Wannier90 [4] calculations to remote clusters, parses and stores the results into a database, and orchestrates all sequences of simulations and workflows. The automated AiiDA workflows are open-source and hosted on GitHub [37]. Semicore states from pseudopotentials are excluded from the Wannierizations, except for a few cases where the semicore states overlap with valence states; in such cases, all the semicore states are Wannierized. A regular \(k\)-point mesh is used for the Wannier calculations, with a \(k\)-point spacing of \(0.2\,\mathrm{\AA}^{-1}\), as selected by the protocol in Vitale _et al._ [12]. Figures are generated with matplotlib [38]. ## V Data Availability All data generated for this work can be obtained from the Materials Cloud Archive ([https://doi.org/10.24435/materialscloud:2f-hs](https://doi.org/10.24435/materialscloud:2f-hs)). ## VI Code Availability All codes used for this work are open-source; the latest stable versions can be downloaded at [http://www.wannier.org/](http://www.wannier.org/) for Wannier90, [https://www.quantum-espresso.org/](https://www.quantum-espresso.org/) for Quantum ESPRESSO, [https://www.aiida.net/](https://www.aiida.net/) for AiiDA, and [https://github.com/aiidateam/aiida-wannier90-workflows](https://github.com/aiidateam/aiida-wannier90-workflows) for aiida-wannier90-workflows. 
The MRWF method is implemented in an open-source Julia [39] package named Wannier.jl, which is available at [https://github.com/qiaojunfeng/Wannier.jl](https://github.com/qiaojunfeng/Wannier.jl), and [https://www.wannierjl.org/](https://www.wannierjl.org/) for the accompanying documentation/tutorials. ## VII Acknowledgements We thank Antoine Levitt and Michael F. Herbst for helpful discussions and feedback on the Wannier.jl implementation. We acknowledge financial support from the NCCR MARVEL (a National Centre of Competence in Research, funded by the Swiss National Science Foundation, grant No. 205602), the Swiss National Science Foundation (SNSF) Project Funding (grant 200021E.206190 "FISH4DIET"). The work is also supported by a pilot access grant from the Swiss National Supercomputing Centre (CSCS) on the Swiss share of the LUMI system under project ID "PILOT MC EPFL-NM 01", a CHRONOS grant from the CSCS on the Swiss share of the LUMI system under project ID "REGULAR MC EPFL-NM 02", and a grant from the CSCS under project ID s0178. ## VIII Author contributions J. Q. implemented and tested the method. G. P. and N. M. supervised the project. All authors analyzed the results and contributed to writing the manuscript. ## IX Competing interests The authors declare that there are no competing interests.
2304.09743
The Price of Explainability for Clustering
Given a set of points in $d$-dimensional space, an explainable clustering is one where the clusters are specified by a tree of axis-aligned threshold cuts. Dasgupta et al. (ICML 2020) posed the question of the price of explainability: the worst-case ratio of the cost of the best explainable clustering to that of the best unconstrained clustering. We show that the price of explainability for $k$-medians is at most $1+H_{k-1}$; in fact, we show that the popular Random Thresholds algorithm has exactly this price of explainability, matching the known lower bound constructions. We complement our tight analysis of this particular algorithm by constructing instances where the price of explainability (using any algorithm) is at least $(1-o(1)) \ln k$, showing that our result is best possible, up to lower-order terms. We also improve the price of explainability for the $k$-means problem to $O(k \ln \ln k)$ from the previous $O(k \ln k)$, considerably closing the gap to the lower bound of $\Omega(k)$. Finally, we study the algorithmic question of finding the best explainable clustering: we show that explainable $k$-medians and $k$-means cannot be approximated better than $O(\ln k)$, under standard complexity-theoretic conjectures. This essentially settles the approximability of explainable $k$-medians and leaves open the intriguing possibility of getting significantly better approximation algorithms for $k$-means than its price of explainability.
Anupam Gupta, Madhusudhan Reddy Pittu, Ola Svensson, Rachel Yuan
2023-04-19T15:29:12Z
http://arxiv.org/abs/2304.09743v1
# The Price of Explainability for Clustering+ ###### Abstract Given a set of points in \(d\)-dimensional space, an explainable clustering is one where the clusters are specified by a tree of axis-aligned threshold cuts. Dasgupta et al. (ICML 2020) posed the question of the _price of explainability_: the worst-case ratio between the cost of the best explainable clusterings to that of the best clusterings. We show that the price of explainability for \(k\)-medians is at most \(1+H_{k-1}\); in fact, we show that the popular Random Thresholds algorithm has _exactly_ this price of explainability, matching the known lower bound constructions. We complement our tight analysis of this particular algorithm by constructing instances where the price of explainability (using _any_ algorithm) is at least \((1-o(1))\ln k\), showing that our result is best possible, up to lower-order terms. We also improve the price of explainability for the \(k\)-means problem to \(O(k\ln\ln k)\) from the previous \(O(k\ln k)\), considerably closing the gap to the lower bounds of \(\Omega(k)\). Finally, we study the algorithmic question of finding the best explainable clustering: We show that explainable \(k\)-medians and \(k\)-means cannot be approximated better than \(O(\ln k)\), under standard complexity-theoretic conjectures. This essentially settles the approximability of explainable \(k\)-medians and leaves open the intriguing possibility to get significantly better approximation algorithms for \(k\)-means than its price of explainability. ## 1 Introduction Clustering is a central topic in optimization, machine learning, and algorithm design, with \(k\)-medians and \(k\)-means being two of the most prominent examples. In recent years, mainly motivated by the impressive but still mysterious advances in machine learning, there has been an increased interest in the transparency and in the explainability of solutions. In the context of clustering, this was formalized in a highly influential paper by Dasgupta et al. [2]. To motivate the concept of explainability, consider the task of clustering \(n\) points in \(\mathbb{R}^{d}\) into \(k\) clusters. If we solve \(k\)-means, the clusters are in general given by a Voronoi diagram where each cluster/cell is defined by the intersection of hyperplanes. Each cluster may be defined using up to \(k-1\) hyperplanes, each one of them possibly depending on all \(d\) dimensions with arbitrary coefficients. Since the dimensions typically correspond to features (e.g., "age", "weight", and "height" are natural features in a dataset of people), arbitrary linear combinations of these features may be difficult to interpret. To achieve more explainable solutions, we may need to restrict our algorithms to find clusters with simpler descriptions. The model in [2] achieves explainability in an elegant way resembling the classical notion of decision trees in theoretical computer science. Specifically, a clustering is called _explainable_ if it is given by a decision tree, where each internal node splits data points with a _threshold cut_ in a single dimension (feature), and each of the \(k\) leaves corresponds to a unique cluster. This leads to more explainable solutions already in two dimensions (see, e.g., Figure 1); the benefit is even more clear in higher dimensions. Indeed, the binary tree structure gives an easy sequential procedure for classifying points, and since each threshold cut is axis-aligned, there is no linear combinations of features. 
Moreover, the total number of dimensions/features used to describe the clustering is at most \(k-1\), independent of \(d\), which is attractive for high-dimensional data1. Footnote 1: We remark that dimensionality reduction for \(k\)-median and \(k\)-means show that one can reduce the dimension of the data points to \(O(\log(k)/\epsilon^{2})\)[19, 10]. However, those techniques take arbitrary linear combinations of the original dimensions and therefore destroy explainability. Explainability is thus a very desirable and appealing property, but the best explainable clustering may have cost much higher than the cost of the best unrestricted clusterings. This tradeoff is captured by the _price of explainability_: the loss in cost/quality if we restrict ourselves to explainable clusterings. Figure 1: Example from [2]. The optimal \(5\)-means clustering (left) uses combinations of both features. The explainable clustering (middle) uses axis-aligned rectangles summarized by the threshold tree (right). In their original paper, Dasgupta et al. [2] gave a greedy algorithm which takes an arbitrary "reference" clustering and produces an explainable clustering from it. It repeatedly adds threshold cuts which separate the centers of the reference clustering until the threshold tree has one leaf for each center of the reference clustering. Since only the points separated in the threshold tree from their closest reference center suffer an increase in cost, their algorithm repeatedly selects a threshold cut that separates the fewest points from their closest reference center. They proved that it outputs an explainable clustering with cost \(O(k)\) times higher for the case of \(k\)-medians, and \(O(k^{2})\) times higher for the case of \(k\)-means. They also show a lower bound of \(\Omega(\ln k)\) for both of these problems. Since the greedy algorithm's analysis is tight, an alternative strategy was independently proposed by [1, 16, 14]: take random cuts instead! The strategy is especially elegant in the case of \(k\)-medians (the distribution of cuts is more complex than uniform in the case of \(k\)-means): Repeatedly select threshold cuts _uniformly at random_ among those that separate centers of the reference clustering. We refer to this as the Random Thresholds algorithm (see SS2.1 for a formal description). While the algorithm is easy to describe, its performance guarantee has remained an intriguing question. There are simple instances in which it increases the cost by a factor of \(1+H_{k-1}\), where \(H_{k-1}=\nicefrac{{1}}{{1}}+\nicefrac{{1}}{{2}}+\nicefrac{{1}}{{3}}+\ldots+ \nicefrac{{1}}{{(k-1)}}\) is the \((k-1)^{th}\) harmonic number (see SS5), and this was conjectured to be the worst case for the Random Thresholds algorithm [11]. On a high level, a difficulty in analyzing the Random Thresholds algorithm is that it may take prohibitively expensive cuts with a small probability. To avoid this and other difficulties, the results in [11, 14, 15] considered more complex variants that intuitively forbid such expensive cuts. Specifically, [11] gave a variant that outputs a threshold tree whose expected cost increases by at most a \(O(\ln^{2}k)\) factor, and both [14, 15] obtain a better performance guarantee of \(O(\ln k\ln\ln k)\) for their variants of the Random Thresholds algorithm. These results give an exponential improvement over that in [2] but fail to settle the price of explainability, and they leave open the conjectured performance of the natural Random Thresholds algorithm. 
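To make these cut-selection rules concrete, the sketch below (our own illustration, not code from any of the cited works) shows the greedy rule of [2] at a single node of the threshold tree: enumerate the axis-aligned cuts lying between distinct center coordinates and keep the one that separates the fewest points from their reference center; the Random Thresholds strategy would instead draw such a cut uniformly at random. All names and array conventions are assumptions for illustration.

```python
import numpy as np

def greedy_cut(points, ref_centers, centers):
    """points: (n, d) array of data points in the current node;
    ref_centers: (n, d) array with the coordinates of each point's reference center;
    centers: (k', d) array of the centers still sharing this node.
    Returns (number of points separated, dimension, threshold) of the best cut."""
    best = None
    for i in range(centers.shape[1]):
        vals = np.unique(centers[:, i])
        for lo, hi in zip(vals[:-1], vals[1:]):   # cuts between distinct center coordinates
            theta = (lo + hi) / 2.0
            # a point is "separated" if it lands on the other side of the
            # cut from its own reference center
            mistakes = int(np.sum((points[:, i] <= theta)
                                  != (ref_centers[:, i] <= theta)))
            if best is None or mistakes < best[0]:
                best = (mistakes, i, theta)
    return best
```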
Our main results on the price of explainability are (a) to settle this conjecture in the affirmative (i.e., to give a _tight_ analysis of the Random Thresholds algorithm), and (b) to show that its price of explainability of \(1+H_{k-1}=(1+o(1))\ln k\) is not only asymptotically correct, but also _tight up to lower order terms_: we cannot do much better regardless of the algorithm. **Theorem 1** (Upper bound for \(k\)-medians).: _The price of explainability for \(k\)-medians is at most \(1+H_{k-1}\). Specifically, given any reference \(k\)-medians clustering, the Random Thresholds algorithm outputs an explainable clustering with expected cost at most \(1+H_{k-1}\) times the cost of the reference clustering._ **Theorem 2** (Lower Bound for \(k\)-medians).: _There exist instances of \(k\)-medians for which any explainable clustering has cost at least \((1-o(1))\ln k\) times the cost of the optimal \(k\)-medians clustering._ These results resolve the performance of the Random Thresholds algorithm and the price of explainability for \(k\)-medians. For \(k\)-means, we are unable to settle the price of explainability completely, but we make significant progress in closing the gap between known upper and lower bounds. Here, the best upper bound before our work was \(O(k\ln k)\)[15] (see also [1] for better guarantees when the input is low-dimensional). Moreover, we know instances where any single threshold cut increases the cost of the clustering by a factor \(\Omega(k)\) (see, e.g., [13]), and hence the price of explainability of \(k\)-means is at least \(\Omega(k)\). It is tempting to guess that the \(O(k\ln k)\) guarantee in [1] is tight, for the following reason. The first lower bound \(\Omega(\ln k)\) for \(k\)-means in [10] is obtained by arguing that (i) a single threshold cut increases the cost by at least that of the reference clustering and (ii) a threshold tree has height \(\Omega(\ln k)\), and so the total cost increases by a constant \(\Omega(\ln k)\) times. Since we have examples where any single cut increases the cost by \(\Omega(k)\), it is reasonable to hope for more complex instances to combine the two sources of hardness, and lose a \(\Omega(k)\cdot\Omega(\ln k)\) factor. However, we prove that this is _not_ the case and give an improved upper bound: **Theorem 3** (Upper bound for \(k\)-means).: _The price of explainability for \(k\)-means is at most \(O(k\ln\ln k)\). Specifically, given any reference \(k\)-means clustering, there exists an algorithm that outputs an explainable clustering with expected cost at most \(O(k\ln\ln k)\) times the reference cost._ Hence the price of explainability for \(k\)-means lies between \(\Omega(k)\) and \(O(k\ln\ln k)\). We leave the tight answer as an intriguing open problem. In particular, we conjecture that the lower bound is tight and that it is achieved by the \(k\)-means variant of the Random Thresholds algorithm. (The ideas for \(k\)-means also extend to \(\ell_{p}\) norms for \(p\geq 2\); details will appear in a forthcoming version.) Our final contribution is to _study the approximability of explainable clustering_. So far, the literature has mostly focused on settling the price of explainability [10, 11, 13, 15, 16, 17] and its behavior in a bi-criteria setting [21] where the explainable clustering is allowed to form more than \(k\) clusters. 
These algorithms give upper bounds on the approximability of explainable clustering since they are all efficient, and the cost of an optimal unconstrained clustering is a valid lower bound on the best explainable one. Recent work of [13, 14] asked the question: _how well can we approximate the best explainable clustering?_ They showed that the problem is APX-hard, but left open the question of whether the problem can be approximated better. Resolving this natural question positively would have the advantage of finding good explainable clusterings for those instances that do admit such clusterings, which is often the experience for more practical instances. Our result shows a surprising hardness for the \(k\)-medians and \(k\)-means problem. **Theorem 4** (Approximability).: _The explainable \(k\)-medians and \(k\)-means problems are hard to approximate better than \((\nicefrac{{1}}{{2}}-o(1))\ln k\), unless P=NP._ These results show that we cannot approximate \(k\)-medians much better than its price of explainability (unless P=NP); the approximability for \(k\)-means remains tantalizingly open. ### Outline and Technical Overview Upper bounding the performance of the Random Thresholds algorithm.Our main result is the tight analysis (up to lower order terms) of the price of explainability for \(k\)-medians. The upper bound of \(1+H_{k-1}=(1+o(1))\ln k\) is given by a tight analysis of the natural Random Thresholds algorithm. We now sketch the main ingredients of this analysis. We start with two easy but useful observations: (i) by linearity of expectations, it is sufficient to bound the expected cost of a single point, and (ii) by translation, this point can be assumed to be the origin. We thus reduce the problem to that of analyzing the expected distance from the origin to the last remaining center (i.e., the center in the same leaf of the threshold tree as the origin). We call this process the _Closest Point Process_ and define it formally in SS2. Algorithm has no better guarantee than \(1+H_{k-1}\).For this discussion, let us make a simplifying assumption that is _not without loss of generality_: the \(k\) centers are located on separate axes, so that center \(i\) is at \(\mathbf{e}_{i}\cdot d_{i}\), with \(d_{1}\leq d_{2}\leq\ldots\leq d_{k}\), hence the closest center is at a distance \(d_{1}\). As cuts are selected uniformly at random, the first cut removes some center \(i_{1}\) with probability \(\nicefrac{{d_{i_{1}}}}{{\sum_{j}d_{j}}}\). Conditioned on that, the second cut removes center \(i_{2}\) with probability \(\nicefrac{{d_{i_{2}}}}{{\sum_{j\neq i_{1}}d_{j}}}\), and so on. In other words, at each step, a center \(i\) is separated from the origin with probability proportional to its distance \(d_{i}\). For the further special case when \(d_{2}=d_{3}=\ldots=d_{k}=D\), the expected distance to the last remaining center is: \[\Pr[\text{1 is last center}]\cdot d_{1}+(1-\Pr[\text{1 is last center}])\,D\leq d_{1}+(1-\Pr[\text{1 is last center}])\cdot D\] \[\qquad\qquad\qquad=d_{1}+\big{(}1-\big{(}1-\tfrac{d_{1}}{(k-1) D+1}\big{)}\big{(}1-\tfrac{d_{1}}{(k-2)D+1}\big{)}\cdots\big{(}1-\tfrac{d_{1}}{D+1} \big{)}\big{)}\cdot D\,.\] This is an increasing function of \(D\) and tends to \((1+H_{k-1})\cdot d_{1}\) when \(D/d_{1}\to\infty\), which shows that the Random Thresholds algorithm cannot be better than the conjectured factor of \(1+H_{k-1}\) (see also SS5.1 for a formal description). 
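As a numerical sanity check of this limit (our own illustration, not part of the paper), the snippet below evaluates the exact expected distance in this uniform special case; we write the denominators as \(jD+d_{1}\), which coincides with the displayed expression under the normalization \(d_{1}=1\) used there.

```python
def expected_last_distance(k, d1, D):
    """Separate-axis case with d_2 = ... = d_k = D: center 1 is the last
    remaining center with probability prod_j (1 - d1/(j*D + d1)); otherwise
    the last center is one of the far centers, at distance D."""
    p1_last = 1.0
    for j in range(1, k):              # j = number of far centers still present
        p1_last *= 1.0 - d1 / (j * D + d1)
    return p1_last * d1 + (1.0 - p1_last) * D

def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))

k, d1 = 100, 1.0
for D in (10.0, 1e3, 1e6):
    print(D, expected_last_distance(k, d1, D))
print("limit:", (1 + harmonic(k - 1)) * d1)   # approx 6.18 for k = 100
```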
Inductive argument and reduction to worst-case instances.But can this special setting really be the worst-case? Perhaps surprisingly, we prove that this is the case. An inductive argument can help remove the assumption that \(d_{2}=d_{3}=\ldots=d_{k}\): Since \((1+H_{k-1})\cdot d_{1}\) is the right answer for \(k\leq 2\), we can try to proceed inductively on the number of centers to analyze \[\sum_{i=1}^{k}\Pr[\text{first cut removes center $i$}]\cdot\mathbb{E}[ \text{expected cost of process with centers $[k]\setminus\{i\}$}]\,.\] Since each sub-instance in the sum has \(k-1\) centers, we can use the induction hypothesis to bound every term except \(i=1\) in the sum by \((1+H_{k-2})\cdot d_{1}\). To bound the cost of the instance with centers \([k]\setminus\{1\}\), we could proceed based on the following natural observation: the farther away a center is, the smaller probability it has to be the last remaining center, since it is more likely to be cut/removed at each step). This would mean that the expected cost of the process with centers \([k]\setminus\{1\}\) is at most \(\tfrac{d_{2}+d_{3}+\ldots+d_{k}}{k-1}\). And substituting, we get that the expected distance to the last center is at most \[\Pr[\text{first cut removes center $1$}]\cdot\frac{d_{2}+d_{3}+\ldots+d_{k}}{k -1}+(1-\Pr[\text{first cut removes center $1$}])\,(1+H_{k-2})\cdot d_{1}\,,\] which is at most \((1+H_{k-1})\cdot d_{1}\) using that \(\Pr[\text{first cut removes center $1$}]=\tfrac{d_{1}}{d_{1}+d_{2}+\ldots+d_{k}}\). Several research groups found the above inductive proof for the separate-axis special case, and it was one of the main motivations for the conjectured performance of the Random Thresholds algorithm. To prove it for the general case, it "only" remains to remove the assumption that each center is located on a separate axis. This assumption, however, turns out to be highly non-trivial to overcome. One indication of this difficulty is that, in the general case, there are arbitrary correlations between centers: whether center \(i\) is removed impacts the probability that \(j\) is removed. This causes most natural monotonicity conditions not to hold anymore. For example, when centers are arbitrarily located, a far center can be more likely to be the last one than a closer one. We overcome these difficulties in a technical proof that manages to show that the worst-case is as above. In this proof, we write the points as a conic combination of cuts, view the cost as a function of this embedding, and naturally try to bound its derivative. This is where the technical challenges appear: since the derivative is also not "well-behaved" we define a better-behaved upper bound called the "pseudo-derivative", and show that this pseudo-derivative is maximized when all points are at the same distance \(D\) from the origin (even when they are not along separate axes). We then bound the pseudo-derivative for the non-separate-axis uniform case. This is the technically most challenging part of the paper, and we present it in SS5. A Simpler proof via Competing Exponential Clocks.Interestingly, we can present not just one but two proofs of the correct \((1+o(1))\ln k\) bound: we give an alternative simpler proof which takes the viewpoint of _competing exponential clocks_ (previously used, e.g., for the multiway cut problem [1, 14, 15]). In the separate-axis case, it boils down to sampling an exponential random variable \(Z_{i}\) with rate \(d_{i}\) for each center \(i\). 
Two well-known properties of the exponential distribution are that (i) the probability that \(i\) "rings first" is proportional to its rate, i.e., \(\Pr[Z_{i}\leq\min_{j\neq i}Z_{j}]=d_{i}/\sum_{j}d_{j}\) and that (ii) the distribution is memoryless \(\Pr[Z_{i}\geq s+t\mid Z_{i}\geq t]=\Pr[Z_{i}\geq s]\). This implies that taking cuts in the order of the random variables \(\{Z_{i}\}_{i\in[k]}\) (until one center remains) is identical to the Closest Point Process. We now analyze the competing exponential clocks as follows. For a center \(i\in[k]\) with \(i\geq 2\), let \(Q_{i}\) be the probability that \(i\) is the last center among the faraway centers \([k]\setminus\{1\}\). Conditioning on this, \(i\) is the last center and we pay a distance of \(d_{i}\) instead of \(d_{1}\) if \(Z_{1}\leq Z_{i}\). Now for the probability of \(Z_{1}\leq Z_{i}\) to be maximized, \(Z_{i}\) should be as large as possible _in the event when \(i\) is last among \([k]\setminus\{1\}\)_. So we can upper bound the contribution of center \(i\) by considering the upper quantile of the exponential distribution of \(Z_{i}\) with total probability mass \(Q_{i}\). Now standard calculations show that the total contribution of center \(i\) to the cost is \(d_{1}\left(Q_{i}-Q_{i}\ln(Q_{i})\right)\). We thus get the upper bound \[\underbrace{d_{1}}_{\text{contribution of close center 1}}+\underbrace{d_{1} \sum_{i=2}^{k}\left(Q_{i}-Q_{i}\ln(Q_{i})\right)}_{\text{contribution of far centers}}=d_{1}(2+\ln(k-1))\,,\] where used that the entropy \(\sum_{i=2}^{k}-Q_{i}\ln(Q_{i})\) is at most \(\ln(k-1)\). What is particularly nice about this viewpoint is that the analysis does not use the assumption of centers being on separate axes. Indeed, we can define exponential random variables for each cut (as we did in our first proof), and the whole machinery goes through. A small complication arises due to cuts that separate the closest center 1 along with other points from the origin, but we can give a less precise but still tight (up to lower order terms) bound. Apart from achieving the factor \((1+o(1))\ln k\), the arguments are also arguably cleaner and easier than even the prior non-tight analyses of the Random Thresholds algorithm. We present these arguments in SS2. Lower-Bounding the Price of Explainability.Recall that the \(\Omega(\ln k)\) on the price of explainability for \(k\)-medians [11] is based on the following idea 1. Select \(k\) centers uniformly at random from a hypercube \(\{0,1\}^{d}\), and 2. Add a 1-ball around each center with \(d\) points, one per dimension, giving \(dk\) points.. The optimal unconstrained clustering has cost \(dk\), so how expensive is the best explainable clustering? Any pair of centers expect to differ in \(\nicefrac{{d}}{{2}}\) coordinates, and so by concentration, their distance \(\approx d/2\) whp. Furthermore, in a sub-instance with \(k^{\prime}\) centers, any cut separates \(k^{\prime}\) points from their closest center, and these incur cost \(\approx d/2\). As the threshold tree has a height of at least \(\log_{2}k\), the total cost of any explainable clustering can now be seen to be at least \(\approx(dk/2)\log_{2}k\). While asymptotically tight, the above symmetric construction does not lead to stronger lower bounds than \(\frac{1}{2}\log_{2}k\). We instead use an asymmetric construction to achieve our tight lower bound of \((1-o(1))\ln k\), and it gives us hardness of approximability too! 1. Place a special center at the origin, and take a \(1\)-ball around it giving \(d\) points. 2. 
The remaining centers are located at the characteristic vectors of some carefully chosen subsets of \(\{1,\dots,d\}\), and 3. Finally, add many points colocated with the centers which force any good threshold tree to have one leaf per center. Now the only way to separate a center from the origin is to employ a threshold cut along a dimension, which corresponds to an element in the set corresponding to that center. Our threshold cuts must thus form a _hitting set_ of the set system corresponding to the non-special centers. Furthermore, the number of points separated from their closest center is equal to the size of this hitting set. This tight connection allows us to apply the known results for the hitting set problem, and we get a \((1-o(1))\ln k\) lower bound on the price of explainability for \(k\)-medians. In addition, the connection together with Feige's landmark paper [10] implies our hardness of approximation results. (Interestingly enough, [11] give a very similar construction, but with different parameters, which only gives them NP-hardness.) We remark that the hardness result for \(k\)-means follows from that of \(k\)-medians since all points and centers are located on the hypercube, and thus the \(\ell_{1}\)-distances equal the squared \(\ell_{2}\)-distances. We present the reduction from hitting set and its implications in SS3. Improvements for \(k\)-means.Our final result is an \(O(k\ln\ln k)\) price of explainability for \(k\)-means. We observe that there are two ways to achieve the weaker \(O(k\ln k)\) bound. The first transforms the \(k\)-means instance into \(k\)-medians, but this distorts distances by at most \(k\) using the Cauchy-Schwarz inequality; then we lose another \(O(\ln k)\) using our analyses above. Another follows the approach of [1], of finding cuts that have a good cost-to-balance ratio. Both these approaches are tight, but we show that they cannot be tight at the same time! I.e., if we lose a factor of \(\Omega(k)\) due to Cauchy-Schwarz, then the cuts partition the instance into parts that are a constant factor smaller, and the loss becomes a geometric sum that sums to \(O(k)\). A quantitative version of this tradeoff gives our result; the details appear in SS4. Outline.We present the simpler exponential clocks-based proof for \(k\)-medians in SS2, followed by the matching hardness in SS3. The result for \(k\)-means is in SS4, followed by the tight \(1+H_{k-1}\) bound for \(k\)-medians in SS5. ### Further Related Work We now discuss some of the related results beyond those mentioned above. Some works consider the effect of the dimension \(d\) of the price of explainability. Laber and Murtinho [13] showed an \(O(d\ln k)\) price of explainability for \(k\)-medians, which was improved by Esfandiari et al. [1] to \(O(\min\{d\ln^{2}d,\ln k\ln\ln k\})\). Charikar and Hu [1] showed that the price of explainability is at most \(k^{1-2/d}\operatorname{poly}\log k\) for \(k\)-means, and a lower bound tight up to poly-logarithmic terms. Esfandiari et al. [1] also gave a lower bound of \(\Omega(d)\) for \(k\)-medians. Frost et al. [13] posed the question of getting better guarantees using more than \(k\) clusters; Makarychev and Shan [14] showed how to open \((1+\delta)k\) centers and get a guarantee of \(O(\nicefrac{{1}}{{\delta}}\cdot\ln^{2}k\ln\ln k)\) for \(k\)-means. The algorithmic problem has received much less attention. Bandyapadhyay et al. [11] gave algorithms that find the best \(k\)-medians and \(k\)-means clusterings in time \(n^{2d}\cdot(dn)^{O(1)}\). 
They also showed NP-hardness, and \(W[2]\)-hardness of finding the best explainable clustering; interestingly, their hardness construction is also based on the hitting set problem and is very similar to ours, but they use a different setting of parameters and hence only infer an NP-hardness. Laber [15] gave an APX-hardness based on a reduction from finding vertex covers in triangle-free graphs. Our result showing a logarithmic hardness essentially settles the question for \(k\)-medians. Both the \(k\)-medians and \(k\)-means problems have been studied extensively in the unconstrained setting (i.e., without the explainability requirement), both for geometric spaces (see, e.g., [11, 12, 13, 14]) and general metric spaces (see, e.g., [11, 15]). The techniques and algorithms for those settings seem orthogonal to those used for our problems. ### Preliminaries and Notation Given points \(\mathcal{X}=\{\mathbf{x}^{1},\ldots,\mathbf{x}^{n}\}\subseteq\mathbb{R}^{d}\), a _clustering_\(\mathcal{C}\) of \(\mathcal{X}\) is a partition of \(\mathcal{X}\) into _clusters_\(\{C^{1},\ldots,C^{k}\}\). Each cluster \(C^{i}\) is assigned a center \(\mathbf{\mu}^{i}\) (giving _distinct_ centers \(\mathcal{U}=\{\mathbf{\mu}^{1},\ldots,\mathbf{\mu}^{k}\}\subseteq\mathbb{R}^{d}\)). Let \(\pi(\mathbf{x})\) be the center \(\mathbf{\mu}^{j}\in\mathcal{U}\) corresponding to the cluster \(C^{j}\) containing \(x\), and define the \(q\)-norm cost of a clustering \(\mathcal{C}\) with centers \(\mathcal{U}\) as \[\operatorname{cost}_{q}\left(\pi,\mathcal{U}\right)=\sum_{\mathbf{x}\in\mathcal{ X}}\|\mathbf{x}-\pi(\mathbf{x})\|_{q}^{q}. \tag{1}\] The \(k\)-medians and \(k\)-means costs of a clustering are simply the minimum values for the parameters \(q=1\) and \(2\), minimized over all possible centers \(\mathcal{U}\). _Threshold Cuts and Trees._ We call a hyperplane of the form \(x_{i}\leq\theta\) a _threshold cut_, and represent it as \((i,\theta)\). A _threshold tree_\(T\) is a binary tree with each non-leaf node \(u\) corresponding to a threshold cut \((i_{u},\theta_{u})\). Define \(B_{u}\subseteq\mathbb{R}^{d}\) as the region corresponding to node \(u\in T\), where \(B_{r}:=\mathbb{R}^{d}\) for \(r\) being the root of \(T\); if nodes \(l(u)\) and \(r(u)\) are the left and right children of node \(u\), then \[B_{l(u)}:=B_{u}\cap\{\mathbf{x}\mid x_{i_{u}}\leq\theta_{u}\}\qquad\text{and} \qquad B_{r(u)}:=B_{u}\cap\{\mathbf{x}\mid x_{i_{u}}>\theta_{u}\}.\] _Explainable Clusterings._ Given points \(\mathcal{X}\) and a threshold tree \(T\), the clustering \(\mathcal{C}_{T}\) of \(\mathcal{X}\)_explainable_ by the threshold tree \(T\) is the partition of \(\mathcal{X}\) induced by the regions corresponding to leaves in \(T\), i.e., each leaf \(\ell\) of \(T\) generates a cluster \(C^{\ell}:=\mathcal{X}\cap B_{\ell}\) of \(\mathcal{C}_{T}\). A clustering \(\mathcal{C}\) of \(\mathcal{X}\) is said to be an _explainable clustering_ if there exists a threshold tree \(T\) such that \(\mathcal{C}=\mathcal{C}_{T}\). For a set of centers \(\mathcal{U}\), a threshold tree \(T\)_separates_\(\mathcal{U}\) if each of the regions corresponding to leaves in \(T\) contains exactly one center in \(\mathcal{U}\). Let \(\mu^{\ell}\) denote the unique center in the singleton set \(\mathcal{U}\cap\mathcal{B}_{\ell}\) for leaf \(\ell\) in \(T\). 
For any set of points \(\mathcal{X}\), centers \(\mathcal{U}\), and a threshold tree \(T\) that separates \(\mathcal{U}\), each leaf in \(T\) corresponds to a cluster \(C^{\ell}\) in the clustering \(\mathcal{C}_{T}\), and also to a center \(\mu^{\ell}\). Such a tree induces an assignment \(\pi_{T}:\mathcal{X}\to\mathcal{U}\) from points to centers. With this, we can define \[\operatorname{cost}_{q}\left(T\right)=\operatorname{cost}_{q}(\pi_{T},\mathcal{ U})=\sum_{\mathbf{x}\in\mathcal{X}}\|\mathbf{x}-\pi_{T}(\mathbf{x})\|_{q}^{q}. \tag{2}\] ## 2 Explainable \(k\)-medians via Exponential Clocks We now give a bound of \((1+o(1))\,\ln k\) on the price of explainability for \(k\)-medians. This is slightly weaker than the bound of \(1+H_{k-1}\approx\ln k+O(1)\) promised in Theorem 1, but the proof is simpler and more illuminating. (We give the proof of the tight bound in SS5.) ### The Random Threshold Algorithm and the Closest Point Process Let us first formalize the Random Thresholds algorithm: given a reference clustering for point set \(\mathcal{X}\) which opens centers \(\mathcal{U}\) and maps the data points to centers using \(\pi:\mathcal{X}\to\mathcal{U}\), we construct a threshold tree \(T\) randomly as follows. For simplicity, let \(\mathcal{X}\subseteq[a,b]^{d}\) for some \(a,b\in\mathbb{R}\). We start with the trivial threshold tree with the root corresponding to all of \(\mathbb{R}^{d}\). Now while the leaves of \(T\) do not give us a separating partition for \(\mathcal{U}\), we pick a dimension \(i\in[d]\) and a value \(\theta\in[a,b]\) independently and uniformly at random. For each leaf \(u\) of \(T\), if this threshold cut separates at least one pair of centers which share the region \(B_{u}\), partition the leaf using the threshold cut. It is easy to see that as long as all the centers in \(\mathcal{U}\) are distinct, this process outputs a threshold tree that separates \(\mathcal{U}\). The main question is: what is the cost of the resulting explainable clustering \(\mathcal{C}_{T}\), in expectation? Since the algorithm does not depend on the data points \(\mathcal{X}\), and it is invariant under translations and scaling, we can use linearity of expectations and focus on the following simpler problem: **Definition 5** (Closest Point Process).: Given a set of \(k\) points \(U\subseteq\mathbb{R}^{d}\), let \(\mathbf{p}^{*}:=\arg\min_{\mathbf{p}\in U}\|\mathbf{p}\|_{1}\) be the point in \(U\) closest to the origin. Assume \(\|\mathbf{p}^{*}\|_{1}=1\). Run the Random Thresholds algorithm to create a random threshold tree \(T\) that separates this point set \(U\). Consider leaf node \(u\in T\) whose corresponding region \(B_{u}\subseteq\mathbb{R}^{d}\) contains the origin, and let \(\mathbf{\widehat{p}}\) be the unique point of \(U\) in this region \(B_{u}\). Define \[f(U):=\mathbb{E}[\;\|\mathbf{\widehat{p}}\|_{1}\;]. \tag{3}\] Finally, define \(\alpha(k):=\max_{U:|U|=k}f(U)\). 
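The following Monte-Carlo sketch (an illustration we add here, not the paper's code) simulates the Closest Point Process directly from Definition 5 and estimates \(f(U)\); the sampling range and helper names are assumptions made for the example.

```python
import random

def estimate_f(U, trials=50000, seed=0):
    """U: list of points (tuples) in R^d, with the closest point at l1-distance
    about 1 from the origin.  Estimates f(U) by drawing uniform threshold cuts
    and keeping the origin's side whenever a cut separates surviving points."""
    rng = random.Random(seed)
    d = len(U[0])
    lo = min(0.0, min(p[i] for p in U for i in range(d)))
    hi = max(0.0, max(p[i] for p in U for i in range(d)))
    total = 0.0
    for _ in range(trials):
        alive = list(U)
        while len(alive) > 1:
            i = rng.randrange(d)
            theta = rng.uniform(lo, hi)
            origin_side = (0.0 <= theta)          # which side of the cut contains the origin
            kept = [p for p in alive if (p[i] <= theta) == origin_side]
            if 0 < len(kept) < len(alive):        # the cut separates a surviving pair
                alive = kept
        total += sum(abs(x) for x in alive[0])
    return total / trials

# example: k points on separate axes, distances 1, D, ..., D
D, k = 50.0, 8
U = [tuple(1.0 if j == 0 else 0.0 for j in range(k))] + \
    [tuple(D if j == i else 0.0 for j in range(k)) for i in range(1, k)]
print(estimate_f(U))   # should stay below 1 + H_{k-1} ~ 3.59 here
```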
**Lemma 6** (Focus on Closest Point).: _Given a reference clustering \(\pi:\mathcal{X}\to\mathcal{U}\), the expected cost of the explainable clustering produced by the Random Thresholds algorithm is_ \[\mathbb{E}[\mathrm{cost}(\pi_{T},\mathcal{U})]\leq\alpha(|\mathcal{U}|)\cdot \mathrm{cost}(\pi,\mathcal{U})\,.\] _Therefore, the price of explainability is at most \(\alpha(k)\)._ Given this reduction (which we prove in Appendix A), the main result of this section is: **Theorem 7** (Exponential Clocks).: _For any set \(U\) with \(k\) points, \(f(U)\leq(1+o(1))\,\ln k\)._ ### The Exponential Clocks Viewpoint: the Last Point We now focus on bounding the value \(f(U)\) for any point set \(U\in\mathbb{R}^{d}\). We first impose some structure, just for the sake of analysis. Since \(\ell_{1}\) metrics can be written as a non-negative sum of cut metrics (see, e.g., [10]), again using the data-obliviousness and translation-invariance of the algorithm we can assume the following without loss of generality (see Appendix A.1). 1. there are \(d=2^{k}\) dimensions (one for each subset \(S\subseteq U\) of the points), and 2. the instance is specified by non-negative values \(\{z_{S}\}_{S\subseteq U}\) such that for each point \(\mathbf{p}\in U\), it lies at location \[\mathbf{p}_{S}:=z_{S}\mathbf{1}(\mathbf{p}\in S).\] Hence the distance of a point \(\mathbf{p}\) is \(\|\mathbf{p}\|_{1}=\sum_{S}z_{S}\mathbf{1}(\mathbf{p}\in S)=\sum_{S:\mathbf{p}\in S}z_{S}\). Given this structure and the focus on \(f(U)\), we need to analyze the following process: _The Last Point Process._ Start with some set \(V\subseteq U\), and empty sequence \(\mathcal{S}\leftarrow\langle\rangle\). At each step, pick a set \(S\not\in\mathcal{S}\) with probability \(\frac{z_{S}}{\sum_{T\not\in S}z_{T}}\) and add it to the end of \(\mathcal{S}\). If \(|V\setminus S|\neq\emptyset\), set \(V\gets V\setminus S\). When all remaining sets \(S\not\in\mathcal{S}\) have \(z_{S}=0\), stop and output the current \(V\), a singleton set we call \(V_{\mathrm{final}}\). An inductive argument shows that if we start with \(V=U\), the final set \(V_{\text{final}}\) has the same distribution as the set of points in the region containing the origin in the Random Thresholds algorithm. Specifically, the first cut is taken with probability \(\frac{z_{S}}{\sum_{T\not\in S}z_{T}}\) and the process inductively proceeds; the process is thus identical to the Closest Point Process, and so \(V_{\text{final}}\) contains a single point \(\widehat{\boldsymbol{p}}\in U\) when the process stops with \(f(U)=\mathbb{E}[\|\widehat{\boldsymbol{p}}\|_{1}]\). To analyze this, we change the perspective slightly further, and recast the process in terms of "exponential clocks". Define independent exponential random variables \(X_{S}\sim\exp(z_{S})\) for each set \(S\subseteq 2^{U}\) such that \(z_{S}>0\). Since exponential random variables \(\{Y_{i}\sim\exp(r_{i})\}\) have the memoryless property, and the property that \(\Pr[Y_{j}=\min_{i}\{Y_{i}\}]=\frac{r_{j}}{\sum_{i}r_{i}}\), we see the sets in the same order \(\mathcal{S}\) as in the last-center process above. Moreover, this order depends only on the set \(U\), and is independent of the starting set \(V\subseteq U\). Now consider the Last Point Process starting with different sets \(V\subseteq U\) (and not just the entire point set \(U\)): naturally, the identity of the final point \(\widehat{\boldsymbol{p}}\) changes. However, it turns out we can make the following claim. 
Define the event _point \(\boldsymbol{p}\in U\) is last in \(V\)_ if starting with the set \(V\) results in \(V_{\text{final}}=\{\boldsymbol{p}\}\). It turns out that being last in this process has a nice "monotone" property. (We defer the proof to Appendix A.2.) **Lemma 8** (Monotonicity).: _For any sets \(T,V\) such that \(T\subseteq V\), and any point \(\boldsymbol{p}\in V\setminus T\), we have_ \[\text{``}\boldsymbol{p}\text{ is last in }V\text{''}\Rightarrow\text{``} \boldsymbol{p}\text{ is last in }V\backslash T\text{''}.\] ### Bounding the Expected Cost By the definition of our process, we know that \[f(U)=\sum_{\boldsymbol{p}\in U}\|\boldsymbol{p}\|_{1}\cdot\Pr[\boldsymbol{p} \text{ is last in }U]\leq\gamma+\sum_{\boldsymbol{p}:\|\boldsymbol{p}\|_{1}>\gamma}\| \boldsymbol{p}\|_{1}\cdot\Pr[\boldsymbol{p}\text{ is last in }U] \tag{4}\] for any \(\gamma\). (We choose \(\gamma>1\), which ensures that \(\boldsymbol{p}\neq\boldsymbol{p}^{*}\).) We now bound (4) as follows. Observe that whenever \(\boldsymbol{p}\) is last in \(U\), the following is true. There must exist a cut \(T\) that removes the closest point \(\boldsymbol{p}^{*}\) before any cut removes \(\boldsymbol{p}\), i.e., \(\boldsymbol{p}^{*}\in T,\boldsymbol{p}\notin T\). This implies \(X_{T}\leq X_{S}\) for all sets \(S\) such that \(\boldsymbol{p}\in S,\boldsymbol{p}^{*}\notin S\), which can be written as \[X_{T}\leq X_{\boldsymbol{p}}\text{, where }X_{\boldsymbol{p}}:=\min_{S: \boldsymbol{p}\in S,\boldsymbol{p}^{*}\notin S}X_{S}\,.\] Second, by the Monotonicity Lemma 8, we have that \(\boldsymbol{p}\) is last in \(U\) implies that \(\boldsymbol{p}\) is last in \(U\backslash T\). Defining \(\mathcal{F}_{\boldsymbol{p}}:=\{T\mid\boldsymbol{p}^{*}\in T,\boldsymbol{p} \not\in T\}\) to be all those cuts that could remove \(\boldsymbol{p}^{*}\) before \(\boldsymbol{p}\), therefore yields the upper bound \[f(U) \leq\gamma+\sum_{\boldsymbol{p}:\|\boldsymbol{p}\|_{1}>\gamma}\| \boldsymbol{p}\|_{1}\cdot\Pr[\exists T\in\mathcal{F}_{\boldsymbol{p}}\text{ such that }X_{T}\leq X_{\boldsymbol{p}}\bigwedge\boldsymbol{p}\text{ is last in }U \backslash T]\] \[\leq\gamma+\sum_{\boldsymbol{p}:\|\boldsymbol{p}\|_{1}>\gamma}\| \boldsymbol{p}\|_{1}\sum_{T\in\mathcal{F}_{\boldsymbol{p}}}\Pr[X_{T}\leq X_{ \boldsymbol{p}}\bigwedge\boldsymbol{p}\text{ is last in }U\backslash T]\,.\] (union bound) We upper bound the contribution of a fixed point \(\boldsymbol{p}\) to the above expression. By the law of total probability, \(\Pr[X_{T}\leq X_{\boldsymbol{p}}\bigwedge\boldsymbol{p}\text{ is last in }U\backslash T]\) equals \[\int_{-\infty}^{\infty}\Pr[X_{T}\leq t\bigwedge\boldsymbol{p}\text{ is last in }U\backslash T\mid X_{p}=t]f_{X_{p}}(t)dt\,, \tag{5}\] where \(f_{X_{p}}(t)\) denotes the probability density function of \(X_{p}\). The event \(X_{T}\leq t\) is independent from the event "\(\mathbf{p}\) is last in \(U\backslash T\)" because, \(T\) does not cut any points in \(U\backslash T\) and hence the value of \(X_{T}\) is irrelevant to the process restricted to points in \(U\backslash T\). We also know that \(X_{T}\) and \(X_{\mathbf{p}}\) are independent. These observations can be used to rewrite the above expression as \[\int_{-\infty}^{\infty}\Pr[X_{T}\leq t]\cdot\Pr[\mathbf{p}\text{ is last in }U\backslash T\mid X_{p}=t]f_{X_{p}}(t)dt\,. 
\tag{6}\] As \(\Pr[X_{T}\leq t]\) is an increasing function of \(t\), the above expression is maximized if the probability mass of the event "\(\mathbf{p}\) is last in \(U\backslash T\)" is on large values of \(t\). Formally, if we select \(\mathbf{\tau}\) to be threshold so that \[\int_{-\infty}^{\infty}\Pr[\mathbf{p}\text{ is last in }U\backslash T\mid X_{p}=t ]f_{X_{p}}(t)dt=\int_{-\infty}^{\infty}\mathbf{1}[t\geq\mathbf{\tau}]f_{X_{p}}(t)dt= \int_{\mathbf{\tau}}^{\infty}f_{X_{p}}(t)dt \tag{7}\] then (6) is upper bounded by \[\int_{\mathbf{\tau}}^{\infty}\Pr[X_{T}\leq t]f_{X_{p}}(t)dt\,. \tag{8}\] To understand this expression, recall that \(X_{T}\) is an exponential random variable with rate \(z_{T}\). Further, the random variable \(X_{p}\) is the minimum of exponentials, and hence is itself exponentially distributed with rate \(\ell(\mathbf{p})=\sum_{S:\mathbf{p}\in S,\mathbf{p}^{*}\not\in S}z_{S}\). In other words, \(\Pr[X_{T}\leq t]=1-e^{-z_{T}\cdot t}\) and \(f_{X_{p}}(t)=\ell(\mathbf{p})e^{-\ell(\mathbf{p})t}\) for \(t\geq 0\). This gives us that the choice of \(\mathbf{\tau}\) that satisfies the identity (7) is \[\mathbf{\tau}=\frac{-\ln Q_{T}(\mathbf{p})}{\ell(\mathbf{p})},\text{ where }Q_{T}(\mathbf{p}) \text{ is the probability that }\mathbf{p}\text{ is last in }U\backslash T.\] The integral (8) can be upper-bounded by standard calculations: \[\int_{\mathbf{\tau}}^{\infty}\Pr[X_{T}\leq t]f_{X_{p}}(t)dt =\int_{\mathbf{\tau}}^{\infty}(1-e^{-z_{T}t})\cdot\ell(\mathbf{p})\cdot e ^{-\ell(\mathbf{p})t}dt\] \[=Q_{T}(\mathbf{p})-\frac{\ell(\mathbf{p})}{\ell(\mathbf{p})+z_{T}}e^{-(\ell( \mathbf{p})+z_{T})\cdot\mathbf{\tau}}\] \[\leq Q_{T}(\mathbf{p})\left(1-\frac{\ell(\mathbf{p})}{\ell(\mathbf{p})+z_{T}} \cdot\left(1+\frac{z_{T}\ln Q_{T}(\mathbf{p})}{\ell(\mathbf{p})}\right)\right)\] \[=Q_{T}(\mathbf{p})\cdot z_{T}\left(\frac{1-\ln Q_{T}(\mathbf{p})}{\ell( \mathbf{p})+z_{T}}\right).\] Substituting in this upper bound, we have \[f(U) \leq\gamma+\sum_{\mathbf{p}:\|\mathbf{p}\|_{1}>\gamma}\|\mathbf{p}\|_{1}\sum_{ T\in\mathcal{F}_{\mathbf{p}}}Q_{T}(\mathbf{p})\cdot z_{T}\left(\frac{1-\ln Q_{T}(\mathbf{p})}{ \ell(\mathbf{p})+z_{T}}\right)\] \[\leq\gamma+\sum_{\mathbf{p}:\|\mathbf{p}\|_{1}>\gamma}\frac{\|\mathbf{p}\|_{1 }}{\|\mathbf{p}\|_{1}-1}\sum_{T\in\mathcal{F}_{\mathbf{p}}}Q_{T}(\mathbf{p})\cdot z_{T} \left(1-\ln Q_{T}(\mathbf{p})\right),\] where we use that \(\ell(\mathbf{p})+z_{T}\geq\|\mathbf{p}\|_{1}-1\). Indeed \(\ell(\mathbf{p})=\sum_{S:\mathbf{p}\in S,\mathbf{p}^{*}\not\in S}z_{S}\geq\|\mathbf{p}\|_{1}- \|\mathbf{p}^{*}\|_{1}\geq\|\mathbf{p}\|_{1}-1\). Using the fact that \(\nicefrac{{x}}{{x-1}}\) is a decreasing function and then replacing the summation over all \(\mathbf{p}:\|\mathbf{p}\|_{1}>\gamma\) to all \(\mathbf{p}\neq\mathbf{p}^{*}\), and exchanging the summations gives \[\leq\gamma+\frac{\gamma}{\gamma-1}\sum_{T\ni\mathbf{p}^{*}}z_{T}\sum_{\mathbf{p}\in U \backslash T}Q_{T}(\mathbf{p})(1-\ln Q_{T}(\mathbf{p})).\] Observe that for any cut \(T\), we have \(\sum_{\mathbf{p}\in U\backslash T}Q_{T}(\mathbf{p})=1\), and the sum is over at most \(k-1\) points. As the entropy \(\sum_{\mathbf{p}\in U\backslash T}Q_{T}(\mathbf{p})(-\ln Q_{T}(\mathbf{p}))\) of \(|U\backslash T|\leq k-1\) outcomes is at most \(\ln(k-1)\), the inner sum is at most \(1+\ln(k-1)\). Finally, using that \(\sum_{T\ni\mathbf{p}^{*}}z_{T}=\|\mathbf{p}^{*}\|_{1}=1\), we get \[f(U)\leq\gamma+\frac{\gamma}{\gamma-1}(1+\ln(k-1))\leq\ln(k-1)+2\sqrt{1+\ln(k -1)}+2\] by optimizing over \(\gamma\). 
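For completeness, here is the short calculation behind the final step (a routine optimization; the shorthand \(c:=1+\ln(k-1)\) is ours). The bound reads \(f(U)\leq\gamma+\frac{\gamma}{\gamma-1}\,c\) for any \(\gamma>1\); its derivative in \(\gamma\) is \(1-\frac{c}{(\gamma-1)^{2}}\), which vanishes at \(\gamma=1+\sqrt{c}\). Substituting back gives

\[\gamma+\frac{\gamma}{\gamma-1}\,c\;=\;(1+\sqrt{c})+\frac{1+\sqrt{c}}{\sqrt{c}}\cdot c\;=\;1+2\sqrt{c}+c\;=\;\ln(k-1)+2\sqrt{1+\ln(k-1)}+2,\]

which is exactly the bound stated above.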
This proves Theorem 7, and gives us an asymptotically optimal bound on the price of explainability. In the next section we show that the bound is, in fact, tight up to lower-order terms. ## 3 Lower Bounds on the Price of Explainability In this section, we prove a tight lower bound on the price of explainability (up to lower order terms), and a lower bound on the approximability of explainable clustering. Both results are obtained via a reduction from the classic _hitting set problem_: given a set system \(([d],\mathcal{S})\), where \([d]=\{1,\ldots,d\}\) denotes the ground set and \(\mathcal{S}=\{S_{1},S_{2},\ldots,S_{k}\}\) is a family of \(k\) subsets of \([d]\), the task is to find the smallest subset \(H\subseteq[d]\) that hits every subset in \(\mathcal{S}\), i.e., \(H\cap S_{i}\neq\emptyset\) for all \(S_{i}\in\mathcal{S}\). We further say that a hitting set instance \(([d],\mathcal{S})\) is _\(s\)-uniform if all subsets of \(\mathcal{S}\) are of the same size \(s\)_. We now first present the reduction from \(s\)-uniform hitting set instances to explainable clustering, and we then analyze its implications. Reducing hitting set to explainable clustering.Given an \(s\)-uniform hitting set instance \(([d],\mathcal{S}=\{S_{1},S_{2},\ldots,S_{k}\})\), define the following data set \(\mathcal{X}\) in \(\{0,1\}^{d}\) and reference solution \(\mathcal{U}\): 1. The reference clustering has \(k+1\) centers \(\mathcal{U}:=\{\mathbf{\mu}_{0},\mathbf{\mu}_{1},\ldots,\mathbf{\mu}_{k}\}\), where \(\mathbf{\mu}_{0}\) is at the origin, and each other \(\mathbf{\mu}_{i}\in\{0,1\}^{d}\) is the characteristic vector of the set \(S_{i}\). 2. The data set \(\mathcal{X}\) consists of one point at each of the locations \(\{\mathbf{e}_{i}\}_{i\in[d]}\), and \(M=\text{poly}(d,k)\gg\max\{d,k\}\) "colocated" points at each of the \(k+1\) locations in \(\mathcal{U}\), giving \(|\mathcal{X}|=d+M\cdot(k+1)\). The cost of this reference clustering \(\mathcal{U}\) with \(k+1\) centers is at most \(d\), since all the \(d\) non-colocated points can be assigned to the center \(\mathbf{\mu}_{0}\). We proceed to analyze the cost of an optimal _explainable_ clustering with \(k+1\) centers. **Lemma 9**.: _Let \(h\) be the size of an optimal solution to the hitting set instance \(([d],\mathcal{S})\) and let OPT be the cost of an optimal \((k+1)\)-median explainable clustering of the data set \(\mathcal{X}\). Then_ \[d+h(s-2-o(1))\leq\text{OPT}\leq d+h(s-2)\,.\] _Moreover, the same bounds hold for the optimal \((k+1)\)-means explainable clustering._ Proof.: We present the proof for \((k+1)\)-median and then observe that all the distance calculations also hold for \((k+1)\)-means since all the points of \(\mathcal{X}\) have binary coordinates and, as we will show, the centers will be (arbitrarily close) to such coordinates as well. Note that \(\|\mathbf{p}-\mathbf{q}\|_{1}=\|\mathbf{p}-\mathbf{q}\|_{2}^{2}\) if \(\mathbf{p},\mathbf{q}\in\{0,1\}^{d}\). We now proceed with the analysis for \((k+1)\)-median. The \(M\) points colocated with each of the reference centers \(\mathbf{\mu}_{i}\) ensure that the best explainable clustering separates each of the centers \(\mathbf{\mu}_{i}\). Separating a center \(\mathbf{\mu}_{i}\) from \(\mathbf{\mu}_{0}\) using a threshold cut means choosing some dimension \(j\in S_{i}\) and a value \(\theta\in(0,1)\), which in turn also separates the data point \(\mathbf{e}_{j}\) from \(\mathbf{\mu}_{0}\). 
Since \(M\gg k\), the center for the final cluster containing \(\mathbf{e}_{j}\) is located at some location very close to the reference center in it, and hence this data point now incurs cost \(s-(1+o(1))\) instead of \(1\). Here we used the fact that each set has size \(s\), and the term \(o(1)\) accounts for the potential small difference in the locations of the centers in the explainable clustering compared to those in the reference clustering. The above observations imply that

* the collection of threshold cuts that separate \(\boldsymbol{\mu}_{0}\) from other centers must form a hitting set for the set system \(([d],\mathcal{S})\); and
* if this hitting set has size \(h^{\prime}\), the cost of the explainable clustering is at least \(h^{\prime}(s-(1+o(1)))+(d-h^{\prime})=d+h^{\prime}(s-2-o(1))\).

We thus have \(\text{OPT}\geq d+h(s-2-o(1))\) since \(h\) is the smallest size of a hitting set. For the upper bound \(\text{OPT}\leq d+h(s-2)\), let \(H=\{i_{1},i_{2},\ldots,i_{h}\}\subseteq[d]\) be an optimal hitting set of size \(h\). Starting with the reference clustering \(\mathcal{U}\), build a threshold tree by adding the threshold cuts along dimensions \(i_{1},i_{2},\ldots,i_{h}\) with thresholds \(1/2\). Specifically, the cut along dimension \(i_{1}\) is at the root of the tree and the remaining cuts are recursively added to the subinstance that contains the reference center \(\boldsymbol{\mu}_{0}\). After adding these cuts we have separated \(\boldsymbol{\mu}_{0}\) from all other centers, since \(H\) is a hitting set. Furthermore, the only points in \(\mathcal{X}\) that are separated from their closest center in \(\mathcal{U}\) are \(\boldsymbol{e}_{i_{1}},\boldsymbol{e}_{i_{2}},\ldots,\boldsymbol{e}_{i_{h}}\). Note that the tree may still contain centers \(\boldsymbol{\mu}_{i},\boldsymbol{\mu}_{j}\) with \(i,j\geq 1\) that are not yet separated. But they can be separated without incurring any additional cost since, in their subinstance, all points of \(\mathcal{X}\) are colocated with the centers (or have already been separated from their closest center \(\boldsymbol{\mu}_{0}\)). Hence, we can build a threshold tree that has one leaf per center in \(\mathcal{U}\) and the only points of \(\mathcal{X}\) that are separated from their closest center are \(\boldsymbol{e}_{i_{1}},\boldsymbol{e}_{i_{2}},\ldots,\boldsymbol{e}_{i_{h}}\). Each of these separated points \(\boldsymbol{e}_{i_{j}}\) has cost at most \(s-1\) instead of \(1\), since the hitting set instance was \(s\)-uniform and the final center \(\boldsymbol{\mu}_{q}\) that \(\boldsymbol{e}_{i_{j}}\) is assigned to corresponds to a set \(S_{q}\) that contains \(i_{j}\), and so \(\|\boldsymbol{e}_{i_{j}}-\boldsymbol{\mu}_{q}\|_{1}=s-1\). Hence, the total cost of the clustering is at most \(h\cdot(s-1)+d-h=d+h(s-2)\), which completes the proof of the lemma.

Having described our reduction, we now proceed to its implications.

**Price of explainability for \(k\)-median.** As aforementioned, the cost of the reference clustering \(\mathcal{U}\) is at most \(d\). Furthermore, Lemma 9 says that the optimal \((k+1)\)-median explainable clustering costs at least \(h(s-2-o(1))+d\), where \(h\) is the size of an optimal hitting set of \(([d],\mathcal{S})\). It thus suffices to construct a set system \(\mathcal{S}\) having large \(h(s-2-o(1))\approx hs\).
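To make the reduction concrete, the following Python sketch (ours, for illustration only; the function name and the default choice of \(M\) are arbitrary) builds the data set \(\mathcal{X}\) and the \(k+1\) reference centers from a given set system.

```python
import numpy as np

def hitting_set_to_clustering(d, sets, M=1000):
    """Data set X and reference centers for the reduction above.
    `sets` is a list of k subsets of {0, ..., d-1} (the hitting set instance);
    `M` stands in for the poly(d, k) number of colocated points per center."""
    centers = [np.zeros(d)]                       # mu_0 sits at the origin
    for S in sets:                                # mu_i = characteristic vector of S_i
        mu = np.zeros(d)
        mu[list(S)] = 1.0
        centers.append(mu)
    X = [np.eye(d)[j] for j in range(d)]          # one data point at each e_j
    for mu in centers:                            # M colocated data points per center
        X.extend(mu.copy() for _ in range(M))
    return np.array(X), np.array(centers)
```

The reference clustering assigns each \(\mathbf{e}_{j}\) to \(\boldsymbol{\mu}_{0}\), so its cost is at most \(d\); the interesting inputs are set systems for which \(hs\) is large, as discussed next.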
For example, letting \(d=|\mathcal{S}|=k\) and defining \(\mathcal{S}\) based on the Hadamard code would give us \(s=k/2\) and a hitting set of size \(\log_{2}k\), and hence a lower bound of \(\approx\frac{1}{2}\log_{2}k\). A better guarantee follows using a probabilistic construction (selecting sets uniformly at random), whose proof we defer to the appendix.

**Lemma 10** (Hitting Set Lemma).: _For large enough \(k\), there exist set systems \(([k],\mathcal{S})\) with \(k\) sets of size \(s\) each, such that the minimum hitting set satisfies \(h(s-2-o(1))/k\geq\ln k-O(\ln\ln k)\)._

Combining the above lemma with our reduction shows that the price of explainability is at least \((1-o(1))\ln k\), giving the proof of Theorem 2.

**Theorem 2** (Lower Bound for \(k\)-medians).: _There exist instances of \(k\)-medians for which any explainable clustering has cost at least \((1-o(1))\ln k\) times the cost of the optimal \(k\)-medians clustering._

**Hardness of approximation.** Our reduction from the hitting set problem to explainable clustering immediately leads to a hardness result as well. Feige, in his landmark paper [10], proved that it is hard to distinguish whether an \(s\)-uniform hitting set instance \(([d],\mathcal{S})\) (see Footnote 2)

* _(yes case:)_ has a hitting set of \(d/s\) elements; or
* _(no case:)_ any hitting set has size at least \((1-o(1))\ln(k)\cdot d/s\), where \(k=|\mathcal{S}|\).

Here, "hard" means that there is no polynomial-time algorithm that can distinguish between these two cases unless \(P=NP\); the result was shown under a stronger assumption in Feige's original paper [10] and subsequently improved to hold under the weakest possible assumption \(P\neq NP\) [11, 12].

Footnote 2: We remark that the result in [10] is stated in the terminology of set cover. The instances constructed there have a ground set of size \(n\) and a family of \(m\) subsets. Furthermore, they can be assumed to be regular: each element is contained in \(s\) subsets and each subset is of size \(\ell\). Now in the yes case, there is a set cover so that each element is covered by exactly one set. By the regularity, this implies that the set cover has size \(n/\ell=m/s\) in the yes case. Here we used that \(n\cdot s=m\cdot\ell\). In the no case, however, any set cover is at least a factor \((1-o(1))\ln n\) larger. In the terminology of hitting set, this is a hitting set instance with \(d=m\) elements and a family \(\mathcal{S}\) of \(k=n\) many sets, each of size \(s\), with the stated yes case and no case.

Our reduction runs in polynomial time, so the above hardness together with Lemma 9 implies the following. Assuming \(P\neq NP\), there is no polynomial-time algorithm that, given a data set \(\mathcal{X}\subseteq\mathbb{R}^{d}\), distinguishes whether

* _(yes case:)_ there is an explainable clustering with \(k+1\) clusters of cost at most \(2d\); or
* _(no case:)_ any such clustering has cost at least \((1-o(1))\ln(k)d\).

As any approximation algorithm with better guarantee than \((\nicefrac{{1}}{{2}}-o(1))\ln(k)\) would allow us to distinguish between the two cases, we have the following hardness of approximation result for explainable clustering.

**Theorem 4** (Approximability).: _The explainable \(k\)-medians and \(k\)-means problems are hard to approximate better than \((\nicefrac{{1}}{{2}}-o(1))\ln k\), unless P=NP._

The above hardness result settles the approximability of explainable \(k\)-medians up to small constants: it is the same as its price of explainability! For \(k\)-means, the situation is different.
Our hardness of approximation result is far from the lower bound \(\Omega(k)\) on its price of explainability. We conjecture that there is no such hardness result matching \(\Omega(k)\) and, in contrast to \(k\)-medians, that there are significantly better approximation algorithms for explainable \(k\)-means than its price of explainability. ## 4 Explainable \(k\)-means clustering We now prove our improved bound on the price of explainability of the \(k\)-means problem, which improves on the previous bound of \(O(k\ln k)\). Our main result is the following: **Theorem 11**.: _Given a data set \(\mathcal{X}\) and a base clustering with centers \(\mathcal{U}\) and map \(\pi\), we can output a random threshold tree \(T\) separating \(\mathcal{U}\) such that_ \[\mathbb{E}[\mathrm{cost}_{2}(T)]\leq O(k\ln\ln k)\cdot\mathrm{cost}_{2}(\pi, \mathcal{U}).\] At a high level, the approach is similar to that for \(k\)-medians: we give an algorithm to separate a given set of centers, but since we are dealing with squared Euclidean distances, we choose cuts from a non-uniform distribution over dimensions and coordinate values. However, since a single cut can increase the cost by a factor of \(\Omega(k)\) we have to be careful not to lose another factor of \(\Omega(\ln k)\) due to the recursion. Here we use a win-win analysis: we define a quantity called the _stretch_ of a pair of points and argue that the loss due to a single cut is just the stretch: moreover, we show that if stretch is large, the recursive problems are relatively balanced and the loss in the recursion is a geometric sum, adding up to \(\approx O(k)\). On the other hand, if the stretch is low, we lose less-than-the-worst-case in each round (although we now need to take a collection of "bulk" cuts). ### The Closest Point Process Again Recall that we proved the performance of the Random Thresholds algorithm for \(k\)-medians by reducing to the perspective of a single data point and analyzing the expected increase in its cost. We can also define the closest point process for any \(\ell_{q}\) norm and for any algorithm \(\mathcal{A}\) separating point sets \(U\) that is invariant under translations and scaling, as follows: **Definition 12** (\(\ell_{q}\)-Norm Closest Point Process).: Given a set of \(k\) points \(U\subseteq\mathbb{R}^{d}\), let \(\mathbf{p}^{*}:=\arg\min_{\mathbf{p}\in U}\|\mathbf{p}\|_{q}\) be the point in \(U\) closest to the origin according to the \(\ell_{q}\) metric. Assume \(\|\mathbf{p}^{*}\|_{q}=1\). Run the algorithm \(\mathcal{A}\) to create a random threshold tree \(T\) that separates this point set \(U\). Consider leaf node \(u\in T\) whose corresponding region \(B_{u}\subseteq\mathbb{R}^{d}\) contains the origin, and let \(\widehat{\mathbf{p}}\) be the unique point of \(U\) in this region \(B_{u}\). Define \[f_{q,\mathcal{A}}(U):=\mathbb{E}[\;\|\widehat{\mathbf{p}}\|_{q}^{q}\;]. \tag{9}\] Finally, define \(\alpha_{q,\mathcal{A}}(k):=\max_{U:|U|=k}f_{q,\mathcal{A}}(U)\). A proof identical to that of Lemma 6 shows that the price of explainability for \(\ell_{q}\)-norm clustering is at most \(\alpha_{q,\mathcal{A}}(k)\). In the rest of this section, we give some terminology and then an algorithm \(\mathcal{A}\) which separates the input point set \(U\subseteq\mathbb{R}^{d}\); we then bound the resulting value of \(f_{q,\mathcal{A}}(U)\). ### Terminology We use similar terminology as in [11]. 
Given a set \(U\subseteq\mathbb{R}^{d}\) of points, and a dimension \(i\in[d]\), let \(\ell_{i}:=\min_{v\in U}v_{i}\) and \(u_{i}:=\max_{v\in U}v_{i}\) be the leftmost and rightmost coordinates of points. Given two values \(x,y\in\mathbb{R}\), let \(\mathcal{I}_{i}(x,y)\) be the set of consecutive intervals along the \(i\)-th dimension delimited by the coordinates \(x\) and \(y\) themselves and the projections of points in \(U\) that lie between \(x\) and \(y\). For example, consider the 2-dimensional point set \(U\) shown in Figure 2 (the same example was given in [11]). On the horizontal axis, two coordinate values \(x\) and \(y\) are marked along with the projections of the points: \(\mathcal{I}_{1}(x,y)\) consists of the three consecutive intervals \([x,v_{1}^{4}],[v_{1}^{4},v_{1}^{2}]\), and \([v_{1}^{2},y]\). By the definition of \(\mathcal{I}_{i}(x,y)\), we have \(|x-y|=\sum_{[a,b]\in\mathcal{I}_{i}(x,y)}|b-a|\). Let \[\mathcal{I}_{\text{all}}:=\cup_{i\in[d]}\left\{(i,[a,b])\;|\;[a,b]\in\mathcal{ I}_{i}(\ell_{i},u_{i})\right\}\] denote the collection of all dimension-interval pairs which are delimited by the projections of the points onto the respective dimensions; for brevity, define \(\mathcal{I}_{i}:=\mathcal{I}_{i}(\ell_{i},u_{i})\). Define \[L_{2}:=\sum_{(i,[a,b])\in\mathcal{I}_{\text{all}}}|b-a|^{2}.\] Figure 2: Intervals defined by projections. A key definition is that of the _pseudo-distance_: for points \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), let \[\mathcal{I}(\mathbf{x},\mathbf{y})=\bigcup_{i\in[d]}\{(i,[a,b])\mid[a,b]\in\mathcal{I}_ {i}(x_{i},y_{i})\}.\] We then define the pseudo-distance between \(\mathbf{x}\) and \(\mathbf{y}\) as \[d_{2}(\mathbf{x},\mathbf{y})=\sum_{(i,[a,b])\in\mathcal{I}(x,y)}|b-a|^{2}.\] It follows that \(\|\mathbf{x}-\mathbf{y}\|_{2}^{2}\geq d_{2}(\mathbf{x},\mathbf{y})\geq\frac{1}{|U\cup\{\mathbf{x},\mathbf{y}\}|-1}\cdot\|\mathbf{x}-\mathbf{y}\|_{2}^{2}\). We define a distribution \(D_{2}\) as follows: first select a dimension-interval pair \((i,[a,b])\in\mathcal{I}_{\mathrm{all}}\) with probability \(|b-a|^{2}/L_{2}\), and then pick \(\theta\in[a,b]\) randomly such that the p.d.f. is \[P_{a,b}(\theta):=\frac{4}{(b-a)^{2}}\min(\theta-a,b-\theta)\quad=\quad\frac{4 }{(b-a)^{2}}\min_{\mathbf{v}\in U}\{|\theta-v_{i}|\}. \tag{10}\] Often we refer to the above concepts not for the entire point set \(U\) but for some subset \(V\subseteq U\); in those cases we refer to the partition \(\mathcal{I}_{\mathrm{all}}(V)\), the sum \(L_{2}(V)\), or the distribution \(D_{2}(V)\), etc. Finally, for subset \(V\subseteq U\) of points we define: 1. Let \(\Delta(V):=\max_{\mathbf{x},\mathbf{y}\in V}\|\mathbf{x}-\mathbf{y}\|^{2}\) be the squared diameter of point set \(V\). 2. Call a pair of points \(\mathbf{x},\mathbf{y}\in V\)_far_ if \(\|\mathbf{x}-\mathbf{y}\|_{2}^{2}\geq\Delta(V)/2\), and _close_ if \(\|\mathbf{x}-\mathbf{y}\|_{2}^{2}<\Delta(V)/k^{4}\). 3. Define the _stretch_ of a pair \(\mathbf{x},\mathbf{y}\in V\) to be \(s_{\mathbf{x}\mathbf{y}}(V):=\|\mathbf{x}-\mathbf{y}\|_{2}^{2}/d_{2}(\mathbf{x},\mathbf{y})\). Define the _stretch of the set_\(s(V)\) to be the maximum stretch of any _far pair_ in \(V\). ### The Algorithm The process to construct the threshold tree \(T\) for \(k\)-means is slightly more complex than for \(k\)-medians: as before we start off with the root representing the entire point set \(\mathbb{R}^{d}\). 
Now, given a node \(v\) representing some box \(B_{v}\) (giving us a subset \(U_{v}:=U\cap B_{v}\)), define the distribution \(D_{2}(v):=D_{2}(U_{v})\). Now consider the stretch \(s(v):=s(U_{v})\) of the set of points. 1. _(Solo Cuts)_ If \(s(v)\geq\frac{|U_{v}|}{\ln^{2}|U_{v}|}\), let \(\mathbf{p}^{*},\mathbf{q}^{*}\) be a pair of far points in \(U_{v}\) of stretch \(s(v)\), and pick a threshold cut \((i,\theta)\sim D_{2}(v)\), conditioned on separating the pair \(\mathbf{p}^{*},\mathbf{q}^{*}\). This partitions the box \(B_{v}\) into two boxes, and recurse on both. 2. _(Bulk Cuts)_ Else if \(s(v)\) is smaller than the quantity above. In this case, repeatedly sample cuts from \(D_{2}(v)\) conditioned on not separating any close pairs of points in \(U_{v}\), until all pairs of far points in \(U_{v}\) are separated. Apply all these cuts in sequence, partitioning \(B_{v}\) into potentially multiple pieces; recurse on each of them. The process stops when each leaf of \(T\) contains a single point from \(U\). For the analysis below, we consider a _compressed threshold tree_\(T^{\prime}\) which is a tree with branching at least two: if we perform a solo cut at node \(v\), we call it a _solo node_, and it has two children corresponding to the two parts obtained by this cut. If we perform bulk cuts at node \(v\), it has potentially multiple children (one for each smaller box obtained by applying these cuts), and we call it a _bulk node_. In the following, let \(S(T^{\prime})\) and \(B(T^{\prime})\) denote the solo and bulk nodes in \(T^{\prime}\). For any node \(v\in T^{\prime}\), let \(L_{2}(v):=L_{2}(U_{v})\), and similarly for other parameters defined above. ### The Expected Cost Increase We will bound the cost for the solo cuts and bulk cuts separately. First we give some preliminary lemmas, then we bound the cost due to bulk cuts, and finally the cost for solo cuts. **Lemma 13** (Separation Probability).: _For any subset of points \(V\subseteq U\) and a point \(\mathbf{p}\in V\),_ \[\Pr_{(i,\theta)\sim D_{2}(V)}[(i,\theta)\text{ separates the origin from }\mathbf{p}]\leq\frac{2\|\mathbf{p}\|_{2}^{2}}{L_{2}(V)}\,.\] Proof.: The probability that a random cut sampled from \(D_{2}(V)\) separates the origin \(\mathbf{0}\) and \(\mathbf{p}\) is: \[\sum_{i\in[d]}\sum_{[a,b]\in\mathcal{I}_{i}}\frac{(a-b)^{2}}{L_{2 }(V)}\int_{a}^{b}P_{a,b}(\theta)\cdot\mathbbm{1}[\text{$\theta$ is between $0$ and $p_{i}$}]\,d\theta\] \[=\frac{4}{L_{2}(V)}\sum_{i\in[d]}\sum_{[a,b]\in\mathcal{I}_{i}} \int_{a}^{b}\min(\theta-a,b-\theta)\cdot\mathbbm{1}[\text{$\theta$ is between $0$ and $p_{i}$}]\,d\theta \tag{11}\] \[\leq\frac{4}{L_{2}(V)}\sum_{i\in[d]}\sum_{[a,b]\in\mathcal{I}_{i} }\int_{a}^{b}|\theta-p_{i}|\cdot\mathbbm{1}[\text{$\theta$ is between $0$ and $p_{i}$}]d\theta\] (12) \[=\frac{4}{L_{2}(V)}\sum_{i\in[d]}\int_{-\infty}^{\infty}|\theta-p _{i}|\cdot\mathbbm{1}[\text{$\theta$ is between $0$ and $p_{i}$}]\,d\theta=\frac{4}{L_{2}(V)}\sum_{i\in[d]}(p_{i})^{2}/2= \frac{2\|\mathbf{p}\|_{2}^{2}}{L_{2}(V)}.\] Equality (11) and inequality (12) use the expressions for \(P_{a,b}(\theta)\) given in (10). Given a set \(V\), suppose we sample cuts from distribution \(D_{2}(V)\) with an added rejection step if the cut separates some pair of points in \(V\) whose distance is at most \(\Delta(V)/k^{4}\). Formally, let \(R(V)\subseteq\mathcal{I}_{\text{all}}(V)\) be the subset of intervals which are contained in projections of close centers in \(V\) onto the coordinate axis. 
Let \[L_{2}^{\prime}(V):=\sum_{(i,[a,b])\in\mathcal{I}_{\text{all}}(V)\backslash R( V)}|b-a|^{2}.\] The distribution \(D_{2}^{\prime}(V)\) picks an interval \([a,b]\) in \(\mathcal{I}_{\text{all}}^{\prime}(V):=\mathcal{I}_{\text{all}}(V)\backslash R(V)\) with probability \(\frac{(b-a)^{2}}{L_{2}^{\prime}(V)}\) and then a cut is chosen from this interval with the same distribution as \(P_{a,b}(\theta)\). **Proposition 14**.: _For any subset of points \(V\), we have \(L_{2}^{\prime}(V)\geq L_{2}(V)/2\)._ Proof.: The sum of squared length of intervals in \(R(V)\) is at most the total sum of squared distance between all pairs of close centers, which is at most \[\binom{|V|}{2}\cdot\frac{\Delta(V)}{k^{4}}\leq\frac{\Delta(V)}{2k^{2}}\leq \frac{L_{2}(V)}{2k}.\] This implies that \(L_{2}^{\prime}(V)\geq L_{2}(V)(1-1/2k)\geq\frac{1}{2}\cdot L_{2}(V)\). **Lemma 15** (Expected Number of Cuts).: _For any node \(V\), the expected number of cuts from \(D_{2}^{\prime}(V)\) until all far pairs in \(V\) are separated is at most \(24\ln|V|\cdot s(V)\cdot\frac{L_{2}(V)}{\Delta(V)}\)._ Proof.: Consider a collection of \(M:=3\ln|V|\cdot\frac{4s(V)\cdot L_{2}(V)}{\Delta(V)}\) cuts sampled from \(D_{2}^{\prime}(V)\), and consider two "far" points \(\mathbf{p},\mathbf{q}\), i.e., such that \(\|\mathbf{p}-\mathbf{q}\|_{2}^{2}>\nicefrac{{\Delta(V)}}{{2}}\). Then, the probability that any one cut separates the two is at least \[\Pr_{(i,\theta)\sim D_{2}^{\prime}(V)}[(i,\theta)\text{ separates }\mathbf{p},\mathbf{q}] \geq\frac{d_{2}(\mathbf{p},\mathbf{q})-\binom{k}{2}\frac{\Delta(V)}{k^{4}}}{L_{2}(V)}\geq \frac{\|\mathbf{p}-\mathbf{q}\|_{2}^{2}}{s(V)\cdot L_{2}(V)}-\frac{\Delta(V)}{2k^{2} \cdot L_{2}(V)}\] \[\geq\frac{\Delta(V)}{2s(V)\cdot L_{2}(V)}-\frac{\Delta(V)}{2k^{2 }\cdot L_{2}(V)}\] \[\geq\frac{\Delta(V)}{4s(V)\cdot L_{2}(V)}\,.\] The last inequality in the above equation follows using \(s(V)\leq k\) and \(k\geq 2\). Hence the probability that the \(M\) cuts do not separate some pair at distance at least \(\Delta(V)/2\) can be upper-bound using a union bound by \[\binom{|V|}{2}\cdot\left(1-\frac{\Delta(V)}{4\cdot s(V)\cdot L_{2}(V)}\right) ^{M}\leq|V|^{2}\left(\frac{1}{e}\right)^{(3\ln|V|)}=1/|V|\,.\] Hence, these \(M\) cuts separate all pairs that have squared distance at least \(\Delta(V)/2\) with probability at least \(1-1/|V|\geq\nicefrac{{1}}{{2}}\). In turn, the expected number of cuts is at most \(2M\). We can now start to bound the cost incurred due to bulk cuts. **Lemma 16** (Logarithmic Number of Relevant Levels).: _For any cut \((i,\theta)\), we have_ \[\sum_{v\in B(T^{\prime})}\mathds{1}_{[\mathbf{0}\in B_{v}]}\cdot\mathds{1}_{[(i, \theta)\in\operatorname{supp}(D_{2}^{\prime}(v))]}\leq 4\ln k.\] Proof.: The bulk nodes in the compressed tree \(T^{\prime}\) that correspond to the part containing the origin \(\mathbf{0}\) lie on a root-leaf path; call these \(v_{1},v_{2},\dots,v_{\ell}\), with \(v_{1}\) closest to the root. Our algorithm ensures that \(\Delta(v_{j})\leq\Delta(v_{j-1})/2\). Consider the lowest integer \(j\) such that \((i,\theta)\) belongs to the support of \(D_{2}^{\prime}(v_{j})\), and let \(\mathbf{p},\mathbf{q}\in U\cap B_{v_{j}}\) be the closest pair of points in \(B_{v_{j}}\) separated by \((i,\theta)\). The definition of the probability distribution \(D_{2}^{\prime}(v_{j})\) ensures that \(\|\mathbf{p}-\mathbf{q}\|_{2}^{2}\geq\Delta(v_{j})/k^{4}\). 
For \(j^{\prime}=j+4\ln k+1\) we have that \(\Delta(v_{j^{\prime}})<\Delta(v_{j})/k^{4}\), and so there are no pairs of points separated by \((i,\theta)\)--implying that this cut will no longer be in the support of \(D_{2}(v_{j^{\prime\prime}})\) for \(j^{\prime\prime}\geq j^{\prime}\). **Lemma 17** (Cost for Bulk Cuts).: _The expected cost increase due to bulk cuts is at most \(O(k)\cdot\|\mathbf{p}^{*}\|_{2}^{2}\)._ Proof.: Consider a bulk node \(v\) in the decision tree created by the algorithm: we generate a random number of bulk cuts \(\mathcal{K}_{v}\) at this node, and each of these can cause an increase in cost. Let \(Y_{t}\) be the following upper bound on the increase in cost due to the \(t^{\text{th}}\) such cut \((i,\theta)\): \[Y_{t}:=\Delta(v)\cdot\mathds{1}_{[(i,\theta)\text{\leavevmode\nobreak\ \text{\rm{seps}} }\ \mathbf{0},\mathbf{p}^{*}]}\cdot\mathds{1}_{[\{\mathbf{0},\mathbf{p}^{*}\}\subseteq B_{v}]}.\] Moreover, let \(N\) be the number of such cuts, then the total expected cost is \(\mathbb{E}[\sum_{t=1}^{N}Y_{t}]\). Since the \(Y_{t}\) variables are independent and \(N\) is a stopping time, we can use Wald's equation to infer that the total expected cost due to these cuts is \(\mathbb{E}[N]\cdot\mathbb{E}[Y_{t}]\). Taking expectations of the expression for \(Y_{t}\) above (with respect to the distribution \(D_{2}^{\prime}(v)\)), and using Lemma 15 to bound \(\mathbb{E}[N]\), this is at most \[\left(O(s(v)\ln|U_{v}|)\cdot\frac{L_{2}(v)}{\Delta(v)}\right)\cdot\Delta(v)\cdot \sum_{(i,[a,b])\in\mathcal{I}_{\text{all}}^{\prime}(v)}\frac{(b-a)^{2}}{L_{2} ^{\prime}(v)}\int_{a}^{b}P_{a,b}(\theta)\cdot\mathds{1}_{[(i,\theta)\text{ \leavevmode\nobreak\ \text{\rm{seps}}}\ \mathbf{0},\mathbf{p}^{*}]}\cdot\mathds{1}_{[\{\mathbf{0},\mathbf{p}^{*}\}\subseteq B_{v}]}\,d\theta.\] Using Proposition 14, we know that \(L^{\prime}_{2}(v)\geq L_{2}(v)/2\), so the above expression is at most \[O(s(v)\ln|U_{v}|)\cdot\sum_{(i,[a,b])\in\mathcal{I}_{\text{all}}(v)}(b-a)^{2} \int_{a}^{b}P_{a,b}(\theta)\cdot\mathbbm{1}_{[(i,\theta)\text{\leavevmode \nobreak\ \text{\rm{seps}}}\leavevmode\nobreak\ \mathbf{0},\boldsymbol{p}^{*}]}\cdot \mathbbm{1}_{[(\boldsymbol{0},\boldsymbol{p}^{*}\}\subseteq B_{v}]\cdot \mathbbm{1}_{[(i,\theta)\in\text{\rm{supp}}(D^{\prime}_{2}(v))]}\,d\theta.\] Next, we observe that for any dimension \(i\), we have \[(b-a)^{2}\cdot P_{a,b}(\theta)\cdot\mathbbm{1}_{[\boldsymbol{p}^{*}\in B_{v}] }\leq 4|\theta-\boldsymbol{p}_{i}^{*}|.\] Moreover, for each bulk node we have \(s(v)\ln|U_{v}|\leq|U_{v}|/\ln|U_{v}|\). This in turn is at most \(k/\ln k\), since the function \(\frac{x}{\ln x}\) is monotone and \(|U_{v}|\leq k\). 
Substituting both these facts, we get \[O(\nicefrac{{k}}{{\ln k}})\cdot\sum_{i}\int_{-\infty}^{\infty}|\theta- \boldsymbol{p}_{i}^{*}|\cdot\mathbbm{1}_{[(i,\theta)\text{\leavevmode \nobreak\ \text{\rm{seps}}}\leavevmode\nobreak\ \mathbf{0},\boldsymbol{p}^{*}]}\cdot \mathbbm{1}_{[\boldsymbol{0}\in B_{v}]}\cdot\mathbbm{1}_{[(i,\theta)\in\text{ \rm{supp}}(D^{\prime}_{2}(v))]}\,d\theta.\] Next, we use Lemma 16 to get: \[\sum_{v\in B(T^{\prime})}\mathbbm{1}_{[\boldsymbol{0}\in B_{v}]}\cdot \mathbbm{1}_{[(i,\theta)\in\text{\rm{supp}}(D^{\prime}_{2}(v))]}\leq O(\ln k).\] Now summing over all \(v\), we get \[O(k)\cdot\sum_{i}\int_{-\infty}^{\infty}|\theta-\boldsymbol{p}_{i}^{*}|\cdot \mathbbm{1}_{[(i,\theta)\text{\leavevmode\nobreak\ \text{\rm{seps}}}\leavevmode\nobreak\ \mathbf{0},\boldsymbol{p}^{*}]}\,d\theta=O(k)\cdot\|\boldsymbol{p}^{*}\|_{2}^{2}.\] This completes the proof. Finally, we turn our attention to solo cuts. For solo node \(v\) let \(\boldsymbol{p}_{v},\boldsymbol{q}_{v}\) be the two far nodes such that their stretch is at least \(\frac{|U_{v}|}{(\ln|U_{v}|)^{2}}\), and define the distribution \(D^{\prime\prime}_{2}(v)\) to be the distribution \(D_{2}(v)\) conditioned on separating this far pair. **Lemma 18** (Ratio for Solo Cuts).: _For any solo node \(v\in T^{\prime}\),_ \[\frac{\mathbb{E}_{(i,\theta)\sim D^{\prime\prime}_{2}(v)}[\text{\leavevmode \nobreak\ cost increase at node $v$ }]}{\mathbb{E}_{(i,\theta)\sim D^{\prime\prime}_{2}(v)}[\text{\leavevmode \nobreak\ size of smaller child of node $v$ }]}\leq 32\|\boldsymbol{p}^{*}\|_{2}^{2}\cdot\left(1+\ln\left(\frac{|U_{v}|}{s(v )}\right)\right).\] Proof.: Lemma 13 implies that \[\mathbb{E}_{(i,\theta)\sim D_{2}(v)}[\text{\leavevmode\nobreak\ cost increase }]\leq\Delta(v)\cdot\frac{2\|\boldsymbol{p}^{*}\|_{2}^{2}}{L_{2}(v)}.\] Moreover, the probability of separating \(\boldsymbol{p}_{v},\boldsymbol{q}_{v}\) is at least \[\frac{d_{2}(\boldsymbol{p}_{v},\boldsymbol{q}_{v}\mid U_{v})}{L_{2}(v)}\geq \frac{\|\boldsymbol{p}-\boldsymbol{q}\|_{2}^{2}}{s(v)\cdot L_{2}(v)}\geq\frac {\Delta(v)}{2\cdot s(v)\cdot L_{2}(v)}\,.\] Since the cost increase is non-negative, we get that \[\mathbb{E}_{(i,\theta)\sim D^{\prime\prime}_{2}(v)}[\text{\leavevmode \nobreak\ cost increase }]\leq\Delta(v)\cdot\frac{2\|p^{*}\|_{2}^{2}}{L_{2}(v)}\cdot\frac{2\cdot s(v) \cdot L_{2}(v)}{\Delta(v)}\leq 4s(v)\cdot\|\boldsymbol{p}^{*}\|_{2}^{2}. \tag{13}\] This bounds the numerator of the desired quantity; for the denominator, we prove the following claim in Appendix C.1: _Claim 19_.: If for a cut \((i,\theta)\), we define \(H^{+}:=\{x\mid x_{i}\geq\theta\}\) and \(H^{-}=\mathbb{R}^{d}\setminus H^{+}\), then \[\mathbb{E}_{(i,\theta)\sim D_{2}^{\prime\prime}(v)}\bigg{[}\min \left(|U_{v}\cap H^{+}|,|U_{v}\cap H^{-}|\right)\bigg{]}\geq\frac{s(v)}{8(1+ \ln(\nicefrac{{|U_{v}|}}{{s(v)}}))}\.\] Using Claim 19 with (13) finishes the proof. **Lemma 20** (Cost for Solo Cuts).: _For any internal solo node \(v\in T^{\prime}\), the expected cost increase due to solo cuts made in the subtree \(T^{\prime}_{v}\) is at most_ \[32|U_{v}|(1+2\ln\ln|U_{v}|)\cdot\mathbbm{1}_{[\mathbf{0}\in B_{ v}]}\cdot\|\mathbf{p}^{*}\|_{2}^{2}.\] Proof.: The proof is by induction. The base cases are when node \(v\) has \(|U_{v}|\leq 3\). For \(v\) to be an internal node, \(|U_{v}|\in\{2,3\}\). If it has two nodes, then \(s(v)=1<\frac{2}{\ln^{2}2}\), and hence \(u\) cannot be a solo node. 
If we have 3 points, then the only solo node in the subtree \(T^{\prime}_{v}\) is \(v\) itself, so it suffices to argue that the expected cost increase due to this solo cut is at most \(32|U_{v}|(1+2\ln\ln|U_{v}|)\cdot\|\mathbf{p}^{*}\|_{2}^{2}\). From (13) we know that the expected cost increase is at most \(4s(v)\cdot\|\mathbf{p}^{*}\|_{2}^{2}\). Now using \(s(v)\leq|U_{v}|\) and \(|U_{v}|\leq 3\), we obtain the required bound.

Else consider some node \(v\) with \(|U_{v}|\geq 4\), and let \(\chi(v)\) be the set of solo nodes whose closest solo ancestor is \(v\): it follows that \(\sum_{w\in\chi(v)}|U_{w}|\leq|U_{v}|\). Moreover, if the random solo cut \((i_{v},\theta_{v})\sim D_{2}^{\prime\prime}(v)\) partitions \(U_{v}\) into parts of size at least \(\sigma(v)\), then each \(|U_{v}|-|U_{w}|\geq\sigma(v)\). Using the induction hypothesis on each \(w\in\chi(v)\), the total expected cost increase is at most \[\mathbb{E}[\text{ cost increase at }v\ ]+\sum_{w\in\chi(v)}32|U_{w}|(1+2\ln\ln|U_{w}|)\cdot\mathbbm{1}_{[\mathbf{0}\in B_{w}]}\cdot\|\mathbf{p}^{*}\|_{2}^{2}. \tag{14}\] Since the origin belongs to at most one of the sets \(B_{w}\), say \(B_{w^{*}}\), the sum contributes at most \[32|U_{w^{*}}|(1+2\ln\ln|U_{v}|)\cdot\|\mathbf{p}^{*}\|_{2}^{2}\leq 32(|U_{v}|-\mathbb{E}[\sigma(v)])(1+2\ln\ln|U_{v}|)\cdot\|\mathbf{p}^{*}\|_{2}^{2}. \tag{15}\] (The RHS is non-negative, since \(\sigma(v)\leq|U_{v}|\), so the bound holds even when \(\chi(v)=\emptyset\).) Now Lemma 18 implies that the cost at \(v\) is at most \[\mathbb{E}[\sigma(v)]\cdot 32(1+\ln(\nicefrac{{|U_{v}|}}{{s(v)}}))\cdot\|\mathbf{p}^{*}\|_{2}^{2}.\] Finally, using that \(s(v)\geq|U_{v}|/(\ln|U_{v}|)^{2}\) for a solo node, and summing the two terms, completes the proof.

We can wrap up: Using Lemma 17 and Lemma 20, we know that the expected cost increase due to all the cuts used to separate the points in \(U\) is at most \(O(k\ln\ln k)\cdot\|\mathbf{p}^{*}\|_{2}^{2}\). If \(\widehat{\mathbf{p}}\) is the unique point in \(U\) in the region \(B_{u}\) corresponding to the leaf node \(u\) such that \(\mathbf{0}\in B_{u}\), then the cost is upper bounded by \[\|\widehat{\mathbf{p}}\|_{2}^{2}\leq 2\cdot\|\widehat{\mathbf{p}}-\mathbf{p}^{*}\|_{2}^{2}+2\cdot\|\mathbf{p}^{*}\|_{2}^{2} \tag{16}\] using the generalized triangle inequality. Taking expectation on both sides of Equation (16) and plugging in \(\|\mathbf{p}^{*}\|_{2}^{2}=1\) gives \(f_{2}(U)\leq O(k\ln\ln k)\).

## 5 Tight Bounds for the Random Threshold Algorithm

We now improve the bound of \((1+o(1))\ln k\) from §2.1 to give an exact bound of \(1+H_{k-1}\) for the Random Thresholds algorithm; we first show an example which achieves this bound, and then give our precise analysis of the algorithm.

### A Lower Bound

Consider an instance in \(\mathbb{R}^{k}\) given by a reference clustering having one "close" center \(\mathbf{\mu}^{1}=e_{1}\), and \(k-1\) "far" centers \(\mathbf{\mu}^{i}=M\,e_{i}\) for each \(i\in\{2,\ldots,k\}\), where the scalar \(M\gg 1\). We consider a single data point at the origin. (As always, we can imagine there being many points colocated with each of the centers.) Let the expected assignment cost due to the algorithm for the point at the origin on an instance with \(j\) centers (one close and \(j-1\) far) be denoted by \(g(j)\). Indeed, with probability \(\tfrac{1}{M(k-1)+1}\) the close center is the first to be cut away from the origin, after which the origin is assigned to one of the far centers at cost \(M\); otherwise a far center is cut away first, leaving an instance with \(k-1\) centers. Then we get the recurrence \[g(k)=\tfrac{1}{M(k-1)+1}\cdot M+\tfrac{M(k-1)}{M(k-1)+1}\cdot g(k-1),\] with \(g(1)=1\). As \(M\to\infty\), this gives us \(g(k)\to 1+H_{k-1}\), as claimed.
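As a quick numerical sanity check (an illustrative Python sketch, not part of the argument; names are ours), iterating the recurrence confirms the limit:

```python
from fractions import Fraction

def g(k, M):
    """Iterate the recurrence for the expected cost of the data point at the
    origin on the lower-bound instance with k centers (one close, k-1 far)."""
    val = Fraction(1)                       # g(1) = 1: only the close center remains
    for j in range(2, k + 1):
        far_mass = M * (j - 1)
        val = (Fraction(M) + far_mass * val) / (far_mass + 1)
    return val

def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

k = 8
for M in (10, 10**3, 10**6):
    print(M, float(g(k, M)))                # approaches 1 + H_{k-1} as M grows
print(float(1 + harmonic(k - 1)))           # the limiting value for k = 8
```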
In the next section, we will prove a matching upper bound of \(1+H_{k-1}\). ### Towards a Matching Upper Bound Our proof for this case is technical, so the reader may want to keep three special cases in mind: 1. The "axis-aligned" case, inspired by the bad example: the points in \(U\) are \(\{\mathbf{p}_{i}:=d_{i}\mathbf{e}_{i}\}\), where \(1=d_{1}\leq d_{2}\leq\ldots\leq d_{k}\), 2. the "orthogonal closest-point" case, where the closest point is \(\mathbf{e}_{1}\), and all other points lie in the orthogonal subspace to it, and 3. the "uniform" case, where all points other than the closest are at the same distance \(d\geq 1\). Many of our proofs become simpler in these special cases, and thinking about these cases will give us crucial intuition. We start with a set of points \(U\subseteq\mathbb{R}^{d}\), recall that \(f(U)\) was defined to be \(\mathbb{E}[\|\widehat{\mathbf{p}}\|_{\mathbf{1}}]\), where \(\widehat{\mathbf{p}}\) is the unique point in the box containing the origin in the Random Thresholds algorithm. As in SS2.3, assume we have a dimension for each cut \(S\subseteq U\), and point \(\mathbf{p}\in U\) has value \(z_{S}\mathbbm{1}[\mathbf{p}\in S]\geq 0\) in this coordinate. Finally defining \(\mathcal{C}_{S}\) to be the collection of sets that cross \(S\subseteq U\), we get that for any \(S\subseteq U\), \[f(S)=\frac{\sum_{E\in\mathcal{C}_{S}}z_{E}\cdot f(S\setminus E)}{\sum_{E\in \mathcal{C}_{S}}z_{E}},\qquad\text{and}\qquad f(\{\mathbf{p}\})=\|\mathbf{p}\|_{1}. \tag{17}\] Moreover, when \(\sum_{E\in\mathcal{C}_{S}}z_{E}=0\), the value of \(f(S)=0\). Let us denote by \(\ell(\mathbf{p}):=\|\mathbf{p}\|_{1}\) and \(\beta_{k-1}:=1+H_{k-1}\). Using Equation (17), we think of the function \(f(U)\) purely as an algebraic function of the \(z_{S}\) values, and our central goal in this section to prove the following: **Theorem 21** (Main Goal).: _For any point \(\mathbf{p}\in U\), the value \(f(U)\leq\beta_{k-1}\cdot\ell(\mathbf{p})\)._ Since the ratio \(f(U)/\ell(\mathbf{p})\) is difficult to argue about, we instead focus on bounding the derivative \(\frac{\partial f(U)}{\partial z_{E}}\) by \(\beta_{k-1}\). The following lemma shows that, by integrating along a path from the origin to the point \(\mathbf{p}\), such a bound on the derivative suffices: (a formal proof appears in SSD) **Lemma 22**.: _For any \(\mathbf{p}\in U\), if \(\frac{\partial f(U)}{\partial z_{E}}\leq\beta_{k-1}\) for all \(E\subseteq U\) with \(\mathbf{p}\in E\), then \(f(U)\leq\beta_{k-1}\cdot\ell(\mathbf{p})\)._ ### Bounding the Derivative We start by taking definition (17) and calculating the derivative \(\frac{\partial f(S)}{\partial z_{T}}\) for the case \(|S|\geq 2\): \[\frac{\partial f(S)}{\partial z_{T}}=\frac{\sum_{E\in\mathcal{C}_{S}}z_{E}\cdot \frac{\partial f(S\setminus E)}{\partial z_{T}}+\mathbb{1}[T\in\mathcal{C}_{S}] \cdot\big{(}f(S\setminus T)-f(S)\big{)}}{\sum_{E\in\mathcal{C}_{S}}z_{E}}. \tag{18}\] When \(|S|=1\), if \(S=\{r\}\), then \(f(S)=\ell(r)\) by definition, and so \(\frac{\partial f(S)}{\partial z_{T}}=\frac{\partial\ell(r)}{\partial z_{T}}\). Henceforth, let us fix a set \(S\subseteq U\) and some subset \(T\). The next lemma (in SSD) follows by direct calculations: **Lemma 23**.: _The partial derivatives satisfy:_ 1. _If_ \(T\supseteq S\)_, then_ \(\frac{\partial f(S)}{\partial z_{T}}=1\)_._ 2. _If_ \(T\cap S=\emptyset\)_, then_ \(\frac{\partial f(S)}{\partial z_{T}}=0\)_._ 3. 
_We have_ \(f(S)\geq\sum_{E\supseteq S}z_{E}\)_._ The partial derivative \(\frac{\partial f(S)}{\partial z_{T}}\) is not very well-behaved: e.g., it is not guaranteed to be non-negative (which turns out to make an inductive proof difficult). To address this issue, we define a surrogate _pseudo-derivative_ operator, and use bounds on this pseudo-derivative to bound the derivative. **Definition 24** (Pseudo-Derivative).: The pseudo-derivative of \(f(S)\) with respect to the variable \(z_{T}\) such that \(T\) crosses \(S\) (i.e., \(T\in\mathcal{C}_{S}\)) is: \[\frac{\widehat{\partial}f(S)}{\widehat{\partial}z_{T}}=\frac{\sum_{E\in \mathcal{C}_{S}}z_{E}\cdot\frac{\widehat{\partial}f(S\setminus E)}{\widehat{ \partial}z_{T}}+f(S\setminus T)-\sum_{E\supseteq S}z_{E}}{\sum_{E\in\mathcal{C} _{S}}z_{E}}. \tag{19}\] It is defined to be \(1\) if \(T\supseteq S\), and \(0\) if \(T\cap S=\emptyset\). Observe the differences with (18), which are marked in red: when \(T\in\mathcal{C}_{S}\), each smaller derivative term \(\frac{\partial f(S\setminus E)}{\partial z_{T}}\) in the numerator of the derivative is naturally replaced with the corresponding pseudo-derivative term \(\frac{\widehat{\partial}f(S\setminus E)}{\partial z_{T}}\), but crucially, the term \(f(S)\) is replaced with \(\sum_{E\supseteq S}z_{E}\). These are the terms corresponding to the "shift" of the set \(S\)--i.e., the cuts that separate all points in \(S\) from the origin--and hence they form a lower bound on \(f(S)\), the expected distance to the closest point in \(S\). This latter change makes the following arguments easier (and indeed, possible), but still maintains the intuition of the derivative being invariant under translations. In SSD, we prove the following lemma, showing it is indeed an upper bound. **Lemma 25**.: _The pseudo-derivative is non-negative, and bounds the derivative from above. I.e.,_ \[\max\left(\frac{\partial f(S)}{\partial z_{T}},0\right)\leq\frac{\widehat{ \partial}f(S)}{\widehat{\partial}z_{T}}.\] Given Lemma 25, it suffices to upper bound the pseudo-derivative by \(\beta_{k-1}\), which we do next. **Theorem 26**.: _For any \(S\subseteq U\) and any \(T\neq\emptyset\), we have \(\frac{\widehat{\partial}f(S)}{\widehat{\partial}z_{T}}\leq\beta_{|S\setminus T|}.\)_ Theorem 26 implies our desired bound on the derivative, because \(|S\setminus T|\leq k-1\) for any \(S\subseteq U,T\neq\emptyset\). To prove Theorem 26, we first prove it for the special case when all points in \(S\setminus T\) have the same norm (the _uniform_ case), and then reduce the general case to this uniform case. #### 5.3.1 Proof of Theorem 26: the Uniform Case The main reason that it is easier to prove Theorem 26 for the uniform case is because we know the value of \(f(S\setminus T)\) exactly which will be equal to the norm of all the points in \(S\setminus T\). Otherwise, it is hard to obtain any upper bound to \(f(S\setminus T)\) in the general case. Moreover, we know that the uniform property holds true for all subsets of \(S\). This enables us to use the upper bound of Theorem 26 to derivative terms \(\frac{\widehat{\delta}f(S\setminus E)}{\widehat{\partial}z_{T}}\) by \(\beta_{|S\setminus(E\cup T)|}\). **Lemma 27**.: _If all points in \(S\setminus T\) have the same norm, then \(\frac{\widehat{\partial}f(S)}{\widehat{\partial}z_{T}}\leq\beta_{|S\setminus T|}\)._ Proof.: The proof is by induction on \(|S|\). If \(T\notin\mathcal{C}_{S}\), we know that \(\frac{\widehat{\delta}f(S)}{\widehat{\partial}z_{T}}\) is either \(0\) or \(1\). 
But \(\beta_{|S\setminus T|}\geq 1\), as \(\beta_{m}\geq 1\) for any \(m\geq 0\), which implies that we are done. Hence, for \(|S|=1\), we know that \(T\notin\mathcal{C}_{S}\) and we are done. From now on, we can assume \(|S|\geq 2\) and \(T\in\mathcal{C}_{S}\). So we use the definition of the pseudo-derivative from (19) and use the upper bound of the lemma inductively for the recursive terms \(\frac{\widehat{\partial}f(S\setminus E)}{\widehat{\partial}z_{T}}\) to get:

\[\frac{\widehat{\partial}f(S)}{\widehat{\partial}z_{T}}\leq\frac{\sum_{E\in\mathcal{C}_{S}}z_{E}\cdot\beta_{|S\setminus(E\cup T)|}+f(S\setminus T)-\sum_{E\supseteq S}z_{E}}{\sum_{E\in\mathcal{C}_{S}}z_{E}}. \tag{20}\]

In order to show that \(\frac{\widehat{\partial}f(S)}{\widehat{\partial}z_{T}}\leq\beta_{|S\setminus T|}\), it is sufficient to upper bound the RHS of Equation (20) by \(\beta_{|S\setminus T|}\). Simplifying gives the following sufficient condition

\[f(S\setminus T)\leq\sum_{E\supseteq S}z_{E}+\sum_{E\in\mathcal{C}_{S}}z_{E}\left(\beta_{|S\setminus T|}-\beta_{|S\setminus(T\cup E)|}\right). \tag{21}\]

From here on, we will prove Equation (21). Since \(T\in\mathcal{C}_{S}\), we know that \(|S\setminus T|\geq 1\). All points in \(S\setminus T\) have the same norm, so we can write

\[f(S\setminus T)=\frac{1}{|S\setminus T|}\sum_{r\in S\setminus T}\ell(r). \tag{22}\]

### Proof of Theorem 26: Reducing to the Uniform Case

The case where the points in \(U\backslash T\) have different norms is the technical heart of the proof. In this case, we "lift" the points in such a way that the value of the pseudo-derivative \(\frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}\) is monotonically increasing, thereby reducing to the uniform case of Section 5.3.1. To begin, we give a few supporting lemmas.

#### 5.4.1 Supporting lemmas

**Observation 29**.: _\(\max_{b\in S}(\ell(b))\geq f(S)\geq\min_{a\in S}(\ell(a))\)._

Proof.: The proof follows from the fact that \(f(S)\) is an expectation of norms of points in \(S\).

Let us denote \(S\backslash\{\mathbf{p}\}\) simply as \(S-\mathbf{p}\).

**Observation 30**.: _For any point \(\mathbf{p}\in S\) such that \(|S|\geq 2\), if \(E\in\mathcal{C}_{S}\) and \(E\notin\mathcal{C}_{S-\mathbf{p}}\), then \(E\cap S\) is equal to either \(\{\mathbf{p}\}\) or \(S-\mathbf{p}\)._

**Observation 31**.: _Let \(A=\sum_{j=1}^{m}w_{j}\cdot v_{j}/\sum_{j=1}^{m}w_{j}\) be a weighted average of \(m\) items \(t_{1},\ldots,t_{m}\) with values \(v_{1},\ldots,v_{m}\) weighted by respective weights \(w_{1},\ldots,w_{m}\). Adding any number \(m^{\prime}-m\) of items of value \(A\) (with any weights) does not change the weighted average. I.e., the new weighted average is_ \[A^{\prime}=\frac{\sum_{j=1}^{m}w_{j}\cdot v_{j}+\sum_{j=m+1}^{m^{\prime}}w_{j}\cdot A}{\sum_{j=1}^{m}w_{j}+\sum_{j=m+1}^{m^{\prime}}w_{j}}=A.\]

Observation 31 is simple but powerful.

**Lemma 32**.: _Let \(\mathbf{p}\) be a point in \(S\) having the minimum \(\ell_{1}\) norm. If \(S-\mathbf{p}\neq\emptyset\), then_ \[f(S-\mathbf{p})\geq f(S)\geq f(\{\mathbf{p}\}). \tag{26}\]

Proof.: The lower bound follows from Observation 29. Let us prove the upper bound by induction on \(|S|\). If \(|S|=2\), then \(f(S-\mathbf{p})\) is the norm of the maximum-norm point in \(S\), which is at least \(f(S)\) by Observation 29.
If \(|S|\geq 3\), then \(|S-\mathbf{p}|\geq 2\) so we will use Equation (17) to expand both \(f(S)\) and \(f(S-\mathbf{p})\) as \[f(S-\mathbf{p})=\frac{\sum_{E\in\mathcal{C}_{S-\mathbf{p}}}z_{E}\cdot f((S-\mathbf{p}) \backslash E)}{\sum_{E\in\mathcal{C}_{S-\mathbf{p}}}z_{E}},\,f(S)=\frac{\sum_{E \in\mathcal{C}_{S}}z_{E}\cdot f(S\backslash E)}{\sum_{E\in\mathcal{C}_{S}}z_{ E}}. \tag{27}\] Using Observation 31 and Observation 30, we can re-write \(f(S-\mathbf{p})\) as \[f(S-\mathbf{p})=\frac{\sum_{E\in\mathcal{C}_{S-\mathbf{p}}}z_{E}\cdot f((S-\mathbf{p}) \backslash E)+z_{\{\mathbf{p}\}}\cdot f(S-\mathbf{p})+z_{S-\mathbf{p}}\cdot f(S-\mathbf{p})}{ \sum_{E\in\mathcal{C}_{S}}z_{E}}. \tag{28}\] In the numerators of \(f(S-\mathbf{p})\) and \(f(S)\), the coefficient of \(z_{E}\) in \(f(S-\mathbf{p})\) is more than that of \(f(S)\) either by induction on \(|S|\) (\(S\gets S\backslash E\)) or using the fact that \(f(S-\mathbf{p})\geq f(\{\mathbf{p}\})\) from Observation 29. From now, many other lemmas will have their proofs very similar to that of Lemma 32 where we compare the function values of two different sets by expanding the functions and using Observation 31 to normalize the denominators and then use induction and other arguments to conclude. #### 5.4.2 The lifting operation Now let us define the lift operation. For a set \(S^{\prime}=\{\mathbf{p}_{1},\dots,\mathbf{p}_{k^{\prime}}\}\subseteq U\), consider the following trajectory for the \(\mathbf{z}\) vector \[z_{E}(t):=\begin{cases}z_{E}+t&E=\{\mathbf{p}\},\,\mathbf{p}\in S^{\prime}\\ z_{E}&\text{otherwise}\.\end{cases} \tag{29}\] Given this trajectory for the embedding, we are interested in how this changes the function value. Let us define \[\frac{\partial f(S)}{\partial S^{\prime}}:=\frac{\mathrm{d}f_{t}(S)}{\mathrm{d }t}\bigg{|}_{t=0}\,.\] **Lemma 33**.: _Let \(S^{\prime}\) be a subset of \(U\). Then,_ 1. _If_ \(S\cap S^{\prime}=\emptyset,\frac{\partial f(S)}{\partial S^{\prime}}=0\)_._ 2. _If_ \(|S|=1\)_,_ \(S\subseteq S^{\prime},\frac{\partial f(S)}{\partial S^{\prime}}=1\)_._ 3. _If_ \(|S|\geq 2\)_,_ \[\frac{\partial f(S)}{\partial S^{\prime}}=\frac{\sum_{E\in\mathcal{C}_{S}}z_{ E}\cdot\frac{\partial f(S\setminus E)}{\partial S^{\prime}}+\sum_{\mathbf{p}_{i}\in S \cap S^{\prime}}\left(f(S-\{\mathbf{p}_{i}\})-f(S)\right)}{\sum_{E\in\mathcal{C}_{ S}}z_{E}}.\] (30) _Moreover, if points in_ \(S^{\prime}\cap S\) _if any, have the least norm out of points in_ \(S\)_, then_ \(\frac{\partial f(S)}{\partial S^{\prime}}\geq 0\)_._ Proof.: Since \(f(S)\) is purely a function of points in \(S\), and hence only depends on variables \(z_{E}\) such that \(\mathbf{p}\in E\) for some \(\mathbf{p}\in S\). But the only variables that change with \(t\) are \(z_{\{\mathbf{p}^{\prime}\}},\mathbf{p}^{\prime}\in S^{\prime}\). Since \(S^{\prime}\cap S=\emptyset\), we can conclude statement 1. When \(S=\{\mathbf{p}\}\) and \(\mathbf{p}\in S^{\prime}\), \(f_{t}(S)=\ell_{t}(\mathbf{p})\). Since \(z_{\{\mathbf{p}\}}\) is the only variable that is changing and \(\ell(\mathbf{p})\) contains this, we can conclude statement 2. 
Once we know that \(|S|=2\), using Equation (17) and applying the \(\frac{\partial}{\partial S^{\prime}}\) operator on both sides gives \[\frac{\partial f(S)}{\partial S^{\prime}}=\frac{\sum_{E\in\mathcal{C}_{S}} \left(z_{E}\cdot\frac{\partial f(S\setminus E)}{\partial s^{\prime}}+\frac{ \partial z_{E}}{\partial S^{\prime}}\cdot f(S\setminus E)\right)}{\sum_{E\in \mathcal{C}_{S}}z_{E}}-\frac{f(S)}{\sum_{E\in\mathcal{C}_{S}}z_{E}}\cdot\frac{ \partial\sum_{E\in\mathcal{C}_{S}}z_{E}}{\partial S^{\prime}}. \tag{31}\] Using the fact that the only variables that are changing with \(t\) are \(z_{\{\mathbf{p}\}}\) for \(\mathbf{p}\in S^{\prime}\) in Equation (31) gives Equation (30). Let us argue that \(\frac{\partial f(S)}{\partial S^{\prime}}\geq 0\) when points in \(S\cap S^{\prime}\) have the least norm of points in \(S\) using induction on \(|S|\). For \(|S|=1\), using statement 2 or statement 1, we are done. Otherwise, using Equation (30), we have that the recursive derivative terms \(\frac{\partial f(S\setminus E)}{\partial S^{\prime}}\) in the numerator of Equation (30) are non-negative by inductive hypothesis. From Lemma 32, we know that \(f(S-\mathbf{p}_{i})-f(S)\geq 0\) for every \(\mathbf{p}_{i}\in S\cap S^{\prime}\). Combining both these observations, we can conclude statement 3 and hence the lemma. **Lemma 34**.: _If \(S^{\prime}\) is the set of points in \(U\backslash T\) of minimum norm, then_ \[\frac{\partial}{\partial S^{\prime}}\left(\frac{\hat{\partial}f(S)}{\hat{ \partial}z_{T}}\right)\geq 0.\] Proof.: The proof is by induction on \(|S|\). If \(T\notin\mathcal{C}_{S}\), this is trivially true because \(\frac{\partial f(S)}{\hat{\partial}z_{T}}\) is either \(0\) or \(1\). So for \(|S|=1\), we are done. Otherwise, we have \(|S|\geq 2\) and \(T\in\mathcal{C}_{S}\). Recall from Equation (19) that \[\frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}=\frac{\sum_{E\in\mathcal{C}_{S}}z_{ E}\cdot\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}+f(S \backslash T)-\sum_{E\supseteq S}z_{E}}{\sum_{E\in\mathcal{C}_{S}}z_{E}}.\] Applying the operator \(\frac{\partial}{\partial S^{\prime}}\) on both sides gives \[\frac{\sum_{E\in\mathcal{C}_{S}}z_{E}\cdot\frac{\partial}{\partial S^{\prime}} \left(\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\right)+\frac{ \partial f(S\backslash T)}{\partial S^{\prime}}-\frac{\partial}{\partial S^{ \prime}}\left(\sum_{E\supseteq S}z_{E}\right)+\sum_{\mathbf{p}\in S^{\prime} \cap S}\left(\frac{\hat{\partial}f(S-\mathbf{p})}{\hat{\partial}z_{T}}-\frac{ \hat{\partial}f(S)}{\hat{\partial}z_{T}}\right)}{\sum_{E\in\mathcal{C}_{S}}z _{E}}.\] Since \(|S|\geq 2\), the term \(\frac{\partial}{\partial S^{\prime}}\left(\sum_{E\supseteq S}z_{E}\right)\) is zero. Using this, what remains is \[\frac{\sum_{E\in\mathcal{C}_{S}}z_{E}\cdot\frac{\partial}{\partial S^{\prime} }\left(\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\right)+ \frac{\partial f(S\backslash T)}{\partial S^{\prime}}+\sum_{\mathbf{p}\in S \cap S^{\prime}}\left(\frac{\hat{\partial}f(S-\mathbf{p})}{\hat{\partial}z_{T}}- \frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}\right)}{\sum_{E\in\mathcal{C}_{ S}}z_{E}}.\] This means, it is sufficient to show that when \(T\in\mathcal{C}_{S}\), \[\frac{\partial f(S\backslash T)}{\partial S^{\prime}}+\sum_{\mathbf{p}\in S \cap S^{\prime}}\left(\frac{\hat{\partial}f(S-\mathbf{p})}{\hat{\partial}z_{T}}- \frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}\right)\geq 0. 
\tag{32}\] Because, the recursive derivatives \(\frac{\partial}{\partial S^{\prime}}\left(\frac{\hat{\partial}f(S\backslash E )}{\hat{\partial}z_{T}}\right)\) are at least zero by induction on \(|S|\). The proof of Equation (32) is now given in Lemma 35. **Lemma 35**.: _Let \(S^{\prime}=\{\mathbf{p}_{1},\ldots,\mathbf{p}_{k^{\prime}}\}\) be the set of points in \(U\backslash T\) of minimum norm. For \(T\in\mathcal{C}_{S}\),_ \[\frac{\partial f(S\backslash T)}{\partial S^{\prime}}+\sum_{\mathbf{p}\in S \cap S^{\prime}}\left(\frac{\hat{\partial}f(S-\mathbf{p})}{\hat{\partial}z_{T}}- \frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}\right)\geq 0.\] Proof.: The proof is by induction on \(|S|\). Let us work out boundary cases first. If \(S\cap S^{\prime}=(S\backslash T)\cap S^{\prime}=\emptyset\), then \(\frac{\partial f(S\backslash T)}{\partial S^{\prime}}=0\) and the summation is empty which makes the entire expression equal to \(0\) in which case, we are fine. So from now, we can assume \(S\cap S^{\prime}\neq\emptyset\). If \(|S\backslash T|=1\), then \(S\backslash T=\{\mathbf{p}^{*}\}\subseteq S^{\prime}\), which implies \(\frac{\partial f(S\backslash T)}{\partial S^{\prime}}=1\). The expression in this case is simply \[1+\frac{\hat{\partial}f(S-\mathbf{p}^{*})}{\hat{\partial}z_{T}}-\frac{\hat{ \partial}f(S)}{\hat{\partial}z_{T}}=2-\frac{\hat{\partial}f(S)}{\hat{\partial }z_{T}}\] because \(S-\mathbf{p}^{*}\in T\). For this boundary case, it remains to prove that \(\frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}\leq 2\). This is taken care of Corollary 28. For the base case \(|S|=2\), we have \(|S\backslash T|=1\) in which case, we are done from the preceding arguments. Otherwise, if \(|S\backslash T|\geq 2\) we can expand all the derivative terms. But before we do that, we need Observation 31 so that we have the same denominator for all the three derivatives. Now using Equation (30) and the fact that \((S\backslash T)\cap S^{\prime}=S\cap S^{\prime}\), \[\frac{\partial f(S\backslash T)}{\partial S^{\prime}} =\frac{\sum_{E\in\mathcal{C}_{S\backslash T}}z_{E}\cdot\frac{ \partial f(S\backslash(T\cup E))}{\partial S^{\prime}}+\sum_{\mathbf{p}\in S \cap S^{\prime}}\left(f(S\backslash(T\cup\{\mathbf{p}\}))-f(S\backslash T)\right)}{ \sum_{E\in\mathcal{C}_{S\backslash T}}z_{E}}\] \[=\frac{\sum_{E\in\mathcal{C}_{S\backslash T}}z_{E}\cdot\frac{ \partial f(S\backslash(T\cup E))}{\partial S^{\prime}}+\sum_{E\in\mathcal{C}_{S \backslash}\mathcal{C}_{S\backslash T}}z_{E}\cdot\frac{\partial f(S\backslash T)}{ \partial S^{\prime}}+\sum_{\mathbf{p}\in S\cap S^{\prime}}\left(f(S\backslash(T\cup\{ \mathbf{p}\}))-f(S\backslash T)\right)}{\sum_{E\in\mathcal{C}_{S}}z_{E}}\] \[\geq\frac{\sum_{E\in\mathcal{C}_{S\setminus T}}z_{E}\cdot\frac{ \partial f(S(\Gamma\cup E))}{\partial S^{\prime}}+\sum_{E\in\mathcal{C}_{S} \setminus\mathcal{C}_{S\setminus T}}z_{E}\cdot\frac{\partial f(S\setminus T)}{ \partial S^{\prime}}}{\sum_{E\in\mathcal{C}_{S}}z_{E}}. 
\tag{33}\] Using Equation (19) and the fact that \(T\in\mathcal{C}_{S-\boldsymbol{p}}\) for any \(\boldsymbol{p}\in S\cap S^{\prime}\), \[\frac{\hat{\partial}f(S-\boldsymbol{p})}{\hat{\partial}z_{T}} =\frac{\sum_{E\in\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\cdot\frac{ \hat{\partial}f(S\setminus(E\cup\{\boldsymbol{p}\}))}{\hat{\partial}z_{T}}+f(S \setminus(T\cup\{\boldsymbol{p}\}))-\sum_{E\supseteq S-\boldsymbol{p}}z_{E}}{ \sum_{E\in\mathcal{C}_{S-\boldsymbol{p}}}z_{E}}\] \[=\frac{\sum_{E\in\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\cdot\frac{ \hat{\partial}f(S\setminus(E\cup\{\boldsymbol{p}\}))}{\hat{\partial}z_{T}}+ \sum_{E\in\mathcal{C}_{S}\setminus\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\cdot \frac{\hat{\partial}f(S\setminus\{\boldsymbol{p}\})}{\hat{\partial}z_{T}}+f(S \setminus(T\cup\{\boldsymbol{p}\}))-\sum_{E\supseteq S-\boldsymbol{p}}z_{E}}{ \sum_{E\in\mathcal{C}_{S}}z_{E}}.\] Using Equation (19), \[\frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}=\frac{\sum_{E\in \mathcal{C}_{S}}z_{E}\cdot\frac{\hat{\partial}f(S\setminus E)}{\hat{\partial} z_{T}}+f(S\setminus T)-\sum_{E\supseteq S}z_{E}}{\sum_{E\in\mathcal{C}_{S}}z_{E}}.\] The numerator of \(\frac{\hat{\partial}f(S-\boldsymbol{p})}{\hat{\partial}z_{T}}-\frac{\hat{ \partial}f(S)}{\hat{\partial}z_{T}}\) is \[\sum_{E\in\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\left(\frac{\hat{ \partial}f(S\setminus(E\cup\{\boldsymbol{p}\}))}{\hat{\partial}z_{T}}-\frac{ \hat{\partial}f(S\setminus E)}{\hat{\partial}z_{T}}\right)+\sum_{E\in\mathcal{ C}_{S}\setminus\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\left(\frac{\hat{ \partial}f(S\setminus\{\boldsymbol{p}\})}{\hat{\partial}z_{T}}-\frac{\hat{ \partial}f(S\setminus E)}{\hat{\partial}z_{T}}\right)\] \[-\sum_{E:S\setminus E=\{\boldsymbol{p}\}}z_{E}+(f(S\setminus(T \cup\{\boldsymbol{p}\}))-f(S\setminus T))\] \[\geq\sum_{E\in\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\left(\frac{ \hat{\partial}f(S\setminus(E\cup\{\boldsymbol{p}\}))}{\hat{\partial}z_{T}}- \frac{\hat{\partial}f(S\setminus E)}{\hat{\partial}z_{T}}\right)+\sum_{E\in \mathcal{C}_{S}\setminus\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\left(\frac{\hat{ \partial}f(S\setminus\{\boldsymbol{p}\})}{\hat{\partial}z_{T}}-\frac{\hat{ \partial}f(S\setminus E)}{\hat{\partial}z_{T}}\right)\] \[-\sum_{E:S\setminus E=\{\boldsymbol{p}\}}z_{E}. \tag{34}\] Note that the condition \(E\in\mathcal{C}_{S}\setminus\mathcal{C}_{S-\boldsymbol{p}}\) holds only when \(\boldsymbol{p}\in S\) and \(E\cap S\) is either \(S\setminus\{\boldsymbol{p}\}\) or \(\{\boldsymbol{p}\}\) from Observation 30. When \(E\cap S=\{\boldsymbol{p}\}\), the second derivative term in Equation (34) is zero. Otherwise, it is \(\frac{\hat{\partial}f(S\setminus\{\boldsymbol{p}\})}{\hat{\partial}z_{T}}- \frac{\hat{\partial}f(\{\boldsymbol{p}\})}{\hat{\partial}z_{T}}=\frac{\hat{ \partial}f(S\setminus\{\boldsymbol{p}\})}{\hat{\partial}z_{T}}\). Using these observations, Equation (34) can be simplified and can be re-written as \[\sum_{E\in\mathcal{C}_{S-\boldsymbol{p}}}z_{E}\left(\frac{\hat{ \partial}f(S\setminus(E\cup\{\boldsymbol{p}\}))}{\hat{\partial}z_{T}}-\frac{ \hat{\partial}f(S\setminus E)}{\hat{\partial}z_{T}}\right)+\sum_{E:S \setminus E=\{\boldsymbol{p}\}}z_{E}\left(\frac{\hat{\partial}f(S\setminus \{\boldsymbol{p}\})}{\hat{\partial}z_{T}}-1\right). 
\tag{35}\] The remaining numerator of \(\frac{\partial f(S\setminus T)}{\partial S^{\prime}}\) from Equation (33) is \[\sum_{E\in\mathcal{C}_{S\setminus T}}z_{E}\cdot\frac{\partial f(S \setminus(T\cup E))}{\partial S^{\prime}}+\sum_{E\in\mathcal{C}_{S}\setminus \mathcal{C}_{S\setminus T}}z_{E}\cdot\frac{\partial f(S\setminus T)}{\partial S ^{\prime}}. \tag{36}\] It remains to prove that the sum of Equation (36), and Equation (35) summed over all \(\boldsymbol{p}\in S\cap S^{\prime}\) is non-negative. Let us do that by carefully partitioning all the \(E\) into groups and do a case analysis. **Case 1: (\(E\in\mathcal{C}_{S}\setminus\mathcal{C}_{S\setminus T}\))**: First, observe that this happens only when either \(\emptyset\neq S\cap E\subseteq S\cap T\) or \(\emptyset\neq S\setminus E\subseteq S\cap T\) as shown in figures Figure 2(a) and Figure 2(b) respectively. Observe that in either case, for that \(E\), we have \(E\in\mathcal{C}_{S-\boldsymbol{p}}\) for any \(\boldsymbol{p}\in S\cap S^{\prime}\). **Case 1a:**: (\(\emptyset\neq S\cap E\subseteq S\cap T\) ) In this case, the coefficient of \(z_{E}\) in the sum of Equation (36) and, Equation (35) summed over all \(\boldsymbol{p}\in S\cap S^{\prime}\) is \[\frac{\partial f(S\backslash T)}{\partial S^{\prime}}+\sum_{\boldsymbol{p}\in S \cap S^{\prime}}\left(\frac{\hat{\partial}f(S\backslash(E\cup\{\boldsymbol{p} \}))}{\hat{\partial}z_{T}}-\frac{\hat{\partial}f(S\backslash E)}{\hat{ \partial}z_{T}}\right). \tag{37}\] Which we can re-write as \[\frac{\partial f((S\backslash E)\backslash T)}{\partial S^{\prime}}+\sum_{ \boldsymbol{p}_{r}\in(S\backslash E)\cap S^{\prime}}\left(\frac{\hat{ \partial}f((S\backslash E)\backslash\{\boldsymbol{p}\})}{\hat{\partial}z_{T}} -\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\right).\] This, we can argue is non-negative by induction on \(|S|\) by setting \(S\gets S\backslash E\). But note that before using inductive hypothesis, we need \(T\in\mathcal{C}_{S\backslash E}\) which is not guaranteed when \(E=S\cap T\). But in that case, the terms inside the summation of Equation (37) are zero because \(T\cap(S\backslash E)=\emptyset\) which implies that the coefficient is \(\frac{\partial f(S\backslash T)}{\partial S^{\prime}}=\frac{\partial f(S \backslash(T\cup E))}{\partial S^{\prime}}\geq 0\) from Lemma 33. **Case 1b:**: (\(\emptyset\neq S\backslash E\subseteq S\cap T\)) In this case, the coefficient of \(z_{E}\) is the same expression as Equation (37) but observe that \(S\backslash(E\cup\{\boldsymbol{p}\})=S\backslash E\) which implies that the coefficient is simply \(\frac{\partial f(S\backslash T)}{\partial S^{\prime}}\) which we know is non-negative from Lemma 33. **Case 2:**: (\(E\in\mathcal{C}_{S\backslash T}\)) Let us branch based on whether there exists \(\boldsymbol{p}\in S\cap S^{\prime}\) such that \(E\notin\mathcal{C}_{S-\boldsymbol{p}}\). This can happen in two ways. First, there exists a point \(\boldsymbol{p}^{*}\in S\cap S^{\prime}\) such that \(S\backslash E=\{\boldsymbol{p}^{*}\}\). Second, there exists a point point \(\boldsymbol{p}^{*}\in S\cap S^{\prime}\) such that \(S\backslash E=S\backslash\{\boldsymbol{p}^{*}\}\) **Case 2a:**: (\(\exists\boldsymbol{p}^{*}\in S\cap S^{\prime},S\backslash E=\{\boldsymbol{p}^{*}\}\)) First, note that \(E\in\mathcal{C}_{S\backslash T}\) for such a cut (see Figure 3(a)). 
Observe that for any \(\boldsymbol{p}\neq\boldsymbol{p}^{*}\in S\cap S^{\prime}\), we have \(S\backslash(E\cup\{\boldsymbol{p}\})=S\backslash E\) and there cannot be a different \(\boldsymbol{p}\) such that \(S\backslash E=\{\boldsymbol{p}\}\) for the same \(E\). This implies that the coefficient of \(z_{E}\) in the sum of Equation (36) and Equation (35) summed over all \(\boldsymbol{p}\in S\cap S^{\prime}\) is \[\frac{\partial f(S\backslash(T\cup E))}{\partial S^{\prime}}+\frac{\hat{\partial}f(S\backslash\{\boldsymbol{p}^{*}\})}{\hat{\partial}z_{T}}-1\] \[=\frac{\partial f(\{\boldsymbol{p}^{*}\})}{\partial S^{\prime}}+\frac{\hat{\partial}f(S\backslash\{\boldsymbol{p}^{*}\})}{\hat{\partial}z_{T}}-1\] \[=\frac{\hat{\partial}f(S\backslash\{\mathbf{p}^{*}\})}{\hat{\partial}z_{T}}\geq 0.\]

Figure 3: Case 1

**Case 2b: (\(\exists\mathbf{p}^{*}\in S\cap S^{\prime},S\backslash E=S\backslash\{\mathbf{p}^{*}\}\))**: First, note that \(E\in\mathcal{C}_{S\backslash T}\) for such a cut (see Figure 3(b)). Observe that for any \(\mathbf{p}\neq\mathbf{p}^{*}\in S\cap S^{\prime}\), we have \(E\in\mathcal{C}_{S-\mathbf{p}}\). This implies that the coefficient of \(z_{E}\) in the sum of Equation (36) and Equation (35) summed over all \(\mathbf{p}\in S\cap S^{\prime}\) is \[\frac{\partial f(S\backslash(T\cup E))}{\partial S^{\prime}}+\sum_{\mathbf{p}\neq\mathbf{p}^{*}\in S\cap S^{\prime}}\left(\frac{\hat{\partial}f(S\backslash(E\cup\{\mathbf{p}\}))}{\hat{\partial}z_{T}}-\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\right)\] \[=\frac{\partial f((S\backslash\{\mathbf{p}^{*}\})\backslash T)}{\partial S^{\prime}}+\sum_{\mathbf{p}\in(S\backslash\mathbf{p}^{*})\cap S^{\prime}}\left(\frac{\hat{\partial}f((S\backslash\{\mathbf{p}^{*}\})\backslash\{\mathbf{p}\})}{\hat{\partial}z_{T}}-\frac{\hat{\partial}f(S\backslash\{\mathbf{p}^{*}\})}{\hat{\partial}z_{T}}\right).\] This, we can argue, is non-negative by induction on \(|S|\) (\(S\gets S\backslash\{\mathbf{p}^{*}\}\)). Note that \(T\in\mathcal{C}_{S-\mathbf{p}^{*}}\) holds, so we can use the induction hypothesis. **Case 2c: (\(E\in\mathcal{C}_{S-\mathbf{p}},\forall\mathbf{p}\in S\cap S^{\prime}\))**: In this case, the coefficient of \(z_{E}\) is simply \[\frac{\partial f(S\backslash(T\cup E))}{\partial S^{\prime}}+\sum_{\mathbf{p}\in S\cap S^{\prime}}\left(\frac{\hat{\partial}f(S\backslash(E\cup\{\mathbf{p}\}))}{\hat{\partial}z_{T}}-\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\right)\] \[=\frac{\partial f((S\backslash E)\backslash T)}{\partial S^{\prime}}+\sum_{\mathbf{p}\in(S\backslash E)\cap S^{\prime}}\left(\frac{\hat{\partial}f((S\backslash E)\backslash\{\mathbf{p}\})}{\hat{\partial}z_{T}}-\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\right).\] This, we can again argue, is non-negative by induction on \(|S|\) (\(S\gets S\backslash E\)). Note that if \(T\notin\mathcal{C}_{S\backslash E}\), then \(E\supseteq S\cap T\), in which case the terms in the summation are zero and we are done directly without induction. This completes the proof of Lemma 35.

We can now wrap up: Lemma 35 was the missing piece in the proof of Lemma 34. In turn, because of Lemma 34 we can assume all points in \(S\backslash T\) to have the same norm. Lemma 27 shows the desired bound of Theorem 26 for this uniform case, completing the proof.

Figure 4: Case 2

## Appendix A Proofs from Section 2

Proof of Lemma 6.: For any client \(\mathbf{x}\in\mathbf{X}\).
Since the Random Thresholds algorithm is translation and scaling invariant, we imagine that \(\mathbf{x}=\mathbf{0}\). Now the expected cost incurred by this client is at most the distance to the unique center in its leaf region in the tree produced by the Random Thresholds algorithm, which is at most \(\alpha(|\mathcal{U}|)\). The claim now follows by scaling by the true distance to the closest center \(\|\mathbf{x}-\pi(\mathbf{x})\|\), summing over all \(\mathbf{x}\in\mathbf{X}\), and using linearity of expectations. ### Going from \(\ell_{1}\) Metrics to Cut Metrics It is known that point set in \(\ell_{1}\) can be written as a non-negative sum of cut metrics [10]; we give the details here for completeness. Given a point set \(V\in\mathbb{R}^{d}\), define \(\ell_{i}:=\min_{\mathbf{v}\in V}v_{i}\) and \(u_{i}=\max_{\mathbf{v}\in V}v_{i}\). Then \(L_{1}(V):=\sum_{i=1}^{d}(u_{i}-\ell_{i})\), and \(D_{1}(V)\) is the uniform distribution over \(\{(i,\theta)\mid\theta\in[\ell_{i},u_{i}]\}\). Define for any \(S\subseteq V\) the non-negative quantity \[z_{S}=L_{1}(V)\cdot\Pr_{(i,\theta)\sim D_{1}(V)}[(V\cap\{\mathbf{x}\mid\operatorname {sign}(\theta)\cdot x_{i}\geq\theta\})=S].\] This is a scaled version of the probability that for a random threshold cut, the points of \(V\) in the halfspace not containing the origin equals \(S\). A direct calculation shows that for all \(\mathbf{p},\mathbf{q}\in V\), we have \[\|\mathbf{p}-\mathbf{q}\|_{1}=\sum_{S}z_{S}\mathbbm{1}_{(|S\cap\{\mathbf{p},\mathbf{q}\}|=1)}.\] Now define an \(\ell_{1}\)-embedding \(\varphi:V\to\mathbb{R}_{\geq 0}^{2^{|V|}}\) by setting, for any \(S\subseteq V\), \[\varphi(\mathbf{p})_{S}=\sum_{S}z_{S}\mathbbm{1}_{(\mathbf{p}\in S)}.\] Again, \(\|\mathbf{p}-\mathbf{q}\|_{1}=\|\varphi(\mathbf{p})-\varphi(\mathbf{q})\|_{1}\). Moreover, if the origin belongs to \(V\), we get \(\varphi(\mathbf{0})=\mathbf{0}\). ### Proof of Lemma 8 Let \(\mathcal{S}=\langle S_{1},\dots,S_{2^{k}}\rangle\) be the sequence of cuts in increasing order of their sample values \(X_{S}\). (We add the subsets with \(z_{S}=0\) at the end of the sequence in some fixed but arbitrary order.) However, it is not true that we remove points from \(U\) in this order: we need to reject cuts that do not cross the remaining set \(U\). (We say a set \(A\)_crosses_\(B\) if \(B\cap A\) is a non-empty proper subset of \(B\): i.e., if both \(B\setminus A\) and \(B\cap A\) are non-empty.) Hence, we recast the "last-point" process again as follows. Given any subset of points \(U\subseteq V\): 1. Define \(U^{0}:=U\) and \(\mathcal{S}^{0}=\langle\rangle\). In general, \(U^{r}\) is the set of points remaining after considering sets \(S_{1},\dots,S_{r}\), and let \(\mathcal{S}^{r}\) is a sequence of cuts selected until this point from the sequence \(\mathcal{S}\). 2. We define \(\mathcal{S}^{r+1}\leftarrow\mathcal{S}^{r}\circ\langle S_{r+1}\rangle\) if \(S_{r+1}\) crosses \(U^{r}\), else we define \(\mathcal{S}^{r+1}\leftarrow\mathcal{S}^{r}\). Either way, \(U^{r+1}=U\backslash\bigcup_{S\in\mathcal{S}^{r+1}}S\). Note that \(U^{r}\) and \(\mathcal{S}^{r}\) are both functions of \((U,\mathcal{S})\). Call the cuts in \(\mathcal{S}^{2^{k}}\) to be the _valid_ cuts for set \(U\). Given the same sequence of cuts \(\mathcal{S}\) we may get different subsequences for each subset \(U\) of \(V\). So it is not necessarily true that \((U\backslash T)^{r}=U^{r}\backslash T\), because the set of valid cuts can differ when considering point sets \(U\) and \(U\backslash T\). 
But it turns out that we can still relate \((U\backslash T)^{r}\) and \(U^{r}\backslash T\) in some settings.

**Lemma 36**.: _Given sequence \(\mathcal{S}\) and index \(0\leq r\leq 2^{k}\) such that \(U^{r}\backslash T\neq\emptyset\), we have \((U\backslash T)^{r}=U^{r}\backslash T\)._

Proof of Lemma 36.: We prove this by induction on \(r\). For \(r=0\), we know that \((U\backslash T)^{0}=U\backslash T=U^{0}\backslash T\). Suppose the claim holds for \(r=t\); we want to show it holds for \(r=t+1\). Suppose \(U^{t+1}\backslash T\neq\emptyset\), then since \(U^{t+1}\subseteq U^{t}\) we have \(U^{t}\backslash T\neq\emptyset\), and by the inductive hypothesis we get that \((U\backslash T)^{t}=U^{t}\backslash T\). In particular, we get that \((U\backslash T)^{t}\subseteq U^{t}\). Hence if the new cut \(S_{t+1}\) crosses \((U\backslash T)^{t}\), then it also crosses \(U^{t}\), and therefore \[(U\backslash T)^{t+1}=(U\backslash T)^{t}\setminus S_{t+1}=(U^{t}\backslash T)\setminus S_{t+1}=(U^{t+1}\backslash T).\] So suppose the new cut \(S_{t+1}\) does not cross \((U\backslash T)^{t}\), and thus \((U\backslash T)^{t+1}=(U\backslash T)^{t}\). There are two cases: either \((U\backslash T)^{t}\subseteq S_{t+1}\) or \((U\backslash T)^{t}\cap S_{t+1}=\emptyset\). In the first case, if \(S_{t+1}\) crosses \(U^{t}\), then \(U^{t+1}\backslash T=(U^{t}\setminus S_{t+1})\setminus T=\emptyset\), and hence there is nothing to prove. Else if \(S_{t+1}\) does not cross \(U^{t}\), then \(U^{t+1}=U^{t}\) and also \((U\backslash T)^{t+1}=(U\backslash T)^{t}\), so we are done using the inductive hypothesis. In the second case, \((U\backslash T)^{t}\cap S_{t+1}=(U^{t}\backslash T)\cap S_{t+1}=\emptyset\), so we get that \((U^{t}\cap S_{t+1})\subseteq T\). This means that \(U^{t+1}\backslash T=U^{t}\backslash T\) regardless of whether \(S_{t+1}\) crosses \(U^{t}\). Since \(U^{t}\backslash T=(U\backslash T)^{t}=(U\backslash T)^{t+1}\), we are done.

As discussed above, \(|U^{2^{k}}|=1\), and we define this unique point \(p\in U^{2^{k}}\) to be the "last" point in \(U\), and we call this event "\(p\) is last in \(U\)".

**Lemma 8** (Monotonicity).: _For any sets \(T,V\) such that \(T\subseteq V\), and any point \(\boldsymbol{p}\in V\setminus T\), we have_ \[\text{``}\boldsymbol{p}\text{ is last in }V\text{''}\Rightarrow\text{``}\boldsymbol{p}\text{ is last in }V\backslash T\text{''}.\]

Proof.: Using the definition of the event "\(p\) is last in \(U\)" with \(U=V\), we know that \(U^{2^{k}}=\{p\}\) and since \(p\notin T\), we have \(U^{2^{k}}\backslash T\neq\emptyset\). Using Lemma 36, we can say that \((U\backslash T)^{2^{k}}=\{p\}\).

## Appendix B Proofs from Section 3

**Lemma 10** (Hitting Set Lemma).: _For large enough \(k\), there exist set systems \(([k],\mathcal{S})\) with \(k\) sets of size \(s\) each, such that the minimum hitting set satisfies \(h(s-2-o(1))/k\geq\ln k-O(\ln\ln k)\)._

Proof of Lemma 10.: For some parameter \(p\in(0,\nicefrac{{1}}{{2}})\), consider \(k\) independently chosen sets, each obtained by adding in each element of \([k]\) independently with probability \(p\). The expected size of each such set is \(\bar{s}:=pk\); moreover, each element of \([k]\) should hit a \(p\) fraction of the sets, so hitting all the \(k\) sets should require \(\bar{h}:=\ln_{1-p}(1/k)\approx(1/p)\ln k\) elements, giving \(\bar{s}\bar{h}/k=\ln k\). We now show that with non-zero probability, there does exist a set system with parameters close to these.
Define \(\mathcal{B}_{1}\) to be the bad event that some set \(S_{i}\) has size smaller than \(s:=(1-\varepsilon)pk\), and \(\mathcal{B}_{2}\) to be the bad event that the hitting set has size at most some parameter \(h\). We now show that for suitable choices of \(\varepsilon\) and \(h\), we have \(\Pr[\mathcal{B}_{1}]<\nicefrac{{1}}{{2}}\) and \(\Pr[\mathcal{B}_{2}]\leq\nicefrac{{1}}{{2}}\), which completes the proof. We consider the event \(\mathcal{B}_{2}\) first: a union bound shows that \[\Pr[\mathcal{B}_{2}]\leq\sum_{H:|H|=h}\Pr\left[\forall i\in[k]:S_{i}\cap H \neq\emptyset\right]=\binom{k}{h}(1-(1-p)^{h})^{k}\leq\frac{(2k)^{h}}{2}\cdot e ^{-k(1-p)^{h}}.\] Setting this upper bound to equal \(\nicefrac{{1}}{{2}}\), we get \[\ln h-h\ln(1-p)=\ln\left(\frac{k}{\ln 2k}\right). \tag{38}\] We now use that \(-p-p^{2}\leq\ln(1-p)\leq-p\) for \(p\in[0,\nicefrac{{1}}{{2}}]\) to get \[\ln h+hp\leq\ln\left(\frac{k}{\ln 2k}\right)\leq\ln h+hp(1+p). \tag{39}\] Since \(h\geq 1\), the left-most inequality of (39) gives \(h\leq(1/p)\ln(\frac{k}{\ln 2k})\). However, we want a lower bound on \(h\), so we substitute this into the right-most inequality of (39) to get \[\frac{1}{1+p}\cdot\ln\underbrace{\left(\frac{k}{\ln 2k}-\frac{1}{p}\ln\left( \frac{k}{\ln 2k}\right)\right)}_{(\star)}\leq p\,h=\frac{s\,h}{(1-\varepsilon)k}. \tag{40}\] We can now set \(p:=\frac{2\ln 2k}{k^{1/3}}\) (which ensures that the second term in (\(\star\)) is at most half the first for a large enough \(k\)) and get \[sh/k\geq\frac{1-\varepsilon}{1+p}\cdot\ln\left(\frac{k}{2\ln 2k}\right).\] Now setting \(\varepsilon:=1/k^{1/3}\) and using a Chernoff bound and a union bound, \[\Pr[\mathcal{B}_{1}]<k\cdot e^{-\varepsilon^{2}pk/2}=\frac{1}{2}.\] Taking a union bound over the two bad events, we get that with non-zero probability our sets in \(\mathcal{S}\) are of size \(\approx k^{2/3}\ln k\), the hitting set is of size \(\approx k^{1/3}\), and \(hs/k\geq(1-\frac{2\ln 2k}{k^{2/3}})(\ln k-O(\ln\ln k))\). ## Appendix C Proofs from Section 4 ### Proof of the Stretch-vs.-Separation Claim 19 We prove a lemma about point sets that immediately implies Claim 19. Consider a set \(S\subseteq\mathbb{R}^{d}\) of points, and focus on \(p,q\in S\). Consider some dimension \(i\), and consider the projection of the points onto that dimension (as in the figure). Let \(a_{1},\ldots,a_{\ell}\) be the lengths of intervals into which the line segment joining \(p_{i}\) and \(q_{i}\) is partitioned by projections of other points in \(S\) onto the \(i^{th}\) dimension. Any cut \((i,\theta)\) intersecting the \(j^{th}\) interval splits the centers into two groups with at least \(\min(j,\ell+1-j)\) centers on either side. Define the _stretch_\(s_{i}\) between \(p\) and \(q\) in the \(i^{th}\) dimension, and the _expected separation_\(\mathrm{sep}_{i}\) after choosing a random cut that cuts the \(j^{th}\) interval with probability proportional to \(a_{j}^{2}\) as \[s_{i}:=\frac{\big{(}\sum_{j\in[\ell]}a_{j}\big{)}^{2}}{\sum_{j\in[\ell]}a_{j} ^{2}}\qquad\text{and}\qquad\mathrm{sep}_{i}:=\frac{\sum_{j\in[\ell]}a_{j}^{2} \cdot\min(j,\ell+1-j)}{\sum_{i\in[\ell]}a_{j}^{2}}.\] **Lemma 37**.: \(\mathrm{sep}_{i}\geq\frac{s_{i}}{8(1+\ln(\nicefrac{{(2i|S|)}}{{s_{i}})})}\)_._ Figure 5: projection of points onto an axis Before we prove this, let us generalize this to higher dimensions: **Corollary 38**.: _Consider a set \(S\subseteq\mathbb{R}^{d}\) and two points \(p,q\in S\) having stretch \(s\). 
If we choose a threshold cut \((i,\theta)\) from the distribution \(D_{2}(S)\) and condition on separating \(p,q\), the expected number of points in each side of the cut decreases by at least \(\frac{s}{8\left(1+\ln(\left\langle 2\left|S\right|\right\rangle/s)\right)}\)._ Proof.: The stretch between \(p,q\), and the expected separation conditioned on separating the two centers is \[\overline{s}:=\frac{\sum_{i\in[d]}\big{(}\sum_{j\in[\ell_{i}]}a_{i,j}\big{)}^{ 2}}{\sum_{i\in[d]}\sum_{j\in[\ell_{i}]}a_{i,j}^{2}}\qquad\text{and}\qquad \overline{sep}:=\frac{\sum_{i\in[d]}\big{(}\sum_{j\in[\ell_{i}]}a_{i,j}^{2} \cdot\min(j,\ell_{i}+1-j)\big{)}}{\sum_{i\in[d]}\sum_{j\in[\ell_{i}]}a_{i,j}^{2 }},\] where \(a_{i,j}\) is the width in the partition defined above along dimension \(i\). Define a random variable \(I\in[d]\) on the dimensions, that takes on value \(i\) with probability \[\frac{\sum_{j\in[\ell_{i}]}a_{i,j}^{2}}{\sum_{i\in[d]}\sum_{j\in[\ell_{i}]}a_{ i,j}^{2}}.\] Then we have \(\overline{s}=\mathbb{E}_{I}[s_{I}]\) and \(\overline{\mathrm{sep}}=\mathbb{E}_{I}[\mathrm{sep}_{I}]\). Finally, the function \(h(x)=\frac{x}{1+\ln(\alpha/x))}\) being convex for \(0\leq x\leq\alpha\), we can use Jensen's inequality to get \[\overline{\mathrm{sep}}=\mathbb{E}_{I}[\mathrm{sep}_{I}]\overset{\eqref{eq: Jensen}}{\geq}\mathbb{E}_{I}\left[\frac{s_{I}}{8\left(1+\ln\left(\frac{2 \left|S\right|}{s_{I}}\right)\right)}\right]\geq\frac{\overline{s}}{8\left(1+ \ln\left(\frac{\left|S\right|}{\overline{s}}\right)\right)}.\qed\] Finally, translating to the language of SS4 shows that Claim 19 is just a reformulation of Corollary 38. So it suffices to prove Lemma 37, which we do next. Proof of Lemma 37.: Let us look at the following constrained minimization problem \[\min\mathrm{sep}_{i}=\sum_{j}a_{j}^{2}\cdot\min(j,\ell+1-j) \tag{41}\] \[s.t.\quad\sum_{j}a_{j}=\sqrt{s_{i}}\] \[\sum_{j}a_{j}^{2}=1.\] The Lagrangian dual of the above primal program is \[\mathcal{L}(a,\lambda,\gamma)=\sum_{j}a_{j}^{2}\cdot\min(j,\ell+1-j)-2\lambda \bigg{(}\sum_{j}a_{j}-\sqrt{s_{i}}\bigg{)}+\gamma\bigg{(}\sum_{j}a_{j}^{2}-1 \bigg{)},\] and setting the gradient to zero means the minima for \(a\) occur when \(a_{j}=\frac{\lambda}{\gamma+\min(j,\ell+1-j)}\). (We assume \(\gamma\geq 0\).) Substituting and simplifying gives \[2\lambda\sqrt{s_{i}}-\sum_{j}\frac{\lambda^{2}}{\gamma+\min(j,\ell+1-j)}-\gamma.\] We can maximize over \(\lambda\) which happens when \(\sum_{j}\frac{\lambda}{\gamma+\min(j,\ell+1-j)}=\sqrt{s_{i}}\); substituting gives \[\frac{s_{i}}{\sum_{j}\frac{1}{\gamma+\min(j,\ell+1-j)}}-\gamma\geq\frac{s_{i}} {2\ln(1+\frac{\ell+1}{2\gamma})}-\gamma\geq\frac{1}{2}\bigg{(}\frac{s_{i}}{ \ln(1+\frac{\left|S\right|}{2\gamma})}-2\gamma\bigg{)}.\] It remains to choose \(\gamma\). For convenience, we set the above expression to \(\gamma\); this means \[\frac{s_{i}}{\ln(1+\frac{|S|}{2\gamma})}=4\gamma\qquad\Longleftrightarrow\qquad \frac{2|S|}{s_{i}}=\frac{|S|/2\gamma}{\ln(1+\frac{|S|}{2\gamma})}.\] Using Observation 39 below, we get \(\gamma\geq\frac{s_{i}}{8(1+\ln(\nicefrac{{(2|S|)}}{{s_{i}})})}\). **Observation 39**.: _For any \(x\geq 0\) and \(y\geq 1\) such that \(y=\frac{x}{\ln(1+x)}\), we have \(x\leq 2y(1+\ln y)\)._ Proof.: Since \(x/\ln(1+x)\) is an increasing function, it is sufficient to prove that \[y\leq\frac{2y(1+\ln y)}{\ln(1+2y(1+\ln y))}.\] Finally, taking derivatives shows \(\ln(1+2y(1+\ln y))\leq 2(1+\ln y)\) for \(y\geq 1\). 
## Appendix D Proofs from Section 5 Proof of Lemma 22.: For a point \(\mathbf{p}\in U\), define \[z_{E}(t):=\begin{cases}z_{E}\cdot t&\mathbf{p}\in E\\ z_{E}&\mathbf{p}\notin E.\end{cases} \tag{42}\] Let \(f_{t}(U)\) and \(\ell_{t}(\mathbf{p})\) be the function value of points in \(U\) and norm of \(\mathbf{p}\) at time \(t\) when the embedding is changing according to the trajectory given by Equation (42). We first claim that \(\ell_{0}(\mathbf{p})=0\) and \(f_{0}(U)=0\). Indeed, We know that \(\ell_{t}(\mathbf{p})=\sum_{S:\mathbf{p}\in S}z_{S}(t)=\sum_{S:\mathbf{p}\in S}z_{S}\cdot t= t\cdot\ell(\mathbf{p})\), which implies that \(\ell_{0}(\mathbf{p})=0\). The second part is proved by induction on \(|U|\): If \(|U|=1\), then \(f_{0}(U)=\ell_{0}(\mathbf{p})=0\). If \(|U|\geq 2\) and \(z_{E}(0)=0\) for all \(E\in\mathcal{C}_{U}\), then \(f_{0}(U)=0\) by definition. Otherwise, using Equation (17), we can write \[f_{0}(U)=\frac{\sum_{E\in\mathcal{C}_{U}}z_{E}(0)\cdot f_{0}(U\backslash U_{E} )}{\sum_{E\in\mathcal{C}_{U}}z_{E}(0)}.\] In the numerator of \(f_{0}(U)\), either \(z_{E}(0)=0\) when \(\mathbf{p}\in E\), or \(f_{0}(U\backslash U_{E})=0\) when \(\mathbf{p}\in U\backslash U_{E}\) by inductive hypothesis which concludes the proof. Now using the chain rule and the assumed bound on the derivative, \[\frac{\mathrm{d}f_{t}(U)}{\mathrm{d}t}=\sum_{E\subseteq[k]} \frac{\partial f(U)}{\partial z_{E}}\Big{|}_{\mathbf{z}=\mathbf{z}(t)}\cdot\frac{ \mathrm{d}z_{E}(t)}{\mathrm{d}t} =\sum_{E:\mathbf{p}\in E}z_{E}\cdot\frac{\partial f(U)}{\partial z_{E} }\Big{|}_{\mathbf{z}=\mathbf{z}(t)}\] \[\leq\sum_{E:\mathbf{p}\in E}z_{E}\cdot\beta_{k-1}=\beta_{k-1}\cdot \ell(\mathbf{p}).\] Integrating gives \(f_{1}(U)-f_{0}(U)=f(U)-f_{0}(U)=f(U)\leq\beta_{k-1}\cdot\ell(\mathbf{p})\). Proof of Lemma 23.: The proof is by induction on \(|S|\). The base case is when \(|S|=1\). If \(S=\{r\}\), then we have \(\frac{\partial f(S)}{\partial z_{T}}=\frac{\partial\ell(r)}{\partial z_{T}}\). We know that \(\ell(r)\) can be written as \[\ell(r)=\sum_{E\ni r}z_{E}. \tag{43}\] The derivative \(\frac{\partial\ell(r)}{\partial z_{T}}\) is equal to \(1\) if \(z_{T}\) appears as a term in the expansion of \(\ell(r)\) as in Equation (43) and is equal to \(0\) otherwise. In the case of statement (i), since \(T\in\mathcal{C}_{S}^{\ell}\), we have \(S\subseteq T\implies r\in T\) this concludes the base case for statement (i). Similarly, if statement (ii) holds, \(z_{T}\) does not appear as a term in the expansion of \(\ell(r)\) and hence the partial derivative of \(f(S)=\ell(\mu_{r})\) with respect to \(z_{T}\) is zero. For \(|S|\geq 2\), we have \(\mathbb{1}[T\in\mathcal{C}_{S}]=0\) in both the cases of statements (i) and (ii). Using this in Equation (18) gives \[\frac{\partial f(S)}{\partial z_{T}}=\frac{\sum_{E\in\mathcal{C}_{S}}z_{E} \cdot\frac{\partial f(S\backslash E)}{\partial z_{T}}}{\sum_{E\in\mathcal{C}_{ S}}z_{E}}. \tag{44}\] For the inductive step, since \(S\subseteq T\) in statement (i), we have \((S\backslash E)\subseteq T\). By the inductive hypothesis, the recursive derivative terms \(\frac{\partial f(S\backslash E)}{\partial z_{T}}\) are equal to \(1\). Using Equation (44), we are done. The inductive step for the case of statement (ii) follows similarly where all terms \(\frac{\partial f(S\backslash E)}{\partial z_{T}}\) are equal to \(0\). 
Finally, using the fact that \(f(S)\) is always non-negative for any \(z\), and the fact that \(\frac{\partial f(S)}{\partial z_{E}}=1\) for any \(S\subseteq E\), we have \(f(S)\geq\sum_{E\supseteq S}z_{E}\).

Proof of Lemma 25.: The proof is by induction on \(|S|\). Observe that for \(T\notin\mathcal{C}_{S}\), either \(T\supseteq S\) or \(T\cap S=\emptyset\), and then \(\frac{\hat{\partial}f(S)}{\hat{\partial}z_{T}}=\frac{\partial f(S)}{\partial z_{T}}\) by Definition 24 and Lemma 23. The fact that the pseudo-derivative is non-negative in this case is immediate from the Definition 24. For \(|S|=1\), since \(T\notin\mathcal{C}_{S}\), we are done. If \(|S|\geq 2\) and \(T\in\mathcal{C}_{S}\), the inductive hypothesis implies that \(\frac{\hat{\partial}f(S\backslash E)}{\hat{\partial}z_{T}}\geq\max\left(\frac{\partial f(S\backslash E)}{\partial z_{T}},0\right)\). It remains to prove that \[\max\left(f(S\setminus T)-f(S),0\right)\leq f(S\setminus T)-\sum_{E\supseteq S}z_{E}.\] Using Lemma 23, we get \(f(S\setminus T)-f(S)\leq f(S\setminus T)-\sum_{E\supseteq S}z_{E}\). The other inequality \(\sum_{E\supseteq S}z_{E}\leq f(S\setminus T)\) follows from the below argument, again using statement (iii) of Lemma 23: \[f(S\setminus T)\geq\sum_{E\supseteq S\setminus T}z_{E}\geq\sum_{E\supseteq S}z_{E}.\] This concludes the proof of the lemma.
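As an aside, the cut-metric decomposition used earlier in Appendix A ("Going from \(\ell_{1}\) Metrics to Cut Metrics") can be sanity-checked numerically. The sketch below is illustrative only: it builds the cut weights with the elementary per-coordinate construction (rather than the sampling formulation in the text) on a small random point set, and verifies that \(\|\mathbf{p}-\mathbf{q}\|_{1}\) equals the total weight of the cuts separating \(\mathbf{p}\) and \(\mathbf{q}\); the point set and tolerance are arbitrary choices.

```python
# Numerical check: the l1 metric on a finite point set is a non-negative
# combination of cut metrics. Per dimension, every interval between two
# adjacent coordinate values defines a cut whose weight is the interval length.
import itertools
import random

def cut_decomposition(points):
    """Return a dict mapping frozenset(point indices on one side) -> weight z_S >= 0."""
    d = len(points[0])
    z = {}
    for i in range(d):
        values = sorted(set(p[i] for p in points))
        for lo, hi in zip(values, values[1:]):
            theta = (lo + hi) / 2.0                      # threshold strictly between adjacent values
            side = frozenset(j for j, p in enumerate(points) if p[i] > theta)
            z[side] = z.get(side, 0.0) + (hi - lo)       # weight = length of the threshold interval
    return z

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

random.seed(0)
pts = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(6)]
z = cut_decomposition(pts)

# ||p - q||_1 should equal the total weight of cuts separating p and q.
for a, b in itertools.combinations(range(len(pts)), 2):
    separating = sum(w for S, w in z.items() if (a in S) != (b in S))
    assert abs(separating - l1(pts[a], pts[b])) < 1e-9
print("l1 distances match the cut-metric decomposition for all pairs")
```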
2304.06470
Qualitative Failures of Image Generation Models and Their Application in Detecting Deepfakes
The ability of image and video generation models to create photorealistic images has reached unprecedented heights, making it difficult to distinguish between real and fake images in many cases. However, despite this progress, a gap remains between the quality of generated images and those found in the real world. To address this, we have reviewed a vast body of literature from both academic publications and social media to identify qualitative shortcomings in image generation models, which we have classified into five categories. By understanding these failures, we can identify areas where these models need improvement, as well as develop strategies for detecting deep fakes. The prevalence of deep fakes in today's society is a serious concern, and our findings can help mitigate their negative impact.
Ali Borji
2023-03-29T15:26:44Z
http://arxiv.org/abs/2304.06470v6
# Qualitative Failures of Image Generation Models ###### Abstract The ability of image and video generation models to create photorealistic images has reached unprecedented heights, making it difficult to distinguish between real and fake images in many cases. However, despite this progress, a gap remains between the quality of generated images and those found in the real world. To address this, we have reviewed a vast body of literature from both academic publications and social media to identify qualitative shortcomings in image generation models, which we have classified into five categories. By understanding these failures, we can identify areas where these models need improvement, as well as develop strategies for detecting deep fakes. The prevalence of deep fakes in today's society is a serious concern, and our findings can help mitigate their negative impact. In order to support research in this field, a collection of instances where models have failed is made available at here. _Keywords--_ Generative Models, Image and Video Generation, Qualitative Failures, Deepfakes, Image Forensics, Object and Scene Recognition, Neural Networks, Deep Learning ## 1 Introduction Generated images, also known as synthetic images, are created by machine learning algorithms or other software programs, while real images are captured by cameras or other imaging devices. Generated images are not real-world representations of a scene or object, but rather computer-generated approximations. As such, they lack the authenticity and realism of real images. Despite the abundance of anecdotal evidence shared on social media regarding the weaknesses of image generation models, there has yet to be a comprehensive and systematic analysis of these failures. Often, the examples shared by people are selectively chosen to showcase instances in which the models perform well, which may lead to a biased perception of their capabilities, and an overestimation of their effectiveness. While there have been quantitative studies aimed at evaluating and comparing generative models [4, 6], such as the use of metrics like FID [14], these measures can be difficult to interpret and are usually calculated over large datasets, making them unsuitable for determining the authenticity of individual images. Quantitative measures for detecting deep fakes do exist [23], but they are not as easily accessible to the general public as qualitative measures, which are simpler to carry out. As the quality of generated images continues to improve, it is crucial to conduct more in-depth and precise analyses. Thus far, people have been amazed by the ability of synthesized images to approximate natural scenes. When Photoshop was introduced, significant efforts were made to identify manipulated images, and a similar approach is needed for generated images today. It would be beneficial to compile a set of indicators and other resources to aid in detecting deep fakes. We present a collection of indicators that can be examined in a single image to determine whether it is genuine or manipulated. Overall, we offer five classes of these indicators including _Human and Animal Body Parts_, _Geometry_, _Physics_, _Semantics and Logic_, as well as _Text, Noise, and Details_, for both portraits and natural landscapes. The advantage of utilizing qualitative cues is that they are easily accessible and can be utilized by anyone, potentially serving as the initial step in detecting deep fakes. 
Generated images can appear realistic when viewed from a distance or at high resolutions, making it difficult to discern them from actual photographs. However, at lower resolutions, nearly all generated images lack distinguishable characteristics that set them apart from real photographs. To illustrate, refer to Figure 1, which depicts a painting by Camille Pissarro featuring intricate details. While the overall image may seem satisfactory, closer inspection reveals several missing details such as distorted facial features. This study has a dual purpose. Firstly, it aims to explore the differences between generated images and real-world images. Therefore, this research complements studies that propose quantitative approaches for evaluating generative models. Secondly, it aims to examine qualitative methods that can be employed to identify deepfakes and train individuals to become proficient in this task, with the added benefit of systematically organizing this knowledge.

Figure 1: The Fishmarket, Dieppe, 1902 - Camille Pissarro. When observed more closely, it becomes apparent that the faces in the image lack clarity and numerous details are either incorrect or absent, similar to fake images. Although such images may appear authentic at first glance, scrutinizing them thoroughly is crucial to avoid overlooking errors. It is advisable to conduct a detailed examination of each object within the image by zooming in and analyzing its shape, features, location, and interaction with other objects. This approach allows for a more accurate assessment of the image's authenticity and being free from errors.

## 2 Related work

### Quantitative and Qualitative Approaches to Evaluate Generative Models

**Quantitative approaches** have emerged as a vital tool to evaluate the performance of generative models. These methods rely on quantitative measures to assess how well a model is able to generate realistic data. One commonly used metric is the Inception Score [28], which evaluates the diversity and quality of generated images based on the classification accuracy of a pre-trained classifier. Another popular approach is the Frechet Inception Distance [14], which uses feature statistics to compare the distribution of generated data with that of real data. Moreover, other metrics such as precision and recall [27] can be used to evaluate the quality of generated samples in specific domains such as vision, text and audio. These quantitative approaches provide a rigorous and objective way to measure the effectiveness of generative models, helping researchers to improve their models and develop more advanced generative techniques. Recently, two metrics have gained popularity, namely the CLIP score and the CLIP directional similarity. The CLIP score evaluates the coherence of image and caption pairs by measuring their compatibility. A higher CLIP score indicates a greater degree of compatibility, which can also be interpreted as the semantic similarity between the image and the caption. Moreover, studies have shown that the CLIP score has a strong correlation with human judgement. On the other hand, the CLIP directional similarity is used for generating images based on text prompts while being conditioned on an input image. It assesses the consistency between the differences in the two images (in CLIP space) and the differences in their respective captions. To obtain a thorough analysis of quantitative metrics for evaluating generative models, please refer to the following references [4, 6, 30, 33].
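To make the CLIP score concrete, the following is a minimal sketch of how an image-caption compatibility score of this kind can be computed with the open-source `transformers` library. The checkpoint name, the file path, and the `100 * max(cos, 0)` scaling are common conventions used here for illustration, not the exact protocol of the works cited above.

```python
# Minimal sketch of a CLIP-score style image-text compatibility check.
# Requires the Hugging Face `transformers`, `torch`, and `Pillow` packages.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, caption: str) -> float:
    """Return a CLIP-score style compatibility value for one image-caption pair."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
    # Cosine similarity between L2-normalised embeddings, clipped at zero.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    cos = (image_emb * text_emb).sum().item()
    return 100.0 * max(cos, 0.0)

# Example with a hypothetical file: higher values indicate better image-text alignment.
# print(clip_score("generated_sample.png", "A blue colored dog."))
```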
**Qualitative assessment** of generated images entails a human evaluation. The quality of these images is evaluated on various criteria, such as compositionality, image-text alignment, and spatial relations. DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking that were introduced by Imagen [26] and Parti [32], respectively. These benchmarks allow for side-by-side human evaluation of different image generation models. PartiPrompts is a rich set of over 1600 prompts in English. It can be used to measure model capabilities across various categories and challenge aspects such as "Basic", "Complex", "Writing & Symbols", etc. DrawBench comprises a collection of 200 prompts that are divided into 11 categories (Table 1), which aim to assess various capabilities of models. These prompts test a model's ability to accurately render different attributes, such as colors, object counts, spatial relationships, text in the scene, and unusual object interactions. Additionally, the categories include complex prompts that incorporate lengthy, intricate textual descriptions, as well as uncommon words and misspelled prompts. DrawBench was used to directly compare different models, where human evaluators were presented with two sets of images, each consisting of eight samples, one from Model A and the other from Model B. Evaluators were then asked to compare Model A and Model B based on sample fidelity and image-text alignment. Large-scale datasets have also been used in studies that focus on the qualitative evaluation of generated images (_e.g._[2]). The assessment of models through qualitative methods can be susceptible to errors, potentially leading to an incorrect decision. Conversely, quantitative metrics may not always align with image quality. Therefore, the use of both qualitative and quantitative evaluations is typically recommended to obtain a more robust indication when selecting one model over another.

### Deepfake Detection Methods

Deepfake detection has become an essential area of research due to the increasing sophistication of deep learning algorithms that can generate highly realistic fake images, videos, and audio. As a result, numerous deepfake detection methods have been proposed in recent years, ranging from traditional image and video forensic techniques to advanced deep learning-based approaches. These methods can be broadly categorized into two groups: static and dynamic analysis. Static analysis methods use handcrafted features to distinguish between real and fake images. Examples of static analysis methods include reverse image search, which compares the content of an image to a large database of known images (_e.g._[8]), and error level analysis, which detects inconsistencies in the compression levels of an image [16]. Another method is the use of noise patterns and artifacts, which are common in images and videos captured by digital cameras and can be used to identify forgeries. For instance, the sensor pattern noise in images captured by digital cameras can be used to authenticate images and detect tampering attempts [19]. In addition, traditional forensic techniques such as shadow analysis, lighting analysis, and perspective analysis can also be used to identify inconsistencies in the shadows, lighting, and perspectives of images. On the other hand, dynamic analysis methods rely on deep neural networks to analyze the temporal features of video and audio data to detect deepfakes.
These methods aim to exploit the fact that deepfakes lack the natural temporal variations and correlations that are present in real videos and audios. For instance, the use of convolutional neural networks (CNNs) has been proposed to detect deepfakes by analyzing the spatial features of images and videos (_e.g._[1, 21, 22, 10]). Similarly, recurrent neural networks (RNNs) have been proposed to analyze the temporal features of video and audio data to detect deepfakes [13]. Moreover, Generative Adversarial Networks (GANs) [12] have been used to generate fake images and videos, but can also be used to detect them by identifying inconsistencies in the generator's output [18]. \begin{table} \begin{tabular}{l c c} \hline \hline **Category** & **Description** & **Examples** \\ \hline \multirow{2}{*}{Colors} & Ability to generate objects & “A blue colored dog.” \\ & with specified colors. & “A black apple and a green backpack.” \\ \hline \multirow{2}{*}{Counting} & Ability to generate specified & “Three cats and one dog sitting on the grass.” \\ & number of objects. & “Five cars on the street.” \\ \hline \multirow{2}{*}{Conflicting} & Ability to generate conflicting & “A horse riding an astronaut.” \\ & interactions b/w objects. & “A panda making latte art.” \\ \hline \multirow{2}{*}{DALL-E [24]} & Subset of challenging prompts & “A triangular purple flower pot.” \\ & from [24]. & “A cross-section view of a brain.” \\ \hline \multirow{2}{*}{Description} & Ability to understand complex and long & “A small vessel propelled on water by oars, sails, or an engine.” \\ & text prompts describing objects. & “A mechanical or electrical device for measuring time.” \\ \hline \multirow{2}{*}{Marcus et al. [20]} & Set of challenging prompts & “A pear cut into seven pieces arranged in a ring.” \\ & from [20]. & “Paying for a quarter-sized pizza with a pizza-sized quarter.” \\ \hline \multirow{2}{*}{Misspellings} & Ability to understand & “Rberfraigerator.” \\ & misspelled prompts. & “Tennis rpacket.” \\ \hline \multirow{2}{*}{Positional} & Ability to generate objects with & “A car on the left of a bus.” \\ & specified spatial positioning. & “A stop sign on the right of a refrigerator.” \\ \hline \multirow{2}{*}{Rare Words} & Ability to understand rare words1. & “Artophagous.” \\ & & “Octthorpe.” \\ \hline \multirow{2}{*}{Reddit} & Set of challenging prompts from & “A yellow and black bus cruising through the rainforest.” \\ & DALLE-2 Reddit2. & “A medieval painting of the wifi not working.” \\ \hline \multirow{2}{*}{Text} & Ability to generate quoted text. & “A storefront with ’Deep Learning’ written on it.” \\ & & “A sign that says ’Text to Image’.” \\ \hline \hline \end{tabular} \end{table} Table 1: Description and examples of the 11 categories in DrawBench, compiled from [26]. Overall, deepfake detection is a challenging problem due to the rapid evolution of deep learning algorithms that can generate more realistic fake content [9]. Thus, a combination of static and dynamic analysis approaches is necessary to achieve effective detection of deepfakes. Additionally, extensive evaluation and comparison of deepfake detection methods are essential to identify their effectiveness and limitations and to guide future research in this area. To read more about this subject, you may want to consult [11, 25, 23, 29] which offer comprehensive reviews on the topic. 
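As a concrete illustration of the static analysis family discussed above, the sketch below implements a basic form of error level analysis [16] with Pillow: the image is re-encoded as JPEG at a fixed quality and the amplified recompression error is returned as an image. The file name, quality factor, and amplification are illustrative assumptions, and a bright region in the resulting map is only a hint that warrants closer inspection, not proof of manipulation.

```python
# Minimal sketch of error level analysis (ELA): recompress the image as JPEG
# and look at where the recompression error concentrates. Regions that were
# pasted in or synthesised often show a different error level than their
# surroundings.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90, scale: float = 15.0) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-encode at a fixed JPEG quality and decode again.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")

    # Per-pixel absolute difference, amplified so the error map is visible.
    diff = ImageChops.difference(original, recompressed)
    return diff.point(lambda value: min(255, int(value * scale)))

# Example with a hypothetical file: save the error map next to the input image.
# error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```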
## 3 Qualitative Failures of Image Generation Models

We compiled a list of qualitative failures by examining images from various sources including social media websites such as Twitter, LinkedIn, Discord, and Reddit3, as well as images from the DiffusionDB dataset [31]. These images have been generated by notable generative models such as DALL-E 2, Midjourney, StableDiffusion, and Bing Image Creator. Additionally, we analyzed images from websites such as thisxdoesnotexist.com, whichfaceisreal.com, the Adobe Stock library, and openart.ai. We made sure that the text prompts used to generate images were not intentionally seeking peculiar images. Finally, we manually reviewed the images and filtered out the ones without problems.

Footnote 3: A few of the images used in this work were obtained with the consent of a Reddit user named Kronzky.

### Human and Animal Body Parts

**Faces.** Since the initial triumphs of GANs, the generation of fake faces has been the most extensively scrutinized category for deep generative models [5]. Faces are comparatively simpler to generate than complex scenes because they are easier to calibrate. In the past, the first generated faces were effortlessly recognizable by humans. However, with the advancement of technology such as StyleGAN [15], the latest examples of generated faces are more challenging to distinguish. Figure 2 illustrates a few faces that were generated with issues. You can evaluate your ability to distinguish between real and computer-generated faces by taking a quiz at whichfaceisreal.com.

Figure 2: Examples of poorly generated faces.

**Image Background.** When creating deepfakes, issues with the background of the images may arise, particularly in cases where the face is in focus while the surrounding clues are incorrect. The neural network used to generate the images focuses mainly on the face and may not pay as much attention to the surrounding details. This can lead to strange companions or chaotic forms in the background. Additionally, the objects or people next to the primary person in the image may appear unnatural or "mutant". Figure 3 displays several instances of failures as examples.

Figure 3: Fake images can be exposed through background cues.

**Eyes and Gaze.** Deep generative models have largely overcome issues with early fake images such as cross-eyed, uncentered or different sized pupils, different colored irises, and non-round pupils, as shown in examples in Figure 4. Early GANs used to produce pupils that were not circular or elliptical like those found in real human eyes, which can be a clue that an image is fake. Reflections in the eyes can also be used to identify fake images. Other clues include irregularities in pupil shape, although this is not always indicative of a fake image since some diseases can cause such irregularities. See the example shown in the bottom-right panel in Figure 4. Unnatural gaze direction or unrealistic eye movements may be observed in deepfakes, which can indicate that a machine learning algorithm generated or manipulated the image. Please see Figure 5.

Figure 4: Here are some instances of eyes that were generated poorly. The eye in the bottom right corner is an actual photograph of a patient who has an irregularly shaped pupil. You can refer to this link for more details. This case represents a unique manifestation of a condition known as "cat's eye Adie-like pupil," which is considered a warning sign for ICE syndrome.

Figure 5: Here are some examples of images where the gaze direction is problematic. In these images, one eye appears to be looking in a different direction compared to the other, similar to a medical condition called Strabismus in the real world. You can check out [https://en.wikipedia.org/wiki/Strabismus](https://en.wikipedia.org/wiki/Strabismus) for additional information on this topic.

**Eyeglasses.** Algorithms can struggle to create realistic eyeglasses, with frame structures often differing between the left and right sides, or with one side having an ornament and the other not. Sometimes the frame can appear crooked or jagged. The glasses may partially disappear or blend with the head, and they can be asymmetrical. The view through the lens may also be heavily distorted or illogical, and nose pads may be missing or distorted. Please see Figure 6 for some examples.

**Teeth.** Rendering teeth is a difficult task for AI, which often results in odd or asymmetric teeth. When someone's teeth appear unusual or crooked, there's a good chance that the image was generated by AI. Semi-regular repeating details like teeth are difficult for models to generate, causing misaligned or distorted teeth. This problem has also been observed in other domains, such as texture synthesis with bricks. Occasionally, an image may display an excessive number of teeth or teeth with abnormal shapes and colors, and in some instances, there may be an insufficient number of incisors. Please see Figure 7 for some examples.

**Ear and Earrings.** Ears in AI-generated images may exhibit discrepancies such as differences in size, one ear appearing higher or bigger than the other, or missing or partially missing earrings. Additionally, earrings may be randomly shaped or not match visually. If earrings are asymmetrical or have different features such as one having an attached earlobe while the other doesn't or one being longer than the other, it's likely that the image has been generated by AI. Examples of poorly generated ears and earrings are shown in Figure 8.

**Hair and Whiskers.** The style of hair can differ greatly, which also means there is a lot of intricate detail to capture. This makes it one of the most challenging aspects for a model to render accurately. The generated images may contain stray strands of hair in unusual places, or the hair may appear too straight or streaked. Occasionally, the image may resemble acrylic smudges from a palette knife or brush. Another issue may be a strange glow or halo around the hair. In some cases, the model may bunch hair in clumps or create random wisps around the shoulders, while also including thick stray hairs on the forehead. Please see Figure 9.

**Skin.** Deepfakes can be deficient in delicate details and subtleties found in genuine images, like skin texture, pores, or fine lines on someone's face. The skin tone in deepfakes may appear unnatural or inconsistent, such as a person's face appearing too pale or too red. Additionally, deepfakes may lack the presence of noise or grain which exists in real images, giving a sense of texture and realism. Without the presence of noise or grain, deepfake images may seem excessively clean or artificial. Some example failures are shown in Figure 10.

**Limbs, Hands, and Fingers.** The models used for generating deepfakes often fall short when it comes to accurately depicting the intricate details of human extremities.
For instance, hands may randomly duplicate, fingers can merge together or there may be too many or too few of them, and third legs may unexpectedly appear while existing limbs may disappear without a trace. Furthermore, limbs may be positioned in unrealistic or impossible poses, or there may be an excess number of them. As a result, deepfakes may exhibit unnatural body language, such as unrealistic gestures or postures that are out of place. See Figs 11 and 12.

**Clothing.** Generative models may produce distorted clothing with various issues, such as asymmetrical, peculiar, or illogical textures or components such as zippers or collars merging with the skin, and textures abruptly changing or ending. Please refer to Figure 13 for some such failures.

Figure 9: Examples of poorly generated hair.

Figure 10: Examples of poorly generated skin, absolutely perfect skin with no pores.

Figure 11: Examples of images with poorly generated limbs and distorted body.

Figure 12: Issues with AI-generated fingers.

### Geometry

Deepfakes may exhibit anomalous or atypical image geometry, with objects appearing to be of an unusual shape or size, in comparison to their expected proportions.

**Straight Lines and Edges.** AI-generated images may lack the straight lines, seams, and connections found in real-world objects, resulting in wavy, misaligned, and jumpy renderings (_e.g._ in tiles). Deepfakes can also exhibit inconsistent or unnatural image edges, which refer to the boundaries between different parts of the image. Further, surfaces, which are typically straight, may look somewhat uneven in generated images. Some sample failures are shown in Figure 14.

**Perspective.** Models lack the ability to understand the 3D world, which results in physically impossible situations when objects cross different planes in a scene. These errors are difficult to detect as our brain often auto-corrects them, requiring a conscious investigation of each angle of the object to identify inconsistencies. Deepfakes can display an unnatural or distorted perspective, where a person's body appears stretched or compressed unrealistically. Deepfakes may also have inconsistent or unrealistic camera angles, where a person's face appears to be viewed from an impossible angle or perspective. Some example failures are shown in Figure 15.

**Symmetry.** Due to difficulty managing long-distance dependencies in images, symmetry (reflection, radial, translation, etc.) can be challenging for models. For instance, in generated images, eyes may appear heterochromatic and cross-eyed, unlike in real life where they tend to point in the same direction and have the same color. Additionally, asymmetry may appear in facial hair, eyeglasses, and the types of collar or fabric used on the left and right sides of clothing. Models may face challenges in maintaining symmetry not only in faces but also in other objects and scenes. For instance, two shoes in a pair or wings in an airplane might not be exactly the same. This is a type of reasoning glitch where the model cannot understand that certain elements should be symmetrical. Some example failures are shown in Figures 16 and 17.

**Relative Size.** Relative size is a visual perceptual cue that helps us understand the size of objects in relation to one another. It is a powerful cue because it allows us to estimate the size of objects even when we do not have any absolute size reference in the scene.
Models, however, fall short in synthesizing objects with sizes proportional to their size in the real world. Some example failures are shown in Figure 18.

**Other Geometry.** Deepfakes exhibit various geometrical anomalies that may reveal their artificiality. For instance, their depth cues can be inconsistent or unnatural, causing the foreground or background to seem blurry or devoid of detail. Moreover, deepfakes often lack parallax, which is the apparent displacement of objects when viewed from different perspectives, resulting in a flat or two-dimensional appearance. Additionally, incorrect or inconsistent motion blur may suggest that certain parts of the image have been manipulated. The absence of occlusion, i.e., the overlapping of objects in the scene, is another telltale sign of deepfakes, as it can make the image look flat or unrealistic. Lastly, deepfakes may display improper image alignment, with objects seeming misaligned or out of place.

Figure 13: Generating realistic clothing is a challenge for generative models.

Figure 14: Examples of lines, edges, and surfaces that are generated poorly by AI.

Figure 15: Examples of generated images that exhibit issues with perspective.

Figure 16: Examples of generated images that display inconsistent symmetry.

### Physics

Generated images that violate physics rules exhibit various cues that can give them away as unrealistic or physically impossible. These cues include objects appearing to float in mid-air without support, shadows that are inconsistent with the light source, reflections or refractions that break the laws of optics, objects passing through each other without interaction, and incorrect physics-based simulations such as fluids or cloth that behave in impossible ways. By identifying these cues, it is possible to identify and distinguish realistic images from those that violate the rules of physics.

**Reflection.** An effective technique for detecting generated images is to examine the lighting and how it interacts with the elements within the image, and how it causes reflections and shadows. Deepfakes can exhibit artificial reflections that are inconsistent with the natural lighting and environment, such as those in glasses, mirrors, or pupils. The root cause of this issue is that deep generative models lack a proper understanding of reflections. While these models may recognize that an image contains a reflection and typically involves two people (one facing the camera and the other with their back turned), they do not comprehend that the two individuals are, in fact, the same person. Deepfakes may display other lighting effects that do not match real-world environments, such as lens flares, lens distortion, and chromatic aberration, which arise from the physical properties of camera lenses and the way light is refracted through them. Some example failures are shown in Figure 19.

Figure 17: Additional examples of generated images that exhibit inconsistent symmetry.

Figure 18: Examples of images where there is a violation of relative size.

**Shadow.** Deepfakes might not include shadows, which are typically found in real images and contribute to the impression of depth and authenticity. It is important to observe objects without shadows and those with highlights that appear to originate from a different direction than the rest of the image.
Additionally, if the photo was taken outdoors in natural light during the afternoon, the setting sun will produce longer shadows than it would at midday, which can be easily identified by scrutinizing the shadow's length. However, this method may not be as precise in artificial lighting conditions. Finally, if there are multiple objects or people within the scene, their shadows should be consistent with each other. Some generated images with inconsistent shadows are shown in Figure 20. **Objects without Support.** When an object or material appears to be floating in mid-air without any visible means of support, it gives the impression that the object is defying gravity or the laws of physics. In reality, all objects are subject to the force of gravity unless they are held up by some other force. When an object appears to be floating, it could be a result of an incorrect rendering or an error in the physics simulation that fails to account for the gravitational force. This type of inconsistency can cause a generated image to look unrealistic or implausible. Some example failures are shown in Figure 21. Figure 19: Generated images with inconsistent reflections. ### Semantics and Logic Images produced by generative models may lack the semantic meaning or contextual relationships present in authentic images. These models tend to focus on the nouns in a given prompt and construct a plausible scene based on them, potentially failing to capture the true relationships between objects. It is crucial to bear in mind that AI lacks an inherent understanding of the world and can only process information in terms of the patterns it has learned, so relationships that require real-world knowledge may be beyond its grasp, resulting in potential difficulties in these areas. For example, when tasked with generating an image of the solar system drawn to scale, a generative model may struggle to maintain the correct planetary order, as demonstrated here. **Spatial Reasoning.** Natural scenes are complex and contain a wide range of spatial relationships among objects, such as occlusions, relative distances, and orientations. Capturing these relationships requires the model to have a nuanced understanding of the scene and the objects within it, which can be difficult to achieve without more explicit guidance. Furthermore, some image generation models rely solely on pixel-level reconstruction, without explicitly modeling the underlying semantics or spatial relationships. In these cases, the model may generate images that are visually realistic but lack coherent semantic meaning or accurate spatial relationships among objects. Please see Figure 22 for some examples. **Context and Scene Composition.** Deepfakes can be detected through various inconsistencies such as the background or surroundings not matching the real-world environment, cardinality/counting, missing contextual details, unnatural object placement, and inconsistent image composition. These irregularities may include inconsistencies in the order of objects, missing objects or features, objects appearing in the wrong location or orientation, or unnatural arrangement and placement of objects in the image. Please see Figure 23. **Other Semantics.** Figure 24 depicts several additional generated images that exhibit semantic issues. For instance, one image features a person with his head and feet pointing in opposite directions, while another displays a fragmented pizza that does not cohere into a single entity. In yet another image, a blank painting hangs on the wall, creating a confusing and nonsensical composition. 
Figure 20: Generated images with inconsistent shadows. ### Text, Noise, and Details **Text.** Generating text and logos in images requires the generative model to understand the relationships between the text and the visual content of the image. This can be challenging because the text and image data have different structures and are not directly aligned with each other. Additionally, text can appear in various locations and orientations within an image, and the context of the text may change depending on the surrounding visual content. Furthermore, generating text that accurately describes the visual content of an image requires a deep understanding of the semantics and context of both the text and the image. While some progress has been made in recent years with the development of methods such as image captioning, it is still an active area of research to develop generative models that can effectively generate text in images. Figure 25 displays instances where the text is incomprehensible. In such cases, the letters appear scrambled or duplicated, and the words are spelled incorrectly. **Noise, Color, and Blur Artifacts.** Digital distortion in the form of pixelation or imperfect coloring can be present in generated images, particularly around the image edges. Monochrome areas may display semi-regular noise with horizontal or vertical banding, potentially due to the network attempting to replicate cloth textures. Older GANs tend to produce a more noticeable checkerboard noise pattern. Other telltale signs of deepfakes include inconsistencies in color or tone, oversaturation or undersaturation of colors, and unnatural image noise patterns. See the top row in Figure 26. Fluorescent bleed, where bright colors bleed onto the hair or face of a person in the image from the background, is also a potential indicator of a deepfake (the bottom row in Figure 26). The human attention system is naturally adept at quickly recognizing these patterns, making them useful tools for identifying generated images. **Images with Cartoonish Look.** AI-generated images may look cartoonish or may look like a painting. This could be due to several reasons such as inconsistent or unnatural image texture, lack of depth, or focus. Some examples are shown in Figure 27. Figure 21: Generated images where some objects lack visible physical support. Some objects are suspended in mid-air without any explanation or justification. This lack of physical support could result from a failure to properly simulate or model the forces acting on the objects in the scene. **Fine-grained Details.** AI-generated images may contain technical details that are either incorrect or appear as random shapes. For example, furniture legs can be particularly challenging for AI to accurately render, resulting in incorrect numbers of legs or physically impossible configurations. These issues can be attributed to the inherent difficulty of modeling complex objects and the limitations of the AI's understanding of the real world. Some example failures are shown in Figure 28. ## 4 Discussion ### Other Cues In addition to the cues discussed above, there are several other indicators that can be used to identify deepfakes. One such method involves examining the metadata of an image or conducting a reverse Google search to verify its authenticity. Additionally, common sense can be applied to detect images that are likely to be generated, such as a shark swimming down a street or aliens eating sushi in a Chinese restaurant. 
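To make the metadata check concrete, the snippet below (an illustrative sketch, not part of the original analysis; the file name is hypothetical) uses the Pillow library to list EXIF tags. Real photographs typically carry camera tags such as Make, Model, and DateTimeOriginal, while generated or heavily re-encoded images often carry none, so missing metadata is only a weak hint rather than proof.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def summarize_exif(path):
    """Print the EXIF tags of an image, if any are present.

    Camera photographs usually carry tags such as Make, Model, and
    DateTimeOriginal; AI-generated images (or heavily re-encoded copies)
    often carry none, so an empty result is a hint, not proof.
    """
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# Example usage (hypothetical file name):
# summarize_exif("suspicious_photo.jpg")
```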
Other indications of a deepfake include lack of motion blur, unnatural bokeh, all objects appearing in focus, and repeated patterns in the image. Figure 24: Additional generated images that exhibit semantic issues. Figure 25: Generated images that exhibit issues or inconsistencies with the text. Figure 26: Top row: problems with color and noise in generated images. Bottom row: fluorescent colors sometimes bleed in from the background onto the hair or face. Figure 27: Some generated images that look cartoonish or look like paintings. Figure 28: Generated images with flawed details. ### Some Challenging Objects Generative models face particular challenges when it comes to generating images of objects such as clocks, Lego houses, chessboards, carpets, circuit boards, basketballs, glasses of water, dice, diagrams and tables, keyboards, and computer screens. One of the reasons for this is that these types of images contain many repeated patterns, which can be difficult for the model to accurately capture. Several examples of failed attempts to generate these objects can be seen in Figures 31 and 32. This list of challenging objects can be used to assess and compare the performance of different image generation models. ### Memorization and Copyright As previously mentioned, a method for identifying whether an image is generated or not is through reverse image search. Generative models may memorize images partially or in their entirety, as seen in the examples presented in Figure 33. This phenomenon has raised concerns regarding copyright infringement, as generated images may include watermarks from the original images. For more information on this issue, please refer to this link. ### Failure modes from other studies Papers introducing image generation techniques sometimes report failure modes to provide readers with a more comprehensive understanding of their models' limitations. For instance, the creators of the Parti image generator [32] ([https://parti.research.google/](https://parti.research.google/)) have presented some examples of such failure cases, which are illustrated in Figure 35. These failure cases can be categorized into the errors discussed earlier. It is recommended that researchers in this field consider including a discussion of their models' failure modes as a best practice. ## 5 Conclusion and Future Work This paper lists several qualitative indicators for identifying deepfakes. These indicators not only enable us to address the issue of fake images but also underscore the differences between generated and real-world content [7]. Furthermore, they serve as a checklist for evaluating image generation models. It should be noted that as algorithms improve, some of these clues may become obsolete over time. However, this does not mean that these models will not make any of these mistakes in generating images. It may be necessary to use a combination of these indicators to identify deepfakes, as there is no one-size-fits-all solution. Image generation models are becoming increasingly widespread and accessible. However, in the wrong hands, these algorithms can be used to create propaganda and other forms of fake media. In a world rife with fake news [17], we have learned not to believe everything we read. Now, we must also exercise caution when it comes to visual media. The blurring of lines between reality and fiction could transform our cultural landscape from one primarily based on truth to one characterized by artificiality and deception. 
As we have demonstrated with the set of cues presented here, it is possible to identify fake images. In fact, we were able to use some of these indicators to detect fake faces with high accuracy in the quiz available on whichfaceisreal.com. While visual inspection can be a valuable tool for identifying deepfakes, it may not be sufficient to detect all types of generated images. Therefore, incorporating other methods such as machine learning algorithms or forensic analysis can offer a more thorough approach to deepfake detection. Additionally, staying up to date with the latest techniques and technologies in this field is crucial, as it is continually advancing. In this study, we focused on still images. However, for videos, additional indicators beyond those outlined here, such as motion and optical flow, as well as the synchronization of lip, face, and head movements over time, can also be significant factors [3]. Educating individuals on the cues outlined in this paper may aid in combatting deepfake proliferation. It would be worthwhile to investigate whether individuals can be effectively trained to become experts in this area. Figure 29: Example failures of generated complex scenes. Achieving accurate and detailed rendering in these types of images is particularly difficult due to the large number of objects and the intricate relationships between them. Figure 30: Generated crowd scenes with issues. Figure 31: Some objects that are difficult for models to generate.
2304.10212
The impact of the AI revolution on asset management
Recent progress in deep learning, a special form of machine learning, has led to remarkable capabilities machines can now be endowed with: they can read and understand free flowing text, reason and bargain with human counterparts, translate texts between languages, learn how to take decisions to maximize certain outcomes, etc. Today, machines have revolutionized the detection of cancer, the prediction of protein structures, the design of drugs, the control of nuclear fusion reactors etc. Although these capabilities are still in their infancy, it seems clear that their continued refinement and application will result in a technological impact on nearly all social and economic areas of human activity, the likes of which we have not seen before. In this article, I will share my view as to how AI will likely impact asset management in general and I will provide a mental framework that will equip readers with a simple criterion to assess whether and to what degree a given fund really exploits deep learning and whether a large disruption risk from deep learning exists.
Michael Kopp
2023-04-20T11:08:31Z
http://arxiv.org/abs/2304.10212v2
# The impact of the AI revolution on asset management ###### Abstract Recent progress in deep learning, a special form of machine learning, has led to remarkable capabilities machines can now be endowed with: they can read and understand free flowing text, reason and bargain with human counterparts, translate texts between languages, learn how to take decisions to maximize certain outcomes, etc. Today, machines have revolutionized the detection of cancer, the prediction of protein structures, the design of drugs, the control of nuclear fusion reactors etc. Although these capabilities are still in their infancy, it seems clear that their continued refinement and application will result in a technological impact on nearly all social and economic areas of human activity, the likes of which we have not seen before. In this article, I will share my view as to how AI will likely impact asset management in general and I will provide a mental framework that will equip readers with a simple criterion to assess whether and to what degree a given fund really exploits deep learning and whether a large disruption risk from deep learning exists. keywords: artificial intelligence, fund management + Footnote †: journal: arxiv ###### Contents * 1 Introduction * 2 The deep learning revolution so far * 2.1 A technological revolution like no other * 2.2 The odd nature of this deep learning revolution * 2.3 Consequences or why harnessing deep learning is not straightforward * 3 The shape of things to come in asset management * 3.1 Trade decision making in proprietary trading * 3.2 Automatically finding novel trading strategies * 3.3 Capital allocation * 3.4 Risk management * 3.5 A word of warning * 3.6 A word about data * 4 A framework for assessing the role of deep learning in a given fund * 4.1 A simple criterion * 4.2 Remarks * 5 Summary and conclusions ## 1 Introduction At its core, artificial intelligence (AI) promises to industrially scale 'intelligence', including decision making, planning, understanding, and reasoning with and about the world around us, from the digital datasets that world produces alone. On the back of the twin pillars of the 'big data revolution' and ever increasing compute power (Moore's Law), the last decade has witnessed the emergence of stunning advances and real world applications of AI that are at the heart of a currently dominant belief that the underlying technology - mostly 'deep learning' - will allow us to tackle hitherto unfathomable problems in nearly all areas of our lives. Billions have been and are being invested into 'deep learning' in sectors as diverse as drug discovery, food production, traffic management, map making, logistics, energy management, oil production etc. Such is the promise that it is only natural to muse how every industry is being and will be affected as this mushrooming technology unfolds and matures. This article is my personal take on this question with regards to asset management. I have a somewhat unique view on the matter given my blended background. I am a former portfolio manager responsible for managing capital in convertible bonds, merger arbitrage, credit, equity event driven and fundamentals based strategies. I am also a former mathematician who adopted deep learning early in most aspects of his professional life: I built products and solutions with deep learning once it became capable enough, ran the research unit of a large mapping provider, and am currently jointly running a research institute dedicated to AI. 
My motivation for writing this article is two-fold. Given my background, I face this question quite frequently from investors and fund managers, but also from technologists. Moreover, I have the impression that the current discourse on the matter is somewhat sketchy and could benefit from an opinion from my vantage point. I will try and set out my view of what is in store for the asset management industry in section 3 below. In section 2, I will give my take on where 'deep learning' stands today. In section 4, I will outline a useful framework, based on section 3, that allows one to classify asset managers as to how they make use of deep learning today, which also, in my view, highlights the disruption risk they will soon face from deep learning based approaches. ## 2 The deep learning revolution so far In this section I will argue that technological advances in artificial intelligence (AI) will have an undeniable impact on all aspects of our life, that this technology is entirely different to other technology advances and, hence, that applying it to real world problems successfully requires a novel approach of being aware of and embracing its idiosyncrasies. ### 2.1 A technological revolution like no other Over the last decade, we have witnessed a remarkable technological revolution in which the dream of 'artificial intelligence', as outlined in the famous 1956 symposium at Dartmouth college, has started to become a reality (see McCarthy et al. (1955)). Concretely, neural network based (Devlin et al., 2018; Brown et al., 2020; Hoffmann et al., 2022) large language models (LLMs) can now understand and generate text, in all languages, to such a degree that, recently, an engineer at Google thought a new chat bot was sentient (Tiku, 2022), thus clearly passing the famous Turing Test (Turing, 1950). Such models can even revive, decipher and explore hitherto extinct languages from few archaeological samples (Luo et al., 2019). Deep learning has also had a profound impact on image analysis as well as generation. As examples of the former, facial recognition software is now scarily efficient, deep learning models beat pathologists at detecting cancer from cell samples (He et al., 2021) and landslides can now be detected from simple satellite images from space (Ghorbanzadeh et al., 2022). Recent advances in new training regimes for neural networks have led to stunning advances in the latter. Where in the past large datasets needed to be annotated (labelled) for neural networks to teach themselves desired analytic skills from a large number of examples, these networks can now learn relevant aspects of the data through either self-consistency or having access to several different sensor readings on the same underlying observation. Surprisingly conceptually simple models like CLIP (Radford et al., 2021) and CLOOB (Furst et al., 2021) can be trained on internet size datasets of pairs of images and text (basically images scraped from the web together with their caption), which leads to powerful joint feature embeddings of text and image features that allow for automatic image captioning as well as image generation from text, as shown in Figure 1. Last, but not least, these same deep learning techniques have revolutionized control theory through 'reinforcement learning'. Deep learning agents can now play complex computer games on a super human level, including the game of go (Silver et al., 2016), DOTA2 against online players (Berner et al., 2019) and strategy and planning games like 'No-Press Diplomacy'. 
These advances are at the heart of novel electronic trading algorithms, the hope of achieving self-driving technology and emerging algorithms running nuclear fusion reactors as well as electricity networks. Combinations of all the above technologies have also recently led to predicting the molecular structure of proteins (Jumper et al., 2021) and are actively used in discovering novel drugs (Chen et al., 2018). Moreover, as widely reported in the media, a merger of techniques from control theory and pre-trained generative large language models on chat room content has led to ChatGPT, a conversational AI agent which can answer questions, write computer programming code and also dream up answers that are false but plausible. ### 2.2 The odd nature of this deep learning revolution In order to understand how this technological revolution can be harvested, it is necessary to understand its peculiarities. Firstly, this revolution is founded on three pillars: data, compute power and heuristic exploration. Modern deep learning models teach themselves from data alone (with or without labels). Almost all do so via simple gradient descent methods and the help of the chain rule (backpropagation method), which require the help of large, specialized hardware accelerators (GPUs for instance). The key ingredient of designing these deep learning models is experimental trial and error that requires both experience and mathematical guidance. This is because there is no 'theory' yet as to the architecture of a deep learning model that is supposed to work on a novel, hitherto unanalyzed dataset. Rather, practitioners find models that worked on similar datasets and adjust them or compose several models that work on parts of the data that are known to capture relevant features. There is a large part of research dedicated to methods as to how to combine/compose models (few-shot learning, meta learning, student teacher architecture, etc.) and any advance in this area is immediately applicable in production. For instance, the recently discovered ability to self train extremely large neural network models on certain massive, unlabelled data sources or a combination of such sources has led to the idea of having universal, reusable components, so called 'foundation models' (Bommasani et al., 2021), that are trained once and form the key lego-piece feature extractors for novel applications. Figure 1: Example of an image generated via a stable diffusion network from a CLOOB embedding for the prompt sentence "a moody painting of a lonely duckling". One unique aspect of this AI revolution the last sentence demonstrates is that fundamental research is turned into production code within weeks or months, rather than years or decades as would be the case for most natural sciences. The increased popularity AI is enjoying has led to more trial and error in deep learning, which in turn, together with the increased experience of most AI researchers, has led to an ever accelerating field. To put this into context, the last year has seen such novel insights that most prior deep learning solutions are now obsolete again. Last but not least, this latest leg in the deep learning revolution relies on internet size, unlabelled data being fed to models mostly trained to correctly guess masked out parts of the input. 
Given that a key component of the dominant neural network architecture (transformers) underlying most models these days, as well as widely used generation techniques, is known to be a powerful memory capable of storing and learning to retrieve large amounts of data (Ramsauer et al., 2020; Carlini et al., 2023), it is no wonder that copyright lawsuits have arisen over whether training models on data off the internet really represents 'fair use' or infringement (Wiggers, 2023). ### 2.3 Consequences or why harnessing deep learning is not straightforward Building deep learning solutions in production is difficult given the 'shape' of this AI revolution discussed above. * Which problems should be solved with deep learning and which ones should be tackled with traditional software? The former are black-box algorithms trained mostly by gradient descent. How are the boundaries handled? * Unlike most 'traditional' technology solutions, deep learning models are hardly explainable and their performance can only really be gleaned from measurement ('black-box' nature). * Moreover, a shift in the underlying distribution of the data fed into a deep learning model compared to the data with which it was trained might lead to unexplained outcomes and potentially necessitate constant retraining and enhanced monitoring. * Given that large scale trial and error is best achieved by community interaction and technology reuse, what should and should not be patented and what is the copyright situation of the training data used, i.e. the fuel on which these models run? A consistent set of answers to these questions is only just emerging and requires expertise in building these systems as well as understanding the problems one will face in the real world. This can be amply demonstrated by the fact that the key tool chain for even writing code for AI models, the Python frameworks tensorflow (Tensorflow, 2022) and pytorch (pytorch team, 2022), is not backward compatible, as each version features large changes necessitating changes to basic syntax. The rest of the usual tool chain (deployment, orchestration, etc.) is even less fixed and also only just emerging. ## 3 The shape of things to come in asset management The infamous quote 'it is tough to make predictions, especially about the future' is attributed to Yogi Berra and has been an uncomfortable truth befalling many predictions about the future. Nevertheless, even only considering the technologies the current deep learning revolution has washed ashore and ignoring high level discussions and predictions about future successes, it seems hard not to come to the conclusion that some vital parts of asset management can and will be heavily impacted, both in traditional human led proprietary trading as well as algorithmic trading. In what follows, I will thus ignore all well meant warnings, lean out of the window and attempt a deep glance into the crystal ball to glean the shape of things to come. ### 3.1 Trade decision making in proprietary trading There are quite a few highly profitable and scalable trading strategies that have so far resisted being usurped by algorithmic trading. Usually, these depend heavily on a human level understanding of the underlying trade dynamic which is hard to explicitly capture with an algorithm. Take for example antitrust risk in a merger arbitrage deal. 
Assessing this risk requires an understanding of the competitive landscape such a merger would bring about, of the methodology and legal principles different antitrust authorities employ, and of the precise conditions the merger agreement sets out for a given set of such authorities for the deal to still go through. The emerging ability to build machines that can understand large corpora of texts and that can combine this understanding with trading data of past deals in order to learn trading actions makes it rather plausible that a system can be built to assess this risk now. Such a system would need to reason about the acquirer's and target's business in different geographies, their competitors in these geographies usually marked out in regulatory filings, read thousands of merger documents, emerging legal opinions, verdicts and antitrust agency publications on methodologies and combine this information with trading data and other newspaper articles in the usual 'rumourtage' outlets read by many merger arbitrage traders in order to form a quantitative view of the deal's antitrust risk, just like a human trader would do with her/his natural neural network. Yes, the task is not straightforward and all the caveats of the last section apply, but a technological stretch it seems to be not. In short, trading strategies that have hitherto withstood the onslaught of algorithmic trading are likely to come into scope. ### 3.2 Automatically finding novel trading strategies Another fascinating capability that has emerged and is being exploited is the ability in control systems to find a plethora of different optimization strategies satisfying complex rewards. Trading a portfolio with given imposed risk metrics is just such a scenario and hence it is now technologically feasible to create a large number of solutions with different characteristics for such a problem with the press of a button. In other words, with the press of a button a large number of different artificial traders can be generated that all adhere to a given risk limit whilst optimizing returns set over a given horizon. We note that this ability to find new trading strategies in combination with other emerging deep learning capabilities is, in theory, enough to also disrupt existing algorithmic trading funds. Imagine a fund in which not only new short term or ultra short term trading strategies are found via deep learning, but where the same underlying algorithms also produce highly optimized code to actually execute the trades, to connect to exchanges and products in a new way and to continuously improve all parts of the underlying code making up the entire trading operation. This is, although technologically not infeasible today, unlikely to exist any time soon at this stage. That said, I am just taking existing pieces of deep learning successes and extrapolating their combination and so, total sci-fi it seems to be not to me. ### 3.3 Capital allocation Combining the ability of the last subsection with a learnt capital allocation system that maximizes the rewards of a large number of automated trading strategies seems technologically feasible. In its simplest form, a deep learning system could learn, from reward data alone, to allocate capital at each step to traders whose past performance is known - reducing this to yet another control theory problem that seems solvable. 
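As a purely illustrative toy (not a deep learning system and not taken from any existing fund), the following sketch allocates capital across a set of automated traders with a simple multiplicative-weights rule driven only by realized returns; a learned allocator would replace this hand-written rule with a trained policy consuming the same reward signal.

```python
import numpy as np

def allocate_capital(past_returns, eta=0.5):
    """Toy reward-driven capital allocation across automated traders.

    past_returns: array of shape (T, n_traders) with per-period returns.
    Returns a weight vector (summing to 1) that tilts capital towards
    traders with better cumulative performance (multiplicative weights).
    """
    scores = eta * past_returns.sum(axis=0)  # cumulative reward per trader
    scores -= scores.max()                   # numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

# Hypothetical example: three traders, ten periods of simulated returns.
rng = np.random.default_rng(0)
history = rng.normal(loc=[0.001, 0.0, -0.001], scale=0.01, size=(10, 3))
print(allocate_capital(history))
```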
A more sophisticated version would require being mindful of paradigm shifts: learning not just from past data but from a near infinite number of dreamt up future scenarios is also within the realm of the possible, namely via model-based reinforcement learning and model-based imitation learning. Of course, such dreamt up scenarios have to be realistic enough and self-consistent enough to be helpful and current technologies are only scratching the surface on this. Nevertheless, with enough compute power and data it would be hard to argue that this could not be achieved today in particular trading settings. ### 3.4 Risk management This ability to dream up realistic future scenarios in large numbers and force deep learning agents to do well on these scenarios is also key in risk management. Such systems could read and comprehend every trade-able security's underlying risk as well as the relevant context of our world in an outright long only or long short portfolio, for instance, and could estimate the entire portfolio's reaction to certain shocks. Take for instance an unexpected closure of the Suez canal or the invasion of Ukraine. It is not infeasible with today's technology that large language models have stored enough data to be able to verbally reason that the Suez canal affects the shipping routes of certain raw materials and hence any security that is linked to such a material. That said, whether such dreamt up scenarios are in fact feasible and not just well constructed lies is, with current technologies, hard to say, but mitigation approaches exist. Finally, although such advanced risk management on its own is clearly valuable, it seems equally clear that its value would be much larger if it was directly integrated at the conception of any novel trading strategy as well as in the capital allocation process. ### 3.5 A word of warning The above predictions are, of course, idealistic and should be balanced by some realistic considerations. First and foremost, it is not at all clear that regulators or investors would cherish trading strategies where a human is taken completely out of the loop, even if that is feasible. More generally, the black box nature of current deep learning systems necessitates realistic backtesting or model-based approaches to assess the risk fully. That said, in a way this is not different to the case of human traders who trade based on their natural neural networks and within strictly defined risk parameters that are constantly monitored. Although such traders can verbalize their train of thoughts, it is hard to audit their risk any better than a deep learning trader envisaged here. Bayesian based algorithmic trading strategies, i.e. strategies which come with a theoretical return distribution rather than a single outcome that has been verified by backtesting for years, are at a natural advantage here. That said, such strategies are usually of limited complexity and usually found in high frequency trading on ultra liquid assets where they are extremely profitable. ### 3.6 A word about data Ominously absent in the above discussion is any word about the training data such systems would require. Apart from the emerging legal questions as to the copyright and fair use of data available on the internet as training data we commented on above, datasets ingested by deep learning systems for training purposes are usually in the 'dirty' raw format in which they occur in the wild, especially if the datasets in question are large. 
This is in stark contrast to most other machine learning approaches, where great care has to be taken to curate or label such datasets accurately at great cost. Moreover, the true value of automatic trading strategy discovery lies in the fact that such strategies automatically decide what data is relevant. The only constraint for throwing arbitrary amounts of data at a model is compute power, but practical techniques exist to efficiently encode data neurally (through either fixed or learnt representations). The emergence of efficient compute scaling and dataset provision techniques (pytorch team, 2022; Svogor et al., 2022) facilitates that task. ## 4 A framework for assessing the role of deep learning in a given fund It is clear on reading the last subsection and ordering its thoughts that by far the biggest impact of deep learning on asset management will come from the ability to automate finding trading strategies and capital allocation decisions and, necessarily, a form of risk assessment (either learned implicitly or broken out separately or, best, both for redundancy). This would not only allow such deep learning driven funds to start competing in strategies where, currently, human traders (aided by quantitative or AI driven tools) excel, but also to come up with new trading strategy paradigms that have so far not emerged. Successes in AI over the past years have shown both that such 'super-human' successes are feasible and that such super-human strategies are in fact intuitive to us if they hail from deep learning algorithms that were explicitly inspired by human-like learning. A stunning example of the latter is the fact that the only game the human world champion Lee Sedol won against AlphaGo in the game of Go in 2016 was one in which he replayed a trick on the machine that he had learned from it in his first defeat to it. This means he was able to abstract a principle behind a sequence of moves he observed and apply that principle in a new setting, fooling the originator of that principle in this new setting. This seeming weakness of the machine is actually a strength: further training of AlphaGo with today's more powerful compute resources would have reduced that risk, but also, transferable strategies are more explainable strategies - an important aspect for regulatory reasons. ### 4.1 A simple criterion The above insight leads to a rather simple classification criterion for funds across asset and trading strategy classes that captures where deep learning can have an impact: the level of automation of trading and of capital allocation. This results in the following 4 broad classes one can bucket any fund manager into; a toy sketch of this bucketing follows the list. **Level 0:** trading decisions made by human traders (likely aided by quantitative or AI driven tools such as Bloomberg terminals) and capital allocation which follows a static rule or is dynamic and result dependent. **Level 1:** automated trading decisions with trading strategies designed and validated with a human in the loop and either a static or dynamic capital allocation rule also designed with a human in the loop. **Level 2:** automated trading decisions and automated discovery of trading strategies conceived and designed without a human in the loop and a human designed or in the loop capital allocation rule as well as trade supervision. **Level 3:** fully automated and human independent capital allocation and trading strategy discovery and, necessarily, risk management. 
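The following toy sketch illustrates how the four levels could be assigned from simple yes/no questions about a fund; the boolean inputs are assumptions about how a fund might be described, not part of the framework itself.

```python
def automation_level(automated_trading: bool,
                     automated_strategy_discovery: bool,
                     automated_capital_allocation: bool) -> int:
    """Bucket a fund into Levels 0-3 based on its degree of automation.

    Level 0: humans make the trading decisions.
    Level 1: trades are automated, but strategies and allocation rules
             are designed with a human in the loop.
    Level 2: strategies are discovered automatically; capital allocation
             and supervision still involve a human.
    Level 3: strategy discovery and capital allocation are both automated.
    """
    if not automated_trading:
        return 0
    if not automated_strategy_discovery:
        return 1
    if not automated_capital_allocation:
        return 2
    return 3

# Hypothetical examples:
print(automation_level(False, False, False))  # discretionary manager -> 0
print(automation_level(True, True, True))     # fully automated fund  -> 3
```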
### 4.2 Remarks Technologically, Level 3 asset managers are now within the realm of the possible and I know of at least one fund that is tantalizingly close to achieving this goal. One challenge of course is regulatory compliance: after all, the trading strategies a Level 3 fund employs could be anything, and operators of such a fund manager would have to figure out the underlying strategy via introspection, as well as the learned risk management associated with the underlying capital allocation. A further challenge for operators of such a fund would be to ensure that the underlying algorithm has also learned its disclosure obligations towards potential regulators, which can both be sophisticated and vary widely between jurisdictions. For instance, finding and combining several public data sources that predict others' large market-moving trades in the equity market, and exploiting the resulting market moves, might be 'front running' in some jurisdictions but not in others. It is therefore likely, and for the one instance of a fund I know in fact the case, that such Level 3 funds will first emerge trading largely unregulated assets (such as crypto currencies) extremely successfully. Most quantitative hedge fund managers would fall into Level 1 or Level 2. I have my doubts a Level 3 strategy can be achieved without a deep learning approach such as the one outlined above. I note in passing that Level 1, 2 and 3 fund managers do not necessarily need to execute trades automatically; rather, the decision to trade as well as the strategy underlying these decisions are fully automated. The purpose of this simple categorization scheme really is to show where deep learning based methods, mostly or partially in existence today, can have a disruptive impact. In my view asset managers with a lower category level are at a higher risk of being disrupted by deep learning based approaches. Of course, one could run the argument that, for instance, successful traditional human traders who invest in, say, equity long only value strategies could also use deep learning techniques to discover new strategies and, should they be intuitive enough to be recognizable, filter them and employ them just as Lee Sedol did in his winning game against AlphaGo. My issue with this argument is that with increased computing power, the level of 'super-human-ness' of the trading strategies is, in my view, likely to increase, and our human ability to abstract such a strategy might reach its limits, too. Maybe one could get the machine to literally spell out its reasoning, but at some point even that might not help bridge this gap if the compute power employed is large enough. ## 5 Summary and conclusions In section 2 I argued that deep learning is currently the dominant driving technique of the AI revolution and that its odd nature requires adopters of these incredible techniques to embrace their oddness. In section 3 I have taken a deep look into the crystal ball and sketched how I think already existing techniques can and will end up impacting the asset management industry. A resulting observation is then explored in section 4, namely that deep learning can have a tremendous impact on trade strategy discovery automation and automatic capital allocation, and that using this as a simple classification criterion for asset managers captures, in my view, their true adoption of deep learning so far as well as the disruptive risk they face from deep learning techniques. 
One point I wish to stress about the above: the necessity to embrace the odd nature of deep learning I highlighted is fundamental and not easy to achieve in asset management. Investors and regulators need to be comfortable with the black-box nature of this technology, which also implies that, in terms of its ability to err, it should be viewed more like a human than like a defined algorithm. Deep learning is incredibly successful not least because the ultimate underlying philosophy giving rise to it was for these algorithms to be human-like. Little wonder then that they make mistakes like we do. Deep learning and AI based traders are like human ones and require the organizational setup we place around human traders in order to mitigate risk - say, a deep learning based risk manager that cuts positions and is trained in an adversarial way, etc. To my mind, the challenge of mounting a truly deep learning based asset manager requires both a culture that embraces its black-box nature in all aspects and the ability to be a trailblazer with investors and regulators for this technology. This is a large effort, often underestimated in my view, but likely extremely profitable for the first ones to achieve this. ## Acknowledgements Thanks to my colleagues at the institute of advanced research in artificial intelligence (IARAI), specifically Sepp Hochreiter, David Kreil, Moritz Neun and Alina Mihai as well as all my academic collaborators in AI. Without their help, hard work and discussions I could not have formed the opinion expressed above. I would also like to thank my former colleagues Kaveh Sheibani, Julian Harvey Wood and Philippe Lamarque and everyone at Pendragon Capital in its various iterations for teaching me all I know about traditional proprietary trading and portfolio management. Last, but not least, I would like to thank Urs Alder for incredibly helpful suggestions on earlier drafts of this paper. It would not be in existence without his kind and expert help.
2306.06209
Backdoor Attack with Sparse and Invisible Trigger
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where the adversary manipulates a small portion of training data such that the victim model predicts normally on the benign samples but classifies the triggered samples as the target class. The backdoor attack is an emerging yet threatening training-phase threat, leading to serious risks in DNN-based applications. In this paper, we revisit the trigger patterns of existing backdoor attacks. We reveal that they are either visible or not sparse and therefore are not stealthy enough. More importantly, it is not feasible to simply combine existing methods to design an effective sparse and invisible backdoor attack. To address this problem, we formulate the trigger generation as a bi-level optimization problem with sparsity and invisibility constraints and propose an effective method to solve it. The proposed method is dubbed sparse and invisible backdoor attack (SIBA). We conduct extensive experiments on benchmark datasets under different settings, which verify the effectiveness of our attack and its resistance to existing backdoor defenses. The codes for reproducing main experiments are available at \url{https://github.com/YinghuaGao/SIBA}.
Yinghua Gao, Yiming Li, Xueluan Gong, Zhifeng Li, Shu-Tao Xia, Qian Wang
2023-05-11T10:05:57Z
http://arxiv.org/abs/2306.06209v3
# Backdoor Attack with Sparse and Invisible Trigger ###### Abstract Deep neural networks (DNNs) are vulnerable to backdoor attacks, where the adversary manipulates a small portion of training data such that the victim model predicts normally on the benign samples but classifies the triggered samples as the target class. The backdoor attack is an emerging yet threatening training-phase threat, leading to serious risks in DNN-based applications. In this paper, we revisit the trigger patterns of existing backdoor attacks. We reveal that they are either visible or not sparse and therefore are not stealthy enough. More importantly, it is not feasible to simply combine existing methods to design an effective sparse and invisible backdoor attack. To address this problem, we formulate the trigger generation as a bi-level optimization problem with sparsity and invisibility constraints and propose an effective method to solve it. The proposed method is dubbed sparse and invisible backdoor attack (SIBA). We conduct extensive experiments on benchmark datasets under different settings, which verify the effectiveness of our attack and its resistance to existing backdoor defenses. The codes for reproducing main experiments are available at [https://github.com/YinghuaGao/SIBA](https://github.com/YinghuaGao/SIBA). Backdoor Attack, Invisibility, Sparsity, Trustworthy ML, AI Security ## I Introduction Deep neural networks (DNNs) have demonstrated their effectiveness and been widely deployed in many mission-critical applications (\(e.g.\), facial recognition [1, 2, 3]). Currently, training a well-performing model generally requires large amounts of data and computation, which are costly. Accordingly, researchers and developers usually choose to exploit third-party training resources (\(e.g.\), open-sourced data, cloud computing platforms, and pre-trained models) to alleviate training burdens in practice. However, the convenience of using third-party training resources may also lead to new security threats. One of the most severe emerging threats is the backdoor attack [4, 5, 6, 7, 8, 9], where the adversaries intend to implant a hidden backdoor into victim models during the training process. The backdoored models behave normally when predicting benign samples, whereas their predictions will be maliciously changed to the adversary-specified target class whenever the backdoor is activated by the adversary-specified trigger pattern. The design of trigger patterns is one of the most important factors in backdoor attacks. Currently, there are many different types of trigger patterns [10, 11, 9]. Arguably, patch-based triggers [4, 12, 13] and additive triggers [14, 15, 16] are the most classical and widely adopted ones among all trigger patterns. To fulfill the purpose of stealthiness, the adversaries will constrain the sparsity of trigger patches (\(i.e.\), trigger size in this case) and the maximal perturbation size of additive triggers. We find that almost all existing patch-based triggers are visible while additive triggers are non-sparse (usually the size of the whole image). An intriguing question arises: _Is it possible to design a trigger pattern for effective backdoor attacks that is both sparse and invisible?_ The answer to the aforementioned question is positive. In general, the most straightforward method is to introduce a (random) sparse mask to the patterns of existing invisible backdoor attacks during their generation process. 
However, as we will show in the experiments, this method is not effective in many cases, especially when the image size is relatively large. Its failure is mostly because the position of trigger areas is also critical for their success, especially when the perturbation size is limited in invisible attacks. Based on these understandings, in this paper, we propose to design sparse and invisible backdoor attacks in a co-optimization manner. Specifically, we formulate the sparse and invisible attack as a bi-level optimization problem with sparsity and invisibility constraints, where the upper-level problem is to minimize the loss on poisoned samples via optimizing the trigger while the lower-level one is to minimize the loss on all training samples via optimizing the model weights. In particular, this optimization problem is difficult to solve directly due to the high non-convexity and poor convergence of gradient-based methods. To alleviate these problems, we utilize a pre-trained surrogate model to reduce the complexity of the lower-level optimization and derive an alternating projected method to satisfy the \(L_{\infty}\) and \(L_{0}\) constraints. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of the proposed method and its feasibility under different settings. Besides, as we will show in the experiments, the generated sparse trigger pattern contains semantic information about the target class. It indicates that our attack may serve as a potential path toward explaining DNNs, which is an unexpected gift. Our main contributions can be summarized as follows: * We reveal the potential limitations of current backdoor attacks, namely that their triggers cannot satisfy the sparsity and invisibility constraints simultaneously. * We formulate the sparse and invisible backdoor attack as a bi-level optimization and develop an effective and efficient method to solve it. * We conduct extensive experiments on benchmark datasets, verifying the effectiveness of our attack and its resistance to potential backdoor defenses. The rest of this paper is organized as follows: We briefly review some related works in Section II and introduce the proposed method in Section III. We present the experimental results and analysis in Section IV and conclude the whole paper in Section V at the end. ## II Related Work ### _Backdoor Attack_ Backdoor attacks aim to inject backdoor behaviors into the victim model such that the attacked model behaves normally on benign test samples but predicts the target class whenever the trigger is attached to the test samples. Trigger design is the core of backdoor attacks and a large corpus of works is devoted to proposing better triggers. In general, existing trigger designs can be roughly divided into two main categories, including trigger patches and additive perturbations, as follows. **Backdoor Attacks with Patch-based Triggers.** BadNets [4] is the first backdoor attack designed with the patch-based trigger. In general, the adversaries randomly select a few benign samples from the original dataset and 'stamp' the pre-defined black-and-white trigger patch onto those images and reassign their labels to a target class. These modified samples (dubbed 'poisoned samples'), together with the remaining benign samples, will be released as the poisoned dataset to victim users. After that, Chen _et al._[17] proposed a blended injection strategy to make the poisoned samples hard for humans to notice by introducing trigger transparency. 
Recently, Li _et al._[18] discussed how to design physical backdoor attacks that are still effective when the poisoned samples are directly captured by digital devices (\(e.g.\), cameras) in real-world scenarios where trigger angles and positions could be changed. Most recently, Li _et al._[19] adopted patch-based triggers to design the first untargeted backdoor attack. In general, patch-based triggers are the most classical and even the default setting for new tasks [12, 20, 21]. The adversaries usually limit the sparsity (\(i.e.\), trigger size) of patch-based trigger patterns for stealthiness. However, even when a high transparency level of the triggers is adopted, the perturbations are still visible to a large extent, since the trigger patterns are significantly different from the replaced image portions in the poisoned images. **Backdoor Attacks based on Additive Perturbations.** Recently, using additive perturbations as trigger patterns has become popular in backdoor attacks. Compared to patch-based triggers, these methods are more controllable regarding trigger stealthiness since the adversaries can easily ensure invisibility by limiting the maximum perturbation size of trigger patterns. For example, Zhao _et al._[15] adopted the targeted universal adversarial perturbation as the trigger to design an effective clean-label backdoor attack where the target label is consistent with the ground-truth label of the poisoned samples; Nguyen _et al._[10] generated trigger patterns by image warping; Li _et al._[16] adopted a pre-trained attribute encoder to generate additive trigger patterns, inspired by deep image steganography. However, to the best of our knowledge, all existing methods need to modify the whole image for poisoning. How to design attacks with invisible and sparse trigger patterns remains an important open question and is worth further exploration. ### _Backdoor Defense_ Currently, there are some methods (dubbed 'backdoor defenses') to reduce the backdoor threats. In general, existing defenses can be divided into four main categories, including **(1)** the detection of poisoned training samples, **(2)** poison suppression, **(3)** backdoor removal, and **(4)** the detection of poisoned testing samples. They correspond to the four important stages of DNNs, including data collection, model training, model deployment, and model inference. Specifically, the first type of defense intends to filter out poisoned training samples [22, 23, 24]. They are either based on the representation differences between poisoned and benign samples at intermediate layers of the model or directly attempt to reverse the trigger pattern; Poison suppression [25, 26, 27] depresses the effectiveness of poisoned samples during the training process to prevent the creation of hidden backdoors; Backdoor removal aims to remove the hidden backdoors in given pre-trained models [28, 29, 30]; The last type of defense [31, 32, 33] targets the detection of poisoned testing samples, serving as a firewall of given models. These defenses are probably the most practical ones, due to their efficiency and low requirements of defender capacities. ### _Sparse Optimization_ Sparse optimization requires most elements of the variable to be zero, which usually brings unanticipated benefits such as interpretability [34, 35] or generalization [36, 37]. 
It has been extensively studied in various applications such as basis pursuit denoising [38], compressed sensing [39], source coding [40], model pruning [41, 42] and adversarial attacks [43, 34, 35]. Popular approaches to solving sparse optimization include relaxed approximation methods [44, 45, 46], which penalize the original objective with regularizers such as the \(L_{1}\)-norm, top-\(k\) norm and Schatten \(L_{p}\) norm, and proximal gradient methods [47, 48, 49], which exploit proximal operators that can be evaluated analytically. In this paper, we provide a unified formulation of sparse and invisible backdoor attacks and derive a projected method to practically optimize the trigger. We notice that similar techniques were used in recent adversarial attacks [43, 50]. However, our method differs from them in that **(1)** existing methods are test-time attacks while our method is a training-time attack, and **(2)** these papers focused on data-wise optimization while our method studies group-wise optimization, which is a much more challenging task. ## III The Proposed Method ### _Preliminaries_ **Threat Model.** In this paper, we focus on the image classification task where the model outputs a \(C\)-class probability vector: \(f_{\mathbf{\theta}}:\mathbb{R}^{d}\rightarrow[0,1]^{C}\). Given a training set \(\mathcal{D}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{N}\), \(\mathbf{x}\in\mathbb{R}^{d}\), \(y\in[C]=[1,2,\cdots,C]\), the parameters of the classifier are optimized by empirical risk minimization: \[\min_{\mathbf{\theta}}\sum_{(\mathbf{x},y)\in\mathcal{D}}\mathcal{L}(f_{\mathbf{\theta}}(\mathbf{x}),y),\] where \(\mathcal{L}\) represents the loss function (\(e.g.\), the cross entropy loss). The adversary is allowed to modify a small subset \(\mathcal{D}_{s}\) of the original training set \(\mathcal{D}\) (\(i.e.\), \(\mathcal{D}_{s}\subset\mathcal{D}\) and \(|\mathcal{D}_{s}|\ll|\mathcal{D}|\)) by attaching the trigger \(\mathbf{t}\) and relabelling them as the target class \(y_{T}\). The victim model \(f_{\mathbf{\theta}}\) is trained on the modified dataset \(\mathcal{D}^{\prime}\), which is composed of a benign dataset \(\mathcal{D}_{c}=\mathcal{D}\setminus\mathcal{D}_{s}\) and a poisoned dataset \(\mathcal{D}_{p}=\{(\mathbf{x}+\mathbf{t},y_{T})|(\mathbf{x},y)\in\mathcal{D}_{s}\}\). In particular, the ratio \(|\mathcal{D}_{p}|/|\mathcal{D}|\) is called the _poisoning rate_. **Adversary Goals.** In general, the adversaries have three main goals, including the _utility_, _effectiveness_, and _stealthiness_. Specifically, utility requires that the attacked model \(f_{\mathbf{\theta}}\) achieves high accuracy on benign test samples. Otherwise, the model would not be adopted and no backdoor could be planted; Effectiveness requires that the attacked model can achieve high attack success rates when benign samples are attached with the trigger pattern; Stealthiness requires that the dataset modification should be unnoticeable to victim dataset users. For example, the trigger patterns should be invisible and the poisoning rate should be small. ### _Sparse and Invisible Backdoor Attack (SIBA)_ As we mentioned in the previous section, we need to optimize the trigger's sparsity and invisibility simultaneously to ensure better stealthiness. In this section, we introduce the formulation and optimization of our sparse and invisible backdoor attack. 
**Problem Formulation.** The objective of SIBA could be formulated as a bi-level optimization problem since the effectiveness of the trigger pattern is related to a trained model whose optimization is also influenced by poisoned samples, as follows: \[\begin{split}\min_{\mathbf{t}}&\sum_{(\mathbf{x},y)\in \mathcal{D}_{v}}\mathcal{L}(f_{\mathbf{w}}(\mathbf{x}+\mathbf{t}),y_{T})\\ s.t.&\mathbf{w}=\arg\min_{\mathbf{\theta}}& \sum_{(\mathbf{x},y)\in\mathcal{D}_{c}\bigcup\mathcal{D}_{p}}\mathcal{L}(f_{\mathbf{ b}}(\mathbf{x}),y),\\ &\underbrace{\|\mathbf{t}\|_{0}\leq k}_{\text{sparsity}}& \underbrace{\|\mathbf{t}\|_{\infty}\leq\epsilon}_{\text{invisibility}},\end{split} \tag{1}\] where \(\mathcal{D}_{v}\) denotes the validation set acquired by the adversary. The upper-level optimization aims to ensure the effectiveness of the trigger, that is, the trained model \(f_{\mathbf{w}}\) would classify the samples attached with the triggers as the target class. The lower-level optimization represents the training process of the victim model. Besides, we add \(L_{0}\) and \(L_{\infty}\) constraints to confirm the trigger's sparsity and invisibility. **Surrogate Optimization Problem.** Due to the high non-convexity of the optimization Problem 1, we need to seek a feasible solution. In particular, the optimization of \(\mathbf{\theta}\) in the lower-level optimization requires full model training, which is time-consuming. To alleviate the computational burden and optimization difficulties, we exploit a pre-trained benign model \(f_{\mathbf{b}}\) to replace \(f_{\mathbf{w}}\) in the upper-level optimization. Instead of solving the Problem 1 directly, we turn it into the following surrogate optimization problem: \[\begin{split}\min_{\mathbf{t}}&\sum_{(\mathbf{x},y)\in \mathcal{D}_{v}}\mathcal{L}(f_{\mathbf{b}}(\mathbf{x}+\mathbf{t}),y_{T})\\ s.t.&\underbrace{\|\mathbf{t}\|_{0}\leq k}_{\text{sparsity}},&\underbrace{\|\mathbf{t}\|_{\infty}\leq\epsilon}_{\text{invisibility}}. \end{split} \tag{2}\] In this way, we only need to train the surrogate model \(f_{\mathbf{b}}\) once and avoid frequent updates. We will demonstrate the feasibility and rationality of the surrogate optimization by showing the attack transferability in Section IV. Fig. 1: The main pipeline of our sparse and invisible backdoor attack (SIBA). In the first step, the adversaries will generate a sparse and invisible trigger pattern. In the second step, the victim users will train their model on both benign and poisoned samples released by the adversary. In the third step, the attacked model can correctly predict clean test samples whereas the adversaries can maliciously change its predictions to the target class (‘airplane’ in this example). **Practical Optimization.** Let \(h(\mathbf{t})=\sum_{(\mathbf{x},y)\in\mathcal{D}_{\mathbf{t}}}\mathcal{L}(f_{\mathbf{b}}(\mathbf{x}+ \mathbf{t}),y_{T})\), we investigate the update of \(h(\mathbf{t})\) under the \(L_{\infty}\) and \(L_{0}\) constraints sequentially. Firstly, to satisfy the \(L_{\infty}\) constraint in Problem 2, we utilize the projected gradient method, which has been extensively explored in adversarial training [51]. The \(i\)-th update formula is shown as follows: \[\mathbf{v}_{i}=\Pi_{\mathcal{B}_{\epsilon}}\left(\mathbf{t}_{i}-\alpha \cdot\epsilon\cdot\text{sign}(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))\right), \tag{3}\] where \(\mathcal{B}_{\epsilon}=\{\mathbf{t}\mid\|\mathbf{t}\|_{\infty}\leq\epsilon\}\), \(\alpha\) is the step size. 
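As a reference for the update in Eq. (3), the sketch below performs one signed-gradient step on the surrogate loss \(h(\mathbf{t})\) and then projects the result back onto the \(L_{\infty}\) ball \(\mathcal{B}_{\epsilon}\). It assumes a PyTorch surrogate model and a batch from the validation set \(\mathcal{D}_{v}\); it is an illustration of the step rather than the authors' implementation, and combining it with the \(L_{0}\) projection is described next.

```python
import torch
import torch.nn.functional as F

def linf_step(trigger, model, x, y_target, eps, alpha):
    """One projected signed-gradient step of Eq. (3) on the surrogate loss h(t)."""
    t = trigger.detach().clone().requires_grad_(True)
    target = torch.full((x.size(0),), y_target, dtype=torch.long, device=x.device)
    loss = F.cross_entropy(model(x + t), target)   # h(t) evaluated on one validation batch
    loss.backward()
    v = t.detach() - alpha * eps * t.grad.sign()   # t_i - alpha * eps * sign(grad h(t_i))
    return v.clamp(-eps, eps)                      # projection onto the L_inf ball B_eps
```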
Next, we attempt to project \(\mathbf{v}_{i}\) into the \(L_{0}\) box: \(\mathcal{B}=\{\mathbf{t}\mid\|\mathbf{t}\|_{0}\leq k,\ \mathbf{t}_{j}=\mathbf{v}_{i,j}\ \text{or}\ 0,\ j=1,2\cdots,d\}\), which means we must select at most \(k\) element of \(\mathbf{v}_{i}\) and set the other elements to zero. We denote \(\mathbf{s}_{i}=\mathbf{t}_{i}-\alpha\cdot\nabla_{\mathbf{t}}h(\mathbf{t}_{i})\) and require the projected \(\mathbf{t}_{i+1}\) as close to \(\mathbf{s}_{i}\) as possible in the terms of square loss, as follows: \[\mathbf{t}_{i+1}=\arg\min_{\mathbf{u}\in\mathcal{B}}\|\mathbf{s}_{i}-\mathbf{u} \|_{2}^{2}. \tag{4}\] To solve Problem 4, we have the following Lemma. **Lemma 1**.: _Assuming the initial value of \(\mathbf{t}_{i}\) is 0, Problem 4 has the analytical solution as follows:_ \[\mathbf{t}_{i+1,j}=\begin{cases}\mathbf{v}_{i,j}&\text{if }j\in C^{\prime}\\ 0&\text{if }j\notin C^{\prime}\end{cases} \tag{5}\] _where \(C^{\prime}\) represents the subscript group which has the largest \(k\) element of \(|\nabla_{\mathbf{t}}h(\mathbf{t}_{i})|\)._ We leave the detailed proof of Lemma 1 in the appendix. Based on the above analysis, each iteration consists of Step 3 and Step 5. We exploit a mask vector \(\mathbf{m}\in\{0,1\}^{d}\) to perform Step 5 and update \(\mathbf{m}\) after multiple iterations of Step 3 to stabilize the optimization. The detailed optimization procedure is described in Algorithm 1. **Poison Injection.** We attach the optimized trigger \(\mathbf{t}\) to a small portion of training data and relabel them as the target class. The modified training set that consists of the triggered set and the untouched set is released to the users for training the victim models. The training configurations of the victim models are determined by the users while the adversary can not intervene. **Model Inference.** During the inference phase, the adversary validates the effectiveness of the backdoor attack with two types of data: benign test data and poisoned test data. The victim model is expected to predict correct class for the benign test data and the target class for the poisoned test data. The whole pipeline of our SIBA is shown in Figure 1. ## IV Experiments ### _Experimental Setup_ **Datasets.** In this paper, we consider CIFAR-10 and VGGFace2 datasets in our experiments. CIFAR-10 dataset consists of 10 classes and each class includes 5,000 training images and 1,000 test images. The size of each image is \(32\times 32\times 3\). For VGGFace2 dataset, we construct a 20-class subset from the original set for training efficiency. Each class includes 400 training images and 100 test images. The size of each image is \(128\times 128\times 3\). Both datasets are commonly used in recent backdoor-related research [10, 19, 52, 53, 54]. 
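Complementing the description in Section III-B, the sketch below mirrors the structure of Algorithm 1: the trigger is updated with signed gradient steps under the \(L_{\infty}\) constraint, and every \(K\) iterations the binary mask \(\mathbf{m}\) is refreshed to keep only the \(k\) coordinates with the largest absolute gradient (Lemma 1). It is a simplified illustration rather than the released implementation; the surrogate model and data loader are stand-ins, and the default hyper-parameters follow the attack settings reported below.

```python
import torch
import torch.nn.functional as F

def optimize_siba_trigger(model, loader, y_target, shape, k=100, eps=8 / 255,
                          alpha=0.2, T=200, K=5, device="cpu"):
    """Sketch of the SIBA trigger optimization in the spirit of Algorithm 1."""
    t = torch.zeros(shape, device=device)   # trigger initialised to zero
    m = torch.ones_like(t)                  # L0 mask, refreshed every K iterations
    batches = iter(loader)
    for i in range(T):
        try:
            x, _ = next(batches)
        except StopIteration:
            batches = iter(loader)
            x, _ = next(batches)
        x = x.to(device)
        t_var = t.clone().requires_grad_(True)
        target = torch.full((x.size(0),), y_target, dtype=torch.long, device=device)
        loss = F.cross_entropy(model(x + m * t_var), target)
        grad = torch.autograd.grad(loss, t_var)[0]

        # L_inf step of Eq. (3) followed by projection onto the eps-ball.
        v = (t - alpha * eps * grad.sign()).clamp(-eps, eps)

        if i % K == 0:
            # Lemma 1: keep the k coordinates with the largest |gradient|, zero the rest.
            m = torch.zeros_like(t).flatten()
            m[grad.abs().flatten().topk(k).indices] = 1.0
            m = m.view_as(t)

        t = m * v                           # enforce the L0 (sparsity) constraint
    return t.detach()
```

In the paper's setting, `shape` would be the image size (\(e.g.\), \(3\times 32\times 32\) for CIFAR-10), while \(k\), \(\epsilon\), \(\alpha\), \(T\), and \(K\) follow the values listed in the attack settings.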
**Baseline Selection.** We compare our SIBA method with six representative backdoor attacks, including **(1)** BadNets [4], **(2)** \begin{table} \begin{tabular}{c|c|c|c c c c c c|c c} \hline \hline Model\(\downarrow\) & Metric\(\downarrow\), Method\(\rightarrow\) & No Attack & BadNets & Blended & TUAP & WaNet & ISSBA & UBW-P & Random & Sparse & SIBA \\ \hline \multirow{3}{*}{ResNet} & BA (\%) & 94.67 & 94.41 & 94.58 & 94.53 & 94.29 & 94.57 & 94.46 & 94.13 & 94.44 & 94.06 \\ & ASR (\%) & - & 100 & 98.16 & 85.47 & 47.90 & 0.76 & 64.78 & 2.87 & 88.49 & 97.60 \\ & \(L_{0}\) & - & 9 & 1,020 & 1,020 & 1,016 & 1,024 & 9 & 100 & 100 & 100 \\ & \(L_{\infty}\) & - & 0.81 & 0.19 & 0.03 & 0.19 & 0.05 & 0.81 & 0.03 & 0.03 & 0.03 \\ \hline \multirow{3}{*}{VGG} & BA (\%) & 93.34 & 93.25 & 93.31 & 93.06 & 92.09 & 93.15 & 93.16 & 92.69 & 92.59 & 92.84 \\ & ASR (\%) & - & 100 & 98.28 & 69.13 & 7.53 & 1.24 & 10.02 & 1.48 & 66.44 & 91.22 \\ \cline{1-1} & \(L_{0}\) & - & 9 & 1,020 & 1,020 & 1,016 & 1,024 & 9 & 100 & 100 & 100 \\ \cline{1-1} & \(L_{\infty}\) & - & 0.81 & 0.19 & 0.03 & 0.19 & 0.05 & 0.81 & 0.03 & 0.03 & 0.03 \\ \hline \hline \end{tabular} \end{table} TABLE I: The results of backdoor attacks on CIFAR-10. We mark the results in red if the ASR is lower than \(50\%\) or \(L_{0}/L_{\infty}\) distance is larger than \(10\%\) of the maximum possible values (\(i.e.\), \(102=1024\times 10\%\) for \(L_{0}\), \(0.1=1\times 10\%\) for \(L_{\infty}\)). Fig. 2: The examples of poisoned samples with different backdoor attacks on CIFAR-10 and VGGFace2 datasets. **First Row**: poisoned samples on the CIFAR-10 dataset. **Second Row**: poisoned samples on the VGGFace2 dataset. backdoor attack with the blended strategy (dubbed as 'Blended') [17], **(3)** TUAP [15], **(4)** WaNet [10], **(5)** ISSBA [16], and **(6)** UBW-P [19]. Among the aforementioned baselines, the triggers of BadNets and UBW-P are sparse (small \(L_{0}\) constraint) but visible (large \(L_{\infty}\) constraint), while those of others (Blended, TUAP, WaNet, ISSBA) are invisible (small \(L_{\infty}\) constraint) but dense (large \(L_{0}\) constraint). We also provide the results of two straightforward (yet ineffective) sparse and invisible backdoor attacks, including **(1)** using a random noise as the trigger that is restricted to have the same \(L_{\infty}\) and \(L_{0}\) constraint with our SIBA (dubbed as 'Random') and **(2)** the improved version of Random where its trigger magnitude is optimized using Line 8 in Algorithm 1. **Attack Settings.** For all attacks, we set the poisoning rate as \(1\%\) and choose the class '0' as the target class. Specifically, for the settings of BadNets, the trigger is a \(3\times 3\) checkerboard on CIFAR-10 and a \(20\times 20\) all-white patch on VGGFace2; The trigger is a Hello-Kitty image on CIFAR-10 and a random noise image on VGGFace2 for Blended. The transparency parameter is set as \(0.2\); We exploit a pre-trained model to generate the targeted universal adversarial perturbation on the benign model as the trigger for TUAP. The \(L_{\infty}\) constraint is set as \(8/255\) on both datasets; For WaNet, the size of the control field is \(4\times 4\) and the strength parameter of the backward warping field is set to \(0.5\); We adopt the default settings used in its original paper [16] for ISSBA; UBW-P is an untargeted backdoor with the BadNets-type trigger pattern; For our SIBA, we set the step size \(\alpha=0.2\) and \(L_{\infty}\) constraint \(\epsilon=8/255\) on both datasets. 
\(L_{0}\) constraint \(k\) is set to \(100\) on CIFAR-10 dataset and \(1,600\) on VGGFace2 dataset. We set the number of training iterations \(T=200\) and the update step \(K=5\) on both datasets. We use the whole training set to optimize the SIBA trigger. The example of poisoned samples is shown in Figure 2. **Training Settings.** We select ResNet-18 [55] and VGG-16 [56] as the model structures. For the CIFAR-10 dataset, the victim model is obtained by using SGD optimizer with momentum \(0.9\) and weight decay \(5\times 10^{-4}\). The number of training epochs is 100 and the initial learning rate is 0.1 which is divided by 10 in the 60-th epoch and the 90-th epoch. For the VGGFace2 dataset, we exploit an ImageNet pre-trained model and train the victim model using the SGD optimizer with momentum \(0.9\) and weight decay \(1\times 10^{-4}\). The number of training epochs is 30 and the initial learning rate is \(0.001\) which is divided by 10 in the 15-th epoch and the 20-th epoch. Classical data augmentations such as random crop and random horizontal flip are used for higher benign accuracy. Note that the pre-trained model \(f_{\text{b}}\) and the victim model have the same network structure and we will explore the attack effectiveness when they are different in Section IV-E2. **Evaluation Metrics.** Following the classical settings in existing research, we adopt benign accuracy (BA) and attack success rate (ASR) to evaluate all backdoor attacks. BA is the ratio of correctly classified samples in the benign test set. ASR is the ratio of test samples that are misclassified as the target class when the trigger is attached to them. BA indicates the model utility and ASR reflects the attack effectiveness. Besides, we incorporate the comparison of the \(L_{0}\) and \(L_{\infty}\) distances of trigger patterns for indicating stealthiness. They are calculated as the average value among all poisoned samples. ### _Main Results_ As shown in Table I-II, our SIBA reaches the best performance among all sparse and invisible backdoor attacks (\(i.e.\), Random, Sparse, and SIBA) on both datasets. Especially on the VGGFace2 dataset, the ASR improvements are larger than 50% compared to Sparse and 90% compared to Random. These results verify the effectiveness of our trigger optimization. Besides, the ASRs of our attack are always higher than 90% and the BA decreases compared to the model trained without attacks are always less than 2%. In particular, the attack performance of our SIBA is on par with (BadNets and Blended) or even better than (TUAP, WaNet, ISSBA, and UBW-P) of all baseline attacks that are either visible or not sparse. These results verify the effectiveness and stealthiness of our proposed method. ### _Ablation Study_ In this section, we discuss the effectiveness of our SIBA with different key hyper-parameters. Unless otherwise specified, all settings are the same as those used in Section IV-B. **Effects of the Target Label.** To validate the effectiveness of SIBA with different target labels, we conduct experiment on the ResNet18 with ten different classes. As shown in Figure 3, we could find that SIBA achieves \(>90\%\) ASR for all cases on the CIFAR-10 dataset and \(>75\%\) ASR on the VGGFace2 dataset, although the performance may have some mild fluctuations. **Effects of the Poisoning Rate.** To validate the effectiveness of SIBA with different poisoning rates, we experiment on the ResNet18 model with more poisoning rates from \(0.5\%\) to \(2.5\%\). 
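For reference, the sketch below shows one straightforward way to compute the two headline metrics: BA on the untouched test set and ASR on triggered copies of the test images. Excluding samples that already belong to the target class when computing ASR is a common convention and is adopted here as an assumption.

```python
import torch

@torch.no_grad()
def evaluate_ba_asr(model, loader, trigger, y_target, device="cpu"):
    """Return (benign accuracy, attack success rate) for a trained victim model."""
    model.eval()
    n_clean = n_correct = n_poison = n_hit = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # BA: accuracy on benign test images.
        n_correct += (model(x).argmax(dim=1) == y).sum().item()
        n_clean += y.numel()
        # ASR: fraction of triggered non-target images predicted as the target class.
        keep = y != y_target
        if keep.any():
            pred = model((x[keep] + trigger).clamp(0.0, 1.0)).argmax(dim=1)
            n_hit += (pred == y_target).sum().item()
            n_poison += int(keep.sum().item())
    return n_correct / n_clean, n_hit / max(n_poison, 1)
```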
As shown in Figure 4, the attack performance of \begin{table} \begin{tabular}{c|c|c|c c c c c c|c c c} \hline \hline Model \(\downarrow\) & Metric \(\downarrow\), Method \(\rightarrow\) & No Attack & BadNets & Blended & TUAP & WaNet & ISSBA & UBW-P & Random & Sparse & SIBA \\ \hline \multirow{4}{*}{ResNet} & BA (\%) & 79.75 & 78.80 & 78.80 & 78.75 & 79.05 & 78.30 & 77.85 & 77.80 & 78.25 & 78.85 \\ & ASR (\%) & - & 93.68 & 99.95 & 85.84 & 3.32 & 1.00 & 38.20 & 1.74 & 27.79 & 96.21 \\ & \(L_{0}\) & - & 400 & 16,384 & 16.332 & 15,892 & 16,382 & 400 & 1,600 & 1,600 & 1,600 \\ & \(L_{\infty}\) & - & 0.92 & 0.20 & 0.03 & 0.23 & 0.04 & 0.92 & 0.03 & 0.03 & 0.03 \\ \hline \multirow{4}{*}{VGG} & BA (\%) & 86.35 & 86.15 & 86.30 & 85.90 & 85.75 & 85.40 & 84.50 & 85.90 & 85.65 & 86.15 \\ & ASR (\%) & - & 98.42 & 100 & 83.68 & 2.95 & 87.53 & 48.10 & 4.63 & 41.68 & 96.37 \\ \cline{1-1} & \(L_{0}\) & - & 400 & 16,384 & 16.332 & 15,892 & 16.382 & 400 & 1,600 & 1,600 & 1,600 \\ \cline{1-1} & \(L_{\infty}\) & - & 0.92 & 0.20 & 0.03 & 0.23 & 0.04 & 0.92 & 0.03 & 0.03 & 0.03 \\ \hline \hline \end{tabular} \end{table} TABLE II: The results of backdoor attacks on VGGFace2. We mark the results in red if the ASR is lower than \(50\%\) or \(L_{0}/L_{\infty}\) distance is larger than \(10\%\) of the maximum possible values (\(i.e.\), \(1638=16384\times 10\%\) for \(L_{0}\), \(0.1=1\times 10\%\) for \(L_{\infty}\)). our SIBA increases with the increase of the poisoning rate. The attack performance of our SIBA is always better than baseline invisible and sparse attacks (\(i.e.\), Random and Sparse). In particular, on the CIFAR-10 dataset, SIBA achieves \(>90\%\) ASR with only \(0.5\%\) poisoning rate while the poisoning rate of the other two baselines has to be set three or four times higher to achieve similar attack performance. The advantage of our SIBA is even more obvious on the VGGFace2 dataset. **Effects of the \(L_{0}\) Constraints.** To investigate the attack performance of our SIBA under various \(L_{0}\) constraints, we experiment on the ResNet18 model with different \(k\) values ranging from \(50\) to \(250\) on CIFAR-10 and from \(800\) to \(2400\) on VGGFace2, respectively. As shown in Figure 5, the attack effectiveness increases with the increase of \(k\) while having mild effects on the benign accuracy. In particular, our SIBA achieves \(>90\%\) ASR with only 50 perturbed pixels (about \(5\%\) sparsity) on the CIFAR-10 dataset. However, for the other baseline attacks, the number of maximum perturbed pixels has to be increased to 150 for 'Sparse' and 200 for 'Random' to reach a similar performance. The improvement of our SIBA is even larger on the VGGFace2 dataset. **Effects of the \(L_{\infty}\) Constraint.** To investigate the attack performance of our SIBA under various \(L_{\infty}\) constraints, we experiment on the ResNet18 model with different \(\epsilon\) values, ranging from \(4/255\) to \(20/255\). As shown in Figure 6, similar to the effects of \(k\), the attack effectiveness increases with the increase of \(\epsilon\) while having mild effects on the benign accuracy. Our SIBA can achieve \(>90\) ASR under \(4/255\) budget on the CIFAR-10 dataset. In contrast, to achieve a similar attack performance, the \(L_{\infty}\) constraints of both baseline attacks have to be increased to three or four times larger than that of the SIBA. The results on VGGFace2 also demonstrate the superiority of SIBA over these baseline methods. 
**Effects of Other Parameters.** To demonstrate the stability of our attack under other parameters, we experiment on CIFAR-10 dataset with ResNet18 model and various \(K\), \(\alpha\), and \(T\) values. Specifically, \(K\) ranges from \(5\) to \(20\); \(\alpha\) ranges from \(0.2\) to \(1.0\); \(T\) ranges from \(200\) to \(1,000\). As shown in Figure 7, the ASR of SIBA is always higher than \(95\%\) in all cases. These results indicate that we can easily obtain a good performance without fine-tuning these parameters in practice. Fig. 4: Results with different poisoning rates on the CIFAR-10 dataset and the VGGFace2 dataset. Fig. 5: The effects of \(L_{0}\) constraint on the CIFAR-10 dataset and the VGGFace2 dataset. Fig. 3: The Effects of the target class on CIFAR-10 and VGGFace2 datasets. Fig. 6: The effects of \(L_{\infty}\) constraint on the CIFAR-10 dataset and the VGGFace2 dataset. ### _The Resistance to Potential Backdoor Defenses_ **The Resistance to Neural Cleanse.** As a representative detection of poisoned training samples, neural cleanse (NC) [23] first attempts to reconstruct the possible trigger of the candidate model and collects the \(L_{1}\) values of the optimized trigger associated with each target label. Then, NC calculates the median absolute deviation of the group and the anomaly index of each label. If the anomaly index is larger than the threshold, the model is regarded as backdoored and the defender can filter out poisoned training samples. In our experiments, the threshold is 2 as suggested in its original paper. We use the Adam optimizer in which the learning rate is \(0.1\). The coefficient of the regularization term is \(0.001\) and the number of training epochs is 50. As shown in Figure 8, the NC is ineffective to detect our backdoored model since the anomaly index of the proposed method is always lower than the threshold value. This failure is mostly because our SIBA only needs to manipulate a small number of pixels such that the optimization of NC cannot catch our trigger location. **The Resistance to Anti-backdoor Learning (ABL).** As a representative poison suppression method, ABL [26] first identifies the poisoned sample candidates with loss values and then unlearns the candidate samples by gradient ascent. In our experiments, the isolation epoch and the unlearning epoch are set to 20 and 80, as suggested in its original paper [26]. As shown in Table III, our attack is resistant to ABL in most cases, although the ASR may have some decreases. Its failure is mostly because the loss values are not effective to reflect the difference between poisoned and benign samples of SIBA. **The Resistance to Fine-pruning.** As a representative backdoor removal method, fine-pruning (FP) [28] first tests the candidate model with a small clean validation set and records the average activation of each neuron. Then, FP prunes the channels with increasing order until the clean accuracy drops below some threshold. In our experiments, the validation set is obtained by randomly choosing \(20\%\) samples from the clean training dataset and the total channel number is 512. The curves of BA and ASR with respect to the number of pruned channels are shown in Figure 9. We could observe that the ASR of the proposed backdoor attack preserves on CIFAR-10 even if a large portion of channels are pruned. As for the VGGFace2 dataset, the ASR is reduced below \(80\%\) when the number of pruned channels is larger than \(400\). However, the BA is significantly decreased as its sacrifice. 
These results verify the resistance of our SIBA to FP. **The Resistance to SentiNet.** As a representative white-box detection of poisoned testing samples, SentiNet [31] relies on model interpretability techniques to locate potential trigger regions. Grad-CAM uses the gradient with respect to the model's final layer and calculates the salience map of the input region to reflect the positive importance of the input image. In our experiments, we visualize the salience maps of some poisoned samples on CIFAR-10 and VGGFace2 datasets. As shown in Figure 10, the salience maps could not provide useful information to detect the trigger. Its failure is mostly because the trigger of SIBA is not a small-sized patch. **The Resistance to STRIP.** As a representative black-box detection of poisoned samples with predicted logits, STRIP [32] perturbs a given test image by superimposing various images and then inspects the entropy of the model prediction. The suspicious samples having low entropy are regarded as poisoned samples. We evaluate the resistance of our SIBA to STRIP by visualizing the entropy distributions of samples. As shown in Figure 11, the entropy distributions of poisoned samples are mixed with those of benign samples. Accordingly, Fig. 8: The resistance of our SIBA to neural cleanse. \begin{table} \begin{tabular}{c|c|c c} \hline \hline Dataset\(\downarrow\) & Model\(\downarrow\), Metric\(\rightarrow\) & BA (\%) & ASR (\%) \\ \hline \multirow{2}{*}{CIFAR-10} & ResNet & 88.33 & 22.71 \\ & VGG & 82.41 & 94.00 \\ \hline \multirow{2}{*}{VGGFace2} & ResNet & 72.45 & 74.74 \\ & VGG & 77.20 & 96.53 \\ \hline \hline \end{tabular} \end{table} TABLE III: The resistance to anti-backdoor learning. Fig. 7: Results of our SIBA with different parameters on the CIFAR-10 dataset and the VGGFace2 dataset. our SIBA can evade the detection of STRIP. **The Resistance to Scale-Up.** As a representative black-box detection of poisoned samples with predicted labels, Scale-Up [33] discovered the phenomenon that the poisoned samples had the scaled prediction consistency when the pixel values were amplified and proposed to distinguish the poisoned samples by counting the predictions of scaled images. In our experiments, we use a scaling set with size 5 and set the threshold as \(0.8\). We report true positive rate (TPR), false positive rate (FPR), area under the receiver operating characteristic (AUROC), and ASR in Table IV. As shown in the table, although Scale-Up can decrease the effectiveness of SIBA to some extent, the detection performance is far from satisfactory since the average ASR is still \(>70\%\). In other words, our SIBA is resistant to the Scale-Up to a large extent. ### _Discussion_ #### Iv-E1 A Closer Look to the Effectiveness of our SIBA To understand the effectiveness of our SIBA, we also visualize the (normalized) trigger patterns when different target labels are adopted. As shown in Figure 12, the SIBA trigger is always located in the main body of the object and reflects the discriminate image region of the target class (such as the airplane's fuselage and wings in Figure 12). In other words, the generated sparse trigger pattern contains semantic information about the target class. The manipulated positions of SIBA reveal the fragile parts suffering from backdoor attacks and shed the light on the understanding of the backdoor robustness of DNNs. However, we do not observe a similar phenomenon in the baseline backdoor attacks. 
This indicates that our attack may serve as a potential path toward explainable artificial intelligence, which is an unexpected benefit. \begin{table} \begin{tabular}{c|c|c c c c} \hline Dataset\(\downarrow\) & Model\(\downarrow\), Metric\(\rightarrow\) & TPR & FPR & AUROC & ASR (\%) \\ \hline CIFAR-10 & ResNet & 0.5816 & 0.2067 & 0.7427 & 41.84 \\ & VGG & 0.4028 & 0.2135 & 0.6369 & 59.72 \\ \hline VGGFace2 & ResNet & 0.0220 & 0.0855 & 0.4428 & 97.80 \\ & VGG & 0.0295 & 0.0830 & 0.4603 & 97.05 \\ \hline \end{tabular} \end{table} TABLE IV: The resistance of our SIBA to Scale-Up. Fig. 10: The resistance of our SIBA to SentiNet. Fig. 9: The resistance of our SIBA to FP. #### IV-E2 Attack Transferability with Different Model Structures As described in Section III-B, we need a pre-trained benign model to generate our SIBA trigger pattern. The experiments in Section IV-A are conducted under the assumption that the surrogate model and the victim model share the same model structure, which may not hold in practice since the adversaries have no information about the structure that victim users may use. In this part, we explore the transferability of our SIBA: '_How effective is SIBA when the surrogate model is different from the victim model?_'. We select four network architectures, ResNet18, ResNet34, VGG16, and VGG19, on CIFAR-10 for discussion. Other settings are the same as those used in Section IV-B. As shown in Figure 13, our SIBA achieves consistently excellent attack performance under different settings, although the performance may have some fluctuations due to different model capacities. These results indicate that our SIBA method does not require knowing any information about victim users and therefore can serve as an effective poison-only backdoor attack. \begin{table} \begin{tabular}{c|c c c} \hline Data\(\downarrow\), Metric\(\rightarrow\) & BA (Surrogate) & BA (Victim) & ASR (Victim) \\ \hline 5\% & 57.55\% & 94.44\% & 65.44\% \\ 10\% & 70.65\% & 94.25\% & 97.10\% \\ 15\% & 79.32\% & 94.11\% & 95.49\% \\ 20\% & 84.14\% & 94.83\% & 97.56\% \\ \hline 100\% & 94.67\% & 94.06\% & 97.60\% \\ \hline \end{tabular} \end{table} TABLE V: Results with limited data on the CIFAR-10 dataset. Fig. 11: The resistance of our SIBA to STRIP. Fig. 12: The normalized SIBA triggers generated with different target classes on the CIFAR-10 dataset. Fig. 13: The performance of our SIBA with different surrogate and victim structures on the CIFAR-10 dataset. **Row**: surrogate models; **Column**: victim models. #### IV-E3 The Extension to the All-to-all Setting The experiments in Section IV-B adopt the all-to-one setting, where all poisoned samples are expected to be classified as the same target class. In this part, we extend our SIBA to the all-to-all setting, in which the target class depends on the ground-truth class of the poisoned sample. Specifically, we adopt the most classical transformation function '\(c(y)=(y+1)\bmod C\)' in this paper, following the settings of existing papers. 
In this case, the problem formulation of our SIBA is as follows: \[\min_{\mathbf{t}}\sum_{(\mathbf{x},y)\in\mathcal{D}_{v}}\mathcal{L}(f_{\mathbf{b}}(\mathbf{x}+\mathbf{t}),c(y)) \tag{6}\] \[s.t.\ \|\mathbf{t}\|_{0}\leq k,\ \|\mathbf{t}\|_{\infty}\leq\epsilon.\] We conduct experiments with the all-to-all SIBA attack on the CIFAR-10 dataset with ResNet18. The poisoning rate is increased to \(10\%\) since all-to-all attacks are more complicated than all-to-one methods. All other settings are the same as those used in Section IV-B. As a result, the BA is \(94.61\%\) and the ASR is \(93.34\%\), indicating that our SIBA can also be applied under the all-to-all setting. #### IV-E4 SIBA with Limited Training Data In the previous sections, we assumed that the adversary optimizes the SIBA trigger via the whole training set. However, in real scenarios, it might be infeasible for the adversary to acquire the whole training set to train the surrogate model. We hereby raise the question: _How effective is SIBA when the adversary has limited data?_ In this part, we optimize our SIBA trigger based on a subset of the training set in which the data percentage ranges from \(5\%\) to \(20\%\). We report the BA of the surrogate model, and the BA and the ASR of the victim model. As shown in Table V, the degraded performance of the surrogate model does not imply that SIBA becomes ineffective when only limited training data is adopted. Our SIBA achieves \(>90\%\) ASR even when the adversary can only access \(10\%\) of the training data. These results verify the data efficiency of our SIBA. #### IV-E5 SIBA with Asymmetric Triggers To further boost the attack effectiveness while maintaining attack stealthiness in practical scenarios, we explore the idea of asymmetric triggers [17, 13], which maintain the original trigger during the training process but amplify it at inference time. Specifically, we construct the test poisoned sample with the following formula: \(\mathbf{x}_{i}+\epsilon\cdot\text{sign}(\mathbf{t}_{i}),\ i=1,2,\cdots,d\), where \(\epsilon\) controls the visibility. In our experiments, we set the maximum number of perturbed pixels to \(50\). Other training details are consistent with those in Section IV-A. We illustrate the poisoned samples in Figure 14 and depict the ASR curves in Figure 15, from which we find that amplified triggers not only outperform the original triggers but also could be implemented as backdoor patches [57, 58] in the physical world. ## V Conclusion In this paper, we proposed a novel backdoor attack, \(i.e.\), the sparse and invisible backdoor attack (SIBA), to achieve attack effectiveness and attack stealthiness simultaneously. Our SIBA method only needs to modify a few pixels of the original images to generate poisoned samples and is imperceptible to humans due to the low modification magnitude. To achieve this, we formulated the trigger generation as a bi-level optimization problem with sparsity and invisibility constraints and proposed an effective method to solve it. We conducted extensive experiments on benchmark datasets, verifying the effectiveness of our attack, its resistance to potential defenses, and its flexibility under different settings. We hope our method can provide a new angle on and a deeper understanding of backdoor mechanisms, to facilitate the design of more secure and robust DNNs. 
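As a small supplement to the asymmetric-trigger experiment in Section IV-E5, the sketch below shows the inference-time amplification \(\mathbf{x}+\epsilon\cdot\text{sign}(\mathbf{t})\): pixels outside the trained trigger's support stay untouched, while each perturbed pixel is pushed to the full budget \(\epsilon\). The function name and the clipping to the valid pixel range are illustrative assumptions.

```python
import torch

def amplify_trigger(x, trigger, eps):
    """Asymmetric trigger at inference: x + eps * sign(t), clipped to [0, 1].

    Because sign(0) = 0, only the pixels selected during trigger optimization
    are modified, so the L0 support of the trained trigger is preserved.
    """
    return (x + eps * trigger.sign()).clamp(0.0, 1.0)
```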
## Appendix **Lemma 1**.: _Assuming the initial value of \(\mathbf{t}_{i}\) is 0, Problem 4 has the analytical solution as follows:_ \[\mathbf{t}_{i+1,j}=\begin{cases}\mathbf{v}_{i,j}&\text{if }j\in C^{\prime}\\ 0&\text{if }j\notin C^{\prime}\end{cases} \tag{7}\] _where \(C^{\prime}\) represents the subscript group which has the largest \(k\) elements of \(|\nabla_{\mathbf{t}}h(\mathbf{t}_{i})|\)._ Proof.: We denote \(C\) as the size-\(k\) index subset such that \(\mathbf{u}_{j}=\mathbf{v}_{i,j}\) if \(j\in C\) and \(\mathbf{u}_{j}=0\) if \(j\notin C\), and assume that the initial value of \(\mathbf{t}_{i}\) is 0. Then, the objective in Problem 4 can be derived as follows: \[\begin{split}\|\mathbf{s}_{i}-\mathbf{u}\|_{2}^{2}&=\sum_{j\in C}(\mathbf{s}_{i,j}-\mathbf{v}_{i,j})^{2}+\sum_{j\notin C}\mathbf{s}_{i,j}^{2}\\ &=\sum_{j\in C}\left(\alpha\cdot(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}-\epsilon\cdot\text{sign}(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}\right)^{2}+\sum_{j\notin C}\left(\alpha\cdot(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}\right)^{2}\\ &=\sum_{j\in C}\left(\left(\alpha\cdot|(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}|-\epsilon\right)\cdot\text{sign}(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}\right)^{2}+\sum_{j\notin C}\left(\alpha\cdot(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}\right)^{2}\\ &=\sum_{j\in C}\left(\alpha\cdot(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}\right)^{2}-\sum_{j\in C}2\alpha\epsilon\cdot|(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}|+\sum_{j\in C}\epsilon^{2}+\sum_{j\notin C}\left(\alpha\cdot(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}\right)^{2}\\ &=\|\alpha\cdot\nabla_{\mathbf{t}}h(\mathbf{t}_{i})\|_{2}^{2}+k\epsilon^{2}-2\alpha\epsilon\sum_{j\in C}|(\nabla_{\mathbf{t}}h(\mathbf{t}_{i}))_{j}|.\end{split} \tag{8}\] Observing equation (8), the first two terms are constants and the objective is minimized when the last term contains the largest \(k\) elements of \(|\nabla_{\mathbf{t}}h(\mathbf{t}_{i})|\). Fig. 15: Results of SIBA with amplified triggers on CIFAR-10.
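The closed-form projection above can also be sanity-checked numerically: among all supports of size \(k\), keeping the coordinates of \(\mathbf{v}_{i}\) with the largest \(|\nabla_{\mathbf{t}}h(\mathbf{t}_{i})|\) should attain the minimum of the objective in Problem 4. The brute-force comparison below (with a small dimension so enumeration is cheap) is an illustrative check and not part of the original paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, k, alpha, eps = 6, 2, 0.2, 0.05
grad = rng.normal(size=d)                               # stands in for grad_t h(t_i), with t_i = 0
s = -alpha * grad                                       # s_i = t_i - alpha * grad
v = np.clip(-alpha * eps * np.sign(grad), -eps, eps)    # v_i from Eq. (3)

def objective(support):
    u = np.zeros(d)
    idx = list(support)
    u[idx] = v[idx]
    return float(np.sum((s - u) ** 2))

best = min(objective(c) for c in itertools.combinations(range(d), k))
lemma_support = np.argsort(-np.abs(grad))[:k]           # largest-|gradient| coordinates (Lemma 1)
assert np.isclose(objective(lemma_support), best)
```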
2305.07447
Universal Source Separation with Weakly Labelled Data
Universal source separation (USS) is a fundamental research task for computational auditory scene analysis, which aims to separate mono recordings into individual source tracks. There are three potential challenges awaiting the solution to the audio source separation task. First, previous audio source separation systems mainly focus on separating one or a limited number of specific sources. There is a lack of research on building a unified system that can separate arbitrary sources via a single model. Second, most previous systems require clean source data to train a separator, while clean source data are scarce. Third, there is a lack of USS system that can automatically detect and separate active sound classes in a hierarchical level. To use large-scale weakly labeled/unlabeled audio data for audio source separation, we propose a universal audio source separation framework containing: 1) an audio tagging model trained on weakly labeled data as a query net; and 2) a conditional source separation model that takes query net outputs as conditions to separate arbitrary sound sources. We investigate various query nets, source separation models, and training strategies and propose a hierarchical USS strategy to automatically detect and separate sound classes from the AudioSet ontology. By solely leveraging the weakly labelled AudioSet, our USS system is successful in separating a wide variety of sound classes, including sound event separation, music source separation, and speech enhancement. The USS system achieves an average signal-to-distortion ratio improvement (SDRi) of 5.57 dB over 527 sound classes of AudioSet; 10.57 dB on the DCASE 2018 Task 2 dataset; 8.12 dB on the MUSDB18 dataset; an SDRi of 7.28 dB on the Slakh2100 dataset; and an SSNR of 9.00 dB on the voicebank-demand dataset. We release the source code at https://github.com/bytedance/uss
Qiuqiang Kong, Ke Chen, Haohe Liu, Xingjian Du, Taylor Berg-Kirkpatrick, Shlomo Dubnov, Mark D. Plumbley
2023-05-11T16:41:55Z
http://arxiv.org/abs/2305.07447v1
# Universal Source Separation with Weakly Labelled Data ###### Abstract Universal source separation (USS) is a fundamental research task for computational auditory scene analysis, which aims to separate mono recordings into individual source tracks. There are three potential challenges awaiting the solution to the audio source separation task. First, previous audio source separation systems mainly focus on separating one or a limited number of specific sources. There is a lack of research on building a unified system that can separate arbitrary sources via a single model. Second, most previous systems require clean source data to train a separator, while clean source data are scarce. Third, there is a lack of USS system that can automatically detect and separate active sound classes in a hierarchical level. To use large-scale weakly labeled/unlabeled audio data for audio source separation, we propose a universal audio source separation framework containing: 1) an audio tagging model trained on weakly labeled data as a query net; and 2) a conditional source separation model that takes query net outputs as conditions to separate arbitrary sound sources. We investigate various query nets, source separation models, and training strategies and propose a hierarchical USS strategy to automatically detect and separate sound classes from the AudioSet ontology. By solely leveraging the weakly labelled AudioSet, our USS system is successful in separating a wide variety of sound classes, including sound event separation, music source separation, and speech enhancement. The USS system achieves an average signal-to-distortion ratio improvement (SDRi) of 5.57 dB over 527 sound classes of AudioSet; 10.57 dB on the DCASE 2018 Task 2 dataset; 8.12 dB on the MUSDB18 dataset; an SDRi of 7.28 dB on the Slakh2100 dataset; and an SSNR of 9.00 dB on the voicebank-demand dataset. We release the source code at [https://github.com/bytedance/uss](https://github.com/bytedance/uss) Universal source separation, hierarchical source separation, weakly labelled data. ## 1 Introduction Mono source separation is the task of separating single-channel audio recordings into individual source tracks. An audio recording may consist of several sound events and acoustic scenes. _Universal source separation_ (USS) is a task to separate arbitrary sound from a recording. Source separation has been researched for several years and has a wide range of applications, including speech enhancement [1, 2], music source separation [3], and sound event separation [4, 5]. USS is closely related to the well-known cocktail party problem [6], where sounds from different sources in the world mix in the air before arriving at the ear, requiring the brain to estimate individual sources from the received mixture. Humans can focus on a particular sound source and separate it from others, a skill sometimes called _selective hearing_. As a study of auditory scene analysis by computational means, computational auditory scene analysis [7, 8] systems are machine listening systems that aim to separate mixtures of sound sources in the same way that human listeners do. Many previous works mainly focus on _specific source separation_ that only separate one or a few sources; these include speech separation [1, 2] and music source separation [9] tasks. Different from specific source separation tasks such as speech enhancement or music source separation, a USS system aims to automatically detect and separate the tracks of sound sources from a mixture. 
One difficulty of USS is that there are hundreds of different sounds in the world, and it is difficult to separate all sounds using a unified model [10]. Recently, the USS problem has attracted the interest of several researchers. A system [11] was proposed to separate arbitrary sounds by predicting the masks of sounds, where the masks control how much of the mixture signal should remain for each source. Unsupervised USS systems [12, 13] were proposed to separate sounds by mixing training samples into a mixture and separating the mixture into a variable number of sources. The Free Universal Sound Separation (FUSS) system applied a time-domain convolutional network (TDCN++) to separate mixtures into up to 4 separate sources. A class-conditional system [14] was proposed for 4-stem music source separation. Some other methods [15, 16, 17] use audio embeddings that characterize an audio clip to control what sources to separate from a mixture. In [18], one-hot encodings of sound classes are used as controls to separate corresponding sources. Other sound separation systems include learning to separate sounds from weakly labelled scenes [19] and SuDoRM-RF [13, 20], which removes sound sources from mixtures. Recently, a language-based source separation system was proposed in [21]. Those systems are mainly trained on small datasets and do not scale to automatically detect and separate hundreds of sound classes. For the source separation problem, we define _clean source data_ as audio segments that contain only target sources without other sources. The clean source data can be mixed to form audio mixtures to train potential separators. However, collecting clean data is time-consuming. For sound classes such as "Speech", clean data can be recorded in the laboratory. But for many other environmental sounds such as "Thunder", the collection of clean data is difficult. Recently, the concept of _weakly labelled data_[22, 23] was used in audio signal processing. In contrast to clean source data, weakly labelled data contains multiple sources in an audio clip. An audio clip is labelled with one or multiple tags, while the time information of the tags is unknown. For example, a 10-second audio clip is labelled as "Thunder" and "Rain", but the time when these two events exactly appear within this 10-second clip is not provided. Weakly labelled data has been widely used in audio tagging [24, 25, 26, 27] and sound event detection [28, 29, 30]. But there has been limited work on using weakly labelled data for source separation [31, 32]. In this work, we propose a USS framework that can be trained with weakly labelled data. This work extends our previously proposed USS systems [31, 32, 33] with contributions as follows: * We are the first to use large-scale weakly labelled data to train USS systems that can separate hundreds of sound classes. * We propose to use sound event detection systems trained on weakly labelled data to detect short segments that are most likely to contain sound events. * We investigate a variety of query nets to extract conditions to build USS systems. The query nets are pretrained or finetuned audio tagging systems. * We propose a hierarchical USS strategy to automatically detect and separate the sources of existing sound classes with the hierarchical AudioSet ontology. The USS procedure does not require manual specification of the sound classes to separate. 
* We show that a single USS system is able to perform a wide range of separation tasks, including sound event separation, music source separation, and speech enhancement. We conduct comprehensive ablation studies to investigate how different factors in our system affect the separation performance. This article is organized as follows. Section 2 introduces neural network-based source separation systems. Section 3 introduces our proposed weakly labelled source separation framework. Section 4 reports on the results of experiments. Section 5 concludes this work. ## 2 Source Separation via Neural Networks Deep learning methods for audio source separation have outperformed traditional methods such as Non-negative Matrix Factorization [34]. Fig. 1 shows source separation models in the time domain (left) and in the frequency domain (right). Here, we introduce the basic methodology of those separation models. ### _Time-domain Separation Models_ A neural network time-domain separation model \(f\) is typically constructed as an encoder-decoder architecture, as shown in the left of Fig. 1. Formally, given a single-channel audio clip \(x\in\mathbb{R}^{L}\) and a separation target \(s\in\mathbb{R}^{L}\), where \(L\) is sample length, the separator \(f\) contains two types: a synthesis-based separation system that directly outputs the waveform of the target source, and a mask-based separation that predict a mask that can be multiplied to the mixture to output the target source. Separation models such as Demucs [35, 36] and Wave-U-Net [9], \(f\) directly estimates the final separation target: \(\hat{s}=f(x)\). Mask-based separation models such as TasNet [37] and ConvTasNet [38] predict masks in the latent space produced by the neural network. The masks control how much of sources should remain from the mixture. Then, a decoder is designed to reconstruct the separated waveform from the masked latent feature produced by the neural network. ### _Frequency-domain Separation Models_ In contrast to time-domain models, frequency-domain models leverage a spectrogram, such as a short-time Fourier transform (STFT), to facilitate the separation process. Harmonic features have more patterns in the frequency domain than those in the time domain. This might help improve separation performance in source separation tasks, such as music source separation and environmental sound separation [39]. Formally, given a mono audio clip \(x\), we denote the STFT of \(x\) as a complex matrix \(X\in\mathbb{C}^{T\times F}\), where \(T\) is the number of time frames and \(F\) is the number of frequency bins. We denote the magnitude and the phase of \(X\) as \(|X|\) and \(\angle X\), respectively. The right part of Fig. 1 shows a frequency-domain separation system \(f\) predicting a magnitude ideal ratio mask (IRM) [40]\(M\in\mathbb{R}^{T\times F}\) or a complex IRM (cIRM) Fig. 1: The standard architecture of deep-learning-based audio source separation model. Left top: synthesis-based separation model. Left bottom: mask-based separation model. Right: the general type of frequency-domain separation model. [41]\(M\in\mathbb{C}^{T\times F}\) that can be multiplied by the STFT of the mixture to obtain the STFT of the separated source. The complex STFT of the separated source \(\hat{S}\in\mathbb{C}^{T\times F}\) can be calculated by: \[\hat{S}=M\odot X. \tag{1}\] where \(\odot\) is the element-wise complex multiplication. Then, the separated source \(\hat{s}\in\mathbb{R}^{L}\) can be obtained by applying an inverse STFT on \(\hat{S}\). 
Frequency domain models include fully connected neural networks [2], recurrent neural networks (RNNs) [42, 43, 44], and convolutional neural networks (CNNs) [43, 45, 46]. UNets [47, 48] are variants of CNN that contain encoder and decoder layers for source separation. Bandsplit RNNs (BSRNNs) [39] apply RNNs along both the time and frequency axes to capture time and frequency domain dependencies. There are also approaches such as hybrid Demucs [49, 50] which combine time and frequency domain systems to build source separation systems. ### _Challenges of Source Separation Models_ As mentioned above, many previous source separation systems require clean source data to train source separation systems. However, the collection of clean source data is difficult and time-consuming. Table I summarizes datasets that can be used for source separation. On the one hand, previous clean source datasets have durations of around tens of hours. On the other hand, weakly labelled datasets are usually larger than clean source datasets and clean datasets. AudioSet [23] is a representative weakly labelled dataset containing over 5,800 hours of 10-second audio clips and is larger in both size and number of sound classes than clean source datasets. AudioSet has an ontology of 527 sound classes in its released version. The ontology of AudioSet has a tree structure, where each audio clip may contain multiple tags. In this work, we use the weakly labelled AudioSet dataset containing 5,800 hours to train a USS system that can separate hundreds of sound classes. ## 3 USS with Weakly Labelled Data ### _Weakly Labelled Data_ In contrast to clean source data, weakly labelled data only contain the labels of what sound classes are present in an audio recording. Weakly labelled data may also contain interfering sounds. There are no time stamps for sound classes or clean sources. We denote the \(n\)-th audio clip in a weakly labelled dataset as \(a_{n}\) where \(a\) is the abbreviation for the _audio_. The tags of \(a_{n}\) is denoted as \(y_{n}\in\{0,1\}^{K}\), where \(K\) is the number of sound classes. The value \(y_{n}(k)=1\) indicates the presence of a sound class \(k\) while \(y_{n}(k)=0\) indicates the absence of a sound class \(k\). We denote a weakly labelled dataset as \(D=\{a_{n},y_{n}\}_{n=1}^{N}\), where \(N\) is the number of training samples in the dataset. The left part of Fig. 2 shows a clean source audio clip containing the clean waveform of "Flute". The right part of Fig. 2 shows a weakly labelled audio clip containing a target sound class "Air horn, truck horn" which only occurs between 2.5 s and 4.0 s. The weakly labelled audio recording also contains unknown interference sounds, i.e., \(y_{n}(k)=0\) may contain missing tags for some sound class \(k\). The goal of a weakly labelled USS system is to separate arbitrary sounds trained with only weakly labelled data. Fig. 3 depicts the architecture of our proposed system, containing four steps: 1. We apply a sampling strategy to sample audio clips of different sound classes from a weakly labelled audio dataset. 2. We define an _anchor segment_ as a short segment that is most likely to contain a target sound class in a long audio clip. We apply an anchor segment mining algorithm to localize the occurrence of events/tags in the weakly labelled audio tracks. 3. Use pretrained audio tagging models to predict the tag probabilities or embeddings of anchor segments. 4. Mix anchor segments as input mixtures. 
Train a query-based separation network to separate the mixture into one of the target source queried by the sound class condition. ### _Audio Clips Sampling_ For a large-scale dataset, we apply two sampling strategies: 1) random sampling: randomly sample audio clips from the dataset to constitute a mini-batch; and 2) balanced sampling: sample audio clips from different sound classes to constitute a mini-batch to ensure the clips contain different sound classes. AudioSet is highly unbalanced; sound classes such as "Speech" and "Music" have almost 1 million audio clips, while sound classes such as "tooth breath" have only tens of training samples. Without balanced sampling, the neural network may never see "tooth breath" if the training is not long enough. Following the training scheme of audio classification systems [25, 26, 27, 56], we apply the balanced \begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & Dur. (h) & Classes & Types \\ \hline Voicebank-Demand [51] & 19 & 1 & Clean \\ MUSDB18 [52] & 6 & 4 & Clean \\ UrbanSound8K [53] & 10 & 10 & Clean \\ FSDKaggle 2018 [54] & 20 & 41 & Clean \\ FUSS [55] & 23 & 357 & Clean \\ AudioSet [23] & 5,800 & 527 & Weak \\ \hline \hline \end{tabular} \end{table} TABLE I: Source separation datasets. The types can be clean data or weakly labelled data. Fig. 2: Left: Clean source data of sound class “Flute”. Right: Weakly labelled data of sound class “Air horn, truck horn” which only occurs between 2.5s - 4.0s. sampling strategy to retrieve audio data from AudioSet so that all sound classes can be sampled equally. That is, each sound class is sampled evenly from the unbalanced dataset. We denote a mini-batch of sampled audio clips as \(\{a_{i}\}_{i=1}^{B}\), where \(B\) is the mini-batch size. ### _Anchor Segment Mining_ We define _anchor segment mining_ as a procedure to localize anchor segments in an audio clip. We use sound event detection models that are trained only on weakly-labelled data but can localize the occurrence (i.e. time stamps) of sound classes. Recently, audio tagging systems trained with the weakly labelled AudioSet [25, 26, 57, 58, 27] have outperformed systems trained with clean source data. We apply Pretrained Audio Neural Networks (PANNs) [25] and a Hierarchical Token-semantic Audio Transformer (HTS-AT) [27] as audio tagging models to perform the anchor segment mining procedure. Such models are able to extract audio clips with relatively clean sound sources from weakly labelled audio samples. Anchor segment mining is the core part of USS systems trained with weakly labelled data. Since the weakly labeled audio track does not always contain the labeled sound class throughout its timeline, we need to extract a short audio segment inside this track to create source data for training the separation model. Formally, given an audio clip \(a_{i}\in\mathbb{R}^{L}\), an anchor segment mining algorithm extracts an anchor segment \(s_{i}\in\mathbb{R}^{L^{\prime}}\) from \(a_{i}\), where \(L^{\prime}<L\) is the samples number of the anchor segment. For each audio clip in mini-batch \(\{a_{i}\}_{i=1}^{B}\), we propose two types of anchor segment mining strategies: (A) Randomly select an anchor segment \(s_{i}\) from an audio clip \(a_{i}\), or (B) Apply a pretrained sound event detection (SED) system to detect an anchor segment \(s_{i}\), where the center of \(s_{i}\) is the time stamp where the sound class label is most likely to occur. 
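A minimal sketch of the shared interface of the two mining strategies: strategy (A) simply crops a random anchor segment, while strategy (B) centres the segment on the time stamp returned by a SED model, as formalized in the next paragraphs. Sample-index handling and the `sed_center_fn` callback are assumptions made for the example rather than the released implementation.

```python
import random

def mine_anchor(audio, seg_len, strategy="A", sed_center_fn=None, class_idx=None):
    """Extract an anchor segment s_i of seg_len samples from a weakly labelled clip a_i.

    strategy "A": random crop.
    strategy "B": centre the crop on the SED-predicted time stamp;
                  sed_center_fn(audio, class_idx) is assumed to return a sample index.
    """
    if strategy == "A":
        start = random.randint(0, max(0, len(audio) - seg_len))
    else:
        center = sed_center_fn(audio, class_idx)
        start = min(max(0, center - seg_len // 2), max(0, len(audio) - seg_len))
    return audio[start:start + seg_len]
```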
For the SED method (B) of anchor mining, we leverage PANNs [25] and HTS-AT [27] to perform sound event detection on the audio track. We introduce the SED anchor segment mining strategy (B) as follows. For an audio clip \(a_{i}\), a SED system produces two main outputs: 1) event classification prediction \(p_{\text{AT}}\in[0,1]^{K}\) where \(K\) is the number of sound classes and AT is the abbreviation for audio tagging; and 2) framewise event prediction \(p_{\text{SED}}\in[0,1]^{T\times K}\), where \(T\) the number of frames and SED is the abbreviation for sound event detection. To use weakly labelled data in training, the audio tagging prediction is usually calculated by maximizing the framewise event prediction \(p_{\text{SED}}\). Both PANNs and HTS-AT are trained with the weakly labeled data by minimizing the binary cross entropy loss between the audio tagging prediction \(p_{\text{AT}}\) and the labels \(y\in[0,1]^{K}\) and: \[l=-\sum_{k=1}^{K}y(k)\ln p_{\text{AT}}(k)+(1-y(k))\ln(1-p_{\text{AT}}(k)). \tag{2}\] Fig. 4: Top: log mel spectrogram of a 10-second audio clip from AudioSet; Middle: predicted SED probability of “Speech”, where red block shows the selected anchor segment; Bottom: predicted audio tagging probabilities of the anchor segment. Fig. 3: The architecture of our proposed query-based audio source separation pipeline trained from weakly-label data, including datasets, sampling strategies, audio tagging model, and conditional audio source separation models. We then use the trained sound event detection model to perform framewise event prediction \(p_{\text{SED}}\) of an audio clip. We denote the anchor segment score of the \(k\)-th sound class as: \[q_{k}(t)=\sum_{t-\tau/2}^{t+\tau/2}p_{\text{SED}}(t,k), \tag{3}\] where \(\tau\) is the duration of anchor segments. Then, the center time \(t\) of the optimal anchor segment is obtained by: \[t_{\text{anchor}}=\underset{t}{\text{argmax}}\;q_{k}(t). \tag{4}\] The red block in Fig. 4 shows the detected anchor segment. We apply the anchor segment mining strategy as described in (3) and (4) process a mini-batch of the audio clips \(\{x_{1},...,x_{B}\}\) into a mini-batch of anchor segments \(\{s_{1},...,s_{B}\}\). Algorithm 1 shows the procedure for creating training data. Step 1 describes audio clip sampling and Step 2 describes anchor segment mining. To further avoid two anchor segments containing the same classes being mixed, we propose an optional Step 3 to mine anchor segments from a mini-batch of audio clips \(\{s_{1},...,s_{B}\}\) to constitute mixtures. Step 4 describes mixing detected anchor segments into mixtures to train the USS system. Fig. 5 shows the model architectures of both PANNs and HTS-AT as two audio tagging models we employed. The DecisionLevel systems of PANNs [25] provide frame-wise predictions and contain VGG-like CNNs to convert an audio mel-spectrogram into feature maps. The model averages the feature maps over the time axis to obtain a final event classification vector. The framewise prediction \(p_{\text{SED}}\in[0,1]^{T\times K}\) indicates the SED result. Additionally, the output of the penultimate layer with a size of \((T,H)\) can be used to obtain its averaged vector with a size of \(H\) as a latent source embedding for the query-based source separation in our system, where \(H\) is the dimension of the latent embedding. HTS-AT [27] is a hierarchical token-semantic transformer for audio classification. It applies Swin-Transformer [59] to an audio classification task. In the right of Fig. 
5, a mel-spectrogram is cut into different patch tokens with a patch-embed CNN and sent into the Transformer in order. The time and frequency lengths of the patch are equal to \(P\times P\), where \(P\) is the patch size. To better capture the relationship between frequency bins of the same time frame, HTS-AT first splits the mel-spectrogram into windows \(w_{1},w_{2},...,w_{n}\) and then splits the patches in each window. The order of tokens \(Q\) follows **time\(\rightarrow\)frequency\(\rightarrow\)window**. The patch tokens pass through several network groups, each of which contains several transformer-encoder blocks. Between every two groups, a patch-merge layer is applied to reduce the Fig. 5: Two audio tagging models for audio classification, sound event detection, and latent feature production. Left: Pretrained Audio Neural Networks (PANN) in CNN14 architecture. Right: Hierarchical Token-Semantic Transformer (HTS-AT) in 4-block architecture. number of tokens to construct a hierarchical representation. Each transformer-encoder block is a Swin-transformer [59] block with the shifted window attention module, a modified self-attention module to improve the training efficiency. Then, HTS-AT applies a token-semantic 2D-CNN to further process the reshaped output \((\frac{T}{8P},\frac{F}{8P},H)\) into the framewise event presence map \((T,K)\) which can be averaged to an event classification vector \(K\). The latent embedding, at the same time, is produced by averaging the reshaped output into a \(H\)-dimension vector with an average-pooling layer. ### _Source Query Conditions_ In contrast to previous query-based source separators that extract pre-defined representations [31] or learnable representations [60, 17, 32] from clean sources, we propose to extract query embeddings from anchor segments to control sources to separate from a mixture. We introduce four types of embeddings: a hard one-hot embedding with a dimension of \(K\) where \(K\) is the number of sound classes, a soft probability condition with a dimension of \(K\), a latent embedding condition with a dimension of \(H\), where \(H\) is the latent embedding dimension, and a learnable embedding condition with a dimension of \(H\). #### 3.4.1 Hard One-Hot Condition We define the hard one-hot condition of an anchor segment \(s_{i}\) as \(c_{i}\in[0,1]^{K}\), where \(c_{i}\) is the one-hot representation of tags of the audio clip \(x_{i}\). The hard one-hot condition has been used in music source separation [61]. Hard one-hot embedding requires clean source data for training source separation systems. #### 3.4.2 Soft Probability Condition The soft probability condition applies pretrained audio tagging models, such as PANNs or HTS-AT to calculate the event classification probability \(c_{i}=p_{\text{AT}}(s_{i})\) of an anchor segment as the query embedding. For the weakly labelled dataset, the soft probability condition provides a continuous value prediction of what sounds are in an anchor segment than the hard one-hot condition. The advantage of the soft probability condition is that it explicitly presents the SED result. #### 3.4.3 Latent Embedding Condition The latent embedding with a dimension of \(H\) is calculated from the penultimate layer of an audio tagging model. The advantage of using the latent embedding is that the separation is not limited to the given \(K\) sound classes. The USS system can be used to separate arbitrary sound classes with a query embedding as input, allowing us to achieve USS. 
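As an illustration of the latent embedding condition, the sketch below averages the penultimate-layer feature map of an audio tagging model over time to obtain an \(H\)-dimensional query vector \(c_{i}\) for an anchor segment. The attribute name `penultimate_features` and the pooling choice are assumptions for the example rather than the exact interface of PANNs or HTS-AT.

```python
import torch

@torch.no_grad()
def latent_embedding_condition(tagging_model, anchor_segment):
    """Compute the H-dimensional query embedding c_i for one anchor segment s_i.

    anchor_segment: waveform tensor of shape (1, L'), the mined anchor segment.
    The tagging model is assumed to expose a (1, T, H) penultimate feature map.
    """
    features = tagging_model.penultimate_features(anchor_segment)  # (1, T, H), assumed API
    return features.mean(dim=1).squeeze(0)                         # average over time -> (H,)
```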
We investigate a variety of PANNs, including CNN6, CNN10, and CNN14, and an HTS-AT model to extract the latent embeddings, and we evaluate their effect on the separation performance. We denote the embedding extraction as \(c_{i}=f_{\text{emb}}(s_{i})\). #### 3.4.4 Learnable Condition The latent embedding condition can also be learned during the training of our universal source separation system. The first method is to fine-tune the parameters of the query net. The second method is to freeze the parameters of the query net and add a cloned query net containing learnable parameters as a shortcut branch \(f_{\text{shortcut}}\) to construct the embedding. The third method is to add learnable fully connected layers \(f_{\text{ada}}(\cdot)\) on top of the query net, where ada is the abbreviation for adaptive. The embedding can then be extracted by \(c_{i}=f_{\text{ada}}(f_{\text{AT}}(s_{i}),\theta)\), where \(\theta\) are the learnable parameters of the adaptor. ### _Query-based Source Separation_ A typical source separator is a single-input-single-output model [10] that deals with one specific source, such as vocals, drums, or bass. To enable the model to separate arbitrary sound sources, we apply a query-based source separator that introduces the conditional embeddings described in Section 3.4 into the ResUNet source separation backbone [33], yielding a single-input single-output source separation model [10] that can be controlled by the query. As mentioned in Section 2, the input to the ResUNet separator is a mixture of audio segments. First, we apply a short-time Fourier transform (STFT) to the waveform to extract the complex spectrum \(X\in\mathbb{C}^{T\times F}\). Then, we follow the same setting as [33] to construct an encoder-decoder network to process the magnitude spectrogram \(|X|\). The ResUNet encoder-decoder consists of 6 encoder blocks, 4 bottleneck blocks, and 6 decoder blocks. Each encoder block consists of 4 residual convolutional blocks to downsample the spectrogram into a bottleneck feature, and each decoder block consists of 4 residual deconvolutional blocks to upsample the feature back to separation components. A skip connection is applied from each encoder block to the corresponding decoder block of the same downsampling/upsampling rate. The residual block contains 2 convolutional layers, 2 batch normalization [62] layers, and 2 Leaky-ReLU activation layers. An additional residual shortcut is added between the input and the output of each residual block. The details of the model architecture can be found in [33]. The ResUNet separator outputs the magnitudes and the phases of the cIRM \(M\in\mathbb{C}^{T\times F}\). The separated complex spectrum can be obtained by: \[\hat{S} =M\odot X \tag{5}\] \[=|M|\odot|X|e^{j(\angle M+\angle X)},\] where both \(|M|\) and \(\angle M\) are calculated from the output of the separator. The separated source is obtained by multiplying the STFT of the mixture by the cIRM \(M\). The complex multiplication can be decoupled into a magnitude multiplication part and a phase addition part: the magnitude \(|M|\) controls how much the magnitude \(|X|\) should be scaled, and the angle \(\angle M\) controls how much the angle of \(X\) should be rotated. Based on the ResUNet separator, we adopt a feature-wise linear modulation (FiLM) [63] method to construct the convolutional blocks within the separator. 
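The masking step in (5) amounts to a few array operations. Before turning to the FiLM conditioning, the following is a minimal numerical sketch of applying a predicted cIRM to the mixture STFT; the function name and array shapes are illustrative only and are not tied to any released implementation.

```python
import numpy as np

def apply_cirm(mixture_stft, mask_mag, mask_angle):
    """Apply a complex ideal ratio mask (cIRM) to a mixture STFT, as in Eq. (5).

    mixture_stft: complex array (T, F), STFT X of the mixture.
    mask_mag:     non-negative array (T, F), the predicted magnitude |M|.
    mask_angle:   array (T, F) in radians, the predicted phase angle of M.
    Returns the complex STFT of the separated source:
    S_hat = |M| * |X| * exp(j * (angle(M) + angle(X))).
    """
    mix_mag = np.abs(mixture_stft)
    mix_angle = np.angle(mixture_stft)
    return mask_mag * mix_mag * np.exp(1j * (mask_angle + mix_angle))
```

An inverse STFT of the returned spectrum then gives the separated time-domain waveform.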
We apply a pre-activation architecture [64] for all of the encoder and decoder layers, we incorporate the conditional embedding as: \[h^{l}=W*(\sigma(\text{BN}(h^{l-1})+Vc)) \tag{6}\] where \(h^{l-1}\) is the feature map of the \(l-1\)-th layer, and \(V\) is a fully connected layer to map the conditional vector \(c\) into an embedding space. \(Vc\) modulates the value \(BN(h^{l-1})\). The value is convolved with the weight \(W\) to output \(h^{l}\). The training of our weakly labeled USS system is illustrated in Algorithm 2. ``` 1:Inputs: Dataset \(D\), e.g., AudioSet. 2:Outputs: A trained USS model. 3:while loss function \(l\) does not converge do 4: prepare a mini-batch of mixture source training pairs \(\{(x_{i},s_{i})\}_{i=1}^{B}\) by Algorithm 1 5: Calculate source query embeddings \(\{c_{i}\}_{i=1}^{B}\) by any of query nets as described in Section 3.4. 6:for each \(s_{i}\)do 7: Obtain the separation \(\hat{s}_{i}=f(x_{i},c_{i})\) 8: Calculate loss by \(l(\hat{s}_{i},s_{i})\) 9:endfor 10:endwhile ``` **Algorithm 2** Training of a USS system. ### _Data augmentation_ When constituting the mixture with \(s_{i}\) and \(s_{i+1}\), the amplitude of \(s_{i}\) and \(s_{i+1}\) can be different. We propose an energy augmentation to augment data. That is, we first calculate the energy of a signal \(s_{i}\) by \(E=||s_{i}||_{2}^{2}\). We denote the energy of \(s_{i}\) and \(s_{i+1}\) as \(E_{i}\) and \(E_{i+1}\). We apply a scaling factor \(\alpha_{i}=\sqrt{E_{i}/E_{i+1}}\) to \(s_{i+1}\) when creating the mixture \(x_{i}\): \[x=s_{i}+\alpha s_{i+1}. \tag{7}\] By this means, both anchor segments \(s_{i}\) and \(s_{i+1}\) have the same energy which is beneficial to the optimization of the separation system. We will show this in Section 4.6.7. On the one hand, we match the energy of anchor segments to let the neural network learn to separate the sound classes. On the other hand, the amplitude diversity of sound classes is increased. We will show this energy augmentation is beneficial in our experiments. ### _Loss functions_ We propose to use the L1 loss between the predicted and ground truth waveforms following [33] to train the end-to-end universal source separation system: \[l=||s-\hat{s}||_{1}, \tag{8}\] where \(l\) is the loss function used to train the neural network. A lower loss in (8) indicates that the separated signal \(\hat{s}\) is closer to the ground truth signal \(s\). In training, the gradients of parameters are calculated by \(\partial l/\partial\theta\), where \(\theta\) are the parameters of the neural network. ``` 1:Inputs: An arbitrary duration audio clip \(x\). A trained USS system \(f_{\text{SS}}\). A trained audio tagging system \(f_{\text{AT}}\). Hierarchical level \(l\). 2:Outputs: Separated sources \(O=\{\hat{s}_{j}\}_{j\in C}\) where \(C\) is the indexes. 3:Split \(x\) into non-overlapped short segments \(\{x_{i}\}_{i=1}^{I}\) where \(I\) is the number of segments. 4:Apply \(f_{\text{AT}}\) on all segments to obtain \(P(i,k)\) with a size of \(I\times K\), where \(K\) is the number of sound classes. 5:# Calculate ontology predictions. 6:if hierarchical_separation then 7:\(Q(i,j)=\text{Hierarchical}\text{OntologyGrouping}(P(i,k),l)\) following Algorithm 4. \(Q(i,j)\) has a shape of \(I\times J\) where \(J\) is the number of sound classes in the \(l\)-th level. 8:endif 9:# Detect active sound event. 10:\(C=\{\}\)# Active sound class indexes. 11:for\(j=1,...,J\)do 12:if\(\text{max}_{i}Q(j,j)>\delta\)then: 13:\(C=C\cup\{j\}\) 14:endif 15:endfor 16:# Do separation. 
17:\(O=\{\}\)# Separated sources of active sound classes. 18:for\(j\in C\)do 19:for\(i=1,...,I\)do 20:if\(Q(i,j)>\delta\)then: 21: Get condition \(c_{j}\) by (10). 22:\(\hat{s}_{ij}=f_{\text{ss}}(x_{i},c_{j})\) 23:else 24:\(\hat{s}_{ij}=\textbf{0}\) 25:endif 26:endfor 27:\(\hat{s}_{j}=\{\hat{s}_{ij}\}_{i=1}^{I}\) 28:\(O=O\cup\{\hat{s}_{j}\}\) 29:endfor ``` **Algorithm 3** Automatic sound event detection hierarchical USS. ### _Inference_ In training, the oracle embedding of an anchor segment can be calculated by \(f_{\text{emb}}(s_{i})\). In inference, for the hard one-hot condition and soft probability condition, we can simply use 1) the one-hot representation of the \(k\)-th sound class to separate the audio of the \(k\)-th sound class. 2) Only remaining soft probabilities' \(\{k_{j}\}_{j=1}^{J}\) indexes values as the condition, where \(J\) is the number of sound classes to separate. However, for the latent embedding condition and learnable condition, we need to calculate the embedding \(c\) from the training dataset by: \[c=\frac{1}{N}\sum_{n=1}^{N}f_{\text{emb}}(s_{n}) \tag{9}\] where \(\{s_{n}\}_{n=1}^{N}\) are query samples of one sound class and \(N\) is the number of query samples. That is, we average all conditional embeddings of query samples from the same sound class to constitute \(c\). ### _Inference with Hierarchical AudioSet Ontology_ We propose a hierarchical separation strategy to address the USS problem. It is usually unknown how many and what sound classes are present and which need to be separated in an audio clip. To address this problem, we propose a hierarchical sound class detection strategy to detect the sound classes presence. We separate those sound classes by using the trained USS system. Algorithm 3 shows the automatic sound class detection and separation steps. The input to the USS system is an audio clip \(x\). We first split the audio clip into short segments and apply an audio tagging system \(f_{\text{AT}}\) to calculate the segment-wise prediction \(P(t,k)\) with a size of \(I\times K\), where \(I\) and \(K\) are the number of segments and sound classes, respectively. The AudioSet ontology has a tree structure. The first level of the tree structure contains seven sound classes described in the AudioSet ontology, including "Human sounds", "Animal", "Music", "Source-ambiguous sounds", "Sounds of things", "Nature sounds", and "Channel, environment and background". Each root category contains several sub-level sound classes. The second level and the third levels contain 41 and 251 sound classes, respectively, as described in the AudioSet ontology [23]. The tree structure has a maximum depth of six levels. In inference, the USS system supports hierarchical source separation with different levels. We denote the sound classes of level \(l\) as \(C=\{c_{j}\}_{j=1}^{J}\), where \(J\) is the number of sound classes in the \(l\)-th level. For a sound class \(j\) in the \(l\)-th level, we denote the set of all its children's sound classes as children\((j)\) including \(j\). For example, for the human sounds class 0, there are children\((0)=\{0,1,...,72\}\). We set score \(Q(i,j)=\text{max}_{k}\{P(i,j)\}_{k\in\text{children}(j)}\). We detect a sound class \(j\) as active if \(\text{max}_{i}Q(i,j)\) larger than a threshold \(\theta\). We set separated segments to silence if \(Q(i,j)\) is smaller than \(\theta\). 
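A minimal sketch of this grouping and thresholding step (corresponding to Algorithm 4 and the detection rule above) is given below; the ontology lookup `children` is assumed to be precomputed from the AudioSet ontology file and is not part of any library API.

```python
import numpy as np

def hierarchical_ontology_grouping(P, level_classes, children):
    """Collapse segment-wise class probabilities P of shape (I, K) onto one ontology level.

    level_classes: list of the J class indices that form the chosen level.
    children[j]:   set of fine-grained class indices underneath class j (including j itself).
    Returns Q with shape (I, J), where Q[i, j] = max over children(j) of P[i, k].
    """
    I = P.shape[0]
    Q = np.zeros((I, len(level_classes)), dtype=P.dtype)
    for col, j in enumerate(level_classes):
        Q[:, col] = P[:, sorted(children[j])].max(axis=1)
    return Q

def detect_active_classes(Q, threshold):
    """A level-l class is considered present if any segment score exceeds the threshold."""
    return [j for j in range(Q.shape[1]) if Q[:, j].max() > threshold]
```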
Then, we apply the USS by using (10) as the condition: \[c_{k}=\left\{\begin{aligned} & f_{\text{AT}}(x),k\in\text{children }(j)\\ & 0,k\notin\text{children}(j).\end{aligned}\right. \tag{10}\] The USS procedure is described in Algorithm 4. ``` 1:Inputs: segment-wise prediction \(P(i,k)\) with a size of \(I\times K\), hierarchy level \(l\). 2:Outputs:\(Q(i,j)\) with a size of \(I\times J\) where \(J\) is the number of children sound classes of the \(l\)-th ontology level. 3:for\(j\in 1,...,J\)do 4:\(Q(i,j)=\text{max}_{k}\{P(i,k)\}_{k\in\text{children}(j)}\) 5:endfor ``` **Algorithm 4** Hierarchical Ontology Grouping. ## 4 Experiments In this section, we investigate our proposed universal source separation system on several tasks, including AudioSet separation [23], sound event separation [54, 65], music source separation [52, 66], and speech enhancement [51]. Our USS system is trained only on the large-scale weakly labelled AudioSet [23] without using any clean training data, which is a major difference from the previous source separation systems that are trained on specific datasets with clean sources [43, 45, 47, 48, 67]. The trained USS system can address a wide range of source separation tasks without being finetuned. ### _Training Dataset_ AudioSet is a large-scale weakly labelled audio dataset containing 2 million 10-second audio clips sourced from the YouTube website. Audio clips are only labelled with the presence or absence of sound classes, without knowing when the sound events occur. There are 527 sound classes in its released version, covering a wide range of sound classes in the world, such as "Human sounds", "Animal", etc. The training set consists of 2,063,839 audio clips, including a balanced subset of 22,160 audio clips. There are at least 50 audio clips for each sound class in the balanced training set. Although some audio links are no longer available, we successfully downloaded 1,934,187 (94%) audio clips from the full training set. All audio clips are padded with silence into 10 seconds. Due to the fact that a large amount of audio recordings from YouTube have sampling rates lower than 32 kHz, we resample all audio recordings into mono and 32 kHz. ### _Training Details_ We select anchor segments as described in Section 3.3 and mix two anchor segments to constitute a mixture \(x\). The duration of each anchor segment is 2 seconds. We investigate different anchor segment durations in Section 4.6.4. We apply matching energy data augmentation as described in Section 3.6 to scale two anchor segments to have the same energy, and extract the short-time Fourier transform (STFT) feature \(X\) from \(x\) with a Hann window size of 1024 and a hop size 320. This hop size leads to 100 frames in a second consistent to the audio tagging systems in PANNs [25] and HTS-AT [27]. The query net is a CNN14 of PANNs or HTS-AT. The query net is pretrained on the AudioSet tagging task [23, 27] and the parameters are frozen during the training of the USS system. The prediction and the embedding layers of the query net have dimensions of 527 and 2048, respectively. Either the prediction layer or the embedding layer is connected to fully connected layers and input to all layers of the source separation branch as FiLMs. We adopt ResUNet [33] as the source separation branch. The 30-layer ResUNet consists of 6 encoder and 6 decoder blocks. Each encoder block consists of two convolutional layers with kernel sizes of \(3\times 3\). 
Following the pre-activation strategy [64], we apply batch normalization [62] and leaky ReLU [68] before each convolutional layer. The FiLM is added to each convolutional layer as described in (6). The number of output feature maps of the encoder blocks are 32, 64, 128, 256, 512, and 1024, respectively. The decoder blocks are symmetric to the encoder blocks. We apply an Adam optimizer [69] with a learning rate of \(10^{-3}\) to train the system. A batch size of 16 is used to train the USS system. The total training steps is 600 k trained for 3 days on a single Tesla V100 GPU card. ### _Conditional Embedding Calculation_ For AudioSet source separation, the oracle embedding or each anchor segment is calculated by: \[c=f_{\text{emb}}(s) \tag{11}\] where \(s\) is the clean source. Using oracle embedding as condition indicates the upper bound of the universal source separation system. For real applications, we calculate the conditional embeddings by (9) from the training set of the AudioSet, FSD50Kaggle2018, FSD50k, MUSDB18, Slakh2100, and VoicebankDemand datasets to evaluate on those datasets, respectively. ### _Evaluation Datasets_ #### 4.4.1 AudioSet The evaluation set of AudioSet [23] contains 20,317 audio clips with 527 sound classes. We successfully downloaded 18,887 out of 20,317 (93%) audio clips from the evaluation set. AudioSet source separation is a challenging problem due to USS need to separate 527 sound classes using a single model. We are the first to propose using AudioSet [31] to evaluate the USS. To create evaluation data, similarly to Section 3.3, we first apply a sound event detection system to each 10-second audio clip to detect anchor segments. Then, we select two anchor segments from different sound classes and sum them as a mixture for evaluation. We create 100 mixtures for each sound class, leading to 52,700 mixtures for all sound classes in total. #### 4.4.2 FSDKaggle2018 The FSDKaggle2018 [54] is a general-purpose audio tagging dataset containing 41 sound classes ranging from musical instruments, human sounds, domestic sounds, and animals, etc. The duration of the audio clips ranges from 300 ms to 30 s. Each audio clip contains a unique audio tag. The test set is composed of 1,600 audio clips with manually-verified annotations. We pad or truncate each audio clip into 2-second segment from the start, considering sound events usually occur in the start of audio clips. We mix two segments from different sound classes to consist a pair. We constitute 100 mixtures for each sound class. This leads to a total of 4,100 evaluation pairs. #### 4.4.3 FSD50K dataset The Freesound Dataset 50k (FSD50K) dataset [65] contains 51,197 training clips distributed in 200 sound classes from the AudioSet ontology. In contrast to the FSDKaggle2018 dataset, each audio clip may contain multiple tags with a hierarchical architecture. There are an average of 2.79 tags in each audio clip. All audio clips are sourced from Freesound1. There are 10,231 audio clips distributed in 195 sound classes in the test set. Audio clips have variable durations between 0.3s to 30s, with an average duration of 7.1 seconds. We mix two segments from different sound classes to consist a pair. We create 100 mixtures for each sound class. This leads to in total 19,500 evaluation pairs. Footnote 1: [https://freesound.org/](https://freesound.org/) #### 4.4.4 Musdb18 The MUSDB18 dataset [52] is designed for the music source separation task. 
The test set of the MUSDB18 dataset contains 50 songs with four types of stems, including vocals, bass, drums, and other. We linearly sum all stems to constitute mixtures as input to the USS system. We use the museval toolkit2 to evaluate the SDR metrics. Footnote 2: [https://github.com/sigsep/sigsep-mus-eval](https://github.com/sigsep/sigsep-mus-eval) #### 4.4.5 Slakh2100 The Slakh2100 dataset [66] is a multiple-instrument dataset for music source separation and transcription. The test of the Slakh2100 dataset contains 225 songs. The sound of different instruments are rendered by 167 different types of plugins. We filtered 151 non-silent plugin types for evaluation. Different from the MUSDB18 dataset, there can be over 10 instruments in a song, leading to the Slakh2100 instrument separation a challenging problem. #### 4.4.6 Voicebank-Demand The Voicebank-Demand [51] dataset is designed for the speech enhancement task. The Voicebank dataset [51] contains clean speech. The Demand dataset [70] contains multiple different background sounds that are used to create mixtures. The noisy utterances are created by mixing the VoiceBank dataset and the Demand dataset under signal-to-noise ratios of 15, 10, 5, and 0 dB. The test set of the Voicebank-Demand dataset contains 824 utterances in total. ### _Evaluation Metrics_ We use the signal-to-distortion ratio (SDR) [71] and SDR improvement (SDRi) [71] to evaluate the source separation performance. The SDR is defined as: \[\text{SDR}(s,\hat{s})=10\text{log}_{10}\left(\frac{||s||^{2}}{||s-\hat{s}||^ {2}}\right) \tag{12}\] where \(s\) and \(\hat{s}\) are the target source and estimated source, respectively. Larger SDR indicates better separation performance. The SDRi is proposed to evaluate how much SDR a USS system improves compared to without separation: \[\text{SDRi}=\text{SDR}(s,\hat{s})-\text{SDR}(s,x) \tag{13}\] where \(x\) is the mixture signal. For the speech enhancement task, we apply the Perceptual evaluation of speech quality (PESQ) [72] and segmental signal-to-ratio noise (SSNR) [73] for evaluation. ### _Results Analysis_ #### 4.6.1 Conditional Embedding Types The default configuration of our USS system is a 30-layer ResUNet30 trained on the balanced set of AudioSet. Table II shows the USS system results trained with different conditional embedding types including wav2vec [74], speaker embeddings3, CNN6, CNN10, CNN14 from PANNs [25], and HTS-AT [27]. The wav2vec embedding is trained using unsupervised contrastive learning on 960 hours of speech data. The wav2vec embedding is averaged along the time axis to a single embedding with a dimension of 512. The speaker embedding is a gated recurrent unit (GRU) with three recurrent layers operates on log mel-spectrogram and has output has a shape of 256. The CNN6 and the CNN10 have dimensions of 512. The CNN14 and the HTS-AT have dimensions of 2048. The oracle embedding (ora emb) shows the results using (11) as condition. The average embedding (avg emb) shows the results using (9) as condition. Table II shows that the CNN6, CNN10, CNN14 embeddings achieve AudioSet SDR between 5.30 dB and 5.57 dB using the average embedding, outperforming the wav2vec of 4.30 dB and the speaker embedding of 2.82 dB. One possible explanation is that both wav2vec and the speaker embeddings are trained on speech data only, so that they are not comparable to PANNs and HTS-AT trained for general audio tagging. 
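As a reference for the scores discussed in this and the following subsections, a minimal sketch of the SDR (12) and SDRi (13) computations on time-domain signals is shown below. This is an illustrative implementation only (no resampling or alignment handling), not the museval toolkit used for the MUSDB18 numbers.

```python
import numpy as np

def sdr(target, estimate, eps=1e-12):
    """Signal-to-distortion ratio of Eq. (12), in dB, for 1-D time-domain arrays."""
    num = np.sum(target ** 2)
    den = np.sum((target - estimate) ** 2)
    return 10.0 * np.log10(num / (den + eps) + eps)

def sdr_improvement(target, estimate, mixture):
    """SDRi of Eq. (13): gain over scoring the unprocessed mixture as the estimate."""
    return sdr(target, estimate) - sdr(target, mixture)
```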
The wav2vec embedding slightly outperforms the speaker embedding on FSDKaggle2018, FSD50k, and MUSDB18 separation, indicating that the unsupervised learned ASR embeddings are more suitable for universal source separation. The HTS-AT achieves the highest oracle embedding SDR among all systems. All of CNN6, CNN10, CNN14, and HTS-AT outperform the wav2vec embedding and the speaker embedding in AudioSet, FSDKaggle2018, FSD50k, MUSDB18, Slakh2100, and Voicebank-Demand datasets by a large margin. The CNN14 slightly outperforms CNN6 and CNN10. In the following experiments, we use CNN14 as the default conditional embedding. Table III shows the comparison between using the CNN14 segment prediction with a dimension of 527 and the CNN14 embedding condition with a dimension of 2048 to build the USS system. On one hand, the segment prediction embedding achieves an SDR of 7.80 dB on AudioSet, outperforming the embedding condition of 5.57 dB. The segment prediction also achieves higher SDRs than the embedding condition on the FSDKaggle2018, and the FSD50k dataset datasets. An explaination is that the sound classes of all of the AudioSet, FSDKaggle2018, and the FSD50k datasets \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDR)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & ora. emb & avg. emb & SDR & SDR1 & SDR & SDR1 & SDR & SDR1 & wSDR & wSDR & SDR1 & SDR & SDR1 & PESQ & SSNR \\ \hline CNN14 (random weights) & 8.51 & 2.82 & 5.96 & 5.91 & 2.82 & 2.82 & -0.48 & 4.59 & 1.97 & 7.08 & -1.34 & 6.40 & 2.28 & 6.96 \\ CNN14 (scratch) & 2.38 & 2.38 & 2.46 & 2.41 & 2.22 & 2.15 & 0.71 & 5.78 & 1.16 & 6.30 & -1.20 & 6.54 & 1.62 & -0.28 \\ CNN14 (fenture) & 9.83 & 1.96 & 3.42 & 3.38 & 1.50 & 1.40 & 2.10 & 7.77 & 3.10 & 8.52 & 1.39 & 9.12 & 1.77 & 3.52 \\ +CNN14 (freeze) & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ CNN14 + shortcut & 6.95 & 4.57 & 9.29 & 9.25 & 4.74 & 4.94 & 1.84 & 7.05 & 3.40 & 8.78 & -1.44 & 6.30 & 2.06 & 8.91 \\ CNN14 + adaptor & 8.01 & 5.81 & 11.00 & 10.96 & 5.79 & 6.07 & 2.95 & 7.96 & 3.90 & 9.24 & -0.87 & 6.87 & 2.30 & 9.60 \\ \hline \hline \end{tabular} \end{table} TABLE IV: USS results with freeze, finetune, and adapt conditional embeddings. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDR)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & ora. emb & avg. 
emb & SDR & SDR1 & SDR & SDR1 & SDR & SDR1 & wSDR & wSDR1 & SDR & SDR1 & PESQ & SSNR \\ \hline wav2vec (46) & 8.87 & 4.30 & 8.95 & 8.91 & 4.52 & 4.70 & 1.90 & 7.03 & 2.96 & 8.37 & -1.08 & 6.66 & 2.11 & 6.02 \\ speaker (46d) & 8.87 & 2.82 & 6.69 & 6.65 & 3.00 & 3.03 & 1.52 & 6.85 & 2.48 & 7.94 & 0.18 & 7.92 & 2.13 & 4.72 \\ \hline CNNC (45a2) & 8.68 & 5.30 & 10.36 & 10.31 & 5.25 & 5.50 & 3.05 & 8.43 & 3.94 & 9.42 & -0.37 & 7.37 & 2.27 & 9.39 \\ CNN10 (45a3) & 8.35 & 5.36 & 9.95 & 9.90 & 5.19 & 5.43 & 2.87 & 8.10 & 4.11 & 9.34 & -0.27 & 7.47 & 2.27 & 8.68 \\ +CNN14 (44a) & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ HTSAT (45c) & 9.38 & 3.78 & 7.95 & 7.91 & 3.38 & 3.51 & 2.83 & 8.48 & 3.77 & 9.36 & 0.81 & 8.55 & 2.23 & 8.78 \\ \hline \hline \end{tabular} \end{table} TABLE II: USS results with different conditional embedding types. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDR)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & ora. emb & avg. emb & SDR & SDR1 & SDR & SDR1 & SDR & SDR1 & wSDR & wSDR1 & SDR & SDR1 & PESQ & SSNR \\ \hline \hline \multirow{2}{*}{\begin{tabular}{l} Swam2vec (46) \\ speaker (46d) \\ \end{tabular} } & \multirow{2}{*}{8.87} & \multirow{2}{*}{4.30} & \multirow{2}{*}{8.95} & \multirow{2}{*}{8.91} & \multirow{2}{*}{4.52} & \multirow{2}{*}{4.70} & \multirow{2}{*}{1.90} & \multirow{2}{*}{7.03} & \multirow{2}{*}{2.96} & \multirow{2}{*}{8.37} & \multirow{2}{*}{-1.08} & \multirow{2}{*}{6.66} & \multirow{2}{*}{2.11} & \multirow{2}{*}{6.02} \\ & & & & & & & & & & & & & & & \\ & 8.87 & 2.82 & 6.69 & 6.65 & 3.00 & 3.03 & 1.52 & 6.85 & 2.48 & 7.94 & 0.18 & 7.92 & 2.13 & 4.72 \\ \hline \hline \multirow{2}{*}{ \begin{tabular}{l} CNN6 (45a2) \\ CNN10 (45a3) \\ +CNN14 (44a) \\ HTSAT (45c) \\ \end{tabular} } & \multirow{2}{*}{8.35} & \multirow{2}{*}{10.61} & \multirow{2}{*}{10.57} & \multirow{2}{*}{5.54} & \multirow{2}{*}{5.79} & \multirow{2}{*}{3.08} & \multirow{2}{*}{8.12} & \multirow{2}{*}{4.02} & \multirow{2}{*}{9.22} & \multirow{2}{*}{-0.46} & \multirow{2}{*}{7.28} & \multirow{2}{*}{2.18} & \multirow{2}{*}{9.00} \\ \hline \hline \end{tabular} \end{table} TABLE III: USS results with soft audio tagging and latent embedding as condition. are sub-classes of the AudioSet. The segment prediction performs better than embedding condition in in-vocabulary sound classes separation. On the other hand, the embedding condition achieves higher SDRs than the segment prediction on the MUSDB18 and the Slakh2100 dataset. This result indicates that the embedding condition perform better than segment prediction in new-vocabulary sound classes separation. Fig. 7 in the end of this paper shows the classwise SDRi results of AudioSet separation including 527 sound classes. The dashed lines show the SDRi with oracle segment prediction or embedding as conditions. The solid lines show the SDRi with averaged segment prediction or embedding calculated from the anchor segments mined from the balanced training subset. Fig. 7 shows that sound classes such as busy signal, sine wave, bicycle bell achieve the highest SDRi over 15 dB. We discovered that clear defined sound classes such as instruments can achieve high SDRi scores. Most of sound classes achieve positive SDRi scores. 
The tendency of using segment prediction and embedding as conditions are the same, although the segment prediction outperform the embedding and vice versa in some sound classes. #### 4.6.2 Freeze, Finetune, and Adapt Conditional Embeddings Table IV shows the comparison of using random, frozen, finetuned, and adapted conditional embeddings to build the USS system. All the variations of the conditional embeddings extractors are based on the CNN14 architecture. Using random weights to extract conditional embeddings achieves an SDRi of 2.82 dB on AudioSet, compared to use pretrained CNN14 to extract conditional embeddings achieves an SDR 5.57 dB. We show that using random weights to extract conditional embeddings work for all USS tasks, such as achieves an SDRi of 5.91 dB on the FSDKaggle2018 dataset compared to the pretrained CNN14 embedding extractor of 10.57 dB. Next, we experiment with learning the parameters of the conditional embedding extractor from scratch or finetune the weights from pretrained models. Table IV shows that neither the learning from scratch nor the finetuning approach improves the USS system performance. The learning from scratch approach and the finetuning approaches achieve SDRi of 2.41 dB and 3.38 dB on the FSDKaggle2018 dataset, even underperform the random weights of 5.91 dB. One possible explanation is that the parameters of the conditional embedding branch and the source separation branch are difficult to be jointly optimized when both of them are deep. The training falls to a collapse mode. Using the pretrained frozen CNN14 system as conditional embedding extractor significantly improves the SDRi to 10.57 dB on the FSDKaggle2018 dataset. Based on the pretrained frozen CNN14 conditional embedding extractor, we propose to add a learnable shortcut or add an learnable adaptor on top of the CNN14 system. The learnable shortcut has a CNN14 architecture with learnable parameters. Table IV shows that the learnable shortcut conditional embedding extractor achieves an SDR of 9.29 dB on FSDKaggle2018, less than using the pretrained frozen CNN14 conditional embedding extractor of 10.57 dB. One possible explanation is that the learnable shortcut destories the embedding information for source separation. The adaptor is a 2-layer fully connected neural network on top of the pretrained frozen CNN14 conditional embedding extractor. With the adaptor, we achieve an SDR of 11.10 dB and outperforms the CNN14 system. This result indicates that the adaptor is beneficial for the USS task. #### 4.6.3 Separation Architectures Table V shows the results of building USS systems with different source separation backbones. The open-umnix system [67] is a 3-layer bidirectional long short term memory (BLSTM) system. The BLSTM is applied on the mixture spectrogram to output the estimated clean spectrogram. The open-umnix system achieves an SDR of 3.39 dB on AudioSet separation and achieves a PESQ of 2.40 on the \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDRi)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} \cline{6-13} & \multicolumn{1}{c}{ora. emb} & avg. 
emb & SDR & SDRi & SDR & SDRi & SDR & SDRi & wSDR & wSDRi & SDR & SDRi & PESQ & SSNR \\ \hline 0.5 s & 4.07 & 2.86 & 4.51 & 4.47 & 2.55 & 2.51 & 0.97 & 0.78 & 2.61 & 2.43 & -0.79 & 6.95 & 1.57 & 5.96 \\ 1s & 7.50 & 4.99 & 9.45 & 9.41 & 4.81 & 5.00 & 0.18 & -0.02 & 2.54 & 2.50 & -1.66 & 6.08 & 2.17 & 8.55 \\ 4-2s & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ 4s & 7.39 & 5.21 & 10.22 & 10.17 & 5.38 & 5.60 & 1.83 & 6.79 & 3.38 & 8.68 & -2.62 & 5.12 & -2.62 & 5.12 \\ 6s & 6.39 & 4.68 & 9.20 & 9.16 & 5.05 & 5.24 & 0.00 & 4.98 & 2.70 & 7.97 & -4.26 & 3.48 & 2.21 & 2.56 \\ 8s & 6.26 & 4.48 & 8.85 & 8.80 & 4.77 & 4.94 & -3.67 & -4.00 & 1.60 & 1.50 & -5.68 & 2.06 & 2.24 & 2.35 \\ 10s & 6.29 & 4.47 & 9.11 & 9.07 & 4.80 & 4.98 & -2.68 & -2.79 & 1.56 & 1.53 & -5.07 & 2.67 & 2.13 & 2.14 \\ \hline \hline \end{tabular} \end{table} TABLE VI: USS results trained with different anchor segment durations. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDRi)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & \multicolumn{1}{c}{ora. emb} & avg. emb & SDR & SDRi & SDR & SDRi & SDR & SDRi & wSDR & wSDRi & SDR & SDRi & PESQ & SSNR \\ \hline mining & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ in-clip random & 4.89 & 3.94 & 5.53 & 5.49 & 3.63 & 3.66 & 1.10 & 6.05 & 2.36 & 7.79 & -1.72 & 6.01 & 2.21 & 5.69 \\ out-clip random & 8.19 & 5.90 & 11.06 & 11.01 & 6.04 & 6.34 & 2.57 & 7.68 & 3.81 & 9.17 & -1.08 & 6.66 & 2.39 & 9.48 \\ \hline \hline \end{tabular} \end{table} TABLE VII: USS results with different anchor mining strategies. Voicebank-Demand speech enhancement task, indicating that the BLSTM backbone performs well for speech enhancement. The open-unmix system underperforms other backbone source separation systems in FSDKaggle2018, FSD50k, MUSDB18, and Slakh2100 separation, indicating that the capacity of the open-unmix system is not large enough to separate a wide range of sound classes. The ConvTasNet [38] is a time-domain source separation system consists of one-dimensional convolutional encoder and decoder layers. The ConvTasNet achieves an SDRi of 5.00 dB on AudioSet separation and outperforms the open-unmix system. Our proposed UNet30 [47] is an encoder-decoder convolutional architecture consists of 30 convolutional layers. The ResUNet30 [33] adds residual shortcuts in the encoder and decoder blocks in UNet30. The UNet30 and the ResUNet30 systsem achieve SDRis of 5.50 dB and 5.57 dB on AudioSet, outperforming the ConvTasNet by around 1 dB in all source separation tasks. We extend ResUNet30 to a deeper system ResUNet60 with 60 convolutiona layers. Table V shows that ResUNet60 outperforms ResUNet30 by around 0.5 dB in all USS tasks. This result indicates that deeper architectures are beneficial for USS. #### 4.6.4 Different Anchor Segment Durations Table VI shows the results of USS systems trained with different anchor segment durations ranging from 0.5 s to 10 s. The anchor segments are mined by a pretrained SED system as described in Section 3.3. On one hand, Table VI shows that the separation scores increase with anchor segment durations increase from 0.5 s to 2 s and achieves the best SDRi of 5.57 dB at anchor segment of 2 s on AudioSet separation. 
This result shows that the anchor segment should be long enough to contain sufficient context information to build the USS system. On the other hand, the separation scores decrease with anchor segment durations decrease from 2 s to 10 s on all tasks. One possible explanation is that long anchor segment contain undesired interfering sounds that will impair the training of the USS system. Therefore, we use 2-second anchor segment in all other experiments. #### 4.6.5 Different Anchor Segment Mining Strategies Table VII shows the results of different anchor mining strategies. The in-clip random strategy randomly select two anchor segments from a same 10-second audio clip which significantly underperform the SED mining strategy in all of the source separation tasks. The out-clip random strategy randomly select two anchor segments from two different 10-second audio clips. The out-clip random strategy achieves an SDRi of 5.90 dB on AudioSet, outperforms the SED mining of 5.57 dB. On one hand, the out-clip random strategy also outperforms the SED mining strategy in FS-DKaggle2018 and the FSD50k dataset. On the other hand, the SED mining strategy outperforms the out-clip random strategy in MUSDB18 and Slakh2100 source separation. Both the out-clip and the SED mining strategies outperform the in-clip random strategy. #### 4.6.6 Sources number to mix during training Table VIII shows the USS results trained with different number of sources \(J\) to constitute a mixture. Table VIII shows that \(J=2\) performs the best on AudioSet with an SDRi of 5.57 dB and also performs the best on the FSDKaggle2018, FSD50k, and on MUSDB18 datasets. This result shows that mixing two sources is sufficient for those source separation tasks. By using \(J=4\) the USS system perform the beston the Slakh2100 dataset. An explanation is that the Slakh2100 \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDRi)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & \multicolumn{1}{c}{ora. emb} & avg. emb} & SDR & SDRi & SDR & SDRi & SDR & SDRi & wSDRi & SDR & SDRi & PESQ & SSNR \\ \hline no aug & 7.11 & 3.81 & 7.19 & 7.14 & 3.27 & 3.35 & 1.78 & 7.22 & 3.09 & 8.74 & 0.69 & 8.43 & 2.39 & 6.36 \\ + 20 dB & 5.51 & 3.62 & 5.77 & 5.73 & 2.93 & 2.94 & 1.69 & 7.02 & 2.51 & 8.03 & -0.34 & 7.40 & 2.22 & 5.34 \\ +match energy & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ \hline \hline \end{tabular} \end{table} TABLE IX: USS results with different data augmentation. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDRi)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & \multicolumn{1}{c}{ora. emb} & avg. 
emb} & SDR & SDRi & SDR & SDRi & SDR & SDRi & wSDRi & wSDRi & SDR & SDRi & PESQ & SSNR \\ \hline 2 s res to 1 src & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ 3 srcs to 1-2 s res & 7.37 & 4.71 & 8.30 & 8.26 & 4.36 & 4.52 & 2.43 & 8.08 & 3.56 & 8.69 & -0.48 & 7.26 & 2.37 & 8.34 \\ 4 sres to 1-3 s res & 7.03 & 4.38 & 7.49 & 7.45 & 3.99 & 4.10 & 2.43 & 7.99 & 3.51 & 8.98 & 0.70 & 8.44 & 2.38 & 7.78 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: USS results with different sources number. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**AudioSet (SDRi)**} & \multicolumn{2}{c}{**FSDK2018**} & \multicolumn{2}{c}{**FSD50k**} & \multicolumn{2}{c}{**MUSDB18**} & \multicolumn{2}{c}{**Slakh2100**} & \multicolumn{2}{c}{**VoicebankDemand**} \\ \cline{2-13} & \multicolumn{1}{c}{ora. emb} & avg. emb} & SDR & SDRi & SDR & SDRi & SDR & SDRi & wSDRi & SDR & SDRi & PESQ & SSNR \\ \hline no aug & 7.11 & 3.81 & 7.19 & 7.14 & 3.27 & 3.35 & 1.78 & 7.22 & 3.09 & 8.74 & 0.69 & 8.43 & 2.39 & 6.36 \\ + 20 dB & 5.51 & 3.62 & 5.77 & 5.73 & 2.93 & 2.94 & 1.69 & 7.02 & 2.51 & 8.03 & -0.34 & 7.40 & 2.22 & 5.34 \\ +match energy & 8.26 & 5.57 & 10.61 & 10.57 & 5.54 & 5.79 & 3.08 & 8.12 & 4.02 & 9.22 & -0.46 & 7.28 & 2.18 & 9.00 \\ \hline \hline \end{tabular} \end{table} TABLE IX: USS results with different data augmentation. contains audio clips contain multiple instruments being played simultaneously. Using more sources to constitute a mixture perform better than using fewer sources. #### 4.6.7 Data augmentation Table IX shows the USS results with different augmentation strategies applied to sources to create a mixture. First, we do not apply any data augmentation to create a mixture. Second, we randomly scale the volume of each source by \(\pm 20\) dB. Third, we propose a matching energy data augmentation to scale the volume of sources to create a mixture to ensure the sources have the same energy. Table IX shows that the matching energy data augmentation significantly outperform the systems trained without data augmentation and random volume scale augmentation, with an SDRi of 5.57 dB compared to 3.81 dB and 3.63 dB on AudioSet separation. The matching energy data augmentation also outperform no data augmentation and random volume augmentation on all the other tasks. #### 4.6.8 USS results Trained with balanced and full AudioSet Table IX shows the results of training the USS systems with the balanced and the full AudioSet, respectively. The full training data is 100 times larger than the balanced data. We also experiment with training the USS system with 4 GPUs and a larger batch size of 64. The USS system trained on the full AudioSet outperforms the USS system trained on the balanced set after trained 1 million steps. Table IX shows that training on the full AudioSet with a batch size of 64 achieves an SDRi of 6.76 dB, outperforming training on the balanced set of 5.57 dB. #### 4.6.9 Visualization of Hierarchical Separation One application of the hierarchical separation is to separate arbitrary audio recordings into individual sources with AudioSet ontology. For example, the USS system can separate the sound in a movie into different tracks. One challenge of the hierarchical separation is the number of present sources are unknown. We use the methods in Section 3.9 to detect and separate the present sound events. Fig. 
6 shows the automatically detected and separated waveforms of a movie clip from _Harry Potter and the Sorcerer's Stone_ at ontology levels 1 to 3, obtained by using Algorithm 3. Level 1 indicates coarse sound classes and level 3 indicates fine sound classes. At level 1, the USS system successfully separates human sounds, music, and sounds of things. At level 2, the USS system further separates human group actions, vehicle, and animal sounds. At level 3, the USS system separates fine-grained sound classes such as bell, bird, crowd, and scary music.

Fig. 6: Computational auditory scene analysis and hierarchical USS of the trailer of “Harry Potter and the Sorcerer's Stone”: [https://www.youtube.com/watch?v=vyHV08Rtdxo](https://www.youtube.com/watch?v=vyHV08Rtdxo)

## 5 Conclusion In this paper, we propose universal source separation (USS) systems trained on the large-scale weakly labelled AudioSet. The USS systems can separate hundreds of sound classes using a single model. The separation system achieves universal source separation by using the embedding calculated from query examples as a condition. In training, we first apply a sound event detection (SED) system to detect the anchor segments that are most likely to contain sound events. We constitute a mixture by mixing several anchor segments. Then, we use a pretrained audio tagging system to calculate the segment prediction probability or the embedding vector as the condition of the target anchor segment. The USS system takes the mixture and the condition as input and outputs the desired anchor segment waveform. In inference, we propose a hierarchical separation strategy based on the AudioSet ontology. We evaluated our proposed USS systems on a wide range of separation tasks, including AudioSet separation, FSDKaggle2018 and FSD50k general sound separation, MUSDB18 and Slakh2100 instrument separation, and Voicebank-Demand speech enhancement, without training on those datasets. We show that the proposed system provides a single-model approach that can address the USS problem. In the future, we will improve the quality of the separated waveforms of the weakly labelled USS systems.
2310.03497
Sharp global well-posedness for the higher order non-linear Schrödinger equation on modulations spaces
We consider the initial value problem (IVP) associated to a higher order nonlinear Schr\"odinger (h-NLS) equation $ \partial_{t}u+ia \partial^{2}_{x}u+ b\partial^{3}_{x}u+ic_1|u|^{2}u+c_2 |u|^{2}\partial_{x}u=0, \quad x,t \in \mathbb{R}, $ for given data in the modulation space $M_s^{2,p}(\mathbb{R})$. Using ideas of Killip, Visan, Zhang, Oh, and Wang, we prove that the IVP associated to the h-NLS equation is globally well-posed in the modulation spaces $M^{s,p}$ for $s\geq\frac14$ and $p\geq2$.
X. Carvajal, P. Gamboa, R. Santos
2023-10-05T12:20:05Z
http://arxiv.org/abs/2310.03497v2
Sharp global well-posedness for the higher order non-linear Schrodinger equation on modulations spaces ###### Abstract We consider the initial value problem (IVP) associated to a higher order nonlinear Schrodinger (h-NLS) equation \[\partial_{t}u+ia\partial_{x}^{2}u+b\partial_{x}^{3}u+ic_{1}|u|^{2}u+c_{2}|u|^{2}\partial_{x}u=0,\quad x,t\in\mathbb{R},\] for given data in the modulation space \(M_{s}^{2,p}(\mathbb{R})\). Using ideas in Killip, Visan, Zhang [19], Oh, Wang [25] and [26], we prove that the IVP associated to the h-NLS equation is globally well-posed in the modulation spaces \(M^{s,p}\) for \(s\geq\frac{1}{4}\) and \(p\geq 2\). Key words and phrases: Schrodinger equation, Korteweg-de Vries equation, Initial value problem, Well-posedness, Sobolev spaces, Fourier-Lebesgue spaces, Modulation spaces. ## 1. Introduction In this work we consider the initial value problem (IVP) associated to a higher order nonlinear Schrodinger (h-NLS) equation \[\begin{cases}\partial_{t}u+ia\partial_{x}^{2}u+b\partial_{x}^{3}u+ic_{1}|u|^{2}u+c_{2}|u|^{2}\partial_{x}u=0,\quad x,t\in\mathbb{R},\\ u(x,0)=u_{0}(x),\end{cases} \tag{1.1}\] where \(a,b,c_{1},c_{2}\in\mathbb{R}\) and \(u=u(x,t)\) is a complex valued function. The main objective of this work is to investigate the global well-posedness issues of the IVP (1.1) in the modulation spaces. In recent times, the well-posedness of IVPs associated to nonlinear dispersive equations has also been studied in scales of function spaces other than the usual \(L^{2}\)-based Sobolev spaces \(H^{s}(\mathbb{R})\), viz. the Fourier-Lebesgue spaces \(\mathcal{F}L^{s,p}(\mathbb{R})\) with norm \[\|u\|_{\mathcal{F}L^{s,p}}=\|\langle\xi\rangle^{s}\widehat{u}(\xi)\|_{L^{p}},\] and the modulation spaces \(M_{s}^{r,p}(\mathbb{R})\) with norm given by (2.3) (below). More specifically, we mention the local well-posedness result for the modified Korteweg-de Vries (mKdV) equation in \(\mathcal{F}L^{s,p}(\mathbb{R})\) for \(s\geq\frac{1}{2p}\) with \(2\leq p<4\) obtained in [14] and its improvement, for the same range of \(s\), to \(2\leq p<\infty\) in [15]. For further discussion of these results we refer the reader to the references therein.
The authors in [26] proved that the cubic NLS is globally well-posed in \(M^{2,p}(\mathbb{R})\) for any \(p<\infty\) and that the normalized cubic NLS is globally well-posed in \(\mathcal{F}L^{p}(\mathbb{T})\) for any \(p<\infty\). As far as we know, there are no known results about the global well-posedness issues for the IVP (1.1) for given data in the modulation spaces. Our interest here is in addressing the well-posedness issues for the h-NLS equation (1.1) with given data in the modulation spaces. We obtain the global well-posedness result for the IVP (1.1) in the same spirit as that for the complex mKdV equation [25]. The h-NLS is a particular case of the more general equation (honse equation) \[\begin{cases}\partial_{t}u+ia\partial_{x}^{2}u+b\partial_{x}^{3}u+ic_{1}|u|^{2}u+c_{2}|u|^{2}\partial_{x}u+du^{2}\partial_{x}\overline{u}=0,\quad x,t\in \mathbb{R},\\ u(x,0)=u_{0}(x),\end{cases} \tag{1.2}\] when \(d=0\). The honse equation (1.2), which is a mixed model of complex Korteweg-de Vries (KdV) and Schrodinger type, was proposed by Hasegawa and Kodama in [16] and [20] to describe the nonlinear propagation of pulses in optical fibers. The IVP (1.2) has also been studied by several authors in recent years. Taking \(a,b,c_{1},c_{2}\) and \(d\) as real constants, Laurey [21] proved that the IVP (1.2) for given data in \(H^{s}(\mathbb{R})\) is locally well-posed when \(s>\frac{3}{4}\) and globally well-posed when \(s\geq 1\). Later, using the techniques developed by Kenig, Ponce and Vega [18], Staffilani [28] improved the result in [21] by showing that the IVP (1.2) is locally well-posed in \(H^{s}(\mathbb{R})\), \(s\geq\frac{1}{4}\). Using the method of almost conserved quantities and the I-method introduced by Colliander et al. [9], Carvajal [6] proved the sharp global well-posedness of the IVP associated to (1.2) in \(H^{s}(\mathbb{R})\) for \(s\geq\frac{1}{4}\). The IVP (1.2) when \(a\) and \(b\) are functions of \(t\in[-T_{0},T_{0}]\) for some \(T_{0}>0\) and \(b(t)\neq 0\) for all \(t\in[-T_{0},T_{0}]\) has also been a matter of study (see for instance [21], [5] and [7]). In [8] the authors proved the local well-posedness of the IVP (1.2) in the modulation spaces \(M^{s,p}\) for \(s\geq\frac{1}{4}\) and \(p\geq 2\). The IVP (1.2) posed on the circle \(\mathbb{T}\) has also been studied in the literature, see for instance [29] and references therein. As far as we know, no work is available in the literature that deals with the global well-posedness issues of the IVP (1.2) for given data in the modulation spaces. Motivated by the recent works in [25] and [8], our interest in this work is to address this issue. In fact, we prove the global well-posedness result for the IVP (1.2) for given data in the modulation space \(M_{s}^{2,p}(\mathbb{R})\), whenever \(s\geq\frac{1}{4}\). This is the content of the following theorem, which is the main result of this work. **Theorem 1.1**.: _For given \(s\geq\frac{1}{4}\) and \(2\leq p<\infty\), the IVP (1.1) is globally well-posed in the modulation space \(M_{s}^{2,p}(\mathbb{R})\)._ We present the organization of this work. In Section 2 we introduce the function spaces, their properties, and record some preliminary results. 
Section 3 is devoted to properties of traces and products of multiplication operators, and there we derive the key conservation of the perturbation determinant \(\alpha(k,u)\) (see definition (3.42)) that is fundamental to prove the main result of this work. In Section 4 we provide the proofs of an a priori estimate in modulation spaces and of the main result of this paper. We finish this section recording some principal notations that will be used throughout this work. **Notations:** We will use standard PDE notations throughout this work. We use \(\widehat{f}\) to denote the Fourier transform of \(f\), defined by \(\widehat{f}(\xi)=(2\pi)^{-1/2}\int e^{-ix\xi}f(x)\,dx.\) We write \(A\lesssim B\) if there exists a constant \(c>0\) such that \(A\leq cB\), and we write \(A\sim B\) if \(A\lesssim B\) and \(B\lesssim A\), etc. ## 2. Function spaces and preliminary results As described in the previous section, the best global well-posedness result for the IVP (1.1) for given data in the Sobolev space \(H^{s}(\mathbb{R})\), \(s\geq 1/4\), was obtained in [6], using the space \(Z^{s,b}\), defined for \(s,b\in\mathbb{R}\) as the Fourier transform restriction norm space introduced by Bourgain [2] with norm \[\|u\|_{Z^{s,b}}:=\|\langle\xi\rangle^{s}\langle\tau-\phi(\xi)\rangle^{b} \widehat{u}(\xi,\tau)\|_{L^{2}_{\xi}L^{2}_{\tau}}, \tag{2.1}\] where \(\langle x\rangle:=1+|x|\) and \(\phi(\xi)=\xi^{3}-\alpha\xi^{2}\) is the phase function associated to the h-NLS equation (1.1). We note that, for \(b>\frac{1}{2}\), one has \(Z^{s,b}\subset C(\mathbb{R};H^{s}(\mathbb{R}))\), and these spaces play a very important role in obtaining well-posedness results for IVPs associated to dispersive equations with low regularity Sobolev data. Now, we move on to introduce the modulation spaces on which we concentrate in this work. For given \(s\in\mathbb{R}\), \(1\leq r,p\leq\infty\), the modulation spaces \(M_{s}^{r,p}(\mathbb{R})\) are defined by [12, 13] \[M_{s}^{r,p}(\mathbb{R}):=\{f\in\mathcal{S}^{\prime}(\mathbb{R}):\|f\|_{M_{s}^ {r,p}}<\infty\}, \tag{2.2}\] where \[\|f\|_{M_{s}^{r,p}}:=\|\langle n\rangle^{s}\|\psi_{n}(D)f\|_{L_{x}^{r}(\mathbb{R}) }\|_{\ell_{n}^{p}(\mathbb{Z})}, \tag{2.3}\] with \(\psi\in\mathcal{S}(\mathbb{R})\) such that \[\operatorname{supp}\psi\subset[-1,1],\qquad\sum_{k\in\mathbb{Z}}\psi(\xi-k)=1,\] and \(\psi_{n}(D)\) is the Fourier multiplier operator with symbol \[\psi_{n}(\xi):=\psi(\xi-n).\] For given \(N\geq 1\), let \(P_{N}\) be the Littlewood-Paley projector on the frequencies \(\{|\xi|\sim N\}\). For \(n\in\mathbb{Z}\) we define \[\widehat{\Pi_{n}f}(\xi):=\psi_{n}(\xi)\widehat{f}(\xi). \tag{2.4}\] For any \(1\leq q\leq p\leq\infty\), from Bernstein's inequality we have the following \[\begin{split}\|P_{N}f\|_{L_{x}^{p}}&\lesssim N^{ \frac{1}{q}-\frac{1}{p}}\|f\|_{L_{x}^{q}},\\ \|\Pi_{n}f\|_{L_{x}^{p}}&\lesssim\|f\|_{L_{x}^{q}}. \end{split} \tag{2.5}\] Now, we introduce the Bourgain-type space \(X_{p}^{s,b}\) adapted to the modulation space \(M_{s}^{2,p}(\mathbb{R})\) with norm given by \[\|f\|_{X_{p}^{s,b}}:=\Big{(}\sum_{n\in\mathbb{Z}}\langle n\rangle^{sp}\|\langle\tau- \xi^{3}\rangle^{b}\widehat{f}(\xi,\tau)\|_{L_{\tau,\xi}^{2}(\mathbb{R}\times[n,n+ 1])}^{p}\Big{)}^{\frac{1}{p}}\sim\|\|\Pi_{n}f\|_{X^{s,b}}\|_{\ell_{n}^{p}}. \tag{2.6}\] For \(p=2\), the space \(X_{p}^{s,b}\) simply reduces to the usual Bourgain space \(X^{s,b}\). Note that, for \(b>\frac{1}{2}\), one has the following inclusion \[X_{p}^{s,b}\subset C(\mathbb{R}:M_{s}^{2,p}(\mathbb{R})). 
\tag{2.7}\] Also the following estimates hold \[\|x_{n}\|_{\ell_{n}^{p}}\leq\|x_{n}\|_{\ell_{n}^{q}},\qquad p\geq q\geq 1, \tag{2.8}\] \[\|u\|_{X_{p}^{s,b}}\leq\|u\|_{X_{q}^{s,b}},\qquad p\geq q\geq 1, \tag{2.9}\] \[\|P_{N}u\|_{X_{q}^{s,b}}\lesssim N^{\frac{1}{q}-\frac{1}{p}}\|P_{N}u\|_{X_{p} ^{s,b}},\qquad p\geq q\geq 1. \tag{2.10}\] For a given time interval \(I\), the local-in-time version \(X_{p}^{s,b}(I)\) of \(X_{p}^{s,b}\) are defined with the norm \[\|f\|_{X_{p}^{s,b}(I)}:=\inf\{\|g\|_{X_{p}^{s,b}}:g|_{I}=f\}.\] In what follows we record some preliminary results. We start with the estimates that the unitary group satisfies in the \(X^{s,b}_{p}\) spaces from [25]. **Lemma 2.1**.: _Let \(s,b\in\mathbb{R}\) and \(1\leq p<\infty\). Then for any \(0<T\leq 1\) the following estimate holds_ \[\left\|e^{-t\partial_{x}^{3}}f\right\|_{X^{s,b}_{p}([0,T])}\lesssim\|f\|_{M^{2,p}_{\rm s}}. \tag{2.11}\] **Lemma 2.2**.: _Let \(s\in\mathbb{R}\), \(-\frac{1}{2}<b^{\prime}\leq 0\leq b\leq 1+b^{\prime}\) and \(1\leq p<\infty\). Then for any \(0<T\leq 1\) the following estimate holds_ \[\left\|\int_{0}^{t}e^{-(t-t^{\prime})\partial_{x}^{3}}F(t^{\prime})dt^{\prime }\right\|_{X^{s,b}_{p}([0,T])}\lesssim T^{1+b^{\prime}-b}\|F\|_{X^{s,b^{\prime }}_{p}([0,T])}. \tag{2.12}\] ## 3. Operators and Traces Let \(f\in S(\mathbb{R})\), we define the linear operator \(T\) com kernel \(K_{T}\in L^{2}(\mathbb{R}^{2})\), \[Tf(x):=\int_{\mathbb{R}}K_{T}(x,y)f(y)dy. \tag{3.1}\] Observe that \(T:L^{2}\to L^{2}\), is a bounded linear operator, in fact using Minkowsky and Cauchy-Schwartz inequalities holds \(\|T\|_{L^{2}\to L^{2}}\leq\|K_{T}\|_{L^{2}(\mathbb{R}^{2})}\). We define the trace of the operator T as: \[\operatorname{tr}(T)=\int_{\mathbb{R}}K_{T}(x,x)dx.\] Using Fubini's Theorem in \(\langle Tf,g\rangle=\int_{\mathbb{R}}Tf(x)\overline{g(x)}dx\), \(f,g\in S(\mathbb{R})\), we obtain \[K_{T^{*}}(x,y)=\overline{K_{T}(y,x)}. \tag{3.2}\] Observe that if \(T_{j}\) has kernel \(K_{j}\), \(j=1,2\), then \[T_{1}T_{2}f(x)=\int_{\mathbb{R}^{2}}K_{1}(x,y)K_{2}(y,z)f(z)dydz=\int_{ \mathbb{R}}\left(\int_{\mathbb{R}}K_{1}(x,y)K_{2}(y,z)dy\right)f(z)dz,\] thus \(T_{1}T_{2}\) has kernel and trace \[K(x,z)=\int_{\mathbb{R}}K_{1}(x,y)K_{2}(y,z)dy,\quad\operatorname{tr}(T_{1}T _{2})=\int_{\mathbb{R}^{2}}K_{1}(x,y)K_{2}(y,x)dydx.\] Using Fubini's Theorem, we have \[\operatorname{tr}(T_{1}T_{2})=\operatorname{tr}(T_{2}T_{1}). \tag{3.3}\] We also set \[\|T\|^{2}=\operatorname{tr}(TT^{*})=\int_{\mathbb{R}^{2}}|K_{T}(x,y)|^{2}dxdy. \tag{3.4}\] The operator (3.1) is a Hilbert-Schmidt operator and this norm (3.4) is the Hilbert-Schmidt norm. In general if \(T_{j}\) has kernel \(K_{j}\), \(j=1,2,\ldots,n\), then \[T_{1}T_{2}\ldots T_{n}f(x)=\int_{\mathbb{R}^{n}}K_{1}(x,x_{1})K_{2}(x_{1},x_{2}) \ldots K_{n}(x_{n-1},x_{n})f(x_{n})dx_{1}dx_{2}\ldots dx_{n}, \tag{3.5}\] has kernel \[K(x,x_{n})=\int_{\mathbb{R}^{n-1}}K_{1}(x,x_{1})K_{2}(x_{1},x_{2})\ldots K_{n}( x_{n-1},x_{n})dx_{1}\ldots dx_{n-1}, \tag{3.6}\] and trace \[\operatorname{tr}(T_{1}T_{2}\ldots T_{n})=\int_{\mathbb{R}^{n}}K_{1}(x,x_{1}) K_{2}(x_{1},x_{2})\ldots K_{n}(x_{n-1},x)dxdx_{1}\ldots dx_{n-1}. \tag{3.7}\] We also have \[|\operatorname{tr}(T_{1}T_{2}\ldots T_{n})|\leq\prod_{j=1}^{n}\|T_{j}\|. 
\tag{3.8}\] Let \(m(\xi)\) a real function, we defined the multiplier operator \(M\) associated to \(m\), as \[\widehat{Mf}(\xi)=m(\xi)\widehat{f}(\xi).\] Let \(u\in S(\mathbb{R})\) and \(M\) a multiplier operator associated to \(m\), we define other operators \(Mu\) and \(uM\) as follow \((Mu)f(x):=M(uf)(x)\), and \((uM)f(x)=u(x)(Mf)(x)\). If \(M_{j}\) is a multiplier operator with multiplicator \(m_{j}\), \(j=1,2\), since that \(M_{1}M_{2}=M_{2}M_{1}\) has multiplicator \(m_{1}m_{2}=m_{2}m_{1}\), then \[M_{1}M_{2}u=M_{2}M_{1}u\quad\text{and}\quad uM_{1}M_{2}=uM_{2}M_{1}. \tag{3.9}\] We have the following examples \begin{tabular}{|c|c|c|} \hline Operator & Kernel, \(K(x,\xi)=\) & Trace \\ \hline M & \(m^{\vee}(x-\xi)\) & \\ \hline Mu & \(m^{\vee}(x-\xi)u(\xi)\) & \(\int m^{\vee}(0)u(x)dx=\left(\int m(x)dx\right)\left(\int u(x)dx\right)\) \\ \hline uM & \(u(x)m^{\vee}(x-\xi)\) & \(\int u(x)m^{\vee}(0)dx=\left(\int m(x)dx\right)\left(\int u(x)dx\right)\) \\ \hline uMv & \(u(x)m^{\vee}(x-\xi)v(\xi)\) & \(\int u(x)m^{\vee}(0)v(x)dx=\left(\int m(x)dx\right)\left(\int u(x)v(x)dx\right)\) \\ \hline \end{tabular} Observe that using (3.5) and (3.6) we can define other operators as \(M_{1}uM_{2}=M_{1}(uM_{2})=(M_{1}u)M_{2}\) with kernel: \[K(x,z):=K(m_{1},u,m_{2})=\int_{\mathbb{R}}K_{1}(x,y)K_{2}(y,z)dy=\int_{\mathbb{ R}}m_{1}^{\vee}(x-y)u(y)m_{2}^{\vee}(y-z)dy, \tag{3.10}\] and using (3.7), with trace: \[\operatorname{tr}(M_{1}uM_{2})=\int_{\mathbb{R}^{2}}K_{1}(x,y)K_{2}(y,x)dydx=\int_ {\mathbb{R}^{2}}m_{1}^{\vee}(x-y)u(y)m_{2}^{\vee}(y-x)dydx. \tag{3.11}\] It is not difficult to see that if \(m_{1},m_{2}\in L^{2}(\mathbb{R})\), then \(K(x,z)\in L^{2}(\mathbb{R}^{2})\). **Lemma 3.1**.: _For \(k\neq 0\) and \(u\in S(\mathbb{R})\),_ \[\|(k-\partial)^{-1/2}u(k+\partial)^{-1/2}\|^{2}\lesssim\int_{\mathbb{R}}\frac {|\widehat{u}(\xi)|^{2}}{|k|+|\xi|}d\xi\leq C_{k}\|u\|_{H^{-1/2}}^{2}, \tag{3.12}\] _where \(C_{k}=(\min\{1,|k|\})^{-1/2}\)._ In order to prove (3.12), we need the following **Lemma 3.2**.: _If \(a,b>0\) and \(a+b>1\), we have_ \[\int_{\mathbb{R}}\frac{dx}{\langle x-\alpha\rangle^{a}\langle x-\beta\rangle^{ b}}\lesssim\frac{1}{\langle\alpha-\beta\rangle^{c}},\quad c=\min\{a,b,a+b-1\}. 
\tag{3.13}\] Proof of Lemma 3.1.: From (3.10), using Plancherel's identity and properties of the Fourier transform we have \[\begin{split}\|K(x,z)\|_{L^{2}_{x}L^{2}_{z}}&=\|m_{1 }^{\vee}*[u(\cdot)(m_{2}(\cdot)e^{-iz(\cdot)})^{\vee}](x)\|_{L^{2}_{x}L^{2}_{z }}\\ &=\|m_{1}(\xi)\|[u(\cdot)(m_{2}(\cdot)e^{-iz(\cdot)})^{\vee}]^{ \wedge}(\xi)\|_{L^{2}_{x}}\|_{L^{2}_{\xi}}\\ &=\|m_{1}(\xi)\|[\widehat{u}*(m_{2}(\cdot)e^{-iz(\cdot)})(\xi)\|_ {L^{2}_{z}}\|_{L^{2}_{\xi}}\\ &=\|m_{1}(\xi)\|[(u(\cdot)e^{-i\xi(\cdot)})^{\vee}m_{2}(\cdot)]^{ \wedge}(z)\|_{L^{2}_{x}}\|_{L^{2}_{\xi}}\\ &=\|m_{1}(\xi)\|u^{\vee}(\xi-\eta)m_{2}(\eta)\|_{L^{2}_{\eta}}\|_ {L^{2}_{x}}.\end{split} \tag{3.14}\] Thus making a change of variables \(\xi=k\xi\) and \(\eta=k\eta\) followed by another change of variables \(\xi-\eta=y\), \[\begin{split}\|K(x,z)\|_{L^{2}_{x}L^{2}_{z}}^{2}&= \int_{\mathbb{R}^{2}}\frac{|\widehat{u}(\xi-\eta)|^{2}}{\sqrt{k^{2}+\xi^{2}} \sqrt{k^{2}+\eta^{2}}}d\xi d\eta\\ &\sim\int_{\mathbb{R}^{2}}\frac{|\widehat{u}(k(\xi-\eta))|^{2}}{ \langle\xi\rangle\langle\eta\rangle}d\xi d\eta\\ &=\int_{\mathbb{R}^{2}}\frac{|\widehat{u}(k(y))|^{2}}{\langle y+ \eta\rangle\langle\eta\rangle}dyd\eta\\ &\lesssim\int_{\mathbb{R}}\frac{|\widehat{u}(k(y))|^{2}}{\langle y \rangle}dy,\end{split} \tag{3.15}\] where in the last inequality was used Fubini theorem and Lemma 3.2 with \(a=b=1\). If \(T\) is a linear operator associated with the kernel \(K\), we define the operator \(\overline{T}\) as \[\overline{T}f(x):=\int_{\mathbb{R}}\overline{K(x,y)}f(y)dy, \tag{3.16}\] where \(\overline{K}\) denotes the complex conjugate of \(K\). Thus if \(K(x,y)\in\mathbb{R}\), then \(\overline{T}=T\). Observe that \(\overline{m^{\vee}(\eta)}=\widehat{\overline{m}}(\eta)=\overline{m}^{\vee}(-\eta)\), thus \[\overline{Mu}=M^{-}\overline{u}, \tag{3.17}\] where \(M^{-}\) is the multiplier operator, associated to \(m^{-}(\xi)=\overline{m}(-\xi)\). In this way we get, for any \(k\) real number \[\overline{(k-\partial)^{-1}u}=(k-\partial)^{-1}\overline{u},\quad\text{and} \quad\overline{(k+\partial)^{-1}u}=(k+\partial)^{-1}\overline{u}, \tag{3.18}\] since that if \(m(\xi)=(k\pm i\xi)^{-1}\), then \(\overline{m}(-\xi)=m(\xi)\). Let \(M\) a multiplier operator associated to \(m\), by (3.2) \[K_{M^{*}}(x,y)=\overline{K_{M}(y,x)}=\overline{m^{\vee}}(y-x)=\overline{ \widehat{m}}(x-y)=\overline{m}^{\vee}(x-y)=K_{\overline{M}}(x,y),\] i.e. \[M^{*}=\overline{M}. \tag{3.19}\] Similarly using (3.2) and the above example, we have \[(Mu)^{*}=\overline{u}\overline{M}. \tag{3.20}\] The equalities (3.19) and (3.20) imply that \[(M_{1}uM_{2})^{*}=((M_{1}u)M_{2})^{*}=M_{2}^{*}(M_{1}u)^{*}=\overline{M_{2}} \overline{u}\overline{M_{1}}. \tag{3.21}\] ### Derivatives of the Multiplication Operator Let \(u\in S(\mathbb{R})\) we define \(Pu:S(\mathbb{R})\to S(\mathbb{R})\) the operator of multiplication associated to \(u\) as: \[(Pu)f=uf,\quad f\in S(\mathbb{R}).\] We will use the notation \(Pu:=u\). Let \(n,l\in\mathbb{Z}^{+}\), we also define the operators of multiplication \(u\partial^{n}:S(\mathbb{R})\to S(\mathbb{R})\) by \((u\partial^{n})(f)=uf^{(n)}\) and \((\partial^{l}u):S(\mathbb{R})\to S(\mathbb{R})\) by \((\partial^{l}u)(f):=(uf)^{(l)}\). Considering \(n=l=1\), we obtain \((\partial u)(f)=(uf)^{\prime}=u^{\prime}f+uf^{\prime}\) for all \(f\in S(\mathbb{R})\) or equivalently \(\partial u=u^{\prime}+u\partial\) and thus \(u^{\prime}=\partial u-u\partial=[\partial,u]\). 
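As a quick sanity check, the identity \(u^{\prime}=[\partial,u]\) can be verified symbolically on a generic test function; the short computation below is purely illustrative and uses SymPy.

```python
# Symbolic check of the operator identity  (partial u) f = u' f + u f',
# i.e. u' = [partial, u], on a generic test function f.
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
f = sp.Function('f')(x)

lhs = sp.diff(u * f, x)                        # (partial u) f = (u f)'
rhs = sp.diff(u, x) * f + u * sp.diff(f, x)    # (u' + u partial) f
print(sp.simplify(lhs - rhs))                  # prints 0
```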
We can have combinations between both operators such as \[\partial(u\partial)(f)=((u\partial)f)^{\prime}=(uf^{\prime})^{\prime}=u^{ \prime}f^{\prime}+uf^{\prime\prime}\quad\iff\quad\partial(u\partial)=u^{ \prime}\partial+u\partial^{2}. \tag{3.22}\] Similarly we can define other operators of multiplication, such as \[(\partial^{2}u)(f):=(uf)^{\prime\prime}=u^{\prime\prime}f+2u^{\prime}f^{ \prime}+uf^{\prime\prime} \tag{3.23}\] which gives \[\partial^{2}u=u^{\prime\prime}+2u^{\prime}\partial+u\partial^{2}\quad\iff\quad u ^{\prime\prime}=\partial^{2}u-2u^{\prime}\partial-u\partial^{2}, \tag{3.24}\] We define \((k\pm\partial)u:=ku\pm\partial u\), \((k\pm\partial)^{2}u:=k^{2}u\pm 2k\partial u+\partial^{2}u\) and by induction \((k\pm\partial)^{n}u:=(k\pm\partial)^{n-1}(k\pm\partial)u\), \(n=3,4,\dots\). Using (3.22), adding and subtracting terms in the above equality, we have the following identity \[\begin{split} u^{\prime\prime}=& u(\partial^{2}-2k \partial-k^{2})+(\partial^{2}+2k\partial-k^{2})u+2(k-\partial)u(k+\partial) \\ =& u(k-\partial)^{2}+(k+\partial)^{2}u-4k^{2}u+2(k- \partial)u(k+\partial).\end{split} \tag{3.25}\] valid for all \(k\in\mathbb{R}\). Also we have the operator of multiplication \[(\partial^{3}u)(f):=(uf)^{\prime\prime\prime}=u^{\prime\prime\prime}f+3u^{ \prime\prime}f^{\prime}+3u^{\prime}f^{\prime\prime}+uf^{\prime\prime\prime} \tag{3.26}\] therefore \[\partial^{3}u=u^{\prime\prime\prime}+3u^{\prime\prime}\partial+3u^{\prime} \partial^{2}+u\partial^{3}\ \iff\ \ u^{\prime\prime\prime}=\partial^{3}u-3u^{\prime\prime}\partial-3u^{\prime} \partial^{2}-u\partial^{3} \tag{3.27}\] we also have the following identity \[u^{\prime\prime\prime}=\partial^{3}u-3\partial^{2}u\partial+3\partial u \partial^{2}-u\partial^{3}=\sum_{j=0}^{3}(-1)^{j}\binom{3}{j}\partial^{3-j}u \partial^{j}.\] Using \(u^{\prime}=\partial u-u\partial\), adding and subtracting terms in the above equality, we obtain \[u^{\prime\prime\prime}=u(k-\partial)^{3}+(k+\partial)^{3}u-8k^{3}u+(k-\partial )(3u^{\prime}+6ku)(k+\partial) \tag{3.28}\] for any \(k\in\mathbb{R}\). On the other hand \(\partial(|u|^{2}u)(f)=(|u|^{2}uf)^{\prime}=2|u|^{2}u^{\prime}f+u^{2}\overline{ u}^{\prime}f+|u|^{2}uf^{\prime}\) thus \[\partial(|u|^{2}u)=2|u|^{2}u^{\prime}+u^{2}\overline{u}^{\prime}+|u|^{2}u\partial, \tag{3.29}\] adding and subtracting terms in the above equality, we obtain \[2|u|^{2}u^{\prime}=-(|u|^{2}u)(k+\partial)-(k-\partial)(|u|^{2}u)-u^{2}( \overline{u}^{\prime}-2k\overline{u}), \tag{3.30}\] ### Trace of Products of Multiplier Operators Next, we will state some properties of the Products of Multiplier Operators. 
**Proposition 3.3**.: _Let \(M_{j}\) the multipliers operator associated to \(m_{j}\), \(u_{j}\in S(\mathbb{R})\), \(j=1,\ldots,n+1\) and \(\sigma\) a shift permutation of the n-upla \((1,2,\ldots,n)\), we have_ \[\text{tr}\left(\Pi_{j=1}^{n}M_{j}u_{j}\right)=\text{tr}\left(\Pi_{j=1}^{n}M_{ \sigma(j)}u_{\sigma(j)}\right)=\text{tr}\left(\left(\Pi_{j=1}^{n-1}u_{\sigma( j)}M_{\sigma(j+1)}\right)u_{\sigma(n)}M_{\sigma(1)}\right) \tag{3.31}\] \[\text{tr}\left(\left(\Pi_{j=1}^{n}M_{j}u_{j}\right)M_{n+1}\right)=\text{tr} \left(M_{1}M_{n+1}u_{1}\left(\Pi_{j=2}^{n}M_{j}u_{j}\right)\right) \tag{3.32}\] \[\text{tr}\left(\Pi_{j=1}^{n}u_{j}M_{j}\right)=\text{tr}\left(\left(\Pi_{j=1}^{ n-1}M_{j}u_{j+1}\right)M_{n}u_{1}\right) \tag{3.33}\] \[\text{tr}\left(\left(\Pi_{j=1}^{n}u_{j}M_{j}\right)u_{n+1}\right)=\text{tr} \left(\left(\Pi_{j=1}^{n-1}M_{j}u_{j+1}\right)M_{n}(u_{1}u_{n+1})\right) \tag{3.34}\] Proof.: Using (3.7) and example before, we have \[\text{tr}\left(\Pi_{j=1}^{n}M_{j}u_{j}\right)=\int_{\mathbb{R}^{n}}m_{1}^{ \vee}(x-\xi_{1})u_{1}(\xi_{1})m_{2}^{\vee}(\xi_{1}-\xi_{2})u_{2}(\xi_{2}) \cdots m_{n}^{\vee}(\xi_{n-1}-x)u_{n}(x)dxd\xi_{1}\cdots d\xi_{n-1} \tag{3.35}\] applying Fubini we get (3.31). Similarly \[\text{tr}\left(\left(\Pi_{j=1}^{n}M_{j}u_{j}\right)M_{n+1}\right)=\] \[\int_{\mathbb{R}^{n+1}}m_{1}^{\vee}(x-\xi_{1})u_{1}(\xi_{1})m_{2} ^{\vee}(\xi_{1}-\xi_{2})u_{2}(\xi_{2})\cdots m_{n}^{\vee}(\xi_{n-1}-\xi_{n}) u_{n}(\xi_{n})m_{n+1}^{\vee}(\xi_{n}-x)\] \[= \int_{\mathbb{R}^{n}}u_{1}(\xi_{1})m_{2}^{\vee}(\xi_{1}-\xi_{2}) u_{2}(\xi_{2})\cdots m_{n}^{\vee}(\xi_{n-1}-\xi_{n})u_{n}(\xi_{n})\int_{ \mathbb{R}}m_{1}^{\vee}(x-\xi_{1})m_{n+1}^{\vee}(\xi_{n}-x)dx\] \[= \int_{\mathbb{R}^{n}}u_{1}(\xi_{1})m_{2}^{\vee}(\xi_{1}-\xi_{2}) u_{2}(\xi_{2})\cdots m_{n}^{\vee}(\xi_{n-1}-\xi_{n})u_{n}(\xi_{n})(m_{1}m_{n+1})^{ \vee}(\xi_{n}-\xi_{1})dx\] and (3.32) follows. The proof of inequalities (3.33) and (3.34) are similar. Note that on the right side of (3.32), (3.33) and (3.34) we can still use the (3.31) property. By property (3.33) we have \[\text{tr}\left(M_{1}u_{1}M_{2}u_{2}\right)= \text{tr}\left(u_{2}M_{1}u_{1}M_{2}\right)=\int_{\mathbb{R}^{2}}m _{1}^{\vee}(x-\xi_{1})u_{1}(\xi_{1})m_{2}^{\vee}(\xi_{1}-x)u_{2}(x)d\xi_{1}dx \tag{3.36}\] \[= \int_{\mathbb{R}}u_{1}(\xi_{1})\left\{(\widehat{m}_{1}m_{2}^{ \vee})*u_{2}\right\}(\xi_{1})d\xi_{1}.\] **Lemma 3.4**.: _For \(k\neq 0\) and \(u\in S(\mathbb{R})\),_ \[\text{Re}\,\text{tr}\left\{(k-\partial)^{-1}u(k+\partial)^{-1}\overline{u} \right\}=2kc\int_{\mathbb{R}}\frac{|\widehat{u}(\xi)|^{2}}{4k^{2}+\xi^{2}}d\xi \sim_{k}\|u\|_{H^{-1}}^{2}. 
\tag{3.37}\] Proof.: Using (3.3), (3.36) with \(u_{1}=\overline{u}\), \(u_{2}=u\) and \(m_{1}(\xi)=\dfrac{1}{k+i\xi}\), \(m_{2}(\xi)=\dfrac{1}{k-i\xi}=\overline{m_{1}}\) and Plancherel identity, we have \(m_{2}^{\vee}=\overline{m_{1}}^{\vee}=\overline{\widehat{m_{1}}}\) \[\begin{split}\operatorname{Re}\operatorname{tr}\left\{(k-\partial) ^{-1}u(k+\partial)^{-1}\overline{u}\right\}=&\operatorname{Re} \operatorname{tr}\left\{(k+\partial)^{-1}\overline{u}(k-\partial)^{-1}u \right\}\\ =&\operatorname{Re}\int_{\mathbb{R}}\overline{\widehat {u}}(\xi)\widehat{u}(\xi)\widehat{(\widehat{m_{1}}m_{2}^{\vee})}(\xi)d\xi\\ =&\operatorname{Re}\int_{\mathbb{R}}|\widehat{u}( \xi)|^{2}\widehat{|\widehat{m_{1}}|^{2}}(\xi)d\xi\\ =& c\operatorname{Re}\int_{\mathbb{R}}|\widehat{u}( \xi)|^{2}\dfrac{1}{2k-i\xi}d\xi.\end{split} \tag{3.38}\] From the definition of norm of operator \(M_{1}uM_{2}\), (see definition (3.4)), (3.21) and property (3.32) holds \[\|M_{1}uM_{2}\|^{2}=\operatorname{tr}(M_{1}uM_{2}\overline{M_{2}}\overline{u }\overline{M_{1}})=\operatorname{tr}(\overline{M_{1}}M_{1}uM_{2}\overline{M_{ 2}}u), \tag{3.39}\] similarly \[\|M_{2}\overline{u}M_{1}\|^{2}=\operatorname{tr}(M_{2}\overline{u}M_{1} \overline{M_{1}}u\overline{M_{2}})=\operatorname{tr}(\overline{M_{2}}M_{2} \overline{u}M_{1}\overline{M_{1}}u) \tag{3.40}\] the above equalities, (3.3) and (3.9) imply that \[\|M_{1}uM_{2}\|=\|M_{2}\overline{u}M_{1}\|. \tag{3.41}\] By analogy with has gone in the case of the NLS and mKdV, we define \[\alpha(u(t),k):=\operatorname{Re}\sum_{l=1}^{\infty}\dfrac{(-1)^{l-1}}{l} \operatorname{tr}\left\{[(k-\partial)^{-1/2}u(k+\partial)^{-1}\overline{u}(k -\partial)^{-1/2}]^{l}\right\} \tag{3.42}\] Let \(R^{\pm}=(k\pm\partial)^{-1}\). Using (3.32) we deduce \[\operatorname{tr}\left\{[(k-\partial)^{-1/2}u(k+\partial)^{-1}\overline{u}(k- \partial)^{-1/2}]^{l}\right\}=\operatorname{tr}\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l}\right\}. \tag{3.43}\] We consider \(u\) a solution of h-NSE. For technical reasons, we will assume that \(c_{1}=-2a\) and \(c_{2}=-6b\) i.e. \(u\) satisfies the equation \[\partial_{t}u=-L(u)+F(u),\quad x,t\in\mathbb{R}, \tag{3.44}\] where \(L(u)=ia\partial_{x}^{2}u+b\partial_{x}^{3}u\) and \(F(u)=2ia|u|^{2}u+6b|u|^{2}\partial_{x}u\). **Theorem 3.5**.: _[Conservation of \(\alpha\) for h-NSE] Let \(u(x,t)\) denote a Schwartz-space solution to HONSE. Then for any \(k>0\) holds,_ \[\frac{d}{dt}\alpha(u(t),k)=0. 
\tag{3.45}\] Proof.: Using (3.43) and differentiating \[\begin{split}\frac{d}{dt}\alpha(u(t),k)&=\operatorname{ Re}\sum_{l=1}^{\infty}(-1)^{l-1}\text{tr}\left\{\left(R^{-}uR^{+}\overline{u}\right)^{l-1} \left(R^{-}(\frac{d}{dt}u)R^{+}\overline{u}+R^{-}uR^{+}\frac{d}{dt}\overline{u} \right)\right\}\\ &=\operatorname{Re}\sum_{l=1}^{\infty}(-1)^{l-1}\text{tr}\left\{ \left(R^{-}uR^{+}\overline{u}\right)^{l-1}\left(R^{-}(-Lu)R^{+}\overline{u}+R^ {-}uR^{+}(-\overline{Lu})\right)\right\}\\ &+\operatorname{Re}\sum_{l=1}^{\infty}(-1)^{l-1}\text{tr}\left\{ \left(R^{-}uR^{+}\overline{u}\right)^{l-1}\left(R^{-}(F(u))R^{+}\overline{u} +R^{-}uR^{+}(\overline{F(u)}\right)\right\}\\ &=\operatorname{Re}\text{tr}\left\{\left(R^{-}(-Lu)R^{+} \overline{u}+R^{-}uR^{+}(-\overline{Lu})\right)\right\}\qquad(l=1)\\ &+\operatorname{Re}\sum_{l=1}^{\infty}(-1)^{l}\text{tr}\left\{ \left(R^{-}uR^{+}\overline{u}\right)^{l}\left(R^{-}(-Lu)R^{+}\overline{u}+R^ {-}uR^{+}(-\overline{Lu})\right)\right\}\quad(l:=l+1)\\ &+\operatorname{Re}\sum_{l=1}^{\infty}(-1)^{l-1}\text{tr}\left\{ \left(R^{-}uR^{+}\overline{u}\right)^{l-1}\left(R^{-}(F(u))R^{+}\overline{u}+ R^{-}uR^{+}(\overline{F(u)}\right)\right\}.\end{split} \tag{3.46}\] Thus (3.45) follows if \[\operatorname{Re}\text{tr}\left\{\left(R^{-}(Lu)R^{+}\overline{u}+R^{-}uR^{+} (\overline{Lu})\right)\right\}=0, \tag{3.47}\] and \[\begin{split}\operatorname{Re}\text{tr}\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l}\left(R^{-}(Lu)R^{+}\overline{u}+R^{-}uR^{+}(\overline {Lu})\right)\right\}=\\ -\operatorname{Re}\text{tr}\left\{\left(R^{-}uR^{+}\overline{u} \right)^{l-1}\left(R^{-}(F(u))R^{+}\overline{u}+R^{-}uR^{+}(\overline{F(u)} \right)\right\}.\end{split} \tag{3.48}\] Using (3.36) with \(M_{1}=R^{-}\) and \(M_{2}=R^{+}\), integrating three times we get \[\text{tr}\left\{\left(R^{-}(u_{xxx})R^{+}\overline{u}\right)\right\}=\int_{ \mathbb{R}}u_{xxx}(\xi_{1})\left(\widehat{m_{1}}m_{2}^{\vee}\right)\ast \overline{u}(\xi_{1})d\xi_{1}=-\int_{\mathbb{R}}u(\xi_{1})\left(\widehat{m_{1} }m_{2}^{\vee}\right)\ast\overline{u_{xxx}}(\xi_{1})d\xi_{1}, \tag{3.49}\] similarly \[\text{tr}\left\{\left(R^{-}(iu_{xx})R^{+}\overline{u}\right)\right\}=\int_{ \mathbb{R}}iu_{xx}(\xi_{1})\left(\widehat{m_{1}}m_{2}^{\vee}\right)\ast \overline{u}(\xi_{1})d\xi_{1}=-\int_{\mathbb{R}}u(\xi_{1})\left(\widehat{m_{1} }m_{2}^{\vee}\right)\ast\overline{iu_{xx}}(\xi_{1})d\xi_{1}, \tag{3.50}\] using (3.49) and (3.50) we obtain (3.47). In order to prove (3.48), considering k:=-k and taking complex conjugate in (3.25), (3.28) we obtain \[\overline{u_{xx}}=(k-\partial)^{2}\overline{u}+\overline{u}(k+\partial)^{2}- 4k^{2}\overline{u}+2(k+\partial)\overline{u}(k-\partial). 
\tag{3.51}\] and \[\overline{u_{xxx}}= \overline{u}(-k-\partial)^{3}+(-k+\partial)^{3}\overline{u}+8k^{3} \overline{u}+(-k-\partial)(3\overline{u_{x}}-6k\overline{u})(-k+\partial) \tag{3.52}\] \[= -\overline{u}(k+\partial)^{3}-(k-\partial)^{3}\overline{u}+8k^{3} \overline{u}+(k+\partial)(3\overline{u_{x}}-6k\overline{u})(k-\partial).\] Using, (3.28), (3.52), (3.25) and (3.51) it is not difficult to see that \[R^{-}(u_{xxx})R^{+}\overline{u}+R^{-}uR^{+}(\overline{u_{xxx}})= R^{-}(k+\partial)^{3}uR^{+}\overline{u}+3u_{x}\overline{u}+6k|u|^{2} \tag{3.53}\] \[-R^{-}uR^{+}\overline{u}(k+\partial)^{3}+R^{-}(3u\overline{u_{x }}-6k|u|^{2})(k-\partial),\] and \[R^{-}(iu_{xx})R^{+}\overline{u}+R^{-}uR^{+}(\overline{iu_{xx}})=i\left(R^{-} \mathcal{A}uR^{+}\overline{u}+2|u|^{2}-R^{-}uR^{+}\overline{u}\mathcal{A}-2R^ {-}(|u|^{2})(k-\partial)\right), \tag{3.54}\] where \(\mathcal{A}=(k+\partial)^{2}\). From (3.53) and (3.54) we have that \[R^{-}L(u)R^{+}\overline{u}+R^{-}uR^{+}\overline{L(u)}=R^{-}\mathcal{B}uR^{+} \overline{u}-R^{-}uR^{+}\overline{u}\mathcal{B}+F_{1}(k)+R^{-}\overline{F_{1 }(-k)}(k-\partial), \tag{3.55}\] where \(F_{1}(k)=2ai|u|^{2}+3bu_{x}\overline{u}+6bk|u|^{2}\) and \(\mathcal{B}=ia\mathcal{A}+b(k+\partial)^{3}\), using Property (3.31) we have \[\operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}uR^{+}\overline{u} \right)^{l}\left(R^{-}\mathcal{B}uR^{+}\overline{u}\right)\right\}= \operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}\mathcal{B}uR^{+} \overline{u}\right)\left(R^{-}uR^{+}\overline{u}\right)^{l}\right\}, \tag{3.56}\] and using Property (3.32) we get \[\operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}uR^{+}\overline{u} \right)^{l}\left(R^{-}uR^{+}\overline{u}\mathcal{B}\right)\right\}= \operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}\mathcal{B}uR^{+} \overline{u}\right)\left(R^{-}uR^{+}\overline{u}\right)^{l}\right\} \tag{3.57}\] consequently from (3.55), (3.56) and (3.57) it follows that \[\operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l}\left(R^{-}L(u)R^{+}\overline{u}+R^{-}uR^{+}\overline{ L(u)}\right)\right\}\!=\!\operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l-1}R^{-}uR^{+}\overline{u}F_{1}(k)\right\} \tag{3.58}\] \[\quad+\operatorname{Re}\operatorname{tr}\left\{R^{-}uR^{+} \overline{u}\left(R^{-}uR^{+}\overline{u}\right)^{l-1}R^{-}\overline{F_{1}(-k )}(k-\partial)\right\}\] \[= \operatorname{Re}\operatorname{tr}\!\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l-1}R^{-}uR^{+}(\overline{u}F_{1}(k))\right\}+ \operatorname{Re}\operatorname{tr}\left\{\left(R^{-}uR^{+}\overline{u}\right)^ {l-1}R^{-}u\left(\overline{F_{1}(-k)}\right)R^{+}\overline{u}\right\}\] in the last equality was used (3.31) (with the permutation \(\sigma(n)=2\), \(\sigma(n-1)=1\) and \(\sigma(j)=j+2,j=1,\ldots n-2\), similar as in (3.56)). 
Now, we will to consider the term with \(|u|^{2}u_{x}\) of \(F(u)\), considering k:=-k and taking complex conjugate in (3.30), we obtain \[2|u|^{2}\overline{u_{x}}= -(|u|^{2}\overline{u})(-k+\partial)-(-k-\partial)(|u|^{2} \overline{u})-\overline{u}^{2}(u_{x}+2ku) \tag{3.59}\] \[= (k+\partial)|u|^{2}\overline{u}+(|u|^{2}\overline{u})(k-\partial )-\overline{u}^{2}(u_{x}+2ku),\] By (3.30) and (3.59) we get \[\begin{split} R^{-}(2|u|^{2}u_{x})R^{+}\overline{u}&+R^ {-}uR^{+}(2|u|^{2}\overline{u_{x}})=R^{-}(-|u|^{4})-|u|^{2}uR^{+}\overline{u}+R^ {-}(-u^{2}\overline{u_{x}}+2k|u|^{2}u)R^{+}\overline{u}\\ &+R^{-}(|u|^{4})+R^{-}uR^{+}(|u|^{2}\overline{u})(k-\partial)+R^ {-}uR^{+}(-\overline{u}^{2}u_{x}-2k|u|^{2}\overline{u})\end{split} \tag{3.60}\] and using the properties (3.31) we have \[\begin{split}&\operatorname{Re}\operatorname{tr}\left\{\left(R^{-}uR^ {+}\overline{u}\right)^{l-1}\left(R^{-}(2|u|^{2}u_{x})R^{+}\overline{u}+R^{-} uR^{+}(2|u|^{2}\overline{u_{x}})\right\}=\\ &\operatorname{Re}\operatorname{tr}\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l-1}\left(R^{-}(-u^{2}\overline{u_{x}}+2k|u|^{2}u)R^{+} \overline{u}+R^{-}uR^{+}(-\overline{u}^{2}u_{x}-2k|u|^{2}\overline{u})\right\} \right.\end{split} \tag{3.61}\] where the second term in RHS(3.60) in the first line is canceled with the second term in RHS(3.60) in the second line (using (3.31), similar as in (3.56). Therefore \[\begin{split}&\operatorname{Re}\operatorname{tr}\left\{\left(R^{-}uR^ {+}\overline{u}\right)^{l-1}\left(R^{-}(F(u))R^{+}\overline{u}+R^{-}uR^{+}( \overline{F(u)}\right)\right\}=\\ &\operatorname{Re}\operatorname{tr}\left\{\left(R^{-}uR^{+} \overline{u}\right)^{l-1}\left(R^{-}(-u\left(\overline{F_{1}(-k)}\right))R^{+ }\overline{u}+R^{-}uR^{+}(-\overline{u}F_{1}(k)\right)\right\},\end{split} \tag{3.62}\] where \(\overline{u}F_{1}(k)=2ia|u|^{2}\overline{u}+3b\overline{u}^{2}u_{x}+6kb|u|^{2} \overline{u}\). This equality together with (3.58) prove equality (3.48) and therefore proves the theorem **Proposition 3.6**.: _Let \(u\) solution of the IVP (1.2) and \(v\) such that_ \[u(x,t):=v(x+d_{1}t,t)e^{i(d_{2}x+d_{3}t)}. \tag{3.63}\] _where \(d_{j}\in\mathbb{R}\), \(j=1,2,3\). If \(\dfrac{d\alpha(u(t),k)}{dt}=0\), then \(\dfrac{d\alpha(v(t),k)}{dt}=0\)._ Proof.: By the definition of \(\alpha\) and (3.43), holds \[\alpha(v(t),k):=\operatorname{Re}\sum_{l=1}^{\infty}\dfrac{(-1)^{l-1}}{l} \operatorname{tr}\left\{\left(R^{-}ue^{-id_{2}x}R^{+}\overline{u}e^{id_{2}x} \right)^{l}\right\}. \tag{3.64}\] and using Property (3.31) we have \[\alpha(u(t),k):=\operatorname{Re}\sum_{l=1}^{\infty}\dfrac{(-1)^{l-1}}{l} \operatorname{tr}\left\{\left(\mathcal{R}^{-}u\mathcal{R}^{+}\overline{u} \right)^{l}\right\}. \tag{3.65}\] where \(\mathcal{R}^{\pm}=e^{\mp id_{2}x}R^{\pm}\). The rest of the proof is equal to the proof of Theorem 3.5, replacing \(R^{-}\) by \(\mathcal{R}^{-}\) and \(R^{+}\) by \(\mathcal{R}^{+}\). **Remark 3.7**.: _We consider the following Gauge transform_ \[u(x,t):=v(x+d_{1}t,t)e^{i(d_{2}x+d_{3}t)}. 
\tag{3.66}\] _Using this transformation the IVP (1.1) turns out to be_ \[\begin{cases}\partial_{t}v+b\partial_{x}^{3}v+i(a+3bd_{2})\partial_{x}^{2}v+i(d_{ 3}-bd_{2}^{3}-ad_{2}^{2})v+(d_{1}-3bd_{2}^{2}-2ad_{2})\partial_{x}v\\ +i(c_{1}+c_{2}d_{2})|v|^{2}v+c_{2}|v|^{2}v_{x}=0,\\ v(x,0)=v_{0}(x):=u_{0}(x)e^{-id_{2}x}.\end{cases} \tag{3.67}\] _If one chooses \(d_{1}=-\frac{a^{2}}{3b}\), \(d_{2}=\frac{-a}{3b}\), \(d_{3}=\frac{2a^{3}}{27b^{2}}\), the third, fourth, fifth terms in the first equation in (3.67) vanish i.e._ \[\begin{cases}\partial_{t}v+b\partial_{x}^{3}v+ic_{3}|v|^{2}v+c_{2}|v|^{2}v_{x}= 0,\\ v(x,0)=v_{0}(x):=u_{0}(x)e^{iax/(3b)},\end{cases} \tag{3.68}\] _where \(c_{3}=c_{1}-\frac{ac_{2}}{3b}\). Also, we note that_ \[\|u_{0}\|_{H^{s}}\sim\|v_{0}\|_{H^{s}}\quad\text{and}\quad\|u\|_{Z^{s,b}}\sim \|v\|_{X^{s,b}},\] _where \(X^{s,b}\) is the Fourier transform norm space with phase function \(\xi^{3}\), i.e., with norm_ \[\|u\|_{X^{s,b}}:=\|\langle\xi\rangle^{s}\langle\tau-\xi^{3}\rangle^{b}\widehat {u}(\xi,\tau)\|_{L^{2}_{\xi}L^{2}_{\tau}}.\] _This observation allows one to consider the parameter \(\alpha=0\)._ ## 4. Apriori estimates **Proposition 4.1**.: _Let \(2\leq p<\infty\) and \(u\in S(\mathbb{R})\) a solution to (3.44). Then, there exists \(C=C(p)\) positive such that_ \[\|u(t)\|_{M^{2,p}(\mathbb{R})}\leq C\left(1+\|u(0)\|_{M^{2,p}(\mathbb{R})} \right)^{p/2-1}\|u(0)\|_{M^{2,p}(\mathbb{R})}, \tag{4.1}\] _for any \(t\in\mathbb{R}\)._ Proof.: The proof is done in two steps. Initially we will to consider the initial data \[\|u_{0}\|_{M^{2,p}}<\epsilon\ll 1. \tag{4.2}\] Let \(u\) solution of equation (3.44) and for any \(n\in\mathbb{Z}\) we define \(u_{n}\) such that \[u(x,t)=e^{inx}e^{i(an^{2}+bn^{3})t}u_{n}(x+(2an+3bn^{2})t,t),\] then taking \(v:=u_{n}\) where \[u_{n}(x,t)=e^{-inx}e^{i(an^{2}+2bn^{3})t}u(x-(2an+3bn^{2})t,t) \tag{4.3}\] is a solution of \[\begin{cases}\partial_{t}v+iA\partial_{x}^{2}v+b\partial_{x}^{3}v=2iA|v|^{2}v+6b|v| ^{2}\partial_{x}v,\quad x,t\in\mathbb{R},\\ v(x,0)=e^{-inx}u(x,0),\end{cases} \tag{4.4}\] where \(A=a+3bn\), Theorem 3.5 and Remark 3.6 imply that \[\frac{d\alpha(u_{n}(t))}{dt}=0, \tag{4.5}\] and also have \(\widehat{u}(\xi,t)=e^{i(an^{2}+bn^{3})t}e^{i(2an+3bn^{2})(\xi-n)t}\widehat{u _{n}}(\xi-n,t)\), thus \[|\widehat{u_{n}}(\xi,t)|=|\widehat{u}(\xi+n,t)| \tag{4.6}\] for any \(n\in\mathbb{Z}\) and \(\xi,t\in\mathbb{R}\). We denote \(\alpha\) by \(\alpha(u_{n}(t))=\alpha(u_{n}(t),k)\), \(k>0\), from the definition of \(\alpha\) we get \[\alpha(u_{n}(t))=\mathrm{Re}\sum_{l=1}^{\infty}\frac{(-1)^{l-1}}{l}\mathrm{tr }\left\{\left((k-\partial_{x})^{-\frac{1}{2}}u_{n}(k+\partial_{x})^{-1} \overline{u_{n}}(k-\partial_{x})^{-\frac{1}{2}}\right)^{l}\right\}. 
\tag{4.7}\] but using (3.3) \[\mathrm{tr}\left((k-\partial_{x})^{-\frac{1}{2}}u_{n}(k+\partial_{x})^{-1} \overline{u_{n}}(k-\partial_{x})^{-\frac{1}{2}}\right)=\mathrm{tr}\left((k- \partial_{x})^{-1}u_{n}(k+\partial_{x})^{-1}\overline{u_{n}}\right),\] moreover of equation (3.37) and (4.7), we get \[\alpha(u_{n}(t))=2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}(\xi,t)|^{2}}{4k^{ 2}+\xi^{2}}\,d\xi+\mathrm{Re}\sum_{l=2}^{\infty}\frac{(-1)^{l-1}}{l}\mathrm{ tr}\left\{\left[\,M^{-}u_{n}M^{+}M^{+}\overline{u}_{n}M^{-}\,\right]^{l} \right\} \tag{4.8}\] where \(M^{\mp}=(k\mp\partial_{x})^{-\frac{1}{2}}\), from (4.8) and (3.41), we get \[\left|\,\alpha(u_{n}(t))-2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}(\xi,t)|^{ 2}}{4k^{2}+\xi^{2}}\,d\xi\,\right|\leq\sum_{l=2}^{\infty}\frac{1}{l}\left||M^ {-}u_{n}M^{+}\right||^{2l}. \tag{4.9}\] However of Lemma 3.1, \[||M^{-}u_{n}M^{+}||^{2l}\,\lesssim\,\left[\int_{\mathbb{R}}\frac{|\widehat{u} _{n}(\xi,t)|^{2}}{|k|+|\xi|}\,d\xi\right]^{l}. \tag{4.10}\] then from (4.9) and (4.10), we get \[\left|\,\alpha(u_{n}(t))-2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}(\xi,t)|^{ 2}}{4k^{2}+\xi^{2}}\,d\xi\,\right|\lesssim\sum_{l=2}^{\infty}\left[\int_{ \mathbb{R}}<\xi>^{-1}\,|\widehat{u}_{n}(\xi,t)|^{2}\,d\xi\right]^{l}. \tag{4.11}\] Using (4.6), making a change of variables in (4.11), we get \[\left|\,\alpha(u_{n}(t))-2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}(\xi,t)|^{2}}{4 k^{2}+\xi^{2}}\,d\xi\,\right|\,\lesssim\,\sum_{l=2}^{\infty}\left[\int_{\mathbb{R}}<\xi-n> ^{-1}\,|\widehat{u}(\xi,t)|^{2}\,d\xi\right]^{l}. \tag{4.12}\] For each \(j\in\mathbb{Z}\) consider the interval \[I_{j}=[j-\frac{1}{2},j+\frac{1}{2}[\] Next, we will estimate the right side of (4.12), when time \(t=0\). In this sense, we observe that \[\int_{\mathbb{R}}<\xi-n>^{-1}\,|\widehat{u}(\xi,0)|^{2}\,d\xi\,\sim\,\sum_{j \in\mathbb{Z}}<j-n>^{-1}\,\int_{I_{j}}\,|\widehat{u}(\xi,0)|^{2}\,d\xi. \tag{4.13}\] However, if \(p\geq 2\) and \(1<q\leq\infty\) such that \(\frac{2}{p}+\frac{1}{q}=1\), using Holder's inequality, we get \[\begin{split}\sum_{j\in\mathbb{Z}}<j-n>^{-1}\,\|\widehat{u}(\xi, 0)\|_{L^{2}(I_{j})}^{2}&\lesssim\|<j-n>^{-1}\,\|_{\ell^{q}_{j}} \,\|\,\|\widehat{u}(\xi,0)\|_{L^{2}(I_{j})}^{2}\|_{\ell^{\frac{p}{2}}_{j}( \mathbb{Z})}\\ &\lesssim\|\,\|\widehat{u}(\xi,0)\|_{L^{2}(I_{j})}\|_{\ell^{p}_{ j}(\mathbb{Z})}^{2}\,\sim\,\|\,u(0)\|_{M^{2,p}}^{2}\end{split} \tag{4.14}\] uniformly in \(n\in\mathbb{Z}\). By continuity of the solution operator, there exists a neighborhood \(I_{\lambda}=]-\lambda,\lambda[\) around \(t=0\) such that \[\|\,u(t)\,\|_{M^{2,p}}\,\leq\,\epsilon\ll 1 \tag{4.15}\] Applying similar ideas to obtain (4.14), we have \[r_{n}(t)\,=\,\int_{\mathbb{R}}<\xi-n>^{-1}\,|\widehat{u}(\xi,t)|^{2}\,d\xi\, \lesssim\,\|u(t)\|_{M^{2,p}}^{2}. \tag{4.16}\] Note that \(\sum_{2}^{\infty}\,(r_{n}(t))^{l}\) it is a geometric series of ratio \(0\leq r_{n}(t)\leq\epsilon\ll 1\) and \(t\in I_{\lambda}\) fixed any. Then Of (4.12), (4.16), we have \[\left|\,\alpha(u_{n}(t))-2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}(\xi,t)|^{ 2}}{4k^{2}+\xi^{2}}\,d\xi\,\right|\,\lesssim\,\sum_{l=2}^{\infty}\,(r_{n}(t) )^{l}\,=\,\frac{(r_{n}(t))^{2}}{1-r_{n}(t)}\,\lesssim\,(r_{n}(t))^{2}, \tag{4.17}\] for all \(n\in\mathbb{Z},\ t\in I_{\lambda}\). 
Thus using Young's inequality to series, considering \(\frac{1}{p}+1=\frac{1}{q}+\frac{2}{p}\)\((q=\frac{p}{p-1})\), we have \[\left\|\alpha(u_{n}(t))-2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}( \xi,t)|^{2}}{4k^{2}+\xi^{2}}\,d\xi\right\|_{\ell_{n}^{p/2}} \lesssim\] \[\lesssim\] \[\lesssim\] \[\lesssim\] \[\sim \|u(t)\|_{M^{2,p}}^{4}\] evenly for all \(t\in I_{\lambda}\). On the other hand, note that \[||u(t)||_{MH^{-1,p}}^{2}\,=\,\left[\sum_{n\in\mathbb{Z}}||<\xi-n>^{-1}\, \widehat{u}(\xi,t)||_{L_{\xi}^{2}}^{p}\right]^{2/p}. \tag{4.19}\] Then, changing the variable \(\mu=\xi-n\), we get \[\|u(t)\|_{MH^{-1,p}}^{2}\,=\,\left\|\int_{\mathbb{R}}\frac{|\widehat{u}_{n}( \mu,t)|^{2}}{1+\mu^{2}}\,d\mu\right\|_{\ell_{n}^{p/2}(\mathbb{Z})}. \tag{4.20}\] Of (4.18) and (4.20), we get \[\left\|2kc\int_{\mathbb{R}}\frac{|\widehat{u}_{n}(\xi,t)|^{2}}{4k^{2}+\xi^{2} }\,d\xi\right\|_{\ell_{n}^{p/2}}\,\sim_{k}\,\left\|\int_{\mathbb{R}}\frac{| \widehat{u}_{n}(\xi,t)|^{2}}{1+\xi^{2}}\,d\xi\right\|_{\ell_{n}^{p/2}}\, \lesssim\,\|\alpha(u_{n}(t))\|_{\ell_{n}^{p/2}}+\|u(t)\|_{M^{2,p}}^{4}. \tag{4.21}\] Furthermore \[||\alpha(u_{n}(t))||_{\ell_{n}^{p/2}}\,\lesssim\,||u(t)||_{M^{2,p}}^{4}+||u(t )||_{MH^{-1,p}}^{2}. \tag{4.22}\] Note that \[||u(t)||_{M^{2,p}}^{2}\,\sim\,||u(t)||_{MH^{-1,p}}^{2}\,\lesssim\,||\alpha(u_ {n}(t))||_{\ell_{n}^{p/2}}+||u(t)||_{M^{2,p}}^{4}. \tag{4.23}\] Of (4.22), (4.23) and given that \(\alpha\) is conservative, we get \[||u(t)||_{M^{2,p}}^{2}\,\lesssim\,||u(0)||_{M^{2,p}}^{4}+||u(0)||_{MH^{-1,p}}^ {2}+||u(t)||_{M^{2,p}}^{4}. \tag{4.24}\] Then \[||u(t)||_{M^{2,p}}^{2}\,\lesssim\,||u(0)||_{M^{2,p}}^{2}+||u(0)||_{M^{2,p}}^{4 }+||u(t)||_{M^{2,p}}^{4}. \tag{4.25}\] for all \(t\in I_{\lambda}\). Of (4.15) and the continuity, we get \[||u(t)||_{M^{2,p}}^{2}\,\lesssim\,||u(0)||_{M^{2,p}}^{2}. \tag{4.26}\] evenly for all \(t\in\mathbb{R}\). Finally we will consider the general case, let \(u_{\lambda}(x,t)=\lambda^{-1}u(\lambda^{-1}x,\lambda^{-3}t)\), then \(v:=u_{\lambda}\) is a solution of \[v_{t}+ia\lambda^{-1}v_{xx}+bv_{xxx}=2ia\lambda^{-1}|v|^{2}v+6b|v|^{2}v_{x}, \tag{4.27}\] using the inequality \((c_{1}+\cdots c_{n})^{a}\leq n^{a-1}(c_{1}^{a}+\cdots c_{n}^{a})\), \(c_{j}\geq 0\), \(j=1,\ldots n\), we have \[\begin{split}\|u_{\lambda}(\cdot,0)|\|_{M^{2,p}}\sim& \lambda^{-1/2}\|\,\|\widehat{u}(\cdot,0)\|_{L^{2}(J_{\lambda,n})}\, \|_{\ell_{n}^{p}},\qquad J_{\lambda,n}=\lambda I_{n}=[\lambda n-\frac{\lambda} {2},n\lambda+\frac{\lambda}{2}]\\ =&\lambda^{-1/2}\left(\sum_{n\in\mathbb{Z}}\left( \sum_{j=0}^{\lambda-1}\|\widehat{u}(\cdot,0)\|_{L^{2}(J_{\lambda,n,j})}^{2} \right)^{p/2}\right)^{1/p}\\ \leq&\lambda^{-1/p}\|u(\cdot,0)\|_{M^{2,p}}\end{split} \tag{4.28}\] where \(J_{\lambda,n,j}=[\lambda n-\frac{\lambda}{2}+j,n\lambda-\frac{\lambda}{2}+j+1]\), \(n,\lambda\in\mathbb{Z}^{+}\), \(0\leq j\leq\lambda-1\), hence choosing \(\lambda\in Z^{+}\) such that \[\lambda\sim(1+\|u(\cdot,0)\|_{M^{2,p}})^{p}\] we obtain \[\|u_{\lambda}(\cdot,0)|\|_{M^{2,p}}<\epsilon\ll 1,\] by the small data case presented above follows that. \[\|u_{\lambda}(\cdot,t)\|_{M^{2,p}}\lesssim\|u_{\lambda}(\cdot,0)\|_{M^{2,p}} \tag{4.29}\] for all \(t\in\mathbb{R}\). By scaling, (4.29) and (4.28) holds \[\|u(t)\|_{M^{2,p}}\leq\lambda^{1/2}\|u_{\lambda}(\cdot,\lambda^{3}t)\|_{M^{2, p}}\lesssim\lambda^{1/2}\|u_{\lambda}(\cdot,0)\|_{M^{2,p}}\lesssim\lambda^{1/2-1/p} \|u(\cdot,0)\|_{M^{2,p}}. \tag{4.30}\] This inequality proves the proposition. 
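As a concrete illustration of the quantity controlled by (4.1), the modulation-type norm \(\|u\|_{M^{2,p}}\sim\big\|\,\|\widehat{u}\|_{L^{2}(I_{j})}\big\|_{\ell^{p}_{j}}\) can be approximated on a discretized Fourier grid. The sketch below is illustrative only; the normalization of the discrete Fourier transform and the Gaussian test profile are arbitrary choices and play no role in the argument above.

```python
# Illustrative sketch: approximate ||u||_{M^{2,p}} = || ||\hat u||_{L^2(I_j)} ||_{l^p_j}
# for a sampled profile, with unit frequency boxes I_j = [j - 1/2, j + 1/2).
import numpy as np

def modulation_norm(u_samples, dx, p):
    n = len(u_samples)
    u_hat = np.fft.fftshift(np.fft.fft(u_samples)) * dx / np.sqrt(2 * np.pi)
    xi = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi   # angular frequencies
    dxi = xi[1] - xi[0]
    boxes = np.round(xi).astype(int)            # box index j containing each xi
    local_l2 = [np.sqrt(np.sum(np.abs(u_hat[boxes == j]) ** 2) * dxi)
                for j in np.unique(boxes)]
    return np.linalg.norm(np.array(local_l2), ord=p)

x = np.linspace(-40, 40, 2 ** 12, endpoint=False)
u0 = np.exp(-x ** 2)
# Consistently with (2.8), the printed values are non-increasing in p.
print(modulation_norm(u0, x[1] - x[0], 2), modulation_norm(u0, x[1] - x[0], 6))
```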
**Proposition 4.2**.: _Let \(2\leq p<\infty\), \(0\leq s<1-\frac{1}{p}\) and \(u\in S(\mathbb{R})\) be a solution to (3.44). Then, there exists a positive constant \(C=C(p)\) such that_ \[\|u(t)\|_{M^{2,p}_{s}(\mathbb{R})}\leq C\left(1+\|u(0)\|_{M^{2,p}_{s}(\mathbb{R})}\right)^{p/2-1}\|u(0)\|_{M^{2,p}_{s}(\mathbb{R})}, \tag{4.31}\] _for any \(t\in\mathbb{R}\)._ Proof.: The proof is the same as that of Theorem B1 (i) in [26]. **Proof of Theorem 1.1** The proof of Theorem 1.1 in the case \(0\leq s\leq 1-\frac{1}{p}\) follows from Propositions 4.1 and 4.2; in the case \(s>1-\frac{1}{p}\) the proof is similar to that in Section 3.6 of [25].
2305.08774
Minimal orthonormal bases for pure quantum state estimation
We present an analytical method to estimate pure quantum states using a minimum of three measurement bases in any finite-dimensional Hilbert space. This is optimal as two bases are insufficient to construct an informationally complete positive operator-valued measurement (IC-POVM) for pure states. We demonstrate our method using a binary tree structure, providing an algorithmic path for implementation. The performance of the method is evaluated through numerical simulations, showcasing its effectiveness for quantum state estimation.
Leonardo Zambrano, Luciano Pereira, Aldo Delgado
2023-05-15T16:32:05Z
http://arxiv.org/abs/2305.08774v2
# Minimal orthonormal bases for pure quantum state estimation ###### Abstract We present an analytical method to estimate pure quantum states using a minimum of three measurement bases in any finite-dimensional Hilbert space. This is optimal as two bases are not sufficient to construct an informationally complete positive operator-valued measurement (IC-POVM) for pure states. We demonstrate our method using a binary tree structure, providing an algorithmic path for implementation. The performance of the method is evaluated through numerical simulations, showcasing its effectiveness for quantum state estimation. ## I Introduction Quantum state estimation [1], the process of experimentally determining the complete description of a quantum system, is essential for numerous applications, ranging from quantum information processing to quantum simulation. In a \(d\)-dimensional quantum system, states can be described by positive semi-definite complex matrices with unit trace. Therefore, quantum state estimation requires knowledge of the expectation values of at least \(d^{2}-1\) linearly independent, Hermitian operators. The traditional method for estimating these expectation values is by measuring the \(d^{2}-1\) generalized Gell-Mann matrices [2; 3]. However, this approach demands significant experimental resources and time when \(d\) is large. An alternative is to measure \(d+1\) mutually unbiased bases [4; 5; 6; 7; 8]. While this set offers better scaling, it is still linear in \(d\), and it remains unknown if mutually unbiased bases exist in arbitrary dimensions. The minimal choice for estimation is to measure just one informationally complete, positive operator-valued measurement (IC-POVM) [9; 10; 11; 12; 13]. However, implementing this measurement is challenging as it requires coupling the system with an ancilla and performing a projective measurement on the complete system. Fortunately, if the state is known to be pure, the complexity of the state estimation problem can be significantly reduced [14; 15; 16]. In this case, we require the measurements to form a pure-state informational complete (PSIC)-POVM, which is a POVM that contains at least \(2d\) elements and characterizes any pure state but a set that is dense only on a set of measure zero [10]. It has been demonstrated that by combining the computational basis with two unambiguously discriminating POVMs, which yield a total of \(3(d-1)\) independent measurement outcomes, any pure state can be reconstructed [17]. Additionally, it has been shown that using two or three measurements over a system plus an ancilla is sufficient to determine a pure state, except for a limited set of ambiguities [18]. When the problem is restricted to orthogonal bases, previous research has shown that at least four bases are required to determine any pure state in \(d=3\) and \(d\geq 5\), while for \(d=4\) it is unknown if three or four bases are enough [19; 20]. Besides, it has been shown that the measurement of five bases is sufficient to estimate arbitrary pure quantum states with post-processing based on analytic methods [21; 22; 23] or adaptive compressive sensing techniques [24; 25]. In particular, recent advances have shown two approaches to estimate pure states using only three measurement bases [26; 27]. However, the first method requires an exponential-scaling post-processing step with respect to the dimension, and the second one is only formulated for multi-qubit systems. 
In this article, we present an analytical method to determine almost any pure quantum state in any finite-dimensional Hilbert space using a minimum of three measurement bases. This number is optimal, as two bases are not sufficient to construct a pure-state IC-POVM [10]. To prove the method, we first show how to estimate the relative phases of a quantum state given the measurement of a set of projectors. Then, we propose a protocol that reconstructs all the amplitudes and phases of the state using the previous result. Finally, we establish that the required measurements form at least three bases. Additionally, we assess the performance of the method through numerical simulations. ## II Method Let us consider a Hilbert space \(\mathcal{H}_{\gamma}\) of dimension \(d_{\gamma}\) such that it is the direct sum of two orthogonal subspaces \(\mathcal{H}_{\alpha}\) and \(\mathcal{H}_{\beta}\) of dimension \(d_{\alpha}\) and \(d_{\beta}\), respectively, that is, \(\mathcal{H}_{\gamma}=\mathcal{H}_{\alpha}\oplus\mathcal{H}_{\beta}\). An arbitrary vector in \(\mathcal{H}_{\gamma}\) can be written as \[|\tilde{\psi}_{\gamma}\rangle=|\tilde{\psi}_{\alpha}\rangle+e^{i\varphi}|\tilde {\psi}_{\beta}\rangle, \tag{1}\] with \(|\tilde{\psi}_{\alpha}\rangle\) in \(\mathcal{H}_{\alpha}\), \(|\tilde{\psi}_{\beta}\rangle\) in \(\mathcal{H}_{\beta}\) and \(\varphi\in[0,2\pi)\). We are interested in an equation for the relative phase \(\varphi\) assuming that we know \(|\tilde{\psi}_{\alpha}\rangle\) and \(|\tilde{\psi}_{\beta}\rangle\). In order to do so, we define a set of \(N\) projectors \(\{|\gamma_{j}\rangle\langle\gamma_{j}|\}_{j=1}^{N}\) such that \(|\gamma_{j}\rangle=|\alpha_{j}\rangle+|\beta_{j}\rangle\), \(|\alpha_{j}\rangle\in\mathcal{H}_{\alpha}\) and \(|\beta_{j}\rangle\in\mathcal{H}_{\beta}\). Then, \[|\langle\gamma_{j}|\tilde{\psi}_{\gamma}\rangle|^{2}= |\langle\alpha_{j}|\tilde{\psi}_{\alpha}\rangle|^{2}+|\langle \beta_{j}|\tilde{\psi}_{\beta}\rangle|^{2}\] \[+2\operatorname{Re}\left[e^{i\varphi}\langle\tilde{\psi}_{\alpha }|\alpha_{j}\rangle\langle\beta_{j}|\tilde{\psi}_{\beta}\rangle\right]. \tag{2}\] We now assume that the vector \(|\tilde{\psi}_{\gamma}\rangle\) in Eq. (1) is part of a quantum state \(|\psi\rangle\) that belongs to a Hilbert space \(\mathcal{H}=\mathcal{H}_{\gamma}\oplus\mathcal{H}_{\gamma}^{\perp}\) of dimension \(d\geq d_{\gamma}\). Measuring the system with a POVM that contains the projectors \(\{|\gamma_{j}\rangle\langle\gamma_{j}|\}_{j=1}^{N}\) we can estimate a set of probabilities \(\{p_{j}\}_{j=1}^{N}\) such that \(p_{j}=|\langle\gamma_{j}|\psi\rangle|^{2}=|\langle\gamma_{j}|\tilde{\psi}_{ \gamma}\rangle|^{2}\). Then, we define the quantities \[\tilde{p}_{j}\equiv\frac{|\langle\gamma_{j}|\tilde{\psi}_{\gamma}\rangle|^{2} -|\langle\alpha_{j}|\tilde{\psi}_{\alpha}\rangle|^{2}-|\langle\beta_{j}| \tilde{\psi}_{\beta}\rangle|^{2}}{2} \tag{3}\] and \[\Gamma_{j}\equiv\langle\tilde{\psi}_{\alpha}|\alpha_{j}\rangle\langle\beta_{j }|\tilde{\psi}_{\beta}\rangle, \tag{4}\] which are known. Substituting Eqs. (3) and (4) into Eq. 
(2) we find a system of equations for the relative phase \(\varphi\) between \(|\tilde{\psi}_{\alpha}\rangle\) and \(|\tilde{\psi}_{\beta}\rangle\), which is given by \[\begin{pmatrix}\operatorname{Re}\left[\Gamma_{1}\right]&-\operatorname{Im} \left[\Gamma_{1}\right]\\ \operatorname{Re}\left[\Gamma_{2}\right]&-\operatorname{Im}\left[\Gamma_{2} \right]\\ \vdots&\vdots\\ \operatorname{Re}\left[\Gamma_{N}\right]&-\operatorname{Im}\left[\Gamma_{N} \right]\end{pmatrix}\begin{pmatrix}\cos\varphi\\ \sin\varphi\end{pmatrix}=\begin{pmatrix}\tilde{p}_{1}\\ \tilde{p}_{2}\\ \vdots\\ \tilde{p}_{N}\end{pmatrix}. \tag{5}\] We need at least two different equations for the system to have a unique solution, which in the case \(N>2\) can be found through the Moore-Penrose pseudo-inverse. If we measure just two independent projectors the solution is \[e^{i\varphi}=i\frac{\tilde{p}_{2}\Gamma_{1}^{*}-\tilde{p}_{1}\Gamma_{2}}{ \operatorname{Im}\left[\Gamma_{1}\Gamma_{2}^{*}\right]}, \tag{6}\] whenever \(\operatorname{Im}\left[\Gamma_{1}\Gamma_{2}^{*}\right]\neq 0.\) Since we know \(|\tilde{\psi}_{\alpha}\rangle\), \(|\tilde{\psi}_{\beta}\rangle\) and \(e^{i\varphi}\), we have completely characterized the vector \(|\tilde{\psi}_{\gamma}\rangle\) in Eq. (1). When \(\operatorname{Im}\left[\Gamma_{1}\Gamma_{2}^{*}\right]=0\) the phase \(\varphi\) cannot be obtained. However, if we employ the identity \(\cos^{2}(\varphi)+\sin^{2}(\varphi)=1\), we can determine this phase from a single equation \(\operatorname{Re}\left[\Gamma_{k}\right]\cos(\varphi)-\operatorname{Im}\left[ \Gamma_{k}\right]\sin(\varphi)=\tilde{p}_{k}\) except for two ambiguities \[e^{i\varphi_{\pm}}=\frac{1}{\Gamma_{k}}\left(\tilde{p}_{k}\pm\sqrt{\tilde{p}_ {k}-|\Gamma_{k}|^{2}}\right). \tag{7}\] When we choose the measurements randomly, the event \(\operatorname{Im}\left[\Gamma_{1}\Gamma_{2}^{*}\right]=0\) has probability zero (see Appendix B). However, small values of this quantity could decrease the quality of the estimation when working with a finite number measurement of shots (or ensemble size), since it affects the matrix inversion process. If this is the case, we can always measure another projector \(|\gamma_{3}\rangle\langle\gamma_{3}|\) and add it to the system of equations to improve the numerical stability of the protocol. The previous procedure can be generalized to reconstruct pure states in any dimension \(d\). For this, we use a complete, full binary tree structure with \(2d-1\) nodes, which we describe in Appendix A. Consider an arbitrary pure quantum state \[|\psi_{1}^{(d)}\rangle=\sum_{k=1}^{d}c_{k}e^{i\varphi_{k}}|k\rangle \tag{8}\] in a \(d\)-dimensional Hilbert space \(\mathcal{H}_{1}^{(d)}\). The coefficients entering in \(|\psi_{1}^{(d)}\rangle\) are such that \(0\leq c_{k}\leq 1\) and \(0\leq\varphi_{k}<2\pi\). Super-indexes denote dimension and sub-indexes denote nodes in the tree. To start the quantum state estimation protocol, we decompose the Hilbert space \(\mathcal{H}_{d}^{(d)}\) into \(d\) direct sums of one-dimensional subspaces as \(\mathcal{H}_{1}^{(d)}=\mathcal{H}_{d}^{(1)}\oplus\mathcal{H}_{d+1}^{(1)}\oplus...\oplus\mathcal{H}_{2d-1}^{(1)}\). Each Hilbert space \(\mathcal{H}_{m}^{(1)}\) contributes with a coefficient \[|\psi_{m}^{(1)}\rangle=c_{k}e^{i\varphi_{k}}|k\rangle \tag{9}\] of the state \(|\psi^{(d)}\rangle\), with \(k=m-d+1\). Measuring the canonical basis we obtain a set of probabilities \(\{p_{k}\}_{k=1}^{d}\) which can be used to estimate all the coefficients \(\{c_{k}\}_{k=1}^{d}\) in Eq. (9). 
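For reference, the phase-recovery step of Eq. (5) admits a compact numerical implementation. The sketch below is illustrative (function names and toy data are not from an actual implementation): it solves the possibly overdetermined linear system by least squares and projects the solution back onto \(\cos^{2}\varphi+\sin^{2}\varphi=1\). With only two well-conditioned projectors it reproduces the closed-form solution (6).

```python
# Hedged sketch of the phase recovery in Eq. (5): given the overlaps Gamma_j
# (Eq. (4)) and the reduced probabilities p_tilde_j (Eq. (3)), fit (cos phi, sin phi).
import numpy as np

def recover_phase(gammas, p_tilde):
    A = np.column_stack([np.real(gammas), -np.imag(gammas)])
    sol, *_ = np.linalg.lstsq(A, p_tilde, rcond=None)
    c, s = sol / np.hypot(*sol)          # enforce cos^2 + sin^2 = 1
    return np.arctan2(s, c) % (2 * np.pi)

# Toy usage: two random projectors and an exact relative phase phi = 1.3.
rng = np.random.default_rng(0)
phi = 1.3
gammas = rng.normal(size=2) + 1j * rng.normal(size=2)
p_tilde = np.real(np.exp(1j * phi) * gammas)      # Eq. (2) rearranged
print(recover_phase(gammas, p_tilde))             # ~ 1.3
```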
Then, to fully characterize the state we just need to find the values of the phases \(\{\varphi_{k}\}_{k=1}^{d}\). To find the phases \(\varphi_{k}\) we rest in a complete, full binary tree structure. The tree has \(2d-1\) nodes, and the leaves (nodes at the bottom of the tree) are \(d\) vectors \(|\tilde{\psi}_{m}^{(1)}\rangle=c_{k}|k\rangle\). Every internal node is a vector generated by the addition of its children and a relative phase, that is, \[|\tilde{\psi}_{m}^{(d_{m})}\rangle=|\tilde{\psi}_{2m}^{(d_{2m})}\rangle+e^{i \varphi}|\tilde{\psi}_{2m+1}^{(d_{2m+1})}\rangle \tag{10}\] Figure 1: Tree structure needed to estimate a state in dimension \(5\). Super-indices denote dimension, and sub-indices denote nodes in the tree. in \(\mathcal{H}_{m}^{(d_{m})}=\mathcal{H}_{2m}^{(d_{2m})}\oplus\mathcal{H}_{2m+1}^{(d_ {2m+1})}\), \(d_{m}=d_{2m}+d_{2m+1}\). Let us assume that we know \(|\tilde{\psi}_{2m}^{(d_{2m})}\rangle\) and \(|\tilde{\psi}_{2m+1}^{(d_{2m+1})}\rangle\). Then, to obtain their parent node \(|\tilde{\psi}_{m}^{(d_{m})}\rangle\) we only need to find the phase \(\varphi\). We do this by measuring projectors on \(\mathcal{H}_{m}^{(d_{m})}\) and using Eqs. (3), (4) and (5). Since we know all the leaves, we can apply this procedure recursively starting from node \(d-1\), and continuing with nodes \(d-2\), \(d-3\), and so on. We find the state \(|\psi_{1}^{(d)}\rangle\) of the system when we reach the root of the tree (except for a global phase). Fig. 1 illustrates the tree for the case \(d=5\). The leaves \(m=9,8,...,5\) contain vectors \(|\tilde{\psi}_{m}^{(1)}\rangle=c_{m-4}|m-4\rangle\). Nodes 4, 3 and 2 correspond to the states \[|\tilde{\psi}_{4}^{(2)}\rangle =c_{4}|4\rangle+c_{5}e^{i(\varphi_{5}-\varphi_{4})}|5\rangle, \tag{11}\] \[|\tilde{\psi}_{3}^{(2)}\rangle =c_{2}|2\rangle+c_{3}e^{i(\varphi_{3}-\varphi_{2})}|3\rangle,\] (12) \[|\tilde{\psi}_{2}^{(3)}\rangle =c_{1}e^{i(\varphi_{1}-\varphi_{4})}|1\rangle+c_{4}|4\rangle+c_{5 }e^{i(\varphi_{5}-\varphi_{4})}|5\rangle, \tag{13}\] and the root of the state \[|\tilde{\psi}_{1}^{(5)}\rangle=|\tilde{\psi}_{2}^{(3)}\rangle+e^{i(\varphi_{2 }-\varphi_{4})}|\tilde{\psi}_{3}^{(2)}\rangle. \tag{14}\] This is the state of the system but a global phase. For the case of \(n\)-qubit systems, we have a perfect binary tree with \(n\) levels. The vectors at the \(n-1\) level are the same as the so-called \(j=1\) substates of Ref. [27]. Since both methods take vectors and join them in pairs using direct sums, the internal nodes of the tree coincide with the substates. Furthermore, the systems of equations that need to be solved at each step are the same, so our approach effectively generalizes the multi-qubit case to the qudit case. To estimate an arbitrary state in dimension \(d\) we first measure the canonical basis. Then, we set a complete, full binary tree with \(d\) leaves. The total number of nodes in the tree is \(2d-1\). Since we have to measure at least 2 projectors for each one of the \(d-1\) internal nodes, the total number of projectors will be at least \(2d-2\). We can organize these on 2 bases. A simple way to do this is to generate \(d-1\) random vectors, one for each \(\mathcal{H}^{(d_{m})}\) with \(d_{m}>1\), in the tree, use the Gram-Schmidt method on the set, and complete with a last element to form a basis. Another way to obtain bases for the method involves a binary tree with the same structure as the one that we use to reconstruct \(d\)-dimensional states. 
Each leaf \(m=d,...,2d-1\) now contains a vector in the computational basis \[|s_{m}^{(1)}\rangle=|m-d+1\rangle. \tag{15}\] The \(d-1\) internal nodes contain two orthonormal vectors \[|r_{m}^{(d_{m})}\rangle =a|s_{2m}^{(d_{2m})}\rangle+be^{i\phi}|s_{2m+1}^{(d_{2m+1})}\rangle, \tag{16}\] \[|s_{m}^{(d_{m})}\rangle =b|s_{2m}^{(d_{2m})}\rangle-ae^{-i\phi}|s_{2m+1}^{(d_{2m+1})}\rangle, \tag{17}\] with \(a\), \(b\) real, and \(\phi\) a phase. These parameters are fixed for all the nodes in the tree. Note that all the \(d-1\) vectors \(|r_{m}^{(d_{m})}\rangle\) are orthogonal to each other. To prove this, let us assume two arbitrary internal nodes \(i,j\), with \(i>j\). If \(i\) is not an ancestor of \(j\), then \(|r_{i}^{(d_{i})}\rangle\) is orthogonal to \(|r_{j}^{(d_{j})}\rangle\), since, by construction, these vectors live in different orthogonal subspaces of \(\mathcal{H}^{(d)}\). If \(i\) is an ancestor of \(j\), then \(|r_{i}^{(d_{i})}\rangle\) is of the form \[|r_{i}^{(d_{i})}\rangle=u|s_{j}^{(d_{j})}\rangle+v|x\rangle, \tag{18}\] where \(|x\rangle\) is in \((H_{j}^{(d_{j})})^{\perp}\) and \(u\) and \(v\) are constants. This also means that \(|r_{i}^{(d_{i})}\rangle\) and \(|r_{j}^{(d_{j})}\rangle\) are orthogonal, as \(\langle r_{j}^{(d_{j})}|r_{i}^{(d_{i})}\rangle=\langle r_{j}^{(d_{j})}|s_{j}^{( d_{j})}\rangle=0\). Then, a basis for the protocol will be given by the set \(\{|r_{m}^{(d_{m})}\rangle_{m=1}^{d-1}\) plus a last element to complete the basis. If we count the canonical basis, the minimum number of bases needed to reconstruct arbitrary states in any dimension is 3. For every internal node in the tree, we have to solve a system of equations in the form of Eq. (5). With three bases, there might be some cases where some of these systems have no unique solution. If this happens, the maximum number of solutions that are consistent with the measurements is \(2^{\lceil n/2\rceil-1}\). The probability of this event is negligible when we choose the bases at random (see Appendix B). However, if the bases are known, a state could be generated such that the estimation protocol fails. In such a case, we can always measure a fourth base to add more independent equations to the estimation protocol. ## III Numerical simulations In order to test the performance of the method, we perform several numerical simulations. We randomly generate a set of \(10^{3}\) Haar-uniform states \(\{|\psi_{i}\rangle\}_{i=1}^{1000}\) for \(d=5,10,15,20,25,30\). We apply the estimation protocol using 3, 5 and 9 bases, whose measurement is simulated using \(2^{13}\), \(2^{15}\), \(2^{17}\) and \(2^{19}\) shots per basis. The bases in Eq. (16) are generated with the choice \(a=b=1/\sqrt{2}\) and \(\phi\) randomly, uniformly in \([0,2\pi)\). As figure of merit for the accuracy of the estimation we use the infidelity \(I(|\psi_{i}\rangle,|\psi_{i}^{(\mathrm{est})}\rangle)=1-|\langle\psi_{i}|\psi_{ i}^{(\mathrm{est})}\rangle|^{2}\) between the target states \(|\psi_{i}\rangle\) and their estimates \(|\psi_{i}^{(\mathrm{est})}\rangle\). We expect this quantity to be close to zero. Figure 2(a) shows the median infidelity for the protocol, obtained by measuring three bases, as a function of the dimension. The different curves correspond to the different number of shots with \(2^{13}\), \(2^{15}\), \(2^{17}\), and \(2^{19}\), from top to bottom. Shaded regions indicate the interquartile range. Figs. 2(b) and 2(c) show the results of the simulations for 5 and 9 bases, respectively. 
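The basic ingredients of these simulations are straightforward to reproduce. The following sketch (names, dimension and seed are illustrative) draws a Haar-random state, simulates a finite-shot measurement of one basis, and evaluates the infidelity figure of merit.

```python
# Illustrative sketch of the simulation ingredients used above.
import numpy as np

rng = np.random.default_rng(2023)

def haar_random_state(d):
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    return z / np.linalg.norm(z)                   # Haar-uniform pure state

def measure_basis(psi, basis, shots):
    """basis: unitary whose columns are the measured basis vectors."""
    probs = np.abs(basis.conj().T @ psi) ** 2
    counts = rng.multinomial(shots, probs / probs.sum())
    return counts / shots                          # estimated probabilities

def infidelity(psi, psi_est):
    return 1.0 - np.abs(np.vdot(psi, psi_est)) ** 2

psi = haar_random_state(30)
p_hat = measure_basis(psi, np.eye(30), 2 ** 13)    # canonical basis, 2^13 shots
print(p_hat.sum(), infidelity(psi, psi))           # 1.0, 0.0
```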
Figures 2(a),2(b) and 2(c) indicate that, in general, the estimation protocol here proposed accurately estimates pure states. For a fixed number of bases or shots, the quality of the estimation decreases as the dimension increases. This is natural because as the dimension increases the number of parameters to be estimated also increases. For a fixed dimension, the estimation accuracy can be improved by increasing the number of shots. However, it is often more effective to increase the number of bases. For instance, using 3 bases in dimension 30, we achieve infidelities on the order of \(10^{-2}\) using \(3\times 2^{19}\) total shots. For the same dimension, the use of 9 bases with a total of \(9\times 2^{13}\) total shots gives better results. ## IV Discussion and Conclusions We have shown that it is possible to reconstruct pure quantum states in arbitrary dimensions using a minimum of three bases. These constitute an instance of PSICPOVM, that is, our estimation procedure allows characterization of all qudit states up to a set of measure zero. Furthermore, our protocol is optimal in the sense that at least \(2d-1\) independent POVM elements are required to form a PSIC-POVM [10]. As 2 bases only contain \(2d-2\) independent POVM elements, they are insufficient to determine a generic pure state, and thus the measurement of a third basis is mandatory to generate a PSIC-POVM. Thus, our estimation protocol has the minimal number of basis required to form a PSIC-POVM. Our estimation protocol is formulated in terms of a binary tree structure. This provides a direct implementation of the protocol for any dimension and a recipe to build the required bases. It is possible that for certain states the estimation procedure fails. Even when these states form a null measure set, this problem can be overcome by adding at least one more basis. Adding more basis also increases the precision of the estimation for a finite number of shots, as our numerical simulations show. Recently, high-dimensional quantum systems have been renewed subject of interest for applications in quantum information, for instance, multi-core fibers [28; 29], orbital angular moment of light [30; 31; 32], and qudit-based quantum processors [33; 34; 35]. The characterization of this class of systems requires procedures with the least number of measurement results and efficient post-processing, such as the estimation procedure proposed here. Furthermore, this procedure can also estimate the pure states affected by white noise (see Appendix C), making it well-suited for use in experimental contexts. A potential extension of this work is a better characterization of the states that the method cannot reconstruct. It would be interesting to design an adaptive procedure that can detect, with just the measurement of the canonical basis, when a given state cannot be accurately estimated, and then adjust the subsequent measurements accordingly. By doing so, we could potentially expand the range of pure quantum states that can be successfully reconstructed, or even establish a "really" PSIC-POVM [36; 37]. Furthermore, an extension to multi-qudit systems may be of interest for quantum computing applications. ###### Acknowledgements. LZ was supported by the Government of Spain (Severo Ochoa CEX2019-000910-S, TRANQI and European Union NextGenerationEU PRTR-C17.11), Fundacio Cellex, Fundacio Mir-Puig and Generalitat de Catalunya (CERCA program). 
LP was supported by ANID-PFCHA/DOCTORADO-BECAS-CHILE/2019-772200275, the CSIC Interdisciplinary Thematic Platform (PTI+) on Quantum Technologies (PTI-QTEP+), the CAM/FEDER Project No. S2018/TCS-4342 (QUITEMAD-CM), and the Proyecto Sinergico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM). AD was supported by ANID - Millennium Science Initiative Program - ICN17\({}_{-}\)012 and FONDECYT Grants 1231940 and 1230586.
2307.06120
Recognizing student identification numbers from the matrix templates using a modified U-net architecture
This paper presents an innovative approach to student identification during exams and knowledge tests, which overcomes the limitations of the traditional personal information entry method. The proposed method employs a matrix template on the designated section of the exam, where squares containing numbers are selectively blackened. The methodology involves the development of a neural network specifically designed for recognizing students' personal identification numbers. The neural network utilizes a specially adapted U-Net architecture, trained on an extensive dataset comprising images of blackened tables. The network demonstrates proficiency in recognizing the patterns and arrangement of blackened squares, accurately interpreting the information inscribed within them. Additionally, the model exhibits high accuracy in correctly identifying entered student personal numbers and effectively detecting erroneous entries within the table. This approach offers multiple advantages. Firstly, it significantly accelerates the exam marking process by automatically extracting identifying information from the blackened tables, eliminating the need for manual entry and minimizing the potential for errors. Secondly, the method automates the identification process, thereby reducing administrative effort and expediting data processing. The introduction of this innovative identification system represents a notable advancement in the field of exams and knowledge tests, replacing the conventional manual entry of personal data with a streamlined, efficient, and accurate identification process.
Filip Pavičić
2023-07-12T12:20:04Z
http://arxiv.org/abs/2307.06120v1
Recognizing student identification numbers from the matric templates using a modified U-net architecture ###### Abstract This paper presents an innovative approach to student identification during exams and knowledge tests, which overcomes the limitations of the traditional personal information entry method. The proposed method employs a matrix template on the designated section of the exam, where squares containing numbers are selectively blackened. The methodology involves the development of a neural network specifically designed for recognizing students' personal identification numbers. The neural network utilizes a specially adapted U-Net architecture, trained on an extensive dataset comprising images of blackened tables. The network demonstrates proficiency in recognizing the patterns and arrangement of blackened squares, accurately interpreting the information inscribed within them. Additionally, the model exhibits high accuracy in correctly identifying entered student personal numbers and effectively detecting erroneous entries within the table. This approach offers multiple advantages. Firstly, it significantly accelerates the exam marking process by automatically extracting identifying information from the blackened tables, eliminating the need for manual entry and minimizing the potential for errors. Secondly, the method automates the identification process, thereby reducing administrative effort and expediting data processing. The introduction of this innovative identification system represents a notable advancement in the field of exams and knowledge tests, replacing the conventional manual entry of personal data with a streamlined, efficient, and accurate identification process. _Index Terms-student identification, multi-label detection, U-net, automated identification process_ ## 1 Introduction In today's education system, the increasing number of students poses challenges when conducting knowledge checks and correcting exams. The process of correcting exams is often time-consuming and prone to errors due to the large volume of work involved. This article presents an innovative approach aimed at streamlining exam administration and correction processes. The proposed approach revolves around the use of a unique student identification number to identify students during exams, replacing the traditional method of entering personal data. Instead, a method is introduced that involves blackening squares with numbers inside a matrix template on the designated part of the exam. This standardized approach eliminates reliance on individual handwriting and enables automated student identification using a neural network. Additionally, this method allows for the detection of incorrect entries in the matrix display. In the event of an incorrect entry, the examiner retains the ability to manually identify the student and rectify the error. The implementation of a customized neural network for square recognition within the matrix template enables automatic identification and interpretation of student personal information. Leveraging deep learning models, the neural network analyzes the layout of the black squares and interprets the information based on learned patterns. As a result, this approach facilitates a fast and accurate student identification process during exams. It is worth noting that this approach is not limited solely to the education sector. 
Similar concepts can be applied in other industries where users possess unique identification numbers, such as healthcare or administration. Implementing this technology can provide an intuitive and efficient identification method across various domains. The subsequent sections of this article will delve into the practical implementation of student personal number recognition. This includes presenting the dataset utilized for training the network, describing the data annotation methodology, and providing a detailed explanation of the network architecture employed for identification number recognition. Furthermore, the article will present the results of network performance evaluations and discuss potential avenues for further system enhancement. ## 2 Related work The existing body of literature encompasses several scholarly papers that explore similar issues and topics. The paper [1] from 2012 focuses on off-line restricted-set handwritten word recognition for student identification in a short answer question automated assessment system. The proposed system incorporates the Gaussian Grid and Modified Direction Feature Extraction Techniques, which achieve promising recognition rates (up to 99.08% for Modified Direction and up to 98.28% for Gaussian Grid). Another paper [2] from 2014 introduces an off-line handwriting recognition system for identifying Thai student names in an automated assessment system. The system utilizes the Gaussian Grid and Modified Direction Feature extraction techniques on upper and lower contours, loops, and full word contour images. Results show encouraging recognition rates, with both techniques achieving 99.27% accuracy using artificial neural networks and support vector machine classifiers. The paper [3] from 2015 introduces a Short Answer Assessment System (SAAS) that incorporates an off-line handwriting recognition system and novel combined features. The SAAS aims to automate the assessment of handwritten short answer questions, reduce marking time, and minimize errors in transcription. The proposed system utilizes advanced feature extraction techniques and achieves improved recognition rates. The SAAS system achieves a recognition accuracy of 95.99% using WRLGGF, outperforming other feature extraction techniques such as GGF, MDF, and WRLMDF. The proposed system successfully recognizes and marks examination papers while also identifying students from their name components. While existing literature focuses on hand-written student identification, to the best of our knowledge, no studies have explored student identification based on recognizing unique student ID numbers within a matrix template. This approach offers distinct advantages, as it circumvents challenges associated with unclear handwriting and enables the unambiguous identification of individual students, as multiple students may share the same name but must possess a unique identification number. ## III Dataset The dataset utilized for training the neural network to recognize student ID numbers comprises scanned portions of exams containing completed matrix templates. This dataset was collected from a course conducted at the Faculty of Electrical Engineering and Computing, University of Zagreb. The matrix template represents a table of size \(n\times 10\), where \(n\) corresponds to the number of digits in the student's identification number. 
Specifically, the student personal number at the aforementioned university consists of 10 digits, resulting in a matrix representation of size \(10\times 10\) (see Fig. 1). The data set comprises a total of 1703 examples, consisting of images featuring labeled matrix templates and their associated labels. Out of these, 1658 examples contain correctly filled matrix templates, while the remaining examples encompass incorrectly filled matrix templates. A correctly filled matrix template (CFMT) contains precisely one blackened field in each column, whose row position represents the corresponding digit. Conversely, incorrectly filled matrix templates include instances where multiple blackened fields are present in the same column, no blackened field is present in one of the columns, or the matrix template is visibly filled incorrectly (e.g., crossed out). The labels are represented by a 2D vector of numbers, wherein the blackened positions in the table are denoted with the number 1 within the vector, while all other positions are marked with the number 0. For the sake of clarity, the vector can also be presented as a textual record. Columns containing a single 1 are displayed as the respective digit, empty columns are indicated with the symbol "x", and in the case of two or more 1s in a column, all marked digits are enclosed within square brackets (e.g., "[34]") (see Fig. 2). Fig. 1: An exemplary empty matrix template. Fig. 2: An example of a correctly and incorrectly filled matrix template with associated labels and textual records. ## IV Architecture Recognizing student ID numbers poses a unique challenge that cannot be simply categorized as a classic image classification problem. To address this specific problem, this article presents a specially designed architecture optimized for solving this task. The architecture employed in this research to recognize student personal numbers is based on a modified version of the U-Net [4] architecture. The modifications primarily focus on the decoder part of the architecture. Traditionally, the decoder part of the architecture consists of layers that gradually increase the resolution from the lowest level to match the input layer resolution. However, in this model, the decoder part extends only up to the layer that increases the resolution to the size \(16\times 16\). Following the U-Net segment of the architecture, a convolution layer (transformation layer) is introduced, with a kernel size adjusted to produce an output layer of the desired resolution, specifically \(10\times 10\). Finally, a convolutional output layer with a sigmoid activation function is utilized to generate prediction labels (see Fig. 3). The encoder part consists of resolution reduction blocks. Each block comprises a convolutional layer with a kernel size of 5, a LeakyReLU activation function (with alpha set to 0.2), and a dropout layer with a dropout rate of 0.1. The upscaling block in the architecture incorporates a convolutional transpose layer with a kernel size of 3 and a stride of 2. This layer facilitates the expansion of resolution from the previous layer. The resulting layer is then concatenated with the corresponding layer from the downsampling part of the network, followed by dropout and another convolutional layer. The number of channels for each layer in the model is determined by the hyperparameter "channels," which is specified as a list with four values. The first value in the list corresponds to the number of channels for layers 0 and 1 in the encoder.
The second value is utilized for layers 2 and 3, the third value for layers 4 and 5, and the fourth value for the bottleneck layer. The decoder part employs the same number of channels as the corresponding encoder layers but in the reverse order. For instance, if the number of channels in layers 0 and 1 is set to 16, then the decoder layers connected to those encoder layers will also have 16 channels. The transformation layer of the model has a fixed number of channels determined by the "last_channel" parameter. The two aforementioned parameters provide the flexibility which enables achieving an optimal balance between network size and prediction quality. By adjusting the number of channels in the layers, the network's capacity and its ability to learn relevant features can be controlled. This custom architecture combines various layers and techniques to attain high accuracy and reliability in recognizing student ID numbers. ## V Training The network was trained using the following procedures. Initially, the images were resized to a target size of 128x128 pixels to ensure uniformity in the input grid size. Prior to being fed into the network, the images underwent random augmentation to enhance the training data. Augmentations applied include rotation (rt), shearing (sh), and scaling (sc). The degree of augmentation is controlled by the hyperparameter \(p_{\text{org}}\), which determines the expected percentage of non-augmented examples in each batch. Each augmentation type has an augmentation factor (\(\nu_{\text{rt}}\), \(\nu_{\text{sh}}\), \(\nu_{\text{sc}}\)) between 0 and 1, and based on \(p_{\text{org}}\), a parameter \(\mu\) satisfying Eq. (1) is obtained using the golden-section search. Finally, each augmentation probability (\(p_{\text{rt}}\), \(p_{\text{sh}}\), \(p_{\text{sc}}\)) is calculated by multiplying the augmentation factor by the calculated \(\mu\). The augmentation factors are set as \(\nu_{\text{rt}}=0.4\), \(\nu_{\text{sh}}=0.3\), and \(\nu_{\text{sc}}=0.3\). The value of the hyperparameter \(p_{\text{org}}\) is set to 0.5, indicating that approximately 50% of the images in each batch are expected to be original. This approach strikes a balance between using original images for the model to learn the inherent patterns and utilizing augmented examples to enable the model to capture variations and enhance its robustness. The Adam optimizer is employed for network training, while binary cross-entropy serves as the loss function. The training process consists of 150 epochs. Additionally, the initial learning rate is set to 0.0015. After every 20 epochs, the learning rate decreases by 10%, resulting in a new learning rate that is 90% of the previous value. This learning rate schedule helps the model to converge effectively and avoid overshooting the optimal solution. \[p_{\text{org}}=(1-\mu\nu_{\text{rt}})(1-\mu\nu_{\text{sh}})(1-\mu\nu_{\text{sc}}) \tag{1}\] \[p_{\text{rt}}=\mu\nu_{\text{rt}} \tag{2}\] \[p_{\text{sh}}=\mu\nu_{\text{sh}} \tag{3}\] \[p_{\text{sc}}=\mu\nu_{\text{sc}} \tag{4}\] ## VI Results This section presents the results of detecting student personal numbers using the custom architecture proposed in this article. Additionally, a comparison of model performance with varying complexities is presented. ### _Evaluation Metrics_ Given the unique nature of the problem, the evaluation metrics are tailored to suit the specific requirements of the task.
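As a brief aside on the training setup of Section V, the following is a minimal sketch (not the authors' implementation) of how \(\mu\) and the augmentation probabilities of Eqs. (1)-(4) can be obtained with a plain golden-section search. The factor values \(\nu_{\text{rt}}=0.4\), \(\nu_{\text{sh}}=\nu_{\text{sc}}=0.3\) and \(p_{\text{org}}=0.5\) are those stated above; the search interval and tolerance are assumptions.

```python
# Sketch: solve Eq. (1) for mu via golden-section search, then apply Eqs. (2)-(4).

def golden_section_min(f, lo=0.0, hi=1.0, tol=1e-6):
    """Minimize a unimodal function f on [lo, hi] with golden-section search."""
    inv_phi = (5 ** 0.5 - 1) / 2                 # 1/phi ~ 0.618
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                          # keep [a, d]; old c becomes new d
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                          # keep [c, b]; old d becomes new c
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def augmentation_probabilities(p_org=0.5, nu_rt=0.4, nu_sh=0.3, nu_sc=0.3):
    """Find mu with (1 - mu*nu_rt)(1 - mu*nu_sh)(1 - mu*nu_sc) = p_org."""
    def objective(mu):
        prod = (1 - mu * nu_rt) * (1 - mu * nu_sh) * (1 - mu * nu_sc)
        return (prod - p_org) ** 2               # squared residual of Eq. (1)
    mu = golden_section_min(objective)
    return mu, mu * nu_rt, mu * nu_sh, mu * nu_sc

mu, p_rt, p_sh, p_sc = augmentation_probabilities()
print(f"mu={mu:.4f}, p_rt={p_rt:.4f}, p_sh={p_sh:.4f}, p_sc={p_sc:.4f}")
```

Because the product in Eq. (1) is strictly decreasing in \(\mu\) on the assumed interval, the squared residual is unimodal and the golden-section search converges to the unique solution.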
Let \(y^{(i)}\) represent the 2D vector of the actual label for the \(i\)-th example, and let \(\hat{y}^{(i)}\) represent the 2D vector of the predicted label for the same example. We define these two vectors to be equal if all their corresponding elements are equal, formally expressed as: \[y^{(i)}_{jk}=\hat{y}^{(i)}_{jk}\quad\forall j,k.\] The accuracy (ACC) of the prediction is defined as the mean value of the indicator function \(l(y^{(i)},\hat{y}^{(i)})\) (Eq. 5), where \(l\) is 1 if the vectors are equal, and 0 if they are not, for all examples. \[\text{ACC}=\frac{1}{N}\sum_{i=1}^{N}l(y^{(i)},\hat{y}^{(i)}) \tag{5}\] \[l(y^{(i)},\hat{y}^{(i)})=\begin{cases}1,&\text{if }y^{(i)}=\hat{y}^{(i)}\\ 0,&\text{otherwise}\end{cases} \tag{6}\] The alpha error (critical error) represents the model's error in which \(y^{(i)}\neq\hat{y}^{(i)}\) and \(\hat{y}^{(i)}\) satisfies CFMT. This error is considered critical because predictions that satisfy CFMT do not require interaction with a human corrector, making it challenging to detect. The alpha error rate represents the proportion of alpha error examples in relation to the total number of predictions that satisfy CFMT. On the other hand, the beta error represents the model's error in which \(y^{(i)}\neq\hat{y}^{(i)}\) and \(y^{(i)}\) satisfies CFMT. Although less critical, the beta error still requires manual review, even though the matrix record is correctly filled. The beta error rate represents the proportion of beta error examples in relation to the total number of labels that satisfy CFMT. ### _Model Performance Evaluation_ To evaluate the model, the k-fold method was employed to divide the dataset into 5 parts. Each model was trained 5 times, with one part used as a validation dataset and the remaining 4 parts used for training. The reported metrics in Table I are the mean values obtained from the 5 training runs. K-fold validation improves the robustness of performance estimation by mitigating the impact of random validation set selection, particularly when dealing with limited or variable datasets, reducing potential bias resulting from a single split of the data. The results show that all three models achieve remarkably high accuracy and low alpha and beta error rates. As expected, the model with the highest number of parameters demonstrates the best performance across all metrics. The experimental findings demonstrate that the best model achieves an accuracy of 97.02%. Additionally, with an alpha error rate of 0.43% and a beta error rate of 1.63%, the model exhibits high reliability and suitability for real-life scenarios. These results highlight the practical applicability of the proposed model in accurately predicting the correct student identification number. While the most complex model demonstrates the best performance, it is important to consider alternative models when deploying on devices with lower hardware capabilities, prioritizing the optimal balance between speed and accuracy. Therefore, the selection of the model should carefully account for the hardware limitations of the target device, ensuring efficient and effective deployment. ## VII Conclusion and Future Work This paper introduces an architecture for recognizing student personal numbers from a matrix template.
The proposed model, based on a modified U-Net architecture, achieves high accuracy in detecting student identification numbers for automated student identification systems. Additionally, the paper outlines the annotation method for marked examples and presents evaluation metrics for assessing model performance. The experimental results demonstrate the accurate detection of matrix template entries, with the best model achieving an accuracy of 97.02% on validation data, with an alpha error of 0.43% and a beta error of 1.63%. Based on these metrics, it can be concluded that the model successfully detects student numbers from the matrix template. In future work, there is potential to further enhance the model's performance. One avenue for exploration is expanding the dataset to encompass diverse scenarios and variations of student ID numbers. By incorporating a larger and more comprehensive dataset, the model can improve its generalization capabilities.
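To make the architecture of Section IV concrete, the following is a simplified Keras-style sketch of the modified U-Net idea. The 128x128 input size, LeakyReLU slope, dropout rate, kernel sizes, 16x16 decoder cut-off, 10x10 transformation layer, sigmoid output, Adam optimizer with initial learning rate 0.0015, and binary cross-entropy loss follow the description above; the single input channel, the number of encoder stages, the use of stride-2 convolutions for downsampling, and the channel sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Simplified sketch of the modified U-Net: encoder, shortened decoder (stops
# at 16x16), 7x7 "valid" transformation layer (16x16 -> 10x10), sigmoid output.
import tensorflow as tf
from tensorflow.keras import layers, Model

def down_block(x, ch):
    x = layers.Conv2D(ch, 5, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    return layers.Dropout(0.1)(x)

def up_block(x, skip, ch):
    x = layers.Conv2DTranspose(ch, 3, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Dropout(0.1)(x)
    x = layers.Conv2D(ch, 5, padding="same")(x)
    return layers.LeakyReLU(0.2)(x)

def build_model(channels=(16, 32, 64, 128), last_channel=32):
    inp = layers.Input((128, 128, 1))
    e1 = down_block(inp, channels[0])    # 64x64
    e2 = down_block(e1, channels[1])     # 32x32
    e3 = down_block(e2, channels[2])     # 16x16
    b = down_block(e3, channels[3])      # 8x8 bottleneck
    d = up_block(b, e3, channels[2])     # back to 16x16; decoder stops here
    t = layers.Conv2D(last_channel, 7, padding="valid")(d)   # 16x16 -> 10x10
    t = layers.LeakyReLU(0.2)(t)
    out = layers.Conv2D(1, 1, activation="sigmoid")(t)        # 10x10 label grid
    return Model(inp, out)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(1.5e-3), loss="binary_crossentropy")
model.summary()
```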
2303.02869
Enhancing Border Security and Countering Terrorism Through Computer Vision: a Field of Artificial Intelligence
Border security has been a persistent problem at international borders, especially when it comes to preventing the illegal movement of weapons, contraband, and drugs, and to combating illegal or undocumented immigration, while at the same time ensuring that lawful trade, economic prosperity, and national sovereignty across the border are maintained. In this research work, we used open source computer vision (OpenCV) and the AdaBoost algorithm to develop a model which can detect a moving object from afar, classify it, automatically capture the full image and the face of the individual separately, and then run a background check on them against worldwide databases while making a prediction about the individual being a potential threat, intending immigrant, potential terrorist, or extremist, and then sound an alarm. Our model can be deployed on any camera device and mounted at any international border. There are two stages involved: we first developed a model based on the OpenCV computer vision algorithm, with the ability to detect human movement from afar, which automatically captures both the face and the full image of the person separately; the second stage is the automatic triggering of a background check against the moving object. This ensures the moving object is checked against several databases worldwide, and the admissibility of the person can be determined from afar. If the individual is inadmissible, the system automatically alerts the border officials with the image of the person and other details, and if they bypass the border officials, the system is able to detect this and alert the authorities with their images and other details. All these operations are performed from afar by the AI-powered camera before the individual reaches the border.
Tosin Ige, Abosede Kolade, Olukunle Kolade
2023-03-06T03:37:43Z
http://arxiv.org/abs/2303.02869v1
Enhancing Border Security and Countering Terrorism through Computer Vision: A Field of Artificial Intelligence ###### Abstract Border security has been a persistent problem at international borders, especially when it comes to preventing the illegal movement of weapons, contraband, and drugs, and to combating illegal or undocumented immigration, while at the same time ensuring that lawful trade, economic prosperity, and national sovereignty across the border are maintained. In this research work, we used open source computer vision (OpenCV) and the AdaBoost algorithm to develop a model which can detect a moving object from afar, classify it, automatically capture the full image and the face of the individual separately, and then run a background check on them against worldwide databases while making a prediction about the individual being a potential threat, intending immigrant, potential terrorist, or extremist, and then sound an alarm. Our model can be deployed on any camera device and mounted at any international border. There are two stages involved: we first developed a model based on the OpenCV computer vision algorithm, with the ability to detect human movement from afar, which automatically captures both the face and the full image of the person separately; the second stage is the automatic triggering of a background check against the moving object. This ensures the moving object is checked against several databases worldwide, and the admissibility of the person can be determined from afar. If the individual is inadmissible, the system automatically alerts the border officials with the image of the person and other details, and if they bypass the border officials, the system is able to detect this and alert the authorities with their images and other details. All these operations are performed from afar by the AI-powered camera before the individual reaches the border. Artificial intelligence, computer vision, counter-terrorism, border security, national security, face recognition. ## 1 Introduction Recent advancements in artificial intelligence have led to computer vision as a subfield of artificial intelligence, enabling computers to derive and analyze information from visual data (images, videos, and several other graphical inputs). These advancements have positioned computer vision, as a field of artificial intelligence, as the ultimate solution to border insecurity, maritime insecurity, terrorism, and security at airport arrival and departure points. Insecurity is the condition in which a nation cannot adequately defend itself against threats arising from danger or aggression which can jeopardize its independence or territory. Insecurity has no border and has grown into a global threat which consistently threatens our peaceful co-existence and international relations. There are several causes of international insecurity across borders, some of which include, but are not limited to, balancing power between superpowers, the pursuit of arms races, constant conflict, territorial claims, and terrorism. The terrorist attacks of September 11, 2001 on the United States led to the formation of the Department of Homeland Security, whose mission covers counter-terrorism, cyber security, border security, and also natural disaster planning and response (Fig. 1)[13].
With $133 billion having been spent by the US government since the creation of the Department of Homeland Security in 2003, border security and enforcement remain a big challenge, despite the number of border security and interior enforcement officers now exceeding 50,000. In the same vein, the EU faces a number of threats at the external borders, stemming from factors such as irregular migration, people smuggling, human trafficking, illegal weapons, and drugs [1]. Artificial Intelligence (AI) yields great potential to significantly enhance the effectiveness of security and operational activities at the borders and beyond, with a direct and multiplying impact on internal security. It was these challenges that led to the commissioning of RAND Europe by Frontex [6],[7] to explore and search for ways by which European borders can adopt the use of artificial intelligence to secure international borders within the European Union [2], while at the same time looking at the prospects of developing and deploying fair and secure AI with the capability to maximize support for the management of the European Union's borders. Billions of dollars are budgeted annually for the Department of Homeland Security to counter terrorism, secure the border, defend cyber security, respond to natural disasters, crack down on criminal networks, and identify and prevent unauthorized migration and potential terrorists. Yet the problem remains daunting and unsolved, as evidenced by recent statistics from the Pew Research Center (Fig 2)[14]. Fig 1: Border threats and DHS Mission. In the same vein, terrorism involves the use or threat of violence by non-state actors to achieve goals which may be political, economic, or aimed at forcing international relationships. Data from the Center for Strategic and International Studies shows an annual geometric increase in the percentage of terrorist attacks and plots in the United States between 2015 and 2020 (Fig. 3)[16]. Currently, at various international borders, AI is only being used as a complement for border officials to monitor and scan migrants and territory. This current usage of AI for securing borders is not effective, considering the possibility of evasion at the border by illegal migrants; there is also the question of how to control the influx of terrorists across a very large border several kilometers long, and this increases its porosity. In order to effectively control and man an international border, there is a need for an automated, AI-powered control tower that is able to detect movement from afar, along with an automated background check, before the migrant or terrorist reaches the border. The problem of securing borders, countering terrorism, and managing illegal immigration becomes more challenging and daunting due to the frequently changing tactics adopted by potential immigrants to beat security systems at airports and land borders. It is for this purpose that we developed a machine learning model using an artificial neural network and the AdaBoost algorithm for image identification and object recognition. Two algorithms are implemented in the model: a neural network and AdaBoost. The AdaBoost algorithm implementation enables the recognition and identification of walking objects across the border, while the artificial neural network implementation scans deeply and analyzes each and every object in the body of the image from afar and then makes an informed prediction. Figure 3: Percentage of U.S. terrorist attacks and plots perpetrated by Active Duty or Reserve Service Members, 2015-2020. Figure 2: Migrant encounters at the U.S.-Mexico border are at a 21-year high.
## 2 Background Study Current background studies on border security can be categorized into two groups: the first category is of the opinion that it is inevitable to impose security on people's movement as long as the likelihood of illegal, criminal, and terrorist activities is kept to the barest minimum, while the other category is of the opinion that subjecting people at border crossings to extensive security control is not a necessity and rather tends to subject people to unnecessary stress and human rights violations [3]. This category represents mostly governmental approaches that can sometimes be based on public opinion, such as the Eurobarometer survey [4], and a host of other reasons to secure international borders [5]. It is also the fundamental duty of the government to secure its borders to minimize terrorist activities, illegal migration, human and drug trafficking, etc. While several artificial intelligence systems have been deployed at various borders and airports, there has been very little or no success, as evidenced by the continuing incidence of illegal migration, terrorism, and extremism at various international borders. Harel [8] did a good job of raising ethical issues in deploying artificial intelligence, but his concern is based on biometric identification, while Hayes and Vermeulen [9] were only concerned with the economic cost and the fundamental implications for human rights, failing to proffer a solution to the problem. The fact that some artificial intelligence has been implemented at various international land borders [10] for border control with little success cannot be overemphasized, due to its ability to infringe on and tamper with the fundamental human rights [11],[12] of people at border crossings and checkpoints. In addition to the fact that the AI implemented at some international borders has had little success, it constantly infringes on people's fundamental human rights, as with lie detectors, mind readers, etc. This led to our development of a new computer vision AI which is based on the open source computer vision algorithm without any infringement on privacy. ## 3 Research Methodology For this research work we used: 1. Open source computer vision (OpenCV) Library 2. Face Recognition library 3. Live stream video from Webcam 4. Python 3.10.2 5. MSSQL Server Database 6. Restful API service. Due to the massive movements of illegal immigrants, terrorism, and crimes against humanity which have resulted in insecurity across the globe, there is a need for an artificial-intelligence-powered solution that will detect, track, and raise an alarm by notifying the appropriate authorities about any potential terrorist, extremist, illegal immigrant, drug cartel member, or someone on the wanted list of Interpol or the FBI. In our approach to implementing artificial intelligence based on computer vision to proffer a permanent solution to this, we wrote a machine learning algorithm that is able to detect images in a live streaming camera using the open source computer vision machine learning algorithm. We wrote two separate algorithms, namely: 1. Human-Detection algorithm 2. Face-Extraction algorithm. The function of the human-detection algorithm is to detect a human in a live stream camera and then save the image in a particular folder on the system.
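To illustrate the Human-Detection step just described, the following is a minimal sketch. The paper does not publish its detector code, so OpenCV's stock HOG person detector and the folder name are used here purely as illustrative stand-ins.

```python
# Sketch: read frames from a live camera, detect people, save matching frames
# to a folder that the face-extraction step can watch.
import os
import time
import cv2

SAVE_DIR = "detections"          # assumed folder watched by the face-extraction step
os.makedirs(SAVE_DIR, exist_ok=True)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)        # webcam or stream of the border camera
while True:
    ok, frame = cap.read()
    if not ok:                   # stop when the stream ends or fails
        break
    # Detect full human figures in the current frame.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    if len(boxes) > 0:
        # Save the full frame; the face-extraction algorithm picks it up later.
        fname = os.path.join(SAVE_DIR, f"person_{int(time.time() * 1000)}.jpg")
        cv2.imwrite(fname, frame)
cap.release()
```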
The face-extraction algorithm's role is to complement the work of the human-detection algorithm by constantly checking the system for any saved images and then extracting the face part into another location. We developed a standalone RESTful API service using the Python Flask framework; the RESTful web service takes the faces, converts them individually to byte arrays, and then uses the converted byte arrays to make a thorough background search against several security agencies' databases such as police databases, public databases, government databases, immigration databases, Interpol, etc. If the person is on a wanted list, the service is able to notify the appropriate officers at the border while the individual is still far off; should the individual find his way past and beat the security officers at the border, the system is able to detect this, log it in the database, notify the security authorities, and also raise the necessary alarm. Since the program is artificial-intelligence based, using a computer vision algorithm, all that is needed is to mount an external camera or webcam; our system is able to detect a human being in the camera feed and save the image immediately. If a group of people is present in the camera surveillance, it will split them into individuals, extract their faces individually, and convert each face to a separate byte array before saving it to the database. As soon as it hits the database, another RESTful API service is triggered which picks up the byte array and then makes a thorough search against several databases for a background check. The flow chart is such that once the surveillance camera (Fig 4)[12] detects an image, the system automatically classifies it as object, animal, or human; if the image is categorized as object or animal, the system ignores it, but if it is human, it will extract the face of the person while automatically triggering a specially built RESTful application programming interface (API) algorithm that converts the face to a byte array and then performs a thorough background check against multiple worldwide databases such as social media (for terrorist or extremist views), immigration records (documented or undocumented), police records (criminal record), and Interpol (wanted list) using an asynchronous method. The whole process happens in a couple of seconds and the system is then able to classify the result into two categories (admissible or inadmissible). If the result of all those background checks against multiple worldwide databases is good, the person is classified as admissible and ignored by the system, but if the result is not good, the image is classified as inadmissible, and the system automatically reports through a dedicated channel, sending an alert and sounding an alarm to the officials at the border. Figure 4: Video surveillance system on Port Huron Michigan. Photo by Health Stephens. The whole process is based on multi-threading, as there can be hundreds of people crossing the border at the same time; hence, the system uses an asynchronous method to run multiple tasks in the background at the same time against each image (Fig 5). The program is based on the Open Source Computer Vision (OpenCV) algorithm (Fig 6), with the capability to detect, capture, and extract a face from an image in a live stream or surveillance camera. Figure 5: Border Security Application flow chart diagram. Figure 6: Algorithm for image detection and face extraction from surveillance camera.
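A correspondingly minimal sketch of the Face-Extraction step and its hand-off to the background-check service follows. The folder names, the endpoint URL, and the payload format are illustrative assumptions, since the paper's own Flask service and database schema are not published; the face_recognition library is the one listed in the methodology.

```python
# Sketch: scan saved frames, crop detected faces, convert each crop to a byte
# array, and trigger the (assumed) background-check endpoint.
import glob
import os
import cv2
import face_recognition
import requests

SAVE_DIR = "detections"          # frames written by the human-detection step
FACE_DIR = "faces"               # extracted face crops
CHECK_URL = "http://localhost:5000/background-check"   # assumed Flask endpoint
os.makedirs(FACE_DIR, exist_ok=True)

for path in glob.glob(os.path.join(SAVE_DIR, "*.jpg")):
    image = face_recognition.load_image_file(path)       # RGB array
    # One entry per detected face: (top, right, bottom, left)
    for i, (top, right, bottom, left) in enumerate(face_recognition.face_locations(image)):
        face = cv2.cvtColor(image[top:bottom, left:right], cv2.COLOR_RGB2BGR)
        face_path = os.path.join(FACE_DIR, f"{os.path.basename(path)}_{i}.jpg")
        cv2.imwrite(face_path, face)
        # Convert the crop to a byte array and trigger the background check.
        ok, buf = cv2.imencode(".jpg", face)
        if ok:
            requests.post(CHECK_URL, files={"face": buf.tobytes()}, timeout=10)
```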
On successful detection and extraction of a face from an image in the camera (Fig 7), another background service in the system picks up the extracted faces, converts them to byte arrays, and checks them against various worldwide databases to determine the admissibility or inadmissibility of the person to the country. ### Performance Evaluation and Validation For our validation and performance evaluation, we used several images at different distances to look for possible false positives and false negatives. We also used groups of people walking side by side and also separately for testing. In all cases we obtained true positives and true negatives, which signifies the presence of low variance and low bias in our model. The only area where we have a false positive is when the person is wearing a cloth which has the image of a person on it; the system picks up both the face of the person as well as the face of the image printed on the cloth. Fig 8: True Positive and False Positive from Real Time Face Detection and Extraction from Live surveillance Camera. So, here we have both a true positive and a false positive; extracting the face of the image printed on the cloth is a false positive, and this is going to stress the system to some extent because the RESTful API service algorithm will perform a background check on it against various databases. It is our opinion that a new model can be developed in the near future to automatically detect and exclude images printed on cloth so as to eliminate the false positive scenario. Fig 7: Real Time Face Detection and Extraction from Live surveillance Camera. ## Conclusion In this research work, we used the open source computer vision (OpenCV) algorithm to develop a model which can detect a moving object from afar, classify it into animate object, inanimate object, or human, automatically capture the full image and face of the individual separately in the process, and then run a background check on them against worldwide databases and make a prediction about the individual being a potential threat, intending immigrant, potential terrorist, or extremist, and then sound an alarm. Our model can be deployed on any camera device and be mounted at any international border. With this research and implementation, which is based on the computer vision branch of artificial intelligence, the problems of border security, which have been persistent at international borders, especially when it comes to preventing the illegal movement of weapons, contraband, and drugs, and combating illegal or undocumented immigration, can be effectively solved, while at the same time ensuring that lawful trade, economic prosperity, and national sovereignty across the border are maintained and enhanced. Since this research is based on computer vision, we used true positives and false positives for our performance evaluation and cross-validation (Fig 8), which was generally good and deployable. The only area where we have a false positive is if the person is wearing a shirt or cloth with an image on it: the system will extract both the face of the person as well as the face of the image printed on the shirt. We categorize this as a false positive because the system will automatically trigger and run a background check against the face extracted from the printed image on the shirt. So, for someone wearing cloth with an image printed on it, the system will perform a background check on both the person and the image printed on his cloth.
Then, what happens if the person is admissible but the image of the person printed on the shirt is of a terrorist and inadmissible? It means there will be a false alarm and a wrong notification at the border. This is an area where more research work is still needed in the near future. There is no risk or danger here, because it is just an image printed on the cloth and the actual person is not at the border; we believe further research is needed regarding the false alarms arising from extracting a face from the image printed on the shirt of someone crossing the border. ## Limitation We had limitations in accessing records like Interpol, Federal Bureau of Investigation (FBI), police records, immigration records, and so on, because special permission from the appropriate authority is required to access them due to the privacy and sensitivity of such records. ## Data Availability We had limitations in accessing records like Interpol, Federal Bureau of Investigation (FBI), police records, immigration records, and so on, because special permission from the appropriate authority is required to access them due to the privacy and sensitivity of such records.
2306.01566
Fatigue detection via sequential testing of biomechanical data using martingale statistic
Injuries to the knee joint are very common for long-distance and frequent runners, an issue which is often attributed to fatigue. We address the problem of fatigue detection from biomechanical data from different sources, consisting of lower extremity joint angles and ground reaction forces from running athletes with the goal of better understanding the impact of fatigue on the biomechanics of runners in general and on an individual level. This is done by sequentially testing for change in a datastream using a simple martingale test statistic. Time-uniform probabilistic martingale bounds are provided which are used as thresholds for the test statistic. Sharp bounds can be developed by a hybrid of a piece-wise linear- and a law of iterated logarithm- bound over all time regimes, where the probability of an early detection is controlled in a uniform way. If the underlying distribution of the data gradually changes over the course of a run, then a timely upcrossing of the martingale over these bounds is expected. The methods are developed for a setting when change sets in gradually in an incoming stream of data. Parameter selection for the bounds are based on simulations and methodological comparison is done with respect to existing advances. The algorithms presented here can be easily adapted to an online change-detection setting. Finally, we provide a detailed data analysis based on extensive measurements of several athletes and benchmark the fatigue detection results with the runners' individual feedback over the course of the data collection. Qualitative conclusions on the biomechanical profiles of the athletes can be made based on the shape of the martingale trajectories even in the absence of an upcrossing of the threshold.
Rupsa Basu, Katharina Proksch
2023-06-02T14:22:56Z
http://arxiv.org/abs/2306.01566v2
# Fatigue detection via sequential testing of biomechanical data using martingale statistic ###### Abstract Injuries to the knee joint are very common for long-distance and frequent runners, an issue which is often attributed to fatigue. We address the problem of fatigue detection from biomechanical data from different sources, consisting of lower extremity joint angles and ground reaction forces from running athletes with the goal of better understanding the impact of fatigue on the biomechanics of runners in general and on an individual level. This is done by sequentially testing for change in a datastream using a simple martingale test statistic. Time-uniform probabilistic martingale bounds are provided which are used as thresholds for the test statistic. Sharp bounds can be developed by a hybrid of a piece-wise linear- and a law of iterated logarithm- bound over all time regimes, where the probability of an early detection is controlled in a uniform way. If the underlying distribution of the data gradually changes over the course of a run, then a timely upcrossing of the martingale over these bounds is expected. The methods are developed for a setting when change sets in gradually in an incoming stream of data. Parameter selection for the bounds are based on simulations and methodological comparison is done with respect to existing advances. The algorithms presented here can be easily adapted to an online change-detection setting. Finally, we provide a detailed data analysis based on extensive measurements of several athletes and benchmark the fatigue detection results with the runners' individual feedback over the course of the data collection. Qualitative conclusions on the biomechanical profiles of the athletes can be made based on the shape of the martingale trajectories even in the absence of an upcrossing of the threshold. _Keywords:_ Time series, sequential & online testing, biomechanical data analysis, fatigue detection, change point detection, martingales, sports data ## 1 Introduction The past decades have seen an immense increase in data collection, processing, analysis, and storage concerning almost all aspects of science and technology. Many modern data sets are too large or complex to be analyzed by traditional approaches, in particular if the goal is to use as much of the information provided by the data as quickly as possible. Additionally, some applications require for the information to be processed real-time in which case new methods have to be devised with the capability to relay results 'on the fly'. And while the amount of data is ever increasing, attempts have to be constantly made to not only update pre-existing methods for analysing them but also to ensure that the computational time and memory usage needed are lesser than before. For example, with the advent of wearable devices in sports and other biomechanical applications, the RAM capabilities of such devices are often limited. Further, present day scientific research is multidisciplinary with subject areas under consideration being studied by members and teams of wide-ranging fields. Therefore, for applications of theoretical methods of one area, techniques applied should be well comprehensible by experts from other fields. In this paper, we focus on some crucial questions regarding statistical analysis of biomechanics of human movement, with applications particularly in data obtained from running athletes. 
The research question pursued in this work pertains to addressing fatigue detection in biomechanical sports data obtained from running athletes. In particular, fatigue changes the movement pattern of a runner, thereby increasing the risk of overloading or getting injured. Hence, the goal is to identify the changes undergone in the body when subjected to fatigue as a consequence of prolonged running. There has been an increasing trend to monitor biomechanical movement patterns due to wide application areas including but not restricted to engineering (Postema et al. (1997)), clinical (Lu and Chang (2012)), rehabilitation (Yoshioka et al. (2009)) and sports applications (Zandbergen et al. (2022)). Understanding general movement during sports performance may be done by making use of several devices, in particular activity trackers like smart watches, measuring parameters such as the heart rate of an athlete, or even more complex body-worn (inertial) sensors, or in special laboratory settings with force-plates integrated into treadmills and bio-marker sets on the walls (all of which are included in this work). Running has been an immensely popular recreational sport worldwide over the past decades, with a large number of people suffering from running-related injuries; see Scheerder et al. (2015), Buist et al. (2010) and Zandbergen et al. (2023) for further references. To address this problem, appropriate study of the biomechanical data obtained from lower extremity joints (most susceptible to injuries) is required. This paper presents a case study based on a data set collected within the multidisciplinary project _Sports, Data, and Interaction1_. Data collection was designed in such a manner that it is ensured that the athlete taking part in the study would surely but steadily get tired during the course of the run. Such a protocol to obtain fatigue data can be designed in a variety of ways, for example as in Apte et al. (2021). In our setting, the runners were asked for their average speed in an 8 km run and the speed of the experimental design for data collection would be adjusted to 103% of this value, ensuring fatigue. Devices used in data collection are smartwatches tracking the heart rate, IMUs (inertial measurement units) attached to the body of the athletes and multiple cameras and marker systems, tracking the movement of the runner. Additionally, the participants were asked to assess their perceived level of fatigue during the runs on a scale of 1 to 10 (the so-called Borg scale, Borg (1982)), which is particularly valuable as it provides information on the _perceived level of fatigue_ and defines a ground truth that we use to benchmark our data analysis. Each data set contains a tremendous amount of information, and we focus here on measurements that are easy to understand for a general audience. We mostly focus on the analysis of data of the lower extremity joint angles, which can be obtained from the measurements of the IMUs. These measurements show the progression of the angles over time, creating repetitive patterns from stride to stride (see Figure 1 for such exemplary patterns and a sketch of the angles measured).
Figure 1: Joint angle patterns for runner 3. (A) Hip, (B) Knee, (C) Ankle angles over time. For the analysis of the given data, in this paper we will provide statistical methodology based on martingale theory which is (i) particularly well suited to analyze the data at hand, (ii) distribution-free and simple to implement and to adjust to different data types/channels and a combination thereof, (iii) simple and hence easy to communicate, (iv) applicable in online monitoring contexts, also with limited computational capacities. Detection of fatigue is clearly related to detecting some form of change in the data, as will be further elaborated in the following sections. Given a precise data and change point model, specialized procedures may outperform the methods presented herein with respect to precision of the estimated change point, but this comes at the cost of points (ii)-(iv) above as well as the inherent robustness regarding the data distribution and comprehensively capturing a generic form of change (not limited to just changes in mean and variance, for example). By means of this approach, in our data example, we focus on questions such as (a) feature selection in terms of which types of measurements are suited for fatigue detection in general and (b) aggregation of measurements and features from multiple sources to improve detection. Finally, our methods yield the possibility to build personalised _risk profiles_ for individual runners summarising the effect of fatigue on the runner. This paper is structured as follows. We first introduce the basic model assumptions (Section 2), the mathematical preliminaries and theoretical results (both Section 3), where all technical proofs are deferred to the Appendix. The main theorems forming the basis of our methods are in Theorem 1 and Lemma 1, written more concisely in Algorithm 1 and Algorithm 2 applied in the analysis of the sports data presented herein. Further, the methods presented in Section 3 require certain parameters as input, which are chosen based on simulations for different case scenarios. These simulations are conducted in Section 6 and provide us with the required parametric inputs for computations in the application of our methods to the biomechanical sports data. Section 5 is devoted to our data analysis. There, we present all relevant details and our main findings and conclusions, with complementary results presented in the supplement. ## 2 Basic model assumptions In this paper, we assume that independent data, \(X_{i}\in\mathcal{X},i=1,2\ldots\) are obtained in a stream, where the endpoint (\(i=N\)), i.e., the sample size (\(N\)), is unknown in advance and \(\mathcal{X}\) is some (abstract) sampling space. We wish to set up a sequential hypothesis testing procedure, applicable to data point \(X_{i}\) as soon as it is available (i.e., without waiting for \(X_{i+1}\)). In this respect, our test procedure may be closed-ended (when all data points up to \(i=N\) are available) or open-ended (full sample not available). Let us assume the following _local test problems_ at time points \(i\), with null hypothesis (\(H_{i}\)) and alternative (\(K_{i}\)): \[H_{i}:X_{i}\quad\text{has property}\;\;\mathcal{P}\qquad\quad\text{vs.}\qquad\quad K_{i}:X_{i}\quad\text{does not have property }\mathcal{P} \tag{1}\] In our analysis of the movement of running athletes, it is observed that the movement patterns remain stable up to a certain point \(T_{0}\) in time.
More precisely, from time-point \(T=1\) to time-point \(T=T_{0}\), collected movement patterns follow the same average pattern. This corresponds to the data having a (desired) property \(\mathcal{P}\) up to point \(T_{0}\). After this point in time, which may correspond to the point in time where the runner begins to fatigue, the movement patterns change and deviate progressively from the initial patterns. This corresponds to the data no longer having property \(\mathcal{P}\). Following this example, we will make the following model assumption \[X_{1},\ldots,X_{T_{0}}\text{ have property }\mathcal{P}\qquad X_{i},\,i>T_{0}\quad\text{do not have property }\mathcal{P}. \tag{2}\] We further assume that a statistical _local level-\(\alpha\) test_ \(\Phi_{i}\) is available for each local hypothesis \(H_{i}\). The tests \(\Phi_{i}\) do not need to be identical. In particular, the tests may look at varying features of the data, which, in our application, may be local peak and trough values of biomechanical knee angles during the course of the run. We also note here that during a long run, a massive amount of data can be produced, not all of which can and needs to be used. Therefore, we are particularly interested in a method that is not memory intensive. Our approach is as follows. For each measurement \(X_{i}\), which in our case could be a single cycle from the lower extremity joint angles collected from the knee, hip and ankle, we perform the hypothesis test \(\Phi_{i}\) for test problem (1). Instead of keeping track of all the data, we will only keep track of the number of rejections and store data close to the current point in time (buffering). The number of rejections is monitored over time and the global null hypotheses at time points \(t\), \(\mathcal{H}_{t}:=\bigcap_{i=1}^{t}H_{i}\), are accepted until the total number of rejections exceeds a certain time-dependent bound. We will provide two different kinds of bounds that can be used for monitoring the number of rejections in our sequential test problem. Both bounds will be based on recent martingale theory (Balsubramani (2014); Balsubramani and Ramdas (2015); Howard et al. (2020)) discussed further in Section 4. ## 3 Mathematical Preliminaries **Definition 1**.: (Local test function \(\Phi_{i}\)) For the local (i.e., at fixed time points \(i\)) null hypotheses defined in Eq. (1), we define the local statistical hypothesis tests \(\Phi_{i}\) by \[\Phi_{i}:\begin{cases}\mathcal{X}\rightarrow\{0,1\},&\text{where}\quad i=1,2,\ldots\\ X_{i}\mapsto\Phi_{i}(X_{i})&\end{cases}\] such that it holds that \[\mathbb{P}_{H_{i}}\big{(}\Phi_{i}(X_{i})=1\big{)}=\alpha. \tag{3}\] **Definition 2**.: (Sequential setup) We define the _sequential hypotheses_ \[\mathcal{H}_{t}=\bigcap_{i=1}^{t}H_{i},\quad\text{and}\quad\mathcal{K}_{t}=\bigcup_{i=1}^{t}K_{i}, \tag{4}\] and we denote by \(\mathcal{H}_{\infty}\) the _overall null hypothesis_ \[\mathcal{H}_{\infty}=\bigcap_{i=1}^{\infty}H_{i}. \tag{5}\] For given sequential data points \(X_{i}\in\mathcal{X},i=1,2,\ldots\), for the test problem with null hypothesis (4) and local level \(\alpha\) as in Definition 1, we define the sequential test statistic \(M_{t}\) as \[M_{t}=M_{t}(\alpha)=\sum_{i=1}^{t}\mathds{1}\{\Phi_{i}=1\}-t\alpha,\quad\forall t\in\mathbb{N}, \tag{6}\] where each \(\Phi_{i}\) is any appropriately defined local level-\(\alpha\) test as in Eq. (3) for the local test problem in Eq. (1). **Remark 1**.: The sequential test statistic Eq. (6) has the following properties: 1.
The random variables \(\mathds{1}\{\Phi_{i}=1\},i=1,2,\ldots\) are independent and identically distributed, even though the local tests applied throughout may differ. 2. Because of (a), for any fixed \(\alpha\in(0,1)\), \(M_{t}\) is a centered martingale and \(M_{t}+t\alpha\sim\text{Bin }(t,\alpha)\) under the sequential null hypothesis, \(\mathcal{H}_{t}\) in Eq. (4). In Section 4, we provide two time-uniform critical thresholds \(\Gamma_{t}\) for \(M_{t}\) in Eq. (6) such that, \[\mathbb{P}_{\mathcal{H}_{\infty}}\bigg{(}\exists t\geq s_{0}:\ M _{t}>\Gamma_{t}\bigg{)}\leq\delta, \tag{7}\] for some initial time \(s_{0}\geq 1\), and a global level \(\delta\) for sequentially monitoring our data. The different time-uniform thresholds are applicable uniquely or in combination for different time regimes as well as specific applications, see Algorithm 1 and Algorithm 2. Notice that our procedure requires the choice of two significance levels \(\alpha,\delta\in(0,1)\). The choice of \(\delta\) determines the overall significance level of the sequential procedure and controls the occurrence of false positives, i.e., of early or false detections and is therefore naturally chosen to be small. The local level \(\alpha\), on the other hand, can be seen as a parameter that can, in principle, be set to an arbitrary value in \((0,1)\). Surprisingly perhaps, choosing \(\alpha\approx 0.2\) will result in favourable properties, whereas small values of \(\alpha\), such as the standard choice of \(\alpha=0.05\) lead to inferior performance, as will be demonstrated in our simulations Section 6. ### Related literature Our approach to sequentially monitoring complex data is related to quite a few relevant topics in the statistics literature. While an exhaustive review is infeasible and outside the scope of this work, the following closely related topics and literature are covered along with similarities and differences to this work. #### 3.1.1 Statistical Process Control Clearly, there is a close connection of our initial question to the topic of statistical process control (SPC, See, e.g, Montgomery (2020) for a general introduction to the topic or Qiu (2017) for a general discussion of SPC in the context of big data analysis). In SPC control charts are applied to monitor processes, often in the manufacturing industry, for change that may correspond to a production system being out of control. Our martingale statistic \(M_{t}(\alpha),\,t=1,2,\ldots\) can be considered as a type of control chart, where the control limits are provided by the time-uniform bounds \(\Gamma_{t}\). However, the performance of a control chart is usually judged in terms of the _run length distribution_ and its _average run length_ (ARL). Our viewpoint is slightly different: Using time-uniform bounds, we control the overall probability of having a false alarm uniformly in the sense of (7) for the following reason. One main issue in the monitoring of sports data is the typical run length under control required from a control chart. An hour of running provides data of \(10,000\) or more strides, these are numbers for which typical control charts are not designed. Moreover, the precise detection of change location is not the primary focus as, say, a delay of \(100\) strides corresponds to less than a minute of running and can still be considered a timely detection, whereas false alarms need to be ruled out. 
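As an aside to the control-chart analogy above, the following is a minimal sketch (not the authors' code) of how \(M_{t}(\alpha)\) from Eq. (6) can be monitored online with constant memory against a time-uniform threshold. The threshold used in the toy example is purely illustrative; in the paper it is given by the LIL bound of Theorem 1 and the piece-wise linear bound of Lemma 1 below.

```python
# Sketch: online monitoring of the rejection-count martingale M_t.
# Only the running number of local rejections is stored (O(1) memory).
from typing import Callable, Iterable, Optional

def monitor(decisions: Iterable[int], alpha: float,
            gamma: Callable[[int], float]) -> Optional[int]:
    """decisions yields the local test outcomes 1{Phi_i = 1} one at a time;
    returns the first time t with M_t > Gamma(t), or None if no alarm."""
    rejections = 0
    for t, d in enumerate(decisions, start=1):
        rejections += d
        m_t = rejections - t * alpha      # M_t = (# rejections up to t) - t*alpha
        if m_t > gamma(t):
            return t                      # estimated onset of change (e.g. fatigue)
    return None

# Toy usage: local level alpha = 0.2 as recommended in the paper; the stream
# rejects with probability 0.2 before a change at t = 500 and 0.6 afterwards.
# The linear threshold below is illustrative only.
import random
random.seed(1)
stream = (int(random.random() < (0.2 if t < 500 else 0.6)) for t in range(2000))
print(monitor(stream, alpha=0.2, gamma=lambda t: 10 + 0.05 * t))
```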
Finally, many frequently used, traditional control charts require strong parametric assumptions such as a normal distribution and cannot handle a change in the quantities that are monitored (Champ and Woodall, 1987; Shewart, 1931; Lowry et al., 1992; Crowder, 1989; Crosier, 1988). While non-parametric methodology based on ranks, order statistics, signs or general quantiles (see, e.g., Bhattacharya and Jr. (1981); Qiu and Hawkins (2001, 2003); Janacek and Meikle (1997); Amin et al. (1995); Chakraborti et al. (2004)) have been proposed in the literature as well, these approaches are not the right fit for our data analysis, as they were all setup following the usual vantage point assumed in process control. #### 3.1.2 Higher Criticism In the context of conducting many independent hypothesis tests with the goal of rejecting the joint null hypothesis, various versions of the test statistic, \[\text{HC}_{\alpha,t}=\frac{M_{t}(\alpha)}{\sqrt{\alpha(1-\alpha)}},\] are often referred to as Higher Criticism statistic in Donoho and Jin (2004) and have been proposed in the literature from as early as Brozek and Tiede (1952). The higher criticism statistic is of particular interest in the context of detecting sparse mixtures, ensuring optimality in a certain regime in a variety of different models (see, e.g., Donoho and Jin (2004); Arias-Castro and Ying (2019); Tony Cai et al. (2011) and the references therein). In an applied context, referred to as Sequential-Goodness-of-fit, (Carvajal-Rodriguez et al., 2009) propose to use the higher criticism statistic to identify the number of hypotheses that can be rejected in a multiple testing procedure providing some sort of relaxed multiplicity control compared to strong control of the FWER and the FDR (neither of which are ensured by this method). While related by using the same kind of statistic, a direct comparison to the methodology proposed in this paper is difficult, as the objectives and models imposed are fundamentally different. It should be noted that the higher criticism statistic is applied to _needle in a haystack_ type of problems, i.e., in unordered settings with few but strong signals. While we expect that using the ordered structure in (2) can be explicitly incorporated to improve the accuracy of the detection of change, this will certainly happen at the expense of simplicity of the proposed method and is therefore not pursued further. #### 3.1.3 Change Point Detection The literature on change point (CP) detection is vast and a full review is outside the ambit of this work. However, change point detection can be inferred as either detecting abrupt changes in the the data when a certain property changes (Kawahara and Sugiyama, 2009) or an accumulation of change over time due to external situations, example, remote sensing image data (Wen et al., 2021). A branch of CP- detection, known better as CP-estimation, models known changes in data and interprets the nature and plausibility of the estimates of change. The focus in these is where the existence of change is known and the degree and explanation of change is to be ascertained (Hido et al., 2008). Within CP detection, a well known sub-field is curated by CUSUM (cumulative sum)-like techniques. CUSUM methods are designed for an offline application and/or are computationally quite intense. For an introduction to the case of analysing CP retrospectively, an excellent review can be found in (Siegmund and Venkatraman, 1995). 
For online change detection in mean, there exist moving window change-detection algorithms, where the size of change is to be specified or all possible window sizes are considered; see (Kirch and Weber, 2018) for a survey article, where various CUSUM-detector statistics are considered, namely Page-CUSUM, MOSUM and mMOSUM. Alternatively, Otto and Breitung (2022) consider a backward-CUSUM test, extendable to an online monitoring scenario for \(t>T\). In such cases, asymptotics typically lead to a Wiener process due to a functional central limit theorem. Under various regularity and mixing conditions, quantiles may be approximated by appropriate bootstrap strategies. Further, parameter choices for these methods affect the kind of changes that these algorithms have the most power to detect. Another substantial shortcoming is the prerequisite of having knowledge of the pre-change and post-change mean (in Page-CUSUM). These techniques, therefore, have stringent parametric prerequisites, and misspecifications in these models result in unpredictable results that are hard to interpret. A more detailed discussion of these drawbacks is found in (Romano et al., 2021). (Romano et al., 2021) and (Oskiper and Poor, 2002) also serve as a reference for using a CUSUM-detector for online CP-detection, with the latter proposing to construct CUSUM matrices, however again with assumptions on the density functions of the underlying data. The intensive advances made by now in CUSUM-type detectors may translate to precise CP-estimates, but may not be so relevant in applications which do not demand the most precise estimates of change location but rather require controlling early detection and achieving robustness towards model misspecification while also being computationally less intense. Further, the methods herein only require change analysis over any chosen property of the dataset and are applicable to any choice of feature. A practitioner has full freedom in feature selection, which is relevant for our work on human movement data and is not restricted to traditional monitoring statistics like mean and variance. #### 3.1.4 Martingale bounds and the law of the iterated logarithm (LIL) Concentration bounds for martingales that are uniform over finite times were derived in Balsubramani (2014). The bounds presented therein take the form of the LIL upper bound for very large times \(t\) and provide a CLT-type bound below the LIL rate for small enough \(t\), and thus provide the theoretical basis for our work. The results have been applied in the context of sequential testing in Balsubramani and Ramdas (2015) for two-sample mean testing. As we are only interested in monitoring one particular, very simple martingale, we adjust and extend the proofs in (Balsubramani, 2014) to formulate a bound tailored more precisely towards \(M_{t}(\alpha)\). Since the LIL-based bounds obtained in this manner are not valid for small values of \(t\), we use results of (Howard et al., 2020) to derive time-uniform piece-wise linear bounds which can be applied right from the beginning of a monitoring procedure. The bounds can be hybridized to provide sharp bounds over all time regimes. ## 4 Theoretical results In this section, we will state two theoretical results which will be the basis for different sequential procedures proposed in this work. In Theorem 1, we provide the time-uniform bound \(\Gamma_{t}^{\text{LIL}}\) based on the law of the iterated logarithm, which is a refined version of a bound derived in Balsubramani (2014).
This theorem is applicable only from a certain initial time onwards, hereafter denoted by \(s_{0,\text{LIL}}\), i.e., in the (large-)time regime \(t\geq s_{0,\text{LIL}}\). **Theorem 1**.: Let, for \(\alpha\in(0,1/2]\), the random variables \(\mathbbm{1}\left\{\Phi_{i}=1\right\}\overset{\text{i.i.d.}}{\sim}\text{Bin}(1,\alpha)\) and \(M_{t}=M_{t}(\alpha)\) be as in Eq. (6). For any \(\delta\in(0,1/2]\), any \(k\in(0,1)\), and \(\kappa\) with \[\kappa\geq\frac{\frac{1}{2}+\frac{1}{20e^{8}}-0.4\alpha+\max\{\frac{1}{6e^{4}}-0.1\alpha,0\}}{1-\alpha}\quad\text{and}\quad s_{0,\text{LIL}}=\left\lceil\frac{e^{4}(1+\sqrt{k})^{2}}{\kappa\alpha(1-\alpha)}\log\left(\frac{1}{\delta}\right)\right\rceil,\] it holds with probability at least \(1-\delta\), for all \(t\geq s_{0,\text{LIL}}\) simultaneously, that \[M_{t}\leq\Gamma_{t}^{LIL},\] where \[\Gamma_{t}^{LIL}=\sqrt{\frac{4}{1-k}\kappa\alpha(1-\alpha)t\bigg(2\log\log\frac{2\kappa\alpha(1-\alpha)t}{(1-\sqrt{k})}+\log\frac{2}{\delta\log(\frac{1+\sqrt{k}}{1-\sqrt{k}})}\bigg)}\wedge\frac{2\kappa\alpha(1-\alpha)t}{e^{2}}\lor 1.\] The proof of the above theorem is quite technical and therefore deferred to Section 8.2 in the Appendix. **Remark 2**.: Note that the initial time \(s_{0,\mathrm{LIL}}\) in Theorem 1 can easily reach values of \(10,000\) and more when unfavourable parameter values, such as small values of \(\kappa\) or \(\alpha\), are chosen. On the other hand, a small value of \(\kappa\) decreases the bound \(\Gamma^{LIL}\) by a factor of \(\sqrt{\kappa}\), whereas too large a value renders the bound useless in practice. We will, as a compromise, accept a larger starting point \(s_{0,\mathrm{LIL}}\) and use another bound for time points \(1\leq t\leq s_{0,\mathrm{LIL}}\), i.e., in the time regime prior to \(s_{0,\mathrm{LIL}}\). To this end, Lemma 1 establishes a piece-wise linear function as a time-uniform bound with an initial time \(s_{0,\mathrm{Linear}}=1\). **Lemma 1**.: Fix \(p\in\mathbb{N}\cup\{\infty\}\), \(\alpha\in(0,1)\), \(\delta\in(0,1)\), \(\Delta_{1},\ldots,\Delta_{p}\in(0,1)\) such that \(\sum_{j=1}^{p}\Delta_{j}\leq\delta\). Define further a sequence of time points \(t_{1},\ldots,t_{p}\) such that \(t_{1}\leq t_{2}\leq\ldots\leq t_{p}\). Set \[\tau_{0} :=2\alpha\log(\frac{1}{\Delta_{1}}), \tag{8}\] \[\tau_{j} :=\sqrt{t_{j}t_{j+1}}\frac{\sqrt{\log(\frac{1}{\Delta_{j+1}})t_{j+1}}-\sqrt{\log(\frac{1}{\Delta_{j}})t_{j}}}{\sqrt{\log(\frac{1}{\Delta_{j}})t_{j+1}}-\sqrt{\log(\frac{1}{\Delta_{j+1}})t_{j}}},\quad j=1,\ldots,p. \tag{9}\] If \(\Delta_{j}\) and \(t_{j}\), \(j=1,\ldots,p\) are such that \(\tau_{j},j=1,\ldots,p-1\) satisfies \[t_{1}\leq\tau_{1}\leq t_{2}\leq\tau_{2}\leq\ldots\leq\tau_{p}\leq t_{p}, \tag{10}\] and the piece-wise linear function \(\Gamma^{\mathrm{Linear}}_{t}\) is defined by \[\Gamma^{\mathrm{Linear}}_{t}=\sum_{j=0}^{p-1}\sqrt{\frac{1}{8}\log(\frac{1}{\Delta_{j}})}\left(\frac{1}{\sqrt{t_{j}}}t+\sqrt{t_{j}}\right)\mathds{1}\{t\in[\tau_{j},\tau_{j+1})\}, \tag{11}\] the following holds \[\mathbb{P}_{\mathcal{H}_{\infty}}\left(\forall t\in\mathbb{N}\;M_{t}\leq\Gamma^{\mathrm{Linear}}_{t}\right)\geq 1-\delta. \tag{12}\] **Remark 3**.: (Properties of Lemma 1) 1. It is clear that using one linear bound in \(t\) to monitor the martingale \(M_{t}\) over time provides an undesirable (linear) asymptotic rate in \(t\). 
Nonetheless, locally, at the time points \(t_{j}\), we obtain the following bound \[\Gamma^{\mathrm{Linear}}_{t_{j}}=\sqrt{\frac{t_{j}}{2}\log\left(\frac{1}{\Delta_{j}}\right)}\quad\text{for}\quad j=1,\ldots,p,\] i.e., the correct asymptotic behaviour in \(t\), up to a logarithmic factor. Away from the time points \(t_{j}\), for \(p<\infty\), \(\Delta_{j}\equiv\frac{\delta}{2p}\) and \(t_{j+1}=t_{j}+\frac{s_{0,\mathrm{Linear}}}{p}\) for some initial value \(s_{0,\text{Linear}}>0\), the additional error for \(t_{j}\leq t\leq t_{j+1}\) is bounded from above by the difference \[\Gamma_{t_{j+1}}^{\mathrm{Linear}}-\Gamma_{t_{j}}^{\mathrm{Linear}}\approx\sqrt{\frac{s_{0,\text{Linear}}\log(\frac{2p}{\delta})}{8pj}},\] i.e., a moderate deviation, in particular if \(p\) and \(j\) assume large values. This issue will be discussed in more detail in Section 4.2. 2. Notice that each \(\tau_{j}\) defined in (9) reduces to \(\tau_{j}=\sqrt{t_{j}t_{j+1}}\) if \(\Delta_{j}\equiv\frac{\delta}{2p}\), and (10) is then satisfied automatically. ### Monitoring procedures The bounds presented in Section 4 provide multiple possibilities for defining thresholds for monitoring procedures for the martingale process. We provide two general approaches which can be readily used or adjusted according to information on the problem at hand. To this end, we put Theorem 1 and Lemma 1 to use and provide two algorithms for sequential analysis in our setting. ``` 1:Fix \(\alpha,\delta\in(0,1)\) and \(k\in(0,1)\) 2:\(\kappa\leftarrow\frac{\frac{1}{2}+\frac{1}{20e^{8}}-0.4\alpha+\max\{\frac{1}{6e^{4}}-0.1\alpha,0\}}{1-\alpha}\) 3:\(s_{0,\text{LIL}}\leftarrow\left\lceil\frac{e^{4}(1+\sqrt{k})^{2}}{\kappa\alpha(1-\alpha)}\log\left(\frac{1}{\delta}\right)\right\rceil.\) 4:\(\Gamma_{t}^{\text{LIL}}\leftarrow\sqrt{\frac{4}{1-k}\kappa\alpha(1-\alpha)t\bigg(2\log\log\frac{2\kappa\alpha(1-\alpha)t}{(1-\sqrt{k})}+\log\frac{2}{\delta\log(\frac{1+\sqrt{k}}{1-\sqrt{k}})}\bigg)}\) 5:for\(t=s_{0,\text{LIL}},s_{0,\text{LIL}}+1,\ldots\)do 6:if\(M_{t}>\Gamma_{t}^{\text{LIL}}\)then 7:return\(\widehat{T}_{0,\text{LIL}}=t\) 8:else set\(t=t+1\) 9:endif 10:endfor ``` **Algorithm 1** Sequential testing via LIL bounds Algorithm 1 provides a monitoring method for large time points. However, in cases when monitoring the martingale already before the time point \(s_{0,\text{LIL}}\) is desired, a hybrid approach is proposed in Algorithm 2, which combines the piece-wise linear bound from Lemma 1 and the LIL bound from Theorem 1. In this approach, the algorithm splits the overall level \(\delta\) of the sequential procedure equally between early and later times. Before the time point \(s_{0,\text{LIL}}\), \(p\) linear bounds are used whose construction is based on equidistantly spaced time points \(\tau_{0}=t_{1}\leq t_{2}\leq\ldots\leq t_{p}=s_{0,\text{LIL}}\). ``` 1:Fix \(\alpha,\delta\in(0,1)\), \(p\in\mathbb{N}\) and \(k\in(0,1)\) 2:Choose \(\Delta_{1},\ldots,\Delta_{p}\in(0,1)\) such that \(\sum_{j=1}^{p}\Delta_{j}\leq\delta\). 
3:\(\tau_{0}=t_{1}\gets 2\alpha\log(\frac{1}{\Delta_{1}})\) 4:\(\kappa\leftarrow\frac{\frac{1}{2}+\frac{1}{20e^{8}}-0.4\alpha+\max\{\frac{1}{6e^{4}}-0.1\alpha,0\}}{1-\alpha}\) 5:\(s_{0,\text{LIL}}\leftarrow\left\lceil\frac{e^{4}(1+\sqrt{k})^{2}}{\kappa\alpha(1-\alpha)}\log\left(\frac{1}{\delta}\right)\right\rceil.\) 6:\(t_{j}\gets t_{1}+(j-1)/(p-1)(s_{0,\text{LIL}}-t_{1})\) 7:\(\tau_{j}\leftarrow\sqrt{t_{j}t_{j+1}}\frac{\sqrt{\log(\frac{1}{\Delta_{j+1}})t_{j+1}}-\sqrt{\log(\frac{1}{\Delta_{j}})t_{j}}}{\sqrt{\log(\frac{1}{\Delta_{j}})t_{j+1}}-\sqrt{\log(\frac{1}{\Delta_{j+1}})t_{j}}},\quad j=1,\ldots,p-1.\) 8:\(\Gamma_{t}^{\text{Linear}}\leftarrow\sqrt{\frac{1}{8}\log(\frac{1}{\Delta_{j}})}\left(\frac{1}{\sqrt{t_{j}}}t+\sqrt{t_{j}}\right),\quad t\in[\tau_{j},\tau_{j+1}),\) 9:\(\widehat{T}_{0,\text{hybrid}}\gets 0\) 10:for\(t=t_{1},\ldots,s_{0,\text{LIL}}\)do 11:if\(M_{t}>\Gamma_{t}^{\text{Linear}}\)then 12:return\(\widehat{T}_{0,\text{hybrid}}=t\) 13:else set\(t=t+1\) 14:endif 15:endfor 16:if\(\widehat{T}_{0,\text{hybrid}}=0\)then 17:\(\Gamma_{t}^{\text{LIL}}\leftarrow\sqrt{\frac{4}{1-k}\kappa\alpha(1-\alpha)t\bigg(2\log\log\frac{2\kappa\alpha(1-\alpha)t}{(1-\sqrt{k})}+\log\frac{2}{\delta\log(\frac{1+\sqrt{k}}{1-\sqrt{k}})}\bigg)}\) 18:for\(t=s_{0,\text{LIL}},s_{0,\text{LIL}}+1,\ldots\)do 19:if\(M_{t}>\Gamma_{t}^{LIL}\)then 20:return\(\widehat{T}_{0,\text{hybrid}}=t\) 21:else set\(t=t+1\) 22:endif 23:endfor 24:endif ``` **Algorithm 2** Sequential testing (Hybrid Algorithm) ### Properties of the bounds We will now discuss the properties of the two bounds and provide a comparison. Clearly, the piece-wise linear bounds are sharpest for \(\alpha=\frac{1}{2}\), whereas the LIL bound scales with \(\alpha\). While \(\alpha=\frac{1}{2}\) may not be the most obvious choice for the parameter \(\alpha\), the procedures remain valid, with the (global) level of the sequential procedures given by \(\delta\) (and not \(\alpha\)). Therefore, in this section, we consider the case \(\alpha=\frac{1}{2}\). To obtain a better understanding of the methods presented here, we will compute each bound at the time points \(t_{j}=j\cdot K\). Then, the bounds can be rewritten in the following form: \[\phi^{-1}\left(1-\gamma_{j}\right)\sqrt{\alpha(1-\alpha)t_{j}},\] where \(\phi\) denotes the cdf of the standard normal distribution. This allows for a direct comparison to the asymptotic pointwise bound at time point \(t_{j}\) at level \(\delta\) obtained via the CLT, that is, \[\phi^{-1}\left(1-\delta\right)\sqrt{\alpha(1-\alpha)t_{j}}.\] At the points \(t_{j}\), the linear bound is given by \[\Gamma_{t_{j}}^{\text{Linear}}=\sqrt{\frac{1}{2}t_{j}\log\left(\frac{1}{\Delta_{j}}\right)}.\] Using Mills' ratio for the normal distribution, we approximate \[\sqrt{2\log(1/\Delta_{j})}\approx\phi^{-1}\left(1-\frac{\Delta_{j}}{\sqrt{2\pi}\sqrt{2\log(\frac{1}{\Delta_{j}})}}\right)\] and find \[\Gamma_{t_{j}}^{\text{Linear}}\approx\phi^{-1}\left(1-\frac{\Delta_{j}}{\sqrt{2\pi}\sqrt{2\log(\frac{1}{\Delta_{j}})}}\right)\sqrt{\frac{1}{4}t_{j}},\] i.e., a moderate adjustment of the level for reasonably chosen \(\Delta_{j}\). 
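To make the monitoring procedures concrete, the following is a minimal NumPy sketch (all function names are ours) of the thresholds in Theorem 1 and Lemma 1 and of the hybrid scheme of Algorithm 2, under the following simplifying assumptions: \(\kappa\) is set to the smallest admissible value from Theorem 1, the linear pieces use the equal split \(\Delta_{j}\equiv\frac{\delta}{2p}\), half of the level \(\delta\) is spent on each time regime, and the active linear piece at time \(t\) is the anchor point \(t_{j}\) closest to \(t\) on the logarithmic scale, which reproduces the geometric switch points \(\tau_{j}=\sqrt{t_{j}t_{j+1}}\).

```python
import numpy as np

def kappa_s0(alpha, delta, k):
    """Smallest admissible kappa and initial time s_{0,LIL} from Theorem 1."""
    kappa = (0.5 + 1/(20*np.e**8) - 0.4*alpha
             + max(1/(6*np.e**4) - 0.1*alpha, 0.0)) / (1 - alpha)
    s0 = int(np.ceil(np.e**4 * (1 + np.sqrt(k))**2
                     / (kappa * alpha * (1 - alpha)) * np.log(1/delta)))
    return kappa, s0

def gamma_lil(t, alpha, delta, k):
    """LIL threshold of Theorem 1; only valid for t >= s_{0,LIL}."""
    kappa, _ = kappa_s0(alpha, delta, k)
    c = kappa * alpha * (1 - alpha) * t
    root = np.sqrt(4/(1-k) * c * (2*np.log(np.log(2*c/(1 - np.sqrt(k))))
                                  + np.log(2/(delta*np.log((1+np.sqrt(k))/(1-np.sqrt(k)))))))
    return max(min(root, 2*c/np.e**2), 1.0)

def gamma_linear(t, t_grid, deltas):
    """Piece-wise linear threshold of Lemma 1 (equal Delta_j case)."""
    j = int(np.argmin(np.abs(np.log(t_grid) - np.log(t))))  # nearest anchor ~ tau_j rule
    return np.sqrt(np.log(1/deltas[j])/8) * (t/np.sqrt(t_grid[j]) + np.sqrt(t_grid[j]))

def hybrid_detect(indicators, alpha=0.5, delta=0.1, k=0.1, p=10):
    """Algorithm 2 sketch: monitor M_t = S_t - alpha*t against the linear bounds
    before s_{0,LIL} and against the LIL bound afterwards."""
    _, s0 = kappa_s0(alpha, delta, k)
    t_grid = np.linspace(1.0, s0, p)     # equidistant anchor points t_1, ..., t_p
    deltas = np.full(p, delta/(2*p))     # Delta_j = delta / (2p)
    M = 0.0
    for t, x in enumerate(indicators, start=1):
        M += x - alpha                   # x is the 0/1 exceedance indicator
        bound = gamma_linear(t, t_grid, deltas) if t < s0 else gamma_lil(t, alpha, delta/2, k)
        if M > bound:
            return t                     # detection time
    return None                          # no change detected
```

Splitting \(\delta\) equally between the two regimes is one possible reading of Algorithm 2; other splits, and a user-chosen \(\kappa\) such as \(\kappa=1-k\) used below, are equally admissible.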
Setting \(\alpha=\frac{1}{2}\), \(k=0.1\) and \(\kappa=1-k\) yields a relatively moderate starting point of \(s_{0,\text{LIL}}=615\) and a bound \[\Gamma_{t}^{\text{LIL}}\leq\sqrt{t\Big(2\log(\log(0.66t))+\log\Big(\frac{3.06}{\delta}\Big)\Big)}.\] Using \(t_{j}=j\cdot K\) we find that the LIL bound is of the following form \[\Gamma_{t_{j}}^{\text{LIL}}\leq\phi^{-1}\left(1-\frac{\delta}{C\log(j)^{4}\sqrt{\log(\log(j))}}\right)\sqrt{\frac{1}{4}t_{j}},\] for a positive constant \(C>0\). Due to the requirement that \(\sum_{j=1}^{\infty}\Delta_{j}\leq\delta\), it is clear that for sufficiently large values of \(t\) and hence \(j\), we have \[\frac{\Delta_{j}}{\sqrt{2\pi}\sqrt{2\log(\frac{1}{\Delta_{j}})}}\ll\frac{\delta}{C\log(j)^{4}\sqrt{\log(\log(j))}},\] so that the effective local level of the linear bound is eventually much smaller than that of the LIL bound. Therefore, asymptotically, the bound \(\Gamma_{t_{j}}^{\text{LIL}}\) is superior to the linear bound. However, as discussed, this only concerns rather large values of \(t\). ## 5 Case study: Performance analysis of runners As discussed before, we will focus on the analysis of the performance of athletes during training, in particular in running, which has gained considerable attention in the biomechanical literature in recent years (see, e.g., Saarakkala et al. (2020) for a review on the use of sensor technology in this context). Our analysis provides a step towards our ultimate goal of giving feedback or intervening via a device or application while data is collected, analysed and results are conveyed back in real time. The aim is therefore to detect changes in the movement patterns due to fatigue before the runner notices any tiredness. However, a moderate delay in fatigue detection is acceptable. Running entails a higher risk of injuries in the lower extremity joints, i.e., the hip, knee and ankle joints. Further, it is known that long-distance and long-duration runs are associated with injuries due to the fatigue of the runner (Tam et al., 2017), specifically in the knees (see Van Gent et al. (2007); Nielsen et al. (2012)), and tracking the onset of changes which may lead to such injuries via the joint angles can help to stop or alter training appropriately. It is therefore of particular interest to study how running kinematics change due to running-induced fatigue (see Apte et al. (2021) and Zandbergen et al. (2023) as well as references therein). This is done by studying biomechanical data obtained using inertial measurement units (IMUs), activity trackers like FitBits, as well as recorded video data. For the application of the methods presented here, we focus on data from running, obtained using a fatigue protocol. In other words, the data collection is designed in such a manner that the athlete taking part in the study will steadily become fatigued during the course of the run. As discussed before, the fatigue protocol for data collection may be designed in different ways, see Apte et al. (2021). In our case, the runners were asked for their average speed in an 8 km run and the speed of the treadmill was then adjusted to 103% of this speed for the indoor run. In an outdoor setting this was slightly more complicated, but it was achieved by having a cyclist ride a bike parallel to the runner at the required 103% of the average speed of the runner. Fatigue has different effects on the knee biomechanics of the runner (Harato et al., 2021). 
This is because all lower extremity joints, namely the hip, knee and ankle joints, have to somewhat _adjust_ to endure the movement even in a fatigued state and are therefore the joints undergoing the maximum change during the course of the run. The movement of these joints is recorded via the joint angles, which are repeated patterns of the form shown in Figure 1. In Sections 5.4, 5.5, 5.8, and 5.9 we study these joint angles in detail and apply different versions of the sequential testing methods given in Algorithm 1 and Algorithm 2 for change point detection. To complement this study, we also analyze data from additional sources as follows. In Section 5.6, we look at the ground reaction forces exerted by the foot, obtained from the force plates embedded in the treadmill; this is only available for the indoor setting of the run. We only look at the peak values of this data, which are effectively recorded when the foot takes off from the ground, and we further incorporate the situation of missing values, i.e., when some of these peak values remain undetected due to external factors, and the effect this may have on change point detection. Finally, in Section 5.7, we look at the contact time of the left and right foot with the ground during the course of the run. The results from the complete analysis are given in Section 5.10. ### Source devices of data As mentioned before, we obtain this dataset by conducting running trials with a fatigue protocol. The devices used in data collection are (i) inertial sensors produced by Xsens (Xsens MVN link sensors, sampling at 240 Hz, see Schepers et al. (2018)) attached to the body of the athletes, and (ii) a dual-belt treadmill with an integrated three-dimensional (3D) force plate (custom Y-mill, Culemborg, The Netherlands), from which force plate data were obtained while athletes ran on one of its belts. From the IMUs, one can obtain the angles of the hips, knees and ankles (lower extremity joint angles) in three dimensions. From a biomechanical point of view, it makes sense to look at the lower extremity joint angles in the sagittal plane (the plane dividing the body into left and right halves). From the force plates, the ground reaction forces that are associated with the strike and take-off of the foot are obtained. Note further that for our data collection experiments, both indoor and outdoor, we had 6 healthy male and female runners between the ages of 22 and 39, with no reported prior injuries to the knee joint. Further, the subjects under study in this data analysis have between 1 and 11 years of experience in running. All subjects also reported to be regularly running, at least a couple of times a week, between 10 and 60 km per week as their current training pattern. ### Modelling the data In this section, we model the data using the example of the progression of the joint angles obtained from the runners. Figure 1 shows a few exemplary cycles of such data for each of the three joints (hip, knee and ankle) under consideration. For each run, several thousand such stride curves, varying around the respective mean curves, are available for each joint. In such biomechanical data, one stride is basically one cycle, i.e., one functional observation from a functional data set. Therefore, in the following, we implicitly impose a simple functional signal-plus-noise model to describe the joint angle data mathematically per stride. 
Let \(Y_{i}^{k}(t)\) denote the \(i\)-th stride of runner \(k\), \(i=1,\ldots,T_{k}\), \(k=0,\ldots,5\), where \[Y_{i}^{k}(t)=\mu^{k}(t)+\epsilon_{i}^{k}(t)\qquad\text{for all }t\in[0,1],\;i=1,\ldots,T_{k},\;k=0,\ldots,5, \tag{13}\] where \(\mu^{k}(t)\) is the mean function of the \(k\)-th runner and the \(\epsilon_{i}^{k}(t)\) are zero-mean stochastic processes independent of the _true_ signal \(\mu^{k}(t)\). Such problems are considered as a pre-processing step (see Hormann and Jammoul (2022)). After this pre-processing, there is a tremendous amount of appealing methods from functional data analysis that can be applied to such data. Excellent surveys of functional data analysis (FDA) methods are found in Wang et al. (2016), Hsing and Eubank (2015) and Horvath and Kokoszka (2012), while a recent advanced work also incorporates the relevant size of the change in the Banach space of continuous functions (see Dette et al. (2020)). However, to avoid technicalities and to demonstrate the appealing simplicity of our approach, even for such complex data, we will refrain from pursuing this direction in this example and instead introduce a simple pipeline for data analysis in the following section, which also works for other data models than (13). ### Typical procedure for the data analysis To put our monitoring approach to use, we perform the following steps: _(1)_ Use the first \(m_{1}\) data points (or prior data/information) to estimate a reference stride profile for each of the joints in a rested state of the runner. _(2)_ Use the subsequent \(m_{2}\) data points (strides) to compute the distances \(D_{1},\ldots,D_{m_{2}}\) of each stride to the reference pattern. (We use the \(L^{2}\)-distance.) _(3)_ From the \(m_{2}\) computed distances, estimate the \((1-\alpha)\)-quantile \(\hat{q}_{1-\alpha}\), e.g., the median for a choice of \(\alpha=0.5\). _(4)_ Start monitoring afterwards: Set \(S_{0}:=0\) and, for stride \(i\) after the initial \(m_{1}+m_{2}\) strides: * Add 0 if \(D_{m_{2}+i}<\hat{q}_{1-\alpha}\): \(S_{i}=S_{i-1}\) * Add 1 else: \(S_{i}=S_{i-1}+1\) We look for changes in the data using the martingales \(M_{i}(\alpha)=S_{i}-\alpha i\) and the sequential monitoring scheme provided in Algorithm 2. If not mentioned otherwise, we set the local level to \(\alpha=0.22\), the global level to \(\delta=0.1\) and \(p=10\) (the number of partitions for the linear bounds, see Lemma 1), and compute the martingale \(M_{i}(\alpha)=S_{i}-\alpha i\) by taking the \((1-\alpha)\)-quantile over the initial third of the dataset. Further, since the length of the dataset is not too large, we mostly make use of the linear bounds as a threshold for the monitoring scheme. Finally, to convey our result of fatigue detection, we give the percentage of the run at which a _change_ in the data is detected. This is because every runner ran for a different length of time, i.e., up to the point in time when they said they could run no more. Presenting the results as a percentage of the full run allows us to standardise the comparison between runners. Further, note that in all of the following plots, the \(x\)-axis always corresponds to the sequence of strides during the course of the run, unless otherwise stated. 
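As a minimal illustration of steps (1)-(4), the sketch below (function and variable names are ours; it assumes the strides have already been segmented and resampled to a common length) builds the reference pattern, the \(L^{2}\)-distances, the empirical quantile and the martingale, and monitors the latter against a supplied threshold function such as the hybrid bound sketched after Lemma 1.

```python
import numpy as np

def monitor_run(strides, m1, m2, alpha=0.22, threshold=lambda t: np.inf):
    """strides: array of shape (n_strides, n_samples), one resampled stride per row.
    Returns the martingale path and the stride index of the first up-crossing
    of the threshold (None if the threshold is never crossed)."""
    reference = strides[:m1].mean(axis=0)                      # (1) reference pattern, e.g. pointwise mean
    dists = np.linalg.norm(strides[m1:] - reference, axis=1)   # (2) L2-distance of each later stride
    q_hat = np.quantile(dists[:m2], 1 - alpha)                 # (3) empirical (1-alpha)-quantile
    exceed = (dists[m2:] >= q_hat).astype(float)               # (4) 0/1 exceedance indicators
    martingale = np.cumsum(exceed - alpha)                     # M_t = S_t - alpha * t
    for t, m in enumerate(martingale, start=1):
        if m > threshold(t):
            return martingale, m1 + m2 + t                     # detection expressed as a stride index
    return martingale, None
```

For the joint-angle data one would call this once per joint with \(\alpha=0.22\), \(\delta=0.1\) and \(p=10\), and report the detection index as a percentage of the total number of strides in the run.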
### Analysis 0: Joint angle data (Single runner, pilot study) As a first proof of concept, in this section we use the hip, knee and ankle joint angles recorded over time during the course of the fatiguing protocol, based on a data set that was collected prior to the data analyzed in the following sections and was part of a pilot study for formalising the protocol of the subsequent data collection from multiple runners. As shown in Fig. 2, a _profile_ of a single runner can be obtained in terms of the martingale statistic from the hip, knee and ankle angles. This is provided along with the point of change given by the up-crossing of the martingale over the linear bounds. We present here the results for \(\alpha=0.22\), which was suggested by our simulations as the most stable parameter choice for which change was detected relatively early, and for \(\alpha=0.05\), the default choice for \(\alpha\). A couple of first interesting findings are the following: (a) The martingale statistic of the right ankle angle is significantly different from the rest of the runner profile, as it does not clearly show a steady deviation from the initial reference movement pattern. This may be connected to the known fact that fatigue impacts the lower joint angles such that increased asymmetry in the load of the hip, knee and ankle joints may be seen (see Gao et al. (2022)). (b) The local level \(\alpha\) plays a significant role in fatigue detection: As expected, change is detected sooner for \(\alpha=0.22\), and the detected change points within joints (i.e., per joint) and between joints are more consistent in this case, with a CP at approximately 40% of the run. Figure 2: Single runner profile for lower joint angles. Average curves (left col.) and the corresponding martingale statistic for \(\alpha=0.22\) (middle col.) and for \(\alpha=0.05\) (right col.). Qualitatively, the progression of the martingale trajectories is such that before the onset of a steep increase indicating a change in movement pattern, there is an obvious decrease, indicating a very stable phase preceding the onset of fatigue, which is consistent with biomechanical modelling of typical phases during a fatiguing run. The steep increase, on the other hand, suggests that fatigue does in fact manifest in altered movement patterns and therefore in larger deviations from the initial reference pattern. This pattern of the martingale trajectory is also seen and discussed in the following analyses for multiple runners and for features other than joint angles. ### Analysis I: Joint angle data (Multiple runners, follow-up study) A runner profile as in Fig. 2 may be obtained for a group of runners by further applying this methodology across all the runners in both indoor and outdoor settings. This is shown in Fig. 3 and Fig. 9 (for indoors and outdoors), respectively. The percentage of the run at which the change is detected by the martingale statistic is further noted in Table 10 and Table 11 (indoor and outdoor), respectively. Remarkably, we observe for most runners and joint angles the characteristic _check-mark-like form_ of the martingale trajectories. For most runners, however, there do appear to be asymmetries per joint. Such individual characteristics of fatiguing are of independent interest in a biomechanical context, as they might be used to provide _personalized feedback_ and to create _individual risk profiles_ for a runner, paving the way for deeper analyses in case of stark asymmetries. 
### Analysis II: Using specific features / missing data Using the data from the force plates integrated into the treadmill, we only monitor the peak values at the maximum of each curve. These are the values corresponding to the take-off of the foot during the run. Further, we mimic the possibility of missing values in the data. In practice, this would help in case a measurement system does not detect a few signals due to some external influence. In Fig. 4, we use the force plate data from the single runner from the pilot study, shown in Fig. 2. As can be seen, the force plate data also detects changes at around 40% of the run, as seen before for the joint angles. As this data is not available for multiple runners, we only suggest this as a potential application and to point out that the methodology can also be applied to a wide range of other features. ### Analysis III: Contact time of foot In this section, we analyse the time duration of contact of the left and right foot with the ground during the course of the indoor run. In Fig. 10 of the Appendix, we visualise the data for the left and right foot of the single runner (pilot study) along with the corresponding martingales. In Fig. 5, the analysis for 4 runners is shown. Quantitative analysis for multiple runners is hard due to the fact that the data is only available in a laboratory setting for a few runners. Evidently, martingale up-crossings over the bounds are rare. However, as before, qualitative analysis shows the sudden onset of a steep increase in all trajectories, reiterating that information on the change is contained in the recorded contact times. More clearly than in all the other analyses, we see that initially an adverse effect plays a role: there seems to be a distinct stable phase before the onset of fatigue, resulting again in a _check-mark-shaped_ martingale trajectory. In order to capture such a stable phase as well, a segmentation using first a lower martingale bound followed by the given upper bound could be applied, resulting in more than one CP, as also discussed later in Section 5.10. Further, the current literature also records conflicting results in terms of contact times: (a) a decrease during fatigue (Morin et al., 2011), while (b) another study (Morin et al., 2011) records no changes, and (c) Apte et al. (2021) record a significant increase. Due to these difficulties, we propose the possibility of this application without further discussion. ### Analysis IV: Pooling features from left and right joint angles In this section we consider aggregated data from the left and right joints combined. Using the notation introduced in the box in Section 5.3, we take the distances \(D^{l}_{ij},D^{r}_{ij},i=1,\ldots,m_{2}\), \(j\in\{\text{hip, knee, ankle}\}\) from the left and the right sides of the body (superscripts \(l,r\)) and first use \(D^{\text{max}}_{j}=\max\{D^{l}_{ij},D^{r}_{ij}\}\) to construct the martingale at local level \(\alpha\). The results of such a feature selection are shown in the corresponding plots in Fig. 6, and the points of up-crossing the bound (in percentages) are provided in Table 12 in the supplement. Similarly to the previous case, we also look at \(D_{j}^{\min}=\min\{D_{ij}^{l},D_{ij}^{r}\}\) and \(D_{j}^{\rm{ave}}=\frac{1}{2}(D_{ij}^{l}+D_{ij}^{r})\). The corresponding results are shown in Table 14 and Table 13, respectively. Figure 3: Indoor fatigue profile of all runners. Analysis for local level \(\alpha=0.22\). 
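A hedged sketch of this pooling step (array and function names are ours): the left and right distance series of a joint are combined element-wise before the quantile and the martingale are computed exactly as in Section 5.3.

```python
import numpy as np

def pool_distances(d_left, d_right, how="max"):
    """Combine left/right per-stride distances of one joint before monitoring."""
    d_left, d_right = np.asarray(d_left, float), np.asarray(d_right, float)
    if how == "max":
        return np.maximum(d_left, d_right)   # D^max
    if how == "min":
        return np.minimum(d_left, d_right)   # D^min
    return 0.5 * (d_left + d_right)          # D^ave
```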
Further, as a comparison, the same is done for the outdoor data setting in the case of \(D_{j}^{\max}\) (see Table 15). Figure 4: Force plate data. (top left) Data from the full run with peaks marked by red dots, (top right) zoomed-in plot, (bottom left) martingale and corresponding bounds, (bottom right) change detected by the dashed red line with respect to the data from the full run. Figure 5: Martingales for contact times of the foot with the ground. The underlying data are visualised in Fig. 10. An inspection of the tables suggests the use of \(D_{j}^{\min}\) to avoid too early detection due to short instabilities during the run. The maximum and mean produce similar results in general, while \(D_{j}^{\max}\) produces the most stable results in cases where the left and right side are unbalanced. ### Analysis V: Benchmarking with CUSUM and Borg Scale In order to benchmark our results of detecting fatigue, we use the response of the runner during the course of the run, recorded on the Borg scale, a scale from 1 to 10 ranging from relaxed to total exhaustion. For a discussion of the Borg scale, see (Borg, 1982). In Table 1, the recorded Borg scale values of the runners from our dataset are given. Further, a methodological benchmarking analysis for this particular application was also performed using a CUSUM-like change point detector (see the statistic in Eq. (16)), the results of which are given in Table 16, Table 17 and Table 18 in the Appendix. It is to be noted that, using a CUSUM detector on the sports data, change points appear to be detected either too early or too late. Figure 7 shows a summary of the maximum-pooled values for all angles per subject for the indoor data along with the Borg scale values. It can be seen that up to at least a reported/_perceived_ level of fatigue of 4 (the transition from light activity to moderate activity) a stable phase can be consistently observed for all the runners. Furthermore, change is typically detected while moving from moderate to vigorous activity (the latter starting at 7). Therefore, we conclude that the martingale trajectory captures the progression of the perceived level of fatigue consistently. For data measured in a controlled (indoor) environment, results are extremely stable and change points are detected consistently at the same point across features (see Figure 2). This is even the case if much noisier data is used for the analysis (Analysis II, Figure 4). Further, it is also observed that a (local) level of \(\alpha=0.22\) produces good results simultaneously for all data, especially the joint angle and force plate data, even in non-standard situations like missing data, studied in Section 5.6. Further, as seen in the case of aggregated data in Fig. 6 (corresponding CPs reported in Table 12 in the supplement), CP-detection is seen to have a certain alignment within the joint angles, especially in the case of the ankle and hip joint. Qualitatively, there are deviations, for example in the case of Subject 0 (ankle) and Subjects 0 and 4 (knee), where the statistic is seen to stabilize with respect to the null hypothesis. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c||}{**Indoor**} & \multicolumn{6}{c|}{**Outdoor**} \\ \hline \% of run & Subject 0 & Subject 1 & Subject 2 & Subject 3 & Subject 4 & Subject 5 & Subject 0 & Subject 1 & Subject 2 & Subject 3 & Subject 4 & Subject 5 \\ \hline 0.0 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 1 \\ \hline 25.0 & 4 & 5 & 3 & 6 & 7 & 7 & 4 & 5 & 3 & 4 & 4 & 7 \\ \hline 50.0 & 6 & 6 & 6 & 7 & 8 & 7 & 6 & 7 & 6 & 7 & 6 & 8 \\ \hline 75.0 & 7 & 8 & 7 & 8 & 8 & 9 & 7 & 8 & 8 & 7 & 7 & 9 \\ \hline 100.0 & 8 & 9 & 9 & 9 & 9 & 10 & 8 & 9 & 10 & 9 & 8 & 10 \\ \hline \end{tabular} \end{table} Table 1: Reported Borg scale values of all subjects, indoor and outdoor, at the corresponding percentage of the run. Figure 7: Joint angle data analysis (pooled for the left and right sides and taking the maximum as in Section 5.8) from the indoor setting per subject, with the corresponding Borg scales. It should also be noted that even though an up-crossing of the martingale may not always take place (see Fig. 5), the martingale process contains information in its trajectory, which can be used to analyze a run retrospectively. In this context, it is worth mentioning that the trajectories of all the runners suggest a certain stability and then show a steady increase (with a CP at around 40% of the run), from which an inference can be made that most local tests at these (later) time points were rejected. We remark that qualitative information can be deciphered from the use of the sequential martingale statistic, yielding the positions of increase, where it follows that incoming data are progressively deviating from the reference. Further, as a proof of concept for the use of joint angle data for the purpose of fatigue detection, from Figures 3 and 9 it can be safely concluded that all runners have at least one joint suitable for this task. This is supplemented in Fig. 7 with a subject-wise analysis benchmarked by the corresponding Borg scale (indoor data), showing that the behaviour of the martingale trajectory is in line with the reported _(perceived)_ level of fatigue. When monitoring joint angles, our analysis showed a strong variability in symmetry within individuals, which can be reduced by a combination of different features in cases where the main focus is on fatigue detection. For simplicity and as a proof of concept, in this study we only considered the aggregation of left and right joints, as these are automatically on the same scale. However, the aggregation of a larger number of features promises even better results, but needs to be carefully designed and studied, potentially in future research. Our data analysis shows that the use of the martingale statistic is well suited for the analysis of such data. Future research and an extension towards a quantitative analysis of the whole martingale trajectory, along with the development of an adjusted procedure incorporating a lower bound of the martingale (which follows by symmetry), are foreseen to be advantageous for multiple change detection. In our example, this would be particularly beneficial for the definition of different phases in a run with respect to levels of fatigue. From an application point of view, such a refined approach would allow a practitioner to track subtle changes, where even slight departures from reference movements during the course of the activity are monitored. This is expected to be particularly helpful in human movement analysis, not just in fatigue detection and injury prevention but also in rehabilitation. 
**Ethics statement:** The local ethics committee (Ethical Committee EEMCS (EC-CIS), University of Twente, ref.: RP 2022-20) approved the experimental protocol of this study. **Acknowledgements:** The author thanks RUB for their hospitality where a part of the research was carried out. Both authors thank all members of the _Sports, Data, and Interaction project_, in particular Dennis Reidsma and Jasper Reenalda, for helpful discussions and valuable feedback. Further, the authors report there are no competing interests to declare.
2302.13495
LMSeg: Language-guided Multi-dataset Segmentation
It's a meaningful and attractive topic to build a general and inclusive segmentation model that can recognize more categories in various scenarios. A straightforward way is to combine the existing fragmented segmentation datasets and train a multi-dataset network. However, there are two major issues with multi-dataset segmentation: (1) the inconsistent taxonomy demands manual reconciliation to construct a unified taxonomy; (2) the inflexible one-hot common taxonomy causes time-consuming model retraining and defective supervision of unlabeled categories. In this paper, we investigate the multi-dataset segmentation and propose a scalable Language-guided Multi-dataset Segmentation framework, dubbed LMSeg, which supports both semantic and panoptic segmentation. Specifically, we introduce a pre-trained text encoder to map the category names to a text embedding space as a unified taxonomy, instead of using inflexible one-hot label. The model dynamically aligns the segment queries with the category embeddings. Instead of relabeling each dataset with the unified taxonomy, a category-guided decoding module is designed to dynamically guide predictions to each datasets taxonomy. Furthermore, we adopt a dataset-aware augmentation strategy that assigns each dataset a specific image augmentation pipeline, which can suit the properties of images from different datasets. Extensive experiments demonstrate that our method achieves significant improvements on four semantic and three panoptic segmentation datasets, and the ablation study evaluates the effectiveness of each component.
Qiang Zhou, Yuang Liu, Chaohui Yu, Jingliang Li, Zhibin Wang, Fan Wang
2023-02-27T03:43:03Z
http://arxiv.org/abs/2302.13495v1
# LMSeg: Language-guided Multi-dataset Segmentation ###### Abstract It's a meaningful and attractive topic to build a general and inclusive segmentation model that can recognize more categories in various scenarios. A straightforward way is to combine the existing fragmented segmentation datasets and train a multi-dataset network. However, there are two major issues with multi-dataset segmentation: (i) the inconsistent taxonomy demands manual reconciliation to construct a unified taxonomy; (ii) the inflexible one-hot common taxonomy causes time-consuming model retraining and defective supervision of unlabeled categories. In this paper, we investigate multi-dataset segmentation and propose a scalable Language-guided Multi-dataset Segmentation framework, dubbed LMSeg, which supports both semantic and panoptic segmentation. Specifically, we introduce a pre-trained text encoder to map the category names to a text embedding space as a unified taxonomy, instead of using an inflexible one-hot label. The model dynamically aligns the segment queries with the category embeddings. Instead of relabeling each dataset with the unified taxonomy, a category-guided decoding module is designed to dynamically guide predictions to each dataset's taxonomy. Furthermore, we adopt a dataset-aware augmentation strategy that assigns each dataset a specific image augmentation pipeline, which can suit the properties of images from different datasets. Extensive experiments demonstrate that our method achieves significant improvements on four semantic and three panoptic segmentation datasets, and the ablation study evaluates the effectiveness of each component. ## 1 Introduction Image segmentation has been a longstanding challenge in computer vision and plays a pivotal role in a wide variety of applications ranging from autonomous driving (Levinson et al., 2011; Maurer et al., 2016) to remote sensing image analysis (Ghassemian, 2016). Building a general and inclusive segmentation model is meaningful for real-world applications. However, due to the limitations of data collection and annotation cost, there are only fragmented segmentation datasets of various scenarios available, such as ADE20K (Zhou et al., 2017), Cityscapes (Cordts et al., 2016), COCO-Stuff (Caesar et al., 2018), _etc._ Meanwhile, most segmentation works (Long et al., 2015; Chen et al., 2018; Zheng et al., 2021) focus on the single-dataset case and overlook the generalization of the deep neural networks. Generally, for each new data scenario, a new set of network weights has to be trained. Since collecting images and annotations for all scenarios is expensive, constructing a multi-dataset segmentation model from the existing fragmented datasets is an attractive way to support more scenarios. The primary issue of multi-dataset learning is the inconsistent taxonomy, including category coincidence, ID conflicts, naming differences, _etc._ For example, the "person" category of the ADE20K dataset is labeled as "person" and "rider" in the Cityscapes dataset. As shown in Figure 1(a), Lambert et al. (2020) manually establish a unified taxonomy with one-hot labels, relabel each dataset, and then train a segmentation model for all involved datasets, which is time-consuming and error-prone. Moreover, the one-hot taxonomy is inflexible and unscalable: when extending the datasets or categories, the unified taxonomy demands reconstruction and the model requires retraining. 
A group of more advanced studies (Wang et al., 2022) utilizes a multi-head architecture to train a weight-shared encoder-decoder module and multiple dataset-specific heads, as shown in Figure 1(b). The multi-head approach is a simple extension of traditional single-dataset learning and is not convenient during inference. For example, to choose the appropriate segmentation head, which dataset the test image comes from needs to be predefined or specified during inference. To cope with these challenges, we propose a language-guided multi-dataset segmentation (LMSeg) framework that supports both semantic and panoptic segmentation tasks (Figure 1(c)). On the one hand, in contrast to a manually built one-hot taxonomy, we introduce a pre-trained text encoder to automatically map the category identifiers to a unified representation, _i.e._, a text embedding space. The image encoder extracts pixel-level features, while the query decoder bridges the text and image encoders and associates the text embeddings with the segment queries. Figure 2 depicts the core of the text-driven taxonomy for segmentation. As we can see, the text embeddings of the categories reflect the semantic relationship among the classes, which cannot be expressed by one-hot labels. Thus, the text-driven taxonomy can be extended infinitely without any manual reconstruction. On the other hand, instead of relabeling each dataset with a unified taxonomy, we dynamically redirect the model's predictions to each dataset's taxonomy. To this end, we introduce a category-guided decoding (CGD) module to guide the model to predict the labels involved in the specified taxonomy. In addition, the image properties of different datasets vary, such as resolution, style and aspect ratio, so applying an appropriate data augmentation strategy for each dataset is necessary. Therefore, we design a dataset-aware augmentation (DAA) strategy to cope with this. In a nutshell, our contributions are four-fold: * We propose a novel approach for multi-dataset semantic and panoptic segmentation, using text-query alignment to address the issue of taxonomy inconsistency. * To bridge the gap between cross-dataset predictions and per-dataset annotations, we design a category-guided decoding module to dynamically guide predictions to each dataset's taxonomy. * A dataset-aware augmentation strategy is introduced to adapt the optimal preprocessing pipeline to different dataset properties. * The proposed method achieves significant improvements on four semantic and three panoptic segmentation datasets. Figure 1: Comparison of different multi-dataset segmentation approaches. ## 2 Related Work ### Semantic Segmentation As a dense prediction task, semantic segmentation plays a key role in high-level scene understanding. Since the pioneering work of fully convolutional networks (FCNs) (Long et al., 2015), pixel-wise classification has become the dominant approach for deep learning-based semantic segmentation. After FCNs, most semantic segmentation models focus on aggregating long-range context in the final feature map. These methods include atrous convolutions with different atrous rates (Chen et al., 2018, 2017), pooling operations with varying kernel sizes (Zhao et al., 2017), and variants of non-local blocks (Fu et al., 2019; Yuan et al., 2021; Huang et al., 2019; Wang et al., 2018). 
More recently, SETR (Zheng et al., 2021) and Segmenter (Strudel et al., 2021) replace the traditional convolutional backbone with Vision Transformers (ViT) (Dosovitskiy et al., 2021) to capture long-range context from the first layer. Pixel-wise semantic segmentation methods are difficult to extend to instance-level segmentation. Some mask classification-based methods (Carion et al., 2020; Wang et al., 2021) have recently emerged to unify semantic and instance segmentation tasks. The most representative work is MaskFormer (Cheng et al., 2021, 2022), which solves these two segmentation tasks in a unified manner. However, these segmentation methods typically follow the setting of single-dataset training and cannot maintain high accuracy on other datasets without finetuning. ### Multi-Dataset Training Compared to single-dataset learning, multi-dataset learning has recently received increasing attention owing to its robustness and generalization. Perrett and Damen (2017) apply multi-dataset learning to action recognition during the pre-training stage. To settle the challenge of multi-dataset object detection, Zhao et al. (2020) propose a pseudo-labeling strategy and Yao et al. (2020) propose a dataset-aware classification loss. To avoid manually building a unified taxonomy, Zhou et al. (2022c) propose a formulation to automatically integrate the dataset-specific outputs of the partitioned detector into a unified semantic taxonomy. For multi-dataset semantic segmentation, MSeg (Lambert et al., 2020) manually creates a common taxonomy to unite segmentation datasets from multiple domains. Shi et al. (2021) first pre-train the network on multiple datasets with a contrastive loss and then fine-tune it on specific datasets. Based on the multi-head architecture, CDCL (Wang et al., 2022) proposes a dataset-aware block to capture the heterogeneous statistics of different datasets. Yin et al. (2022) utilize text embeddings to improve the zero-shot performance on semantic segmentation and claim to adopt sentences instead of category names for better text embeddings. In contrast to Yin et al. (2022), we focus on more general multi-dataset segmentation, including semantic and panoptic segmentation, and further propose solutions for the problem of incomplete annotation in multi-dataset training. ### Vision-Language Pre-training Vision-Language Pre-training (VLP) has achieved significant progress in the last few years; it aims to jointly obtain a pair of image and text encoders that can be applied to numerous multi-modal tasks. CLIP (Radford et al., 2021) is a landmark work of VLP, which jointly trains the encoders on 400 million image-text pairs collected from the web. Following CLIP, many researchers attempt to transfer the CLIP model to downstream tasks. Zhou et al. (2022b;a); Gao et al. (2021); Zhang et al. (2021) show that CLIP can help to achieve state-of-the-art performance on few-shot or even zero-shot classification tasks. DenseCLIP (Rao et al., 2022) applies CLIP to dense prediction tasks via language-guided fine-tuning and achieves promising results. Figure 2: t-SNE (van der Maaten and Hinton, 2008) visualization of category embeddings for several semantic segmentation datasets using CLIP's text encoder for feature extraction. The text embedding space of CLIP is suitable as a unified taxonomy, with semantically similar categories holding closer text embeddings. 
We claim that the text embedding space of CLIP can be regarded as a unified taxonomy, as shown in Figure 2, and propose text-query alignment instead of a softmax classifier for multi-dataset segmentation. ## 3 Method ### Overview In this section, we introduce the proposed LMSeg framework, a new language-guided multi-dataset segmentation framework, supporting both semantic and panoptic segmentation under multi-dataset learning. As shown in Figure 3, the inputs consist of an image and a set of class names corresponding to the dataset to which the image belongs. LMSeg is composed of an encoder-decoder pixel feature extractor, a pre-trained text encoder, a Transformer decoder with a category-guided module, and a dataset-aware augmentation strategy. The image is first preprocessed using the proposed dataset-aware augmentation strategy, and then image features are extracted through the image encoder and pixel decoder. The class names are mapped to text embeddings by the pre-trained text encoder. The category-guided decoding module bridges the text embeddings and the image features to produce semantic and panoptic mask predictions. We detail each component of our framework in the following. Figure 3: LMSeg: language-guided multi-dataset segmentation framework. ### Text Encoder and Pixel Feature Extractor The text encoder takes the class names of a dataset as input and outputs the corresponding text embeddings \(\epsilon_{\text{text}}^{k}\in\mathbb{R}^{(K_{k}+1)\times C_{t}}\) for that dataset, where \(K_{k}\) is the number of classes of the \(k\)-th dataset, and \(C_{t}\) is the text embedding dimension. The implementation of the text encoder follows CLIP (Radford et al., 2021). During the training phase, the parameters of the text encoder are initialized with a pre-trained CLIP model and kept fixed without parameter updates. The text encoder's output dimension \(C_{t}\) is incompatible with the dimension of the segment query embeddings, so we use a linear adapter layer to adjust the dimension of the text embeddings. It is worth noting that we can store the text embeddings of the class names after training. That is, by reducing the overhead introduced by the text encoder, the inference time of LMSeg is barely increased compared to other segmentation frameworks (Cheng et al., 2021). Existing work shows that a text prompt (_e.g._, "a photo of [class name]") can improve the transfer performance of pre-trained CLIP models. Inspired by CoOp (Zhou et al., 2022b), when generating the text embeddings \(\epsilon_{\text{text}}^{k}\), we utilize a learnable prompt, namely "[v]\({}_{1}\)[v]\({}_{2}\)...[v]\({}_{L}\)[class name]", where \(L\) is the length of the learnable vectors and is set to 8 by default. Note that the learnable vectors are shared across all datasets. The pixel feature extractor takes an image of size \(3\times H\times W\) as input. The image encoder first converts the image into a low-resolution image feature map \(\mathcal{F}\in\mathbb{R}^{C_{\mathcal{F}}\times H^{{}^{\prime}}\times W^{{}^{\prime}}}\), and then the pixel decoder gradually upsamples the feature map to generate per-pixel embeddings \(\epsilon_{\text{pixel}}\in\mathbb{R}^{C_{\epsilon}\times H\times W}\), where \(C_{\epsilon}\) is the embedding dimension. Finally, we obtain each binary mask prediction \(m_{i}\in[0,1]^{H\times W}\) via a dot product between the \(i\)-th segment query embedding and the per-pixel embeddings \(\epsilon_{\text{pixel}}\). 
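The mask-prediction step can be written compactly. The following NumPy sketch is ours and only illustrates this dot-product step; the sigmoid squashing to \([0,1]\) is an assumption in MaskFormer style, not something the text spells out.

```python
import numpy as np

def predict_masks(query_emb, pixel_emb):
    """query_emb: (N, C) segment query embeddings.
    pixel_emb:  (C, H, W) per-pixel embeddings from the pixel decoder.
    Returns (N, H, W) soft masks with values in [0, 1]."""
    logits = np.einsum("nc,chw->nhw", query_emb, pixel_emb)  # dot product per query and pixel
    return 1.0 / (1.0 + np.exp(-logits))                     # squash to [0, 1]
```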
The image encoder can use arbitrary backbone models, not limited to the ResNet (He et al., 2016) as we use in this work. ### Text-Query Alignment In multi-dataset segmentation, establishing a unified taxonomy is often an essential step, which is usually time-consuming and error-prone, especially as the number of categories increases. In this subsection, we introduce the proposed text-query alignment, which uses the text embedding as a unified taxonomy representation without manually construction. Specifically, we remove the static classifier layer (usually a fully-connected layer), which transforms the segment query embeddings into the final classification logits. Instead, we align the segment query embeddings \(\epsilon_{\text{query}}\in\mathbb{R}^{N\times C}\) and the text embeddings \(\epsilon_{\text{text}}\in\mathbb{R}^{(K+1)\times C}\) via a contrastive loss. Taking the samples of \(k\)-th dataset as an example, we dot product the text embeddings and query embeddings to predict the classification logits \(\mathbf{p}^{k}\in\mathbb{R}^{N\times(K_{k}+1)}\) for \(N\) segment queries: \[\mathbf{p}^{k}=\frac{\epsilon_{\text{query}}\cdot(\epsilon_{\text{text}}^{k} )^{T}}{\tau}\,, \tag{1}\] where \(\tau\) is the temperature parameter and is set to 0.07 by default. The \(\epsilon_{\text{query}}\in\mathbb{R}^{N\times C}\) denotes the \(N\) query embeddings with dimension \(C\), which is the output of the category-guided decoding (CGD) module in Figure 3. The \(\epsilon_{\text{text}}^{k}\in\mathbb{R}^{(K_{k}+1)\times C}\) represents the text embedding for the \(K_{k}+1\) categories of the \(k\)-th dataset (including one "background" category), and is the output of the text encoder module in Figure 3. The dynamic classification logits \(\mathbf{p}^{k}\) are supervised with a contrastive objectives \(\mathcal{L}_{\text{cl}}\) as: \[\mathcal{L}_{\text{cl}}=\frac{1}{N}\sum_{i=0}^{N-1}\left(-\text{log}\frac{ \text{exp}(\mathbf{p}_{i,j+}^{k})}{\sum_{j=0}^{K_{k}}\text{exp}(\mathbf{p}_{ i,j}^{k})}\right)\,, \tag{2}\] where the outer sum is over \(N\) segment queries. For the \(i\)-th query, the loss is the log loss of a \((K_{k}+1)\)-way softmax-based classifier that tries to classify the \(i\)-th query as \(j+\). ### Category-guided Decoding Module The text-query alignment does not entirely address the issue of taxonomy differences in multi-dataset segmentation. For example, the query embedding corresponding to a human might be forced to align with the text embedding of "person" or "rider" in the Cityscapes dataset but only to "person" in the ADE20K dataset. That is, for input images, the alignment targets of query embeddings are non-deterministic (varies with the dataset), which affects the performance of multi-dataset segmentation, as our experiments demonstrate. Instead of relabeling each dataset with a unified taxonomy, we propose a simpler approach that dynamically redirects the model's predictions to each dataset's taxonomy. Through prediction redirection, we can arbitrarily specify the categories that the model needs to predict so that we can use the original annotations of each dataset. We propose a category-guided decoding module to dynamically adapt to classes to be predicted by the model, as shown in Figure 4. The decoder module follows the standard architecture of the transformer, using multi-head self- and cross-attention mechanisms and an FFN module to transform \(N\) segment queries \(\epsilon_{\text{query}}\). 
The self-attention to query embeddings enables the model to make global inferences for all masks using pairwise relationships between them. The cross-attention between query embeddings and image features \(\mathcal{F}\) is able to use the entire image as context. Finally, the cross-attention between query embeddings \(\epsilon_{\text{query}}\) and text embeddings \(\epsilon_{\text{text}}\) guides the query embeddings to the classes corresponding to the input text embeddings. We use 6 decoder modules with 100 segment queries by default. The \(N\) segment queries are initialized as zero vectors, each associated with a learnable positional encoding. ### Dataset-aware Augmentation Different datasets usually have different characteristics (_e.g._, image resolution) and therefore different data augmentations are used for training in general. For example, in MaskFormer (Cheng et al., 2021), a crop size of \(512\times 1024\) is used for Cityscapes, while a crop size of \(512\times 512\) for ADE20K. For multi-dataset training, MSeg (Lambert et al., 2020) first upsamples the low-resolution training images with a super-resolution model, then uses the same data augmentation to train all datasets. Unlike MSeg, we propose a dataset-aware augmentation strategy, shown in Figure 5, which is simpler and more scalable. Specifically, for each training sample, we determine which dataset the sample comes from and choose the corresponding augmentation strategy \(A_{i}\). Furthermore, the dataset-aware augmentation strategy allows us to make fair comparisons with single-dataset training models while keeping the data augmentation for each dataset the same. ### Total Training Objective The total training objective of LMSeg consists of a contrastive loss \(\mathcal{L}_{\text{cl}}\) and two binary mask losses, focal loss \(\mathcal{L}_{\text{focal}}\)(Lin et al., 2017) and dice loss \(\mathcal{L}_{\text{dice}}\)(Milletari et al., 2016), which are applied to the batch of samples from multiple datasets: \[\mathcal{L}=\sum_{k=1}^{M}\sum_{n=1}^{N^{k}}\left(\mathcal{L}_{\text{cl}}+ \lambda_{\text{focal}}\cdot\mathcal{L}_{\text{focal}}+\lambda_{\text{dice}} \cdot\mathcal{L}_{\text{dice}}\right)\,, \tag{3}\] where \(M\) is the number of datasets. \(N^{k}\) is the number of training samples from the \(k\)-th dataset. The hyper-parameters \(\lambda_{\text{focal}}\) and \(\lambda_{\text{dice}}\) are set to 20.0 and 1.0 by default. The weight for the "no object" (\(\emptyset\)) in the contrastive loss \(\mathcal{L}_{\text{cl}}\) is set to 0.1. Before computing the loss function, we have to figure out how to assign the ground truth targets to the model's predictions since LMSeg outputs \(N\) unordered predictions. We take one image from the \(k\)-th dataset as an example to illustrate how the ground truth is assigned during training. We denote \(\bar{y}=\{(\bar{c}_{i}^{k},\bar{m}_{i}^{k})|i=1,\cdots,\bar{N}\}\) as the ground truth set of masks in the image, where \(\bar{c}_{i}^{k}\) is the label and \(\bar{m}_{i}^{k}\) represents the binary mask. LMSeg infers a fixed-size set of \(N\) unordered predictions \(y=\{(p_{i}^{k},m_{i}^{k})|i=1,\cdots,N\}\) for the input image. Assuming that \(N\) is larger than the number of target masks in the image, we pad set \(\bar{y}\) with \(\varnothing\) (no object) to the size of \(N\). 
To find a bipartite matching between these two sets we search for a permutation of \(N\) elements \(\sigma\in\mathfrak{S}_{N}\) with the lowest cost: \[\hat{\sigma}=\underset{\sigma\in\mathfrak{S}_{N}}{\text{arg min}}\sum_{i}^{N} \mathcal{L}_{\text{match}}(y_{i},\bar{y}_{\sigma(i)})\,, \tag{4}\] \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline Total training cost & Backbone & Model & Pre-train & M.D.T & \multicolumn{4}{c|}{Per-dataset imIoU} & Average midU \\ \cline{4-9} & & & & & ADE20K & Cityscapes & COCO-Suff & Mobility Vitas \\ \hline \multirow{3}{*}{160k} & \multirow{3}{*}{R50} & \multirow{3}{*}{R50} & MiniFormer & ImageNet & & 40.24 & 75.43 & 36.83 & 46.72 & 49.80 \\ & & & MaskFormer & CLIP & 39.75 & 77.30 & 38.80 & 47.37 & 50.73 \\ & & LMSeg (Ours) & CLIP & ✓ & 41.37 & 79.13 & 39.01 & 50.82 & **52.58** \\ \hline \multirow{3}{*}{320k} & \multirow{3}{*}{R50} & \multirow{3}{*}{MaskFormer} & ImageNet & & 42.55 & 76.56 & 36.33 & 50.45 & 51.47 \\ & & & MaskFormer & CLIP & ✓ & 43.18 & 78.20 & 38.69 & 52.15 & 53.05 \\ & & LMSeg (Ours) & CLIP & ✓ & 44.89 & 79.81 & 39.21 & 52.27 & **54.04** \\ \hline \multirow{3}{*}{640k} & \multirow{3}{*}{R50} & \multirow{3}{*}{MaskFormer} & ImageNet & & 44.57 & 76.37 & 35.89 & 53.70 & 52.63 \\ & & & LMSeg (Ours) & CLIP & ✓ & 45.66 & 79.95 & 38.16 & 53.16 & 53.73 \\ \cline{1-1} \cline{3-1} \cline{3-1} where the matching cost \(\mathcal{L}_{\text{match}}(y_{i},\bar{y}_{\sigma(i)})\) takes into account the class prediction and the similarity of predicted and ground truth masks, \[\mathcal{L}_{\text{match}}(y_{i},\bar{y}_{\sigma(i)})=\begin{cases}-p_{i}^{k}(c_ {\sigma(i)}^{k})+\lambda_{\text{local}}\cdot\mathcal{L}_{\text{local}}(m_{i}^ {k},\bar{m}_{\sigma(i)}^{k})+\lambda_{\text{disc}}\cdot\mathcal{L}_{\text{ disc}}(m_{i}^{k},\bar{m}_{\sigma(i)}^{k})\,,&\bar{c}_{\sigma(i)}^{k}\neq\varnothing\\ +\infty\,,&\bar{c}_{\sigma(i)}^{k}=\varnothing\end{cases} \tag{5}\] During the training phase, we compute the optimal assignment \(\hat{\sigma}\) for each sample in the batch and then accumulate the loss function of Equation 3 over these samples. ## 4 Experiments ### Implementation details Datasets.For semantic segmentation, we evaluate on four public semantic segmentation datasets: ADE20K (Zhou et al., 2017) (150 classes, containing 20k images for training and 2k images for validation), COCO-Stuff-10K (Caesar et al., 2018) (171 classes, containing 9k images for training and 1k images for testing), Cityscapes (Cordts et al., 2016) (19 classes, containing 2975 images for training, 500 images for validation and 1525 images for testing), and Mapillary Vistas (Neuhold et al., 2017) (65 classes, containing 18k images for training, 2k images for validation and 5k images for testing). For panoptic segmentation, we use COCO-Panoptic (Lin et al., 2014) (80 "things" and 53 "stuff" categories), ADE20K-Panoptic (Zhou et al., 2017) (100 "things" and 50 "stuff" categories) and Cityscapes-Panoptic (Cordts et al., 2016) (8 "things" and 11 "stuff" categories). Training setup.We use Detectron2 (Wu et al., 2019) to implement our LMSSeg. Without specific instruction, dataset-aware augmentation is adopted and the same as MaskFormer (Cheng et al., 2021) for each dataset. We use AdamW (Loshchilov and Hutter, 2019) and the poly (Chen et al., 2018) learning rate schedule with an initial learning rate of \(1e^{-4}\) and a weight decay of \(1e^{-4}\). Image encoders are initialized with pre-trained CLIP (Radford et al., 2021) weights. 
Following MaskFormer, a learning rate multiplier of 0.1 is applied to image encoders. Other common data augmentation strategies like random scale jittering, random horizontal flipping, random cropping and random color jittering are utilized. For the ADE20K dataset, we use a crop size of 512\(\times\)512. For the Cityscapes dataset, we use a crop size of 512\(\times\)1024. For the COCO-Stuff-10k dataset, we use a crop size of 640\(\times\)640. For the Mapillary Vistas dataset, we use a crop size of 1280\(\times\)1280. All models are trained with 8 A100 GPUs and a batch size of 16. Each image in the batch is randomly sampled from all datasets. For panoptic segmentation, we follow exactly the same architecture, loss, and training procedure as we use for semantic segmentation. The only difference is supervision: _i.e._, category region masks in semantic segmentation vs. object instance masks in panoptic segmentation. The data augmentation of each dataset follows MaskFormer, and we also provide the details of the augmentation in the appendix.

| Total training cost | Backbone | Model | Pre-train | ADE20K | Cityscapes | COCO-Stuff | Mapillary Vistas | Average mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 160k | R50 | MaskFormer + Manual-taxonomy | CLIP | 40.24 | 79.37 | 38.36 | 46.70 | 51.16 |
| 160k | R50 | MaskFormer + Multi-head | CLIP | 41.19 | 79.29 | 38.26 | 49.25 | 51.99 |
| 160k | R50 | LMSeg (Ours) | CLIP | 41.37 | 79.13 | 39.01 | 50.82 | **52.58** |
| 320k | R50 | MaskFormer + Manual-taxonomy | CLIP | 43.46 | 80.23 | 39.01 | 51.58 | 53.56 |
| 320k | R50 | MaskFormer + Multi-head | CLIP | 44.30 | 80.06 | 38.57 | 52.13 | 53.76 |
| 320k | R50 | LMSeg (Ours) | CLIP | 44.89 | 79.81 | 39.21 | 52.27 | **54.04** |
| 640k | R50 | MaskFormer + Manual-taxonomy | CLIP | 45.06 | 80.71 | 39.43 | 52.95 | 54.53 |
| 640k | R50 | MaskFormer + Multi-head | CLIP | 46.47 | 80.07 | 37.73 | 53.22 | 54.37 |
| 640k | R50 | LMSeg (Ours) | CLIP | 45.16 | 80.93 | 38.60 | 54.34 | **54.75** |

Table 2: Semantic segmentation accuracy (mIoU) compared with multi-dataset models.

| Training setting | Backbone | Method | Pre-train | ADE20K-Panoptic | Cityscapes-Panoptic | COCO-Panoptic | Average PQ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 320k | R50 | MaskFormer + Manual-taxonomy | CLIP | 25.88 | 51.50 | 31.26 | 36.21 |
| 320k | R50 | MaskFormer + Multi-head | CLIP | 23.51 | 48.56 | 27.61 | 33.22 |
| 320k | R50 | LMSeg (Ours) | CLIP | 30.84 | 51.75 | 34.16 | **38.91** |
| 640k | R50 | MaskFormer + Manual-taxonomy | CLIP | 28.65 | 52.18 | 34.76 | 38.53 |
| 640k | R50 | MaskFormer + Multi-head | CLIP | 28.66 | 50.05 | 31.27 | 36.66 |
| 640k | R50 | LMSeg (Ours) | CLIP | 34.20 | 55.28 | 37.72 | **42.40** |
| 960k | R50 | MaskFormer + Manual-taxonomy | CLIP | 30.68 | 53.47 | 36.63 | 40.26 |
| 960k | R50 | MaskFormer + Multi-head | CLIP | 31.99 | 54.17 | 35.18 | 40.44 |
| 960k | R50 | LMSeg (Ours) | CLIP | 35.43 | 54.77 | 38.55 | **42.91** |

Table 3: Panoptic segmentation accuracy (PQ) compared with other multi-dataset training methods.
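Since every sample keeps its source-dataset identity, the dataset-aware augmentation amounts to a per-sample dispatch on that identity. The following minimal sketch illustrates the idea using the per-dataset crop sizes listed above; it is not the authors' implementation (which builds on Detectron2's augmentation pipeline), and the scale range, the nearest-neighbour resize, and the helper names are our own illustrative choices.

```python
import random
import numpy as np

# Per-dataset crop sizes follow the values above; the scale range is an illustrative placeholder.
AUG_CONFIG = {
    "ade20k":           {"crop": (512, 512),   "scales": (0.5, 2.0)},
    "cityscapes":       {"crop": (512, 1024),  "scales": (0.5, 2.0)},
    "coco_stuff_10k":   {"crop": (640, 640),   "scales": (0.5, 2.0)},
    "mapillary_vistas": {"crop": (1280, 1280), "scales": (0.5, 2.0)},
}

def random_crop(img, size):
    """Pad (edge mode) if needed, then take a random crop of the requested size."""
    h, w = size
    ph, pw = max(h - img.shape[0], 0), max(w - img.shape[1], 0)
    img = np.pad(img, ((0, ph), (0, pw), (0, 0)), mode="edge")
    y = random.randint(0, img.shape[0] - h)
    x = random.randint(0, img.shape[1] - w)
    return img[y:y + h, x:x + w]

def dataset_aware_augment(img, dataset_name):
    """Apply the augmentation strategy A_i that belongs to the sample's source dataset."""
    cfg = AUG_CONFIG[dataset_name]
    scale = random.uniform(*cfg["scales"])
    new_h = max(1, int(img.shape[0] * scale))
    new_w = max(1, int(img.shape[1] * scale))
    ys = np.linspace(0, img.shape[0] - 1, new_h).astype(int)   # nearest-neighbour resize
    xs = np.linspace(0, img.shape[1] - 1, new_w).astype(int)
    img = img[ys][:, xs]
    if random.random() < 0.5:                                  # random horizontal flip
        img = img[:, ::-1]
    return random_crop(img, cfg["crop"])

# Each image in a batch is sampled from some dataset and augmented with that dataset's pipeline.
batch = [dataset_aware_augment(np.zeros((600, 800, 3), dtype=np.float32),
                               random.choice(list(AUG_CONFIG)))
         for _ in range(4)]
```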
### Results and Comparisons

When conducting multi-dataset training, we train a single LMSeg model and report the model's performance on each dataset. While conducting single-dataset training, we train a separate MaskFormer model for each dataset, and the total training cost is cumulative over all datasets. When comparing with multi-dataset training methods, we re-implement the "multi-head" and "manual-taxonomy" methods based on MaskFormer for a fair comparison. For the multi-head method denoted by "MaskFormer + Multi-head" in Table 2 and Table 3, we share the components of MaskFormer on various datasets, except for the classification head. For the manual-taxonomy method denoted by "MaskFormer + Manual-taxonomy" in Table 2 and Table 3, we manually construct the unified taxonomy. For simplicity, we only unite duplicate class names across datasets. Semantically similar classes such as "apparel" and "clothes" are not split or merged.

#### 4.2.1 Multi-dataset Semantic Segmentation

Table 1 depicts the comparison between LMSeg and single-dataset training models. Our LMSeg outperforms single-dataset training models in all settings. These experimental results show that, compared with single-dataset training, multi-dataset training can obtain a single robust model capable of recognizing more categories and improving the average performance on multiple datasets. For a more intuitive comparison, we plot the average mIoU w.r.t. total training cost for the four datasets in Figure 6. Table 2 shows the comparison between LMSeg and other multi-dataset training methods. Experimental results show that LMSeg outperforms "multi-head" and "manual-taxonomy" under various settings.

#### 4.2.2 Multi-dataset Panoptic Segmentation

Considering the high training cost of panoptic segmentation, we only give a comparison of LMSeg with other multi-dataset training methods. As shown in Table 3, for the more challenging panoptic segmentation task, LMSeg outperforms "multi-head" and "manual-taxonomy" by a large margin in various settings.

### Ablation Study

**Weight Initialization.** LMSeg learns to align text embeddings and segment query embeddings in the same embedding space. This subsection shows that vision-language pre-training (VLP) benefits LMSeg, as VLP helps initialize an aligned embedding space for vision and language. As shown in Table 4, when the text encoder is initialized with CLIP and the image encoder is initialized with ImageNet pre-training, the experimental results correspond to language pre-training initialization. When text and image encoders are both initialized with CLIP, the experimental results correspond to vision-language pre-training initialization. Experimental results show that LMSeg with VLP initialization significantly outperforms language pre-training initialization, improving the average mIoU on four datasets from 50.30 to 52.75.

| Text encoder | Image encoder | ADE20K | Cityscapes | COCO-Stuff | Mapillary Vistas | Average mIoU |
| --- | --- | --- | --- | --- | --- | --- |
| CLIP-R50 | ImageNet-R50 | 42.57 | 74.93 | 38.52 | 45.21 | 50.30 |
| CLIP-R50 | CLIP-R50 | 43.44 | 77.72 | 40.76 | 49.11 | 52.75 |

Table 4: Ablation experiments of VLP initialization in LMSeg. All models are trained with a batch size of 16 and a total number of iterations of 320k.
Category-guided Decoding Module.There are two cross-attention layers in the category-guided decoding module. In this subsection, we study the effect of their order. "text-visual" indicates that the cross-attention module between segment query and text embedding is in front; otherwise "visual-text". As shown in Table 5, the experimental results show that "visual-text" performs better than "text-visual". "visual-text" is used by default. Factor-by-factor Ablation.As shown in Table 6 and Table 7, the proposed category-guided decoding module increases the average mIoU on four datasets from 52.91 to 53.41 and the average PQ on three datasets from 34.38 to 37.88, verifying the module's effectiveness. The proposed dataset-aware augmentation strategy adopts separate suitable data augmentation for each dataset, thereby improving the model's overall performance. The average mIoU and average PQ increase from 53.41 to 54.04 and 37.88 to 38.91, respectively.
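To make the preferred "visual-text" ordering concrete, the sketch below shows one decoder block with self-attention over the segment queries, then cross-attention from queries to the image features, and finally cross-attention from queries to the text embeddings. It is a simplified PyTorch illustration rather than the authors' code; the embedding dimension, number of heads, feed-forward design, and normalization placement are assumptions.

```python
import torch
import torch.nn as nn

class CategoryGuidedDecoderBlock(nn.Module):
    """One decoder block in 'visual-text' order: self-attn -> cross-attn(image) -> cross-attn(text)."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(4)])

    def forward(self, queries, img_feats, txt_embeds):
        # Global reasoning among the N segment queries themselves.
        q = self.norms[0](queries + self.self_attn(queries, queries, queries)[0])
        # Use the whole flattened image feature map as context.
        q = self.norms[1](q + self.img_attn(q, img_feats, img_feats)[0])
        # Guide the queries towards the classes named by the input text embeddings.
        q = self.norms[2](q + self.txt_attn(q, txt_embeds, txt_embeds)[0])
        return self.norms[3](q + self.ffn(q))

# Example: 100 zero-initialized segment queries, a 32x32 feature map, 150 class-name embeddings.
block = CategoryGuidedDecoderBlock()
queries = torch.zeros(2, 100, 256)   # learnable positional encodings omitted in this sketch
out = block(queries, torch.randn(2, 32 * 32, 256), torch.randn(2, 150, 256))
```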
2304.14074
Linear and Nonlinear Parareal Methods for the Cahn-Hilliard Equation
In this paper, we propose, analyze and implement efficient time-parallel methods for the Cahn-Hilliard (CH) equation. It is of great importance to develop efficient numerical methods for the CH equation, given the wide range of applications the CH equation has. The CH equation generally needs to be simulated for a very long time to resolve the phase-coarsening stage. It is therefore desirable to accelerate the computation using parallel-in-time methods. We present linear and nonlinear Parareal methods for the CH equation, depending on the choice of the fine approximation, and illustrate our results by numerical experiments.
Gobinda Garai, Bankim C. Mandal
2023-04-27T10:16:35Z
http://arxiv.org/abs/2304.14074v1
# Linear and nonlinear Parareal methods for the Cahn-Hilliard equation ###### Abstract. In this paper, we propose, analyze and implement efficient time-parallel methods for the Cahn-Hilliard (CH) equation. It is of great importance to develop efficient numerical methods for the CH equation, given the wide range of applications the CH equation has. The CH equation generally needs to be simulated for a very long time to resolve the phase-coarsening stage. It is therefore desirable to accelerate the computation using parallel-in-time methods. We present linear and nonlinear Parareal methods for the CH equation, depending on the choice of the fine approximation, and illustrate our results by numerical experiments. Key words and phrases: Parallel-in-Time (PinT), Parallel computing, Convergence analysis, Cahn-Hilliard equation, Parareal method

## 1. Introduction

The CH equation describes energy minimization and total mass conservation while the system evolves. The existence of a solution of the CH equation (1.1) can be seen from [7], and results for other variants of the CH equation are shown in [9, 22]. A variety of research has been devoted to numerical schemes that approximate the solution of the CH equation with either Dirichlet [7, 10] or Neumann boundary conditions [8, 14, 26, 24] and references therein. Recently, a new approach to approximate the solution of the CH equation has been proposed in [28, 29] based on quadratization of the energy \(\mathcal{E}(u)\) of the CH equation. A modification of the energy quadratization approach yields a new method known as the scalar auxiliary variable method [23]. A review on the numerical treatment of the CH equation can be found in [20]. Possible applications of the CH equation as a model include image inpainting [2], tumour growth simulation [27], population dynamics [6], dendritic growth [19], planet formation [25], etc. The works described above all advance the evolution of the CH equation in a time-stepping fashion. Therefore, the CH equation has to be solved sequentially over a long time to capture its long-term behaviour, especially the phase-coarsening stage. Consequently, it is of great importance to accelerate the simulation using parallel computation, which can be achieved by time-parallel techniques. In the last few decades, a lot of effort has gone into formulating various types of time-parallel techniques; for an overview see [15]. To speed up the computation we construct Parareal methods for the CH equation (1.1). The Parareal method [21] is a well-known iterative time-parallel method that can also be viewed as a multiple shooting method or a time-multigrid method; see [16]. The method relies on combining a fine and a coarse resolution and eventually converges to the fine resolution. It has been successfully applied to fluid-structure interaction in [12], the Navier-Stokes equation in [13], and molecular dynamics in [1]. The main objective of this work is to adapt the Parareal algorithm to the CH equation (1.1) and study its convergence behaviour. The rest of this paper is arranged as follows. We introduce in Section 2 the time-parallel algorithm for equation (1.1). In Section 3 we present the stability and convergence results. To illustrate our analysis and the accuracy and robustness of the proposed formulation, we show numerical results in Section 4.

## 2. Parareal Method

To solve the following system of ODEs \[\frac{du}{dt}=f(t,u),\;u(0)=u^{0},\;t\in(0,T], \tag{2.1}\] Lions et al.
proposed the Parareal algorithm in [21], where \(f:\mathbb{R}^{+}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) is Lipschitz. The method consists of the following strategy: first, a non-overlapping decomposition of the time domain \((0,T]\) into \(N\) smaller subintervals of uniform size is considered, i.e., \((0,T]=\cup_{n=1}^{N}[T_{n-1},T_{n}]\) with \(T_{n}-T_{n-1}=\Delta T=T/N\); secondly, each time slice \([T_{n-1},T_{n}]\) is divided into \(J\) smaller time slices with \(\Delta t=\Delta T/J\); then a fine propagator \(\mathcal{F}\), which is expensive but accurate, and a coarse propagator \(\mathcal{G}\), which is cheap but may be inaccurate, are assigned to compute the solution on the fine grid and the coarse grid, respectively. The Parareal algorithm for (2.1) starts with the initial approximation \(U_{n}^{0}\) at the \(T_{n}\)'s, obtained by the coarse operator \(\mathcal{G}\), and solves the following prediction-correction scheme for \(k=0,1,...\) \[\begin{split} U_{0}^{k+1}&=u^{0},\\ U_{n+1}^{k+1}&=\mathcal{G}(T_{n+1},T_{n},U_{n}^{k+1})+\mathcal{F}(T_{n+1},T_{n},U_{n}^{k})-\mathcal{G}(T_{n+1},T_{n},U_{n}^{k}),\end{split} \tag{2.2}\] where the operator \(\mathcal{S}(T_{n+1},T_{n},U_{n}^{k})\) provides the solution at \(T_{n+1}\) by taking the initial value \(U_{n}^{k}\) at \(T_{n}\), for \(\mathcal{S}=\mathcal{F}\) or \(\mathcal{G}\). At the current iteration the \(U_{n}^{k}\)'s are known, hence one computes \(\mathcal{F}(T_{n+1},T_{n},U_{n}^{k})\) in parallel using \(N\) processors. The Parareal solution converges towards the fine resolution in a finite number of steps. To get a practical parallel algorithm we should have \(k\ll N\).

### Discretization and Formulation

To formulate the Parareal method for the CH equation (1.1) we first look into possible discretizations of (1.1) in both the spatial and temporal variables. Since the non-increase of the total energy and the mass conservation property (1.3) are essential features of the CH equation (1.1), they are expected to be preserved in long-time simulations under any proposed numerical scheme as well. To deal with this, Eyre proposed an unconditionally gradient stable scheme in [10, 11]. The idea is to split the homogeneous free energy \[F(u)=\frac{u^{4}+1}{4}+\frac{-u^{2}}{2}\] into a contractive and an expansive part, treating the contractive part implicitly and the expansive part explicitly; this leads to the nonlinear scheme (2.3) and the linearized schemes (2.4) and (2.5). Depending on the choice of the fine and coarse propagators in the Parareal iteration (2.2), we consider the following algorithms:

1. We fix both the fine propagator \(\mathcal{F}\) and the coarse propagator \(\mathcal{G}\) to be the linear scheme in (2.4) in the Parareal iteration (2.2). We call this algorithm PA-I.
2. We fix both the fine propagator \(\mathcal{F}\) and the coarse propagator \(\mathcal{G}\) to be the linear scheme in (2.5) in the Parareal iteration (2.2). We call this algorithm PA-II.
3. We fix the fine propagator \(\mathcal{F}\) to be the linear scheme in (2.5) and the coarse propagator \(\mathcal{G}\) to be the linear scheme in (2.4) in the Parareal iteration (2.2). We call this algorithm PA-III.
4. We fix the fine propagator \(\mathcal{F}\) to be the nonlinear scheme in (2.3) and the coarse propagator \(\mathcal{G}\) to be the linear scheme in (2.4) in the Parareal iteration (2.2). We call this algorithm NPA-I.
5. We fix both the fine propagator \(\mathcal{F}\) and the coarse propagator \(\mathcal{G}\) to be the nonlinear scheme in (2.3) in the Parareal iteration (2.2). We call this algorithm NPA-II.

The first three algorithms are linear, whereas the last two are nonlinear, as either the fine solver or the coarse solver (or both) involves a nonlinear scheme. Next we discuss the stability and convergence properties of the proposed Parareal algorithms.

## 3. Stability and Convergence

First we rewrite the fine and coarse propagators in simplified operator form.
For the approximation in (2.4) we have \[\frac{W^{n+1}-W^{n}}{\Delta t}=D_{h}\operatorname{diag}(W^{n})^{2}W^{n+1}- \epsilon^{2}D_{h}^{2}W^{n+1}-D_{h}W^{n}, \tag{3.1}\] where \(W\in\mathbb{R}^{(N_{x}-2)}\) and the discrete Laplacian \(D_{h}\) with Dirichlet boundary condition is the following \[D_{h}=\frac{1}{h^{2}}\begin{bmatrix}-2&1&&&\\ 1&-2&1&&\\ &\ddots&\ddots&\ddots&\\ &&1&-2&1\\ &&&1&-2\end{bmatrix}\in\mathbb{R}^{(N_{x}-2)\times(N_{x}-2)}. \tag{3.2}\] Numerical tests suggest that the term \(\operatorname{diag}(W^{n})^{2}\) behaves as \(I\), away from interface region, which also observed in [10]. Thus for analysing purpose we consider \(\operatorname{diag}(W^{n})^{2}\approx I\). Then (3.1) can be written as \(W^{n+1}=\left(I-\Delta tD_{h}+\epsilon^{2}\Delta tD_{h}^{2}\right)^{-1}\left( I-\Delta tD_{h}\right)W^{n}\), where \(I\) is the identity matrix. Then the fine and coarse propagator corresponding to the scheme (2.4) can be written as \[\mathcal{F}(T_{n+1},T_{n},U) =\left(I-\Delta tD_{h}+\epsilon^{2}\Delta tD_{h}^{2}\right)^{-1} \left(I-\Delta tD_{h}\right)U,\;U\in\mathbb{R}^{N_{x}-2}, \tag{3.3a}\] \[\mathcal{G}(T_{n+1},T_{n},U) =\left(I-\Delta TD_{h}+\epsilon^{2}\Delta TD_{h}^{2}\right)^{-1} \left(I-\Delta TD_{h}\right)U,\;U\in\mathbb{R}^{N_{x}-2}, \tag{3.3b}\] respectively. Similarly we can write the fine and coarse propagator corresponding to the scheme (2.5) as \[\mathcal{F}(T_{n+1},T_{n},U) =\left(I-2\Delta tD_{h}+\epsilon^{2}\Delta tD_{h}^{2}\right)^{-1} \left(I-2\Delta tD_{h}\right)U,\;U\in\mathbb{R}^{N_{x}-2}, \tag{3.4a}\] \[\mathcal{G}(T_{n+1},T_{n},U) =\left(I-2\Delta TD_{h}+\epsilon^{2}\Delta TD_{h}^{2}\right)^{-1} \left(I-2\Delta TD_{h}\right)U,\;U\in\mathbb{R}^{N_{x}-2}. \tag{3.4b}\] The matrix \(D_{h}\) in (3.2) is symmetric negative definite; eigenvalues of \(D_{h}\) are \(\lambda_{p}=\frac{2}{h^{2}}\left\{\cos\left(\frac{p\pi}{N_{x}-1}\right)-1 \right\},p=1,\cdots,N_{x}-2.\)\(\lambda_{p}\)'s are distinct and satisfy \(0,\forall p\). Now we define few matrices that we use later in the paper. The matrices are \(P_{i}:=\left(I-i\Delta TD_{h}+\epsilon^{2}\Delta TD_{h}^{2}\right)^{-1}(I-i \Delta TD_{h})\) for \(i=1,2,3\) and \(P_{J_{i}}:=\left[\left(I-\frac{i\Delta T}{J}D_{h}+\epsilon^{2}\frac{\Delta T}{J }D_{h}^{2}\right)^{-1}\left(I-\frac{i\Delta T}{J}D_{h}\right)\right]^{J}\) for \(i=1,2\). Before stating the stability and convergence results for the linear Parareal algorithms we first state and prove some auxiliary results. **Lemma 3.1**.: _Let \(J\in\mathbb{N}\) such that \(J\geqslant 2,\Delta T>0,\epsilon>0\) and \(y\in(0,\infty)\). Then the functions \(g_{i}(y)=\frac{1+i\Delta Ty}{1+i\Delta Ty+\epsilon^{2}\Delta Ty^{2}}\) for \(i=1,2,3\) satisfy \(g_{i}(y)\in(0,1),\forall y\)._ Proof.: It is clear that each \(g_{i}\) is continuous in \((0,\infty)\), and \(\lim\limits_{y\to 0^{+}}g_{i}(y)=1\) and \(\lim\limits_{y\to\infty}g_{i}(y)=0\), so we have \(g_{i}(y)\in(0,1),\forall y\). **Lemma 3.2**.: _Let \(J\in\mathbb{N}\) such that \(J\geqslant 2,\Delta T>0,\epsilon>0\) and \(y\in(0,\infty)\). Then the followings hold_ * _for_ \(i=1,2\) _the function_ \(\phi_{i}(y):=\left(g_{i}(y/J)\right)^{J}-g_{i}(y)\) _satisfies_ \(\left|\phi_{i}(y)\right|<1,\forall y\)_._ * _the function_ \(\phi_{3}(y):=\left(g_{2}(y/J)\right)^{J}-g_{1}(y)\) _satisfies_ \(\left|\phi_{3}(y)\right|<1,\forall y\)_._ Proof.: First we prove the statement \((i)\). We have \(\left|\phi_{i}(y)\right|<1\iff\left(g_{i}(y/J)\right)^{J}<1+g_{i}(y)\And\left(g _{i}(y/J)\right)^{J}>-1+g_{i}(y)\). 
Using Lemma 3.1 we have \(g_{i}^{J}(y/J)\in(0,1)\) and thus we have \(g_{i}^{J}(y/J)<1+g_{i}(y),\forall y\). The term \(-1+g_{i}(y)=\frac{-\epsilon^{2}\Delta Ty^{2}}{1+i\Delta Ty+\epsilon^{2}\Delta Ty ^{2}}<0\), hence \(\left(g(y/J)\right)^{J}>-1+g_{i}(y)\) holds. Similarly we can get the result \((ii)\). **Lemma 3.3** (Matrix inverse).: _Let \(n\in\mathbb{N},\beta\in\mathbb{R}^{+}\) then_ \[M(\beta)^{-1}:=\begin{bmatrix}1&0&&&\\ -\beta&1&0&&\\ &\ddots&\ddots&\ddots&\\ &&-\beta&1&0\\ &&&-\beta&1\end{bmatrix}_{n\times n}^{-1}=\begin{bmatrix}1&0&&&\\ \beta&1&0&&\\ \vdots&&\ddots&\ddots&\\ \beta^{n-2}&\beta^{n-3}&\cdots&1&0\\ \beta^{n-1}&\beta^{n-2}&\cdots&\beta&1\end{bmatrix}_{n\times n} \tag{3.5}\] Proof.: We prove the result (3.5) by induction. Clearly the statement is true for \(n=2\). Let us assume that the result (3.5) is true for \(n=l\). Then for \(n=l+1\) the matrix \(M(\beta)\) can be written as the following block form \[M(\beta)=\begin{bmatrix}A_{l\times l}&B_{l\times 1}\\ C_{1\times l}&D_{1\times 1}\end{bmatrix}_{(l+1)\times(l+1)}, \tag{3.6}\] where \(A=M(\beta)_{l\times l},B=[\mathbf{0}]_{l\times 1},C=[\mathbf{0},-\beta]_{1 \times l},D=[\mathbf{1}]_{1\times 1}\). As we know the inverse of \(A\) we have \[M(\beta)^{-1}_{(l+1)\times(l+1)}=\begin{bmatrix}A^{-1}+A^{-1}BS^{-1}CA^{-1}&-A^ {-1}BS^{-1}\\ -S^{-1}CA^{-1}&S^{-1}\end{bmatrix}, \tag{3.7}\] where \(S=D-CA^{-1}B\). Clearly \(S^{-1}=[1]_{1\times 1}\), and thus we have \(A^{-1}BS^{-1}CA^{-1}=[\mathbf{0}]_{l\times l},-A^{-1}BS^{-1}=[\mathbf{0}]_{l \times 1}\), and \(-S^{-1}CA^{-1}=[\beta^{l},\beta^{l-1},\cdots,\beta]_{1\times l}\). Hence we have the Lemma. **Lemma 3.4** (Matrix power).: _Let \(\beta>0\) and \(\mathbb{T}(\beta)\) be a strictly lower triangular Toeplitz matrix of size \(N\) whose elements are defined by its first column_ \[\mathbb{T}_{i,1}=\begin{cases}0\;\;\text{if}\;\;i=1,\\ \beta^{i-2}\;\;\text{if}\;\;2\leqslant i\leqslant N.\end{cases}\] _Then the \(i\)-th element of the first column of the \(k\)-th power of \(\mathbb{T}\) is_ \[\mathbb{T}_{i,1}^{k}=\begin{cases}0&\text{if}\;\;1\leqslant i\leqslant k,\\ \binom{i-2}{k-1}\beta^{i-1-k}&\text{if}\;\;k+1\leqslant i\leqslant N.\end{cases}\] Proof.: See [16]. **Lemma 3.5**.: _For \(0<\beta<1\) the infinity norm of \(\mathbb{T}(\beta)^{k}\) is given by_ \[\parallel\mathbb{T}^{k}\parallel_{\infty}\leqslant\min\left\{\left(\frac{1 -\beta^{N-1}}{1-\beta}\right)^{k},\binom{N-1}{k}\right\}\] Proof.: See [16]. **Theorem 3.6** (Stability of PA-I).: _The algorithm PA-I is stable, i.e., for each \(n\) and \(k\) the Parareal iteration satisfies \(\parallel U_{n+1}^{k+1}\parallel\leqslant\parallel u^{0}\parallel+(n+1) \left(\max\limits_{0\leqslant j\leqslant n}\parallel U_{j}^{k}\parallel \right)\)._ Proof.: Using the fine propagator (3.3a) and coarse propagator (3.3b) in the Parareal scheme (2.2) we have \[U_{n+1}^{k+1} =P_{1}U_{n}^{k+1}+\left(P_{J_{1}}-P_{1}\right)U_{n}^{k},\] \[\xlongebox{\text{by taking norm}}{\text{by taking norm}}{ \text{by taking norm}}\parallel U_{n+1}^{k+1}\parallel\leqslant \parallel P_{1}\parallel\parallel U_{n}^{k+1}\parallel+\parallel P_{J_{ 1}}-P_{1}\parallel\parallel U_{n}^{k}\parallel\] \[\leqslant \parallel U_{n}^{k+1}\parallel+\parallel U_{n}^{k}\parallel, \tag{3.8}\] where in the second inequality (3.8) we use Lemma 3.1 & 3.2. 
By the repeated application of the recurrence in (3.8) for \(n\) and taking the sum we have \[\parallel U_{n+1}^{k+1}\parallel-\parallel U_{0}^{k+1}\parallel\leqslant\sum \limits_{j=0}^{n}\parallel U_{j}^{k}\parallel\leqslant(n+1)\left(\max\limits_{ 0\leqslant j\leqslant n}\parallel U_{j}^{k}\parallel\right).\] Then using \(U_{0}^{k+1}=u^{0}\) we get the stated result. **Theorem 3.7** (Convergence of PA-I).: _The algorithm PA-I is convergent, i.e., for the error \(E_{n+1}^{k+1}=U(T_{n+1})-U_{n+1}^{k+1}\) the algorithm PA-I satisfies the following error estimate \(\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{k+1}\parallel\leqslant \alpha^{k+1}\min\left\{\left(\frac{1-\beta^{N-1}}{1-\beta}\right)^{k+1}, \binom{N-1}{k+1}\right\}\max\limits_{1\leqslant j\leqslant N}\parallel E_{j} ^{0}\parallel\), where \(\alpha=\parallel P_{J_{1}}-P_{1}\parallel,\beta=\parallel P_{1}\parallel\)._ Proof.: From the Parareal scheme (2.2) we have \[U(T_{n+1})-U_{n+1}^{k+1} =U(T_{n+1})-\mathcal{G}(T_{n+1},T_{n},U_{n}^{k+1})-\mathcal{F}(T _{n+1},T_{n},U_{n}^{k})+\mathcal{G}(T_{n+1},T_{n},U_{n}^{k})\] \[=\mathcal{F}(T_{n+1},T_{n},U_{n})-\mathcal{G}(T_{n+1},T_{n},U_{n})\] \[-\left(\mathcal{F}(T_{n+1},T_{n},U_{n}^{k})-\mathcal{G}(T_{n+1},T _{n},U_{n}^{k})\right)\] \[+\mathcal{G}(T_{n+1},T_{n},U_{n})-\mathcal{G}(T_{n+1},T_{n},U_{n}^ {k+1}).\] Using the fine propagator (3.3a) and coarse propagator (3.3b) in (3.9) we have the recurrence relation for the error \(E_{n+1}^{k+1}\) as \[E_{n+1}^{k+1} =\left(P_{J_{1}}-P_{1}\right)E_{n}^{k}+P_{1}E_{n}^{k+1}, \tag{3.10}\] \[\xrightleftharpoons[]{\text{by taking norm}}\parallel E_{n+1}^{k+ 1}\parallel\leqslant \parallel P_{J_{1}}-P_{1}\parallel\parallel E_{n}^{k}\parallel+ \parallel P_{1}\parallel\parallel E_{n}^{k+1}\parallel.\] The recurrence relation in (3.10) can be written in the following matrix form \[\left[\begin{array}{c}\parallel E_{1}\parallel\\ \parallel E_{2}\parallel\\ \vdots\\ \parallel E_{N-1}\parallel\\ \parallel E_{N}\parallel\end{array}\right]^{k+1} \leqslant\left[\begin{array}{cccccccc}1&0&&&&\\ -\beta&1&0&&\\ &\ddots&\ddots&\ddots&\\ &&-\beta&1&0\\ &&&-\beta&1\end{array}\right]^{-1}\left[\begin{array}{cccccccc}0&&&&\\ \alpha&0&0&&\\ &\ddots&\ddots&\ddots&\\ &&\alpha&0&0\\ &&&\alpha&0\end{array}\right]\left[\begin{array}{c}\parallel E_{1}\parallel \\ \parallel E_{2}\parallel\\ \vdots\\ \parallel E_{N-1}\parallel\\ \parallel E_{N}\parallel\end{array}\right]^{k} \tag{3.11}\] \[\leqslant\alpha\left[\begin{array}{cccccccc}0&0&&&&\\ 1&0&0&&\\ \vdots&&\ddots&\ddots&\\ \beta^{N-3}&\beta^{N-4}&\cdots&0&0\\ \beta^{N-2}&\beta^{N-3}&\cdots&1&0\end{array}\right]\left[\begin{array}{c} \parallel E_{1}\parallel\\ \parallel E_{2}\parallel\\ \vdots\\ \parallel E_{N-1}\parallel\\ \parallel E_{N}\parallel\end{array}\right]^{k},\] where \(\alpha=\parallel P_{J_{1}}-P_{1}\parallel,\beta=\parallel P_{1}\parallel\), and on second inequality we use the Lemma 3.3. Now observe that the iteration matrix appearing in (3.11) is Nilpotent, so for \(k=N\) we have finite step convergence. Using Lemma 3.4 and Lemma 3.5 in (3.11) we get the stated error contraction relation. **Theorem 3.8** (Stability of PA-II).: _The algorithm PA-II is stable, i.e., for each \(n\) and \(k\) the Parareal iteration satisfies \(\parallel U_{n+1}^{k+1}\parallel\leqslant\parallel u^{0}\parallel+(n+1)\left( \max\limits_{0\leqslant j\leqslant n}\parallel U_{j}^{k}\parallel\right)\)._ Proof.: Emulating the proof of Theorem 3.6 we have the stated result. 
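The contraction factors appearing in these error estimates can be evaluated numerically. The sketch below (Python/NumPy, illustrative only) assembles \(D_{h}\), forms \(P_{1}\) and \(P_{J_{1}}\), runs the PA-I prediction-correction iteration (2.2), and reports \(\alpha=\parallel P_{J_{1}}-P_{1}\parallel\) and \(\beta=\parallel P_{1}\parallel\); the grid and time parameters follow the 1D test of Section 4.1, while the initial condition and the use of the spectral norm are our own choices.

```python
import numpy as np

# Parameters follow the 1D PA-I test of Section 4.1; the initial condition is our own choice.
Nx, eps, T, N, J = 64, 0.0725, 1.0, 20, 200
h, dT = 1.0 / (Nx - 1), T / N
dt = dT / J

# Discrete 1D Laplacian with homogeneous Dirichlet boundary conditions, eq. (3.2).
off = np.ones(Nx - 3)
Dh = (np.diag(-2.0 * np.ones(Nx - 2)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
I = np.eye(Nx - 2)

def propagator(step):
    """One step of the linearized scheme behind (3.3): (I - s*Dh + eps^2*s*Dh^2)^(-1)(I - s*Dh)."""
    return np.linalg.solve(I - step * Dh + eps**2 * step * (Dh @ Dh), I - step * Dh)

P1 = propagator(dT)                               # coarse propagator over one slice
PJ1 = np.linalg.matrix_power(propagator(dt), J)   # fine propagator: J substeps per slice

alpha = np.linalg.norm(PJ1 - P1, 2)               # both factors are < 1, cf. Lemmas 3.1-3.2
beta = np.linalg.norm(P1, 2)
print(f"alpha = {alpha:.3e}, beta = {beta:.3e}")

# PA-I prediction-correction iteration (2.2) against the serial fine solution.
x = np.linspace(0.0, 1.0, Nx)[1:-1]
u0 = 0.1 * np.sin(2.0 * np.pi * x)

U = [u0.copy()]
for n in range(N):                                # initial guess from a coarse sweep
    U.append(P1 @ U[-1])

fine = [u0.copy()]
for n in range(N):                                # reference: sequential fine propagation
    fine.append(PJ1 @ fine[-1])

for k in range(5):
    F = [PJ1 @ U[n] for n in range(N)]            # fine sweeps, independent over the N slices
    U_new = [u0.copy()]
    for n in range(N):
        U_new.append(P1 @ U_new[-1] + F[n] - P1 @ U[n])
    U = U_new
    err = max(np.linalg.norm(U[n] - fine[n]) for n in range(N + 1))
    print(f"iteration {k + 1}: max error against the fine solution = {err:.3e}")
```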
**Theorem 3.9** (Convergence of PA-II).: _The algorithm PA-II is convergent, i.e., for the error \(E_{n+1}^{k+1}=U(T_{n+1})-U_{n+1}^{k+1}\) the algorithm PA-II satisfies the following error estimate \(\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{k+1}\parallel\leqslant \alpha^{k+1}\min\left\{\left(\frac{1-\beta^{N-1}}{1-\beta}\right)^{k+1},\binom{ N-1}{k+1}\right\}\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{0}\parallel\), where \(\alpha=\parallel P_{J_{2}}-P_{2}\parallel,\beta=\parallel P_{2}\parallel\)._ Proof.: The proof follows from the Theorem 3.7. **Theorem 3.10** (Stability of PA-III).: _The algorithm PA-III is stable, i.e., for each \(n\) and \(k\) the Parareal iteration satisfies \(\parallel U_{n+1}^{k+1}\parallel\leqslant\parallel u^{0}\parallel+(n+1)\left( \max\limits_{0\leqslant j\leqslant n}\parallel U_{j}^{k}\parallel\right)\)._ Proof.: Emulating the proof of Theorem 3.6 we have the result. **Theorem 3.11** (Convergence of PA-III).: _The algorithm PA-III is convergent, i.e., for the error \(E_{n+1}^{k+1}=U(T_{n+1})-U_{n+1}^{k+1}\) the algorithm PA-III satisfies the following error estimation \(\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{k+1}\parallel\leqslant \alpha^{k+1}\min\left\{\left(\frac{1-\beta^{N-1}}{1-\beta}\right)^{k+1},\binom{ N-1}{k+1}\right\}\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{0}\parallel\), where \(\alpha=\parallel P_{J_{2}}-P_{1}\parallel,\beta=\parallel P_{1}\parallel\)._ Proof.: The proof follows from the Theorem 3.7. Next we prove a few relevant results before discussing the stability and convergence of nonlinear Parareal method. **Lemma 3.12** (Growth of Coarse Operator in NPA-I).: _The coarse operator in (3.3b) satisfies the growth condition \(\parallel\mathcal{G}(T_{n+1},T_{n},U)\parallel\leqslant\parallel U\parallel, \forall U\in\mathbb{R}^{N_{x}-2}\)._ Proof.: We have \(\parallel\mathcal{G}(T_{n+1},T_{n},U)\parallel\leqslant\parallel P_{1} \parallel\parallel U\parallel\). Now \(\parallel P_{1}\parallel<1\) follows from Lemma 3.1, hence the result. **Lemma 3.13** (Lipschitz Property of \(\mathcal{G}\)).: _The coarse operator in (3.3b) satisfies the Lipschitz condition \(\parallel\mathcal{G}(T_{n+1},T_{n},U_{1})-\mathcal{G}(T_{n+1},T_{n},U_{2}) \parallel\leqslant\parallel P_{1}\parallel\parallel U_{1}-U_{2}\parallel, \forall U_{1},U_{2}\in\mathbb{R}^{N_{x}-2}\)._ Proof.: The result is straight forward. **Lemma 3.14** (Local Truncation Error (LTE) Differences in NPA-I).: _Let \(\mathcal{F}(T_{n+1},T_{n},U)\) be the fine operator generated by the nonlinear scheme in (2.3) and \(\mathcal{G}(T_{n+1},T_{n},U)\) be the coarse operator in (3.3b). Then the following LTE differences hold_ \[\mathcal{F}(T_{n+1},T_{n},U)-\mathcal{G}(T_{n+1},T_{n},U)=c_{2}(U)\Delta T^{2} +c_{3}(U)\Delta T^{3}+\cdots,\] _where \(c_{j}(U)\) are continuously differentiable function for \(j=2,3,...\)_ Proof.: Let \(\mathcal{S}(T_{n+1},T_{n},U)\) be the exact solution of (1.1), then \[\mathcal{F}(T_{n+1},T_{n},U)-\mathcal{G}(T_{n+1},T_{n},U) =\mathcal{F}(T_{n+1},T_{n},U)-\mathcal{S}(T_{n+1},T_{n},U)\] \[+\mathcal{S}(T_{n+1},T_{n},U)-\mathcal{G}(T_{n+1},T_{n},U)\] \[=\tilde{c_{2}}(U)\Delta T^{2}+\tilde{c_{3}}(U)\Delta T^{3}+\cdots\] \[+\tilde{c_{2}}(U)\Delta T^{2}+\hat{c_{3}}(U)\Delta T^{3}+\cdots\] \[=c_{2}(U)\Delta T^{2}+c_{3}(U)\Delta T^{3}+\cdots.\] Hence the Lemma. 
**Theorem 3.15** (Stability of NPA-I).: _The algorithm NPA-I is stable, i.e., for each \(n\) and \(k\), \(\parallel U_{n+1}^{k+1}\parallel\leqslant\parallel u^{0}\parallel+C(n+1) \Delta T^{2}\left(\max\limits_{0\leqslant j\leqslant n}\parallel U_{j}^{k} \parallel\right)\), for a constant \(C\)._ Proof.: Taking the norm in the correction scheme (2.2) we have \[\begin{split}\parallel U_{n+1}^{k+1}\parallel\leqslant& \parallel\mathcal{G}(T_{n+1},T_{n},U_{n}^{k+1})\parallel+\parallel \mathcal{F}(T_{n+1},T_{n},U_{n}^{k})-\mathcal{G}(T_{n+1},T_{n},U_{n}^{k}) \parallel\\ \leqslant&\parallel U_{n}^{k+1}\parallel+C\Delta T ^{2}\parallel U_{n}^{k}\parallel,\end{split} \tag{3.12}\] where in the 2nd inequality we use Lemma (3.12) and (3.14). Taking the sum over \(n\) on the recurrence relation (3.12) we get \[\parallel U_{n+1}^{k+1}\parallel-\parallel U_{0}^{k+1}\parallel\leqslant C \Delta T^{2}\sum\limits_{j=0}^{n}\parallel U_{j}^{k}\parallel\leqslant C(n+1) \Delta T^{2}\left(\max\limits_{0\leqslant j\leqslant n}\parallel U_{j}^{k} \parallel\right).\] Then using \(U_{0}^{k+1}=u^{0}\) we get the stated result. **Theorem 3.16** (Convergence of NPA-I).: _If the propagator \(\mathcal{F}\) in (2.3) and \(\mathcal{G}\) in (2.4) satisfy LTE differences given in Lemma 3.14 and \(\mathcal{G}\) satisfy Lipschitz condition given in Lemma 3.13, then the algorithm NPA-I satisfies the following error estimate_ \[\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{k+1}\parallel\leqslant \left(C_{1}\Delta T^{2}\right)^{k+1}\min\left\{\left(\frac{1-\beta^{N-1}}{1- \beta}\right)^{k+1},\binom{N-1}{k+1}\right\}\max\limits_{1\leqslant j \leqslant N}\parallel E_{j}^{0}\parallel,\] _where \(\beta=\parallel P_{1}\parallel\), and \(C_{1}\) is a constant related to LTE._ Proof.: From the Parareal scheme (2.2) we have \[U(T_{n+1})-U_{n+1}^{k+1} =U(T_{n+1})-\mathcal{G}(T_{n+1},T_{n},U_{n}^{k+1})-\mathcal{F}(T_{n +1},T_{n},U_{n}^{k})+\mathcal{G}(T_{n+1},T_{n},U_{n}^{k}) \tag{3.13}\] \[=\mathcal{F}(T_{n+1},T_{n},U_{n})-\mathcal{G}(T_{n+1},T_{n},U_{n})\] \[-\big{(}\mathcal{F}(T_{n+1},T_{n},U_{n}^{k})-\mathcal{G}(T_{n+1}, T_{n},U_{n}^{k})\big{)}\] \[+\mathcal{G}(T_{n+1},T_{n},U_{n})-\mathcal{G}(T_{n+1},T_{n},U_{n} ^{k+1})\] \[=(c_{2}(U_{n})\Delta T^{2}+c_{3}(U_{n})\Delta T^{3}+\cdots)-(c_{2 }(U_{n}^{k})\Delta T^{2}+c_{3}(U_{n}^{k})\Delta T^{3}+\cdots)\] \[+\mathcal{G}(T_{n+1},T_{n},U_{n})-\mathcal{G}(T_{n+1},T_{n},U_{n} ^{k+1}),\] where in the third equality we use the Lemma 3.14. As \(c_{j},j\geq 2\) are continuously differentiable function we have \[\begin{split}&\parallel(c_{2}(U_{n})\Delta T^{2}+c_{3}(U_{n}) \Delta T^{3}+\cdots)-(c_{2}(U_{n}^{k})\Delta T^{2}+c_{3}(U_{n}^{k})\Delta T^{3 }+\cdots)\parallel\\ &\leq\Delta T^{2}\parallel c_{2}(U_{n})-c_{2}(U_{n}^{k}) \parallel+\Delta T^{3}\parallel c_{3}(U_{n})-c_{3}(U_{n}^{k})\parallel+\cdots \\ &\leq C_{2}\Delta T^{2}\parallel U_{n}-U_{n}^{k}\parallel+C_{3} \Delta T^{3}\parallel U_{n}-U_{n}^{k}\parallel+\cdots=C_{1}\Delta T^{2} \parallel U_{n}-U_{n}^{k}\parallel.\end{split} \tag{3.14}\] Taking norm in (3.13) and using (3.14) and the Lipschitz condition given in Lemma 3.13 we have the following recurrence relation for the error \(E_{n+1}^{k+1}=U(T_{n+1})-U_{n+1}^{k+1}\) as \[\parallel E_{n+1}^{k+1}\parallel\leqslant C_{1}\Delta T^{2}\parallel E_{n}^{ k}\parallel+\parallel P_{1}\parallel\parallel E_{n}^{k+1}\parallel. 
\tag{3.15}\] The recurrence relation in (3.15) can be written in the following matrix form \[e^{k+1}=C_{1}\Delta T^{2}\mathbb{T}(\beta)e^{k}, \tag{3.16}\] where \(e^{k}=\big{[}\parallel E_{1}^{k}\parallel,\parallel E_{2}^{k}\parallel,\cdots \parallel E_{N}^{k}\parallel\big{]}^{t}\), and \(\beta=\parallel P_{1}\parallel\). Clearly the iteration matrix in (3.16) is Nilpotent, hence we have finite step convergence. Now to get the stated result we use the Lemma 3.4 and infinity norm in (3.16). Next we discuss the stability and convergence behaviour of the Parareal algorithm NPA-II. In this case both fine and coarse propagators are nonlinear. To get the coarse operator in its explicit form we use Newton method to the nonlinear system. So the solution of the nonlinear coarse operator in (2.3) is the zeros of the following nonlinear equations \[H(Y)=Y+\epsilon^{2}\Delta tD_{h}^{2}Y-\Delta tD_{h}Y^{3}-\hat{Y}, \tag{3.17}\] where \(Y=U^{n+1}\in\mathbb{R}^{N_{x}-2}\) and \(\hat{Y}=(I-\Delta tD_{h})U^{n}\in\mathbb{R}^{N_{x}-2}\). After applying the Newton method on (3.17) with iteration index \(m\) and then simplifying we have \[Y_{m+1}=(I+\epsilon^{2}\Delta tD_{h}^{2}-3\Delta tD_{h}\operatorname{diag}(Y_{ m}^{2}))^{-1}(\hat{Y}-2\Delta tD_{h}\operatorname{diag}(Y_{m}^{2})Y_{m}). \tag{3.18}\] Numerical experiments suggest that upon convergence of the Newton Method the term \(\operatorname{diag}(Y_{m}^{2})\approx I\) away from interface region of width \(\epsilon\), similar behaviour also observed in [3]. If one uses initial solution as an initial guess for the Newton method then the coarse operator (2.3) takes the following form \[\mathcal{G}(T_{n+1},T_{n},U)=\big{[}(I+\Delta TD_{h}^{2}-3\Delta TD_{h})^{-1}(I -3\Delta TD_{h})\big{]}^{m}\,U, \tag{3.19}\] for some Newton iteration \(m\). Next we prove some auxiliary results. **Lemma 3.17** (Growth of Coarse Operator in NPA-II).: _The coarse operator in (3.19) satisfies the growth condition \(\parallel\mathcal{G}(T_{n+1},T_{n},U)\parallel\leqslant\parallel U\parallel, \forall U\in\mathbb{R}^{N_{x}-2}\)._ Proof.: We have \(\parallel\mathcal{G}(T_{n+1},T_{n},U)\parallel\leqslant\parallel P_{3} \parallel^{m}\parallel U\parallel\). Now \(\parallel P_{3}\parallel<1\) follows from Lemma 3.1, hence the result. **Lemma 3.18** (Lipschitz of \(\mathcal{G}\) in NPA-II).: _The coarse operator in (3.19) satisfies the Lipschitz condition \(\parallel\mathcal{G}(T_{n+1},T_{n},U_{1})-\mathcal{G}(T_{n+1},T_{n},U_{2}) \parallel\leqslant\parallel P_{3}\parallel\parallel U_{1}-U_{2}\parallel, \forall U_{1},U_{2}\in\mathbb{R}^{N_{x}-2}\)._ Proof.: The result is straight forward. **Lemma 3.19** (LTE Differences in NPA-II).: _Let \(\mathcal{F}(T_{n+1},T_{n},U)\) and \(\mathcal{G}(T_{n+1},T_{n},U)\) be the fine and coarse operator generated by the nonlinear scheme in (2.3), then the following LTE differences hold_ \[\mathcal{F}(T_{n+1},T_{n},U)-\mathcal{G}(T_{n+1},T_{n},U)=c_{2}(U)\Delta T^{2} +c_{3}(U)\Delta T^{3}+\cdots,\] _where \(c_{j}(U)\) are continuously differentiable function for \(j=2,3,...\)_ Proof.: The result follows from the Lemma 3.14. **Theorem 3.20** (Stability of NPA-II).: _The algorithm NPA-II is stable, i.e., for each \(n\) and \(k\), \(\parallel U_{n+1}^{k+1}\parallel\leqslant\parallel u^{0}\parallel+C(n+1) \Delta T^{2}\left(\max\limits_{0\leqslant j\leqslant n}\parallel U_{j}^{k} \parallel\right)\), for some constant \(C\)._ Proof.: The proof can be obtained by following the proof of Theorem 3.15. 
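The Newton solve behind the nonlinear propagators can also be written out explicitly. The following sketch solves \(H(Y)=0\) from (3.17) for a single implicit step in 1D; the step size, tolerance and initial data are example values, and the update is the standard Newton form, algebraically equivalent to (3.18).

```python
import numpy as np

# Example values; the fine time step and tolerance are illustrative, not taken from the paper.
Nx, eps, dt = 64, 0.0725, 1.0 / 200
h = 1.0 / (Nx - 1)

# Discrete 1D Laplacian with homogeneous Dirichlet boundary conditions, eq. (3.2).
off = np.ones(Nx - 3)
Dh = (np.diag(-2.0 * np.ones(Nx - 2)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
I = np.eye(Nx - 2)

def nonlinear_step(Un, tol=1e-10, max_iter=50):
    """One implicit step of the nonlinear scheme: find Y with
    H(Y) = Y + eps^2*dt*Dh^2 Y - dt*Dh Y^3 - Yhat = 0,  Yhat = (I - dt*Dh) Un, cf. (3.17)."""
    Yhat = (I - dt * Dh) @ Un
    Y = Un.copy()                                 # previous solution as initial guess
    for _ in range(max_iter):
        H = Y + eps**2 * dt * (Dh @ (Dh @ Y)) - dt * (Dh @ Y**3) - Yhat
        # Jacobian of H; the resulting Newton update is equivalent to (3.18).
        Jac = I + eps**2 * dt * (Dh @ Dh) - 3.0 * dt * Dh @ np.diag(Y**2)
        dY = np.linalg.solve(Jac, -H)
        Y = Y + dY
        if np.linalg.norm(dY) < tol:
            break
    return Y

x = np.linspace(0.0, 1.0, Nx)[1:-1]
u_next = nonlinear_step(0.1 * np.sin(2.0 * np.pi * x))
```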
**Theorem 3.21** (Convergence of NPA-II).: _If the propagator \(\mathcal{F}\) and \(\mathcal{G}\) in (2.3) satisfy LTE differences given in Lemma 3.19 and \(\mathcal{G}\) satisfies the Lipschitz condition given in Lemma 3.18, then the algorithm NPA-II satisfies the following error estimate_ \[\max\limits_{1\leqslant j\leqslant N}\parallel E_{j}^{k+1}\parallel\leqslant \left(C_{2}\Delta T^{2}\right)^{k+1}\min\left\{\left(\frac{1-\beta^{N-1}}{1- \beta}\right)^{k+1},\binom{N-1}{k+1}\right\}\max\limits_{1\leqslant j \leqslant N}\parallel E_{j}^{0}\parallel,\] _where \(\beta=\parallel P_{3}\parallel\), and \(C_{2}\) is a constant related to LTE._ Proof.: The proof is similar to the proof of Theorem 3.16. **Remark 3.22**.: 1. _One can obtain the convergence estimate of NPA-II at the semi-discrete level by estimating coarse operator at the semi-discrete level._ 2. _Explicit expression of the linear and nonlinear Paraeal algorithms in 2D or 3D can be achieved by extending the 1D case naturally._ 3. _Convergence proof of Parareal method in higher dimension follows from the 1D case by deriving the discrete Laplacian_ \(D_{h}\) _for regular or irregular computational domain._ 4. _The term_ \(\binom{N-1}{k+1}=\frac{1}{(k+1)!}\prod_{j=0}^{k-1}(N-j)\) _which appears in all of the convergence results says that methods converges at most_ \(N+1\) _iteration. So we always have finite step convergence to the fine solution._ ## 4. Numerical Illustration _In this section we present the numerical experiments for the linear and non-linear Parareal algorithms, which are analyzed in this article. The Parareal iterations start with an initial guess given by coarse operator and stop as the error measured in \(\parallel U-U^{k}\parallel_{L^{\infty}(0,T;L^{2}(\Omega))}\) reaches a tolerance of \(10^{-6}\), where \(U\) is the discrete fine solution and \(U^{k}\) is the discrete Parareal solution at \(k\)-th iteration. We consider the spatial domain \(\Omega=(0,1)\) in 1D and \(\Omega=(0,1)^{2}\) in 2D._ ### Numerical Experiments of PA-I _We first discuss the numerical experiments in 1D. We run the PA-I algorithm with fixed parameters \(T=1,h=1/64,N=20,J=200\) and two different \(\epsilon=0.0725,0.725\). The comparison of theoretical error estimate from Theorem 3.7 and numerical error reduction can be seen in Figure 1. We observe that for larger \(\epsilon\) the theoretical bound given in Theorem 3.7 is much sharper than the bound corresponding to smaller \(\epsilon\). The reason being is that even though \(\alpha<1\) & \(\beta<1\) in Theorem 3.7 for every choice of \(\epsilon\), the values of \(\alpha,\beta\) increases to one as \(\epsilon\) decreases. Now we study the convergence behaviour of_ _PA-I on the choice of \(\Delta T\). In Figure 2 we plot the error curves for different \(\Delta T\) with fixed parameters \(\epsilon=0.0725,h=1/64,J=200\) on the left panel and we can see that the method works well for large \(\Delta T\). On the right in Figure 2 we plot the error curves for different mesh sizes with \(T=1,\epsilon=0.0725,h=1/64,\Delta t=1/200\). We observe that convergence is independent of mesh parameters. We plot the error curves on the left panel in Figure 3 for short as well as long time window with Figure 1. PA-I: Comparison of theoretical and numerical error. On the left \(\epsilon=0.0725\), and on the right \(\epsilon=0.725\). Figure 2. PA-I: On the left convergence for different \(\Delta T\); On the right convergence for different \(h,\Delta t\). \(\epsilon=0.0725,J=200\) and \(h=1/64\). 
The method converges in four iterations to the fine solution of temporal accuracy \(O(10^{-4})\) for different \(T\). By ignoring the computational cost of the coarse operator, we can see that the Parareal method is 40 times faster than serial method on a single processor for \(T=8\). It is evident from the left plot of Figure 3 that one can achieve more speed up by including more processors (\(N\)). To see the dependency on the parameter \(\epsilon\), we plot the error curves on the right panel in Figure 3 for different \(\epsilon\) by taking \(T=1,N=50,J=200\). We observe that the method is almost immune to the choice of \(\epsilon\)._ _To perform the numerical experiments in 2D we take the discretization parameter \(h=1/32\) on both direction. We plot the comparison of error contraction on the left panel in Figure 4 for \(T=1,N=20,J=200\) and \(\epsilon=0.0725\). We plot the error curves on the right in Figure 4 for short as well as long time window with \(\epsilon=0.0725,J=200\). The method converges in four iterations to the fine solution of temporal accuracy \(O(10^{-4})\) for different \(T\). We observe similar convergence behaviour of PA-I in 2D as in 1D with respect to different situation and so we skip those experiments here._ ### Numerical Experiments of PA-II _1D case: The comparison of numerical error and theoretical estimate from Theorem 3.9 can be seen from the left plot in Figure 5 for \(T=1,h=1/64,N=20,J=200\) and \(\epsilon=0.0725\). On the right we plot the error curves for more refined solution for \(T=1,\epsilon=0.0725,h=1/64,\Delta t=1/200\). We can see that the convergence is independent of mesh parameters. We plot the error curves on the left in Figure 6 for short as well as long time window with \(\epsilon=0.0725,J=200\) and \(h=1/64\). We can see that one get the speed up compared Figure 4. PA-I: On the left comparison of theoretical and numerical error, and on the right convergence for different \(T,N\). Figure 3. PA-I: On the left convergence for different \(T,N\), and on the right convergence for different \(\epsilon\). to serial solve. To see the dependency on the parameter \(\epsilon\), we plot the error curves on the right in Figure 6 for different \(\epsilon\) by taking \(T=1,N=50,J=200\). We can see that the PA-II is sensitive to the choice of \(\epsilon\), namely for the very small \(\epsilon\)._ _2D case: We take the same discretization parameter \(h=1/32\) on both direction. We plot the comparison of error contraction on the left panel in Figure 7 for \(T=1,N=20,J=200\) and \(\epsilon=0.0825\). We plot the error curves on the right in Figure 7 for short as well as long time window with \(\epsilon=0.0725,J=200\). We observe similar convergence behaviour of PA-II in 2D as in 1D with respect to different situation._ _Figure 8 for \(T=1,h=1/64,N=20,J=200\) and \(\epsilon=0.0725\). On the right panel we plot the error curves for more refined solution for \(T=1,\epsilon=0.0725,h=1/64,\Delta t=1/200\). We can see that convergence is independent of mesh parameters. We plot the error curves on the left in Figure 9 for short as well as long time window with \(\epsilon=0.0725,J=200\) and \(h=1/64\). The method converges in four iteration to the fine resolution of temporal accuracy \(O(10^{-4})\) for different \(T\) and one get the speed up compared to sequential solve. To see the dependency on the parameter \(\epsilon\), we plot the error curves on the right in Figure 9 for different \(\epsilon\) by taking \(T=1,N=50,J=200\). 
We can see that the convergence of PA-III is independent of the choice of \(\epsilon\). As PA-II and PA-III converge to the fine solution given by (2.5), we can compare them. Since PA-II is sensitive towards small \(\epsilon\), therefore PA-III is the best choice to approximate fine solution given by (2.5)._ _2D case: We take the discretization parameter \(h=1/32\) on both direction and plot the comparison of error contraction on the left in Figure 10 for \(T=1,N=20,J=200\) and \(\epsilon=0.0625\). We plot the error curves on the right for short as well as long time window with \(\epsilon=0.0725,J=200\). We observe similar convergence behaviour of PA-III in 2D as in 1D for different situation._ ### Numerical Experiments of NPA-I _1D case: The nonlinear fine propagator is obtained using the Newton method with a tolerance \(1e{-}10\). To implement the theoretical bound prescribed in Theorem 3.16 we have to estimate the quantity \(C_{1}\) numerically, which depends on choice of \(\epsilon,J,\Delta T,u^{0}\). The comparison of numerical error and theoretical estimates can be seen on the left plot of Figure Figure 8. PA-III: On the left comparison of theoretical and numerical error, and on the right convergence for different mesh sizes. Figure 9. PA-III: On the left convergence for different \(T,N\), and on the right \(\epsilon\) dependency on the convergence. 11 for \(\Delta T=1,h=1/64,N=20,J=200,\epsilon=0.0725\) and \(C_{1}=0.1181\). On the right panel in Figure 11 we plot the error curves for more refined solution for \(T=1,\epsilon=0.0725,h=1/64,\Delta t=1/200\). We can see that convergence is independent of mesh parameters. We plot the error curves on the left in Figure 12 for short as well as long time window with \(\epsilon=0.0725,J=200\) and \(h=1/64\). To see the dependency on the parameter \(\epsilon\), we plot the error curves on the right panel in Figure 12 for different \(\epsilon\) by taking \(T=1,N=50,J=200\). We can see that the NPA-I is independent of the choice of \(\epsilon\). On the left panel in Figure 13 we plot the error curves with respect to different number of time slices for \(T=50,\epsilon=0.0725,h=1/64,J=150\). We can see that convergence is independent of mesh parameters. Figure 11. NPA-I: On the left comparison of theoretical and numerical error, and on the right convergence of refined solution. Figure 10. PA-III: On the left comparison of theoretical and numerical error, and on the right convergence for different \(T,N\). of time decomposition. One can observe that a speed up of \(80\) times compared to serial solve for \(N=400\)._ _2D case: We take the discretization parameter \(h=1/32\) on both direction. As we observe similar convergence behaviour in 2D as in 1D, we only plot the error curves on the right in Figure 13 for short as well as long time window with \(\epsilon=0.0725,J=200\). We omit the other experiments in 2D as we observe similar convergence behaviour as in 1D._ ### Numerical Experiments of NPA-II _1D case: In this case we have nonlinear solvers for both fine and coarse propagator by the Newton method with a tolerance \(1e{-}10\). To implement the theoretical bound prescribed in Theorem 3.21 we have to estimate the quantity \(C_{1}\) numerically, which depends on the choice of \(\epsilon,J,\Delta T,u^{0}\). The comparison of numerical error and theoretical estimate can be seen on the left plot of Figure 14 for \(\Delta T=1,h=1/64,N=20,J=200,\epsilon=0.0725\) and \(C_{1}=0.1498\). 
On the right we plot the error curves for more refined solution for \(T=1,\epsilon=0.0725,h=1/64,\Delta t=1/200\). We can see that convergence is independent of mesh parameters. We plot the error curves on the left in Figure 15 for short as well as long time window with \(\epsilon=0.0725,J=150\) and \(h=1/64\). To see the dependency on the parameter \(\epsilon\), we plot the error curves on the right in Figure 15 for different \(\epsilon\) by taking \(T=1,N=50,J=200\). We can see that the NPA-II is independent of the choice of \(\epsilon\). At this point we can compare NPA-I and NPA-II as both have the fine solution given by (2.3). Between these two, NPA-II is expansive because of the nonlinear coarse solver and we take almost same number Figure 14. NPA-II: On the left comparison of theoretical and numerical error, and on the right convergence of refined solution. Figure 13. NPA-I: On the left convergence for different \(N\), and on the right convergence for different \(T,N\) in 2D. iteration to converge as in the case of NPA-I. Therefore it is better to use NPA-I while computing the nonlinear approximation of the CH equation. We skip the numerical experiments in 2D as we observe similar behaviour as in 1D._ ### Numerical Experiments of Neumann-Neumann method as fine solver in PA-I _In all of the above experiments we use the scheme (_2.3_), (_2.4_) or (_2.5_) as fine solver. In practise one try to solve the CH equation in much larger domain with very fine mesh, that results in a very large scale algebraic system (as the spatial dimension increases). In this context one introduce parallelism in space by using Domain Decomposition (DD) based techniques, here we use a non-overlapping DD method, namely Neumann-Neumann (NN) method. The NN method for the CH equation in space is considered in [17] for two subdomain decomposition and in [18] for multiple subdomain decomposition, where they use (2.3) and (2.4) to build linear and nonlinear NN solver. Here we use linear NN method as fine solver in the PA-I algorithm. In every subinterval \([T_{n-1},T_{n}]\) we compute the solution as the following:_ _Let \(\Omega\subset\mathbb{R}\) is decomposed into non-overlapping subdomains \(\{\Omega_{i},1\leq i\leq N_{0}\}\). So to solve (2.4) at each time level the NN method starts with initial guesses \(g_{i}^{[0]},h_{i}^{[0]}\) along the interfaces \(\Gamma_{i}=\partial\Omega_{i}\cap\partial\Omega_{i+1}\) for \(i=1,\dots,N_{0}-1\), and then it's a two step execution: at each iteration \(\nu\), one first solves Dirichlet sub-problems on each \(\Omega_{i}\) in parallel, and then compute the jump in Neumann traces on the interfaces and one solves the Neumann subproblems on each \(\Omega_{i}\) in parallel,_ \[\left\{\begin{aligned} &\begin{bmatrix}I&-\delta_{t}\Delta\\ \epsilon^{2}\Delta-c^{2}&I\end{bmatrix}&\begin{bmatrix}u_{i}^{[\nu]}\\ v_{i}^{[\nu]}\end{bmatrix}&=\begin{bmatrix}f_{u}\\ f_{v}\end{bmatrix},\quad\text{in}\;\;\Omega_{i},\\ &\begin{bmatrix}u_{i}^{[\nu]}\\ v_{i}^{[\nu]}\end{bmatrix}&=0,\quad\text{on}\;\;\partial\Omega_{i}\cap \partial\Omega,\\ &\begin{bmatrix}u_{i}^{[\nu]}\\ v_{i}^{[\nu]}\end{bmatrix}&=\begin{bmatrix}g_{i-1}^{[\nu-1]}\\ h_{i-1}^{[\nu-1]}\end{bmatrix}&\text{on}\;\;\Gamma_{i-1},\\ &\begin{bmatrix}u_{i}^{[\nu]}\\ v_{i}^{[\nu]}\end{bmatrix}&=\begin{bmatrix}g_{i}^{[\nu-1]}\\ h_{i}^{[\nu-1]}\end{bmatrix}&\text{on}\;\;\Gamma_{i},\\ \end{aligned}\right. \tag{4.1}\] Figure 15. NPA-II: On the left convergence for different \(T,N\), and on the right \(\epsilon\) dependency on convergence. 
\[\left\{\begin{bmatrix}I&-\delta_{t}\Delta\\ \epsilon^{2}\Delta-c^{2}&I\end{bmatrix}\begin{bmatrix}\phi_{i}^{[\nu]}\\ \psi_{i}^{[\nu]}\end{bmatrix}=0,\quad\text{in}\;\;\Omega_{i},\\ \begin{bmatrix}\phi_{i}^{[\nu]}\\ \psi_{i}^{[\nu]}\end{bmatrix}=0,\quad\text{on}\;\;\partial\Omega_{i}\cap \partial\Omega,\\ \frac{\partial}{\partial x}\begin{bmatrix}\phi_{i}^{[\nu]}\\ \psi_{i}^{[\nu]}\end{bmatrix}=\frac{\partial}{\partial x}\begin{bmatrix}u_{i-1 }^{[\nu]}-u_{i}^{[\nu]}\\ v_{i-1}^{[\nu]}-v_{i}^{[\nu]}\end{bmatrix},\quad\text{on}\;\;\Gamma_{i-1}\\ \frac{\partial}{\partial x}\begin{bmatrix}\phi_{i}^{[\nu]}\\ \psi_{i}^{[\nu]}\end{bmatrix}=\frac{\partial}{\partial x}\begin{bmatrix}u_{i- 1}^{[\nu]}-u_{i}^{[\nu]}\\ v_{i-1}^{[\nu]}-v_{i}^{[\nu]}\end{bmatrix},\quad\text{on}\;\;\Gamma_{i-1},\end{bmatrix} \tag{4.2}\] \[\frac{\partial}{\partial x}\begin{bmatrix}\phi_{i}^{[\nu]}\\ \psi_{i}^{[\nu]}\end{bmatrix}=\frac{\partial}{\partial x}\begin{bmatrix}u_{i- 1}^{[\nu]}-u_{i}^{[\nu]}\\ v_{i}^{[\nu]}-v_{i+1}^{[\nu]}\end{bmatrix},\quad\text{on}\;\;\Gamma_{i},\] except for the first and last subdomains, where at the physical boundaries the Dirichlet condition in the Dirichlet step and Neumann condition in the Neumann step are replaced by homogeneous Dirichlet condition. Then the interface traces are updated by \[\begin{bmatrix}g_{i}^{[\nu]}\\ h_{i}^{[\nu]}\end{bmatrix}=\begin{bmatrix}g_{i}^{[\nu-1]}\\ h_{i}^{[\nu-1]}\end{bmatrix}-\theta\begin{bmatrix}\phi_{i}^{[\nu]}-\phi_{i+1}^{ [\nu]}\\ \psi_{i}^{[\nu]}-\psi_{i+1}^{[\nu]}\end{bmatrix}_{\Gamma_{i}},\] where \(\theta\in(0,1)\) is a relaxation parameter. In NN method (4.1)-(4.2), \(\delta_{t}=\Delta t\) is the fine time step, \(f_{u}=u^{n},f_{v}=-u^{n},c=(u^{n})^{2}\), where \(u^{n}\) is solution of the CH equation at \(n\)-th time step. In a similar fashion one can formulate NN method for the scheme given in (2.5) and use in PA-III as a fine solver. There is also a nonlinear version of (4.1) in [18], which can be used as a fine solver in the nonlinear Parareal case. To see the numerical experiments in 1D, we take \(N_{0}=8\) (equal subdomain), \(\theta=1/4\). Note that the parareal solution converges towards fine solution given by the NN method. For convergence of NN method at each time level we set the tolerance as \(\parallel g_{i}^{[\nu+1]}-g_{i}^{[\nu]}\parallel_{L^{2}}\leqslant 10^{-10}\) and \(\parallel h_{i}^{[\nu+1]}-h_{i}^{[\nu]}\parallel_{L^{2}}\leqslant 10^{-10}\). The convergence of NN method described in [18]; here we study the convergence of Parareal method PA-I to the NN solution given by (4.1). We plot the error curves on the left in Figure 16 for short as well as long time window with \(\epsilon=0.0725,J=200\) and \(h=1/128\). The left plot in Figure 16 is almost identical to the left plot given in Figure 3. So we have similar convergence behaviour for NN method as fine solver with an advantage of more parallelism in the system. To see the dependency on the parameter \(\Delta T\), we plot the error curves on the right in Figure 16 for different \(\Delta T\) by taking \(T=1,J=150,\epsilon=0.0725\). We can observe that convergence is robust. Figure 16. On the left: different \(T,N\), and on the right: different \(\Delta T\). ## 5. Conclusions _We propose and studied the linear and nonlinear Parareal algorithms for the CH equation. We showed convergence of all the proposed Parareal algorithms. 
Numerical experiments show that the proposed methods are very robust and that one obtains a reasonable speed-up by introducing more processors._ ## Acknowledgement _The authors would like to thank the CSIR (File No: 09/1059(0019)/2018-EMR-I) and DST-SERB (File No: SRG/2019/002164) for the research grants and IIT Bhubaneswar for providing an excellent research environment._
2305.03621
Influence of the Lower Atmosphere on Wave Heating and Evaporation in Solar Coronal Loops
We model a coronal loop as a three-dimensional magnetic cylinder in a realistic solar atmosphere that extends from the chromosphere to the corona. Kink oscillations, believed ubiquitous in the solar corona, are launched in the loop. Heating is expected due to the dissipation of wave energy at small structures that develop from the Kelvin-Helmholtz instability induced by kink oscillations. Increases in temperature and internal energy can be observed in the coronal counterpart of the driven loop. With the presence of thermal conduction, chromospheric evaporation can also be seen. Although the volume averaged temperature and density changes seem slight ($\sim4\%$ relative to a non-driven loop), the enthalpy flow from the lower atmosphere redistributes the density and temperature in the vertical direction, thus enhancing the dissipation of wave energy in the corona. The efficient heating in the coronal counterpart of the loop can complement the thermal conductive losses shown in the current model and thus maintain the internal energy in the corona.
Mingzhe Guo, Timothy Duckenfield, Tom Van Doorsselaere, Konstantinos Karampelas, Gabriel Pelouze, Yuhang Gao
2023-05-05T15:28:46Z
http://arxiv.org/abs/2305.03621v1
# Influence of the Lower Atmosphere on Wave Heating and Evaporation in Solar Coronal Loops ###### Abstract We model a coronal loop as a three-dimensional magnetic cylinder in a realistic solar atmosphere that extends from the chromosphere to the corona. Kink oscillations, believed ubiquitous in the solar corona, are launched in the loop. Heating is expected due to the dissipation of wave energy at small structures that develop from the Kelvin-Helmholtz instability induced by kink oscillations. Increases in temperature and internal energy can be observed in the coronal counterpart of the driven loop. With the presence of thermal conduction, chromospheric evaporation can also be seen. Although the volume averaged temperature and density changes seem slight (\(\sim 4\%\) relative to a non-driven loop), the enthalpy flow from the lower atmosphere redistributes the density and temperature in the vertical direction, thus enhancing the dissipation of wave energy in the corona. The efficient heating in the coronal counterpart of the loop can complement the thermal conductive losses shown in the current model and thus maintain the internal energy in the corona. ## 1 Introduction More and more observations have confirmed the omnipresence of kink waves in coronal loops (e.g., Nakariakov & Kolotkov, 2020; Nakariakov et al., 2021, for recent reviews) and thus bring kink waves into the forefront of coronal heating consideration (see De Moortel & Nakariakov, 2012; Arregui, 2015; Van Doorsselaere et al., 2020, for recent reviews). In the heating process, effective dissipation of wave energy can be achieved when the Lundquist number and Reynolds number, which are typically very large (\(\sim 10^{12}\)) in the solar corona, become very small (Soler & Terradas, 2015). Thus efficient heating relies on the occurrence of small spatial scales. Generally, kink waves are believed to suffer resonant absorption (see Goossens et al., 2011, for a review) that transfers wave energy from collective kink modes to local Alfven waves with the occurrence of transverse inhomogeneity around the loop boundary (e.g., Guo et al., 2020). The converted localized azimuthal Alfven waves undergo phase mixing (e.g., Heyvaerts & Priest, 1983). The subsequent phase mixed Alfven waves can enhance the Kelvin-Helmholtz instability (KHI, Browning & Priest, 1984), which is induced by velocity shear between loops and the background corona, thereby facilitating the wave dissipation by further generating smaller scales. This scenario is first demonstrated by Ofman et al. (1994) in coronal loop models. Recently, numerical progress confirmed that transverse waves in coronal loops can induce the KHI, forming the TWIKH rolls (Transverse Wave Induced Kelvin-Helmholtz rolls, see e.g., Terradas et al., 2008; Antolin et al., 2015; Karampelas et al., 2017; Howson et al., 2017; Guo et al., 2019, 2019; Shi et al., 2021). Non-linear damping of kink waves associated with the KHI has been investigated numerically by e.g., Magyar & Van Doorsselaere (2016) and analytically by e.g., Van Doorsselaere et al. (2021). During such a process, the energy of collective kink waves can dissipate at such turbulent small structures, and quantitative evaluations of heating effects have been made by e.g., Karampelas et al. (2017, 2019); Guo et al. (2019, 2019). 
Recent numerical progress shows that the heating effects induced by kink waves can balance the radiative loss in the solar corona (Shi et al., 2021; De Moortel and Howson, 2022), which shed new light on understanding coronal heating from the perspective of waves. The influence of chromospheric evaporation on the heating of coronal loops was first considered by Ofman et al. (1998) based on resonant kink waves. However, the aforementioned investigations associated with TWIKH rolls mainly focus on the coronal counterpart of loops. In reality, the lower atmosphere can act as a mass and energy reservoir for physical processes in the corona. In the process associated with impulsive heating, nanoflares (Parker, 1988) for instance, enthalpy flux plays an important role in the thermal evolution of coronal loops (e.g., Bradshaw and Cargill, 2010; Cargill et al., 2012). Regarding the AC models, Ohmic heating has been confirmed by both observations (Van Doorsselaere et al., 2007) and numerical simulations (Karampelas et al., 2019; Guo et al., 2019) in coronal loops. In such a scenario, wave heating is expected to happen near loop footpoints, where the current density that is essential for resistive dissipation has maximum values at loop ends (Van Doorsselaere et al., 2007). In this way, the cooler chromosphere can be heated straightforwardly. Given this, a natural question arises: Can the heating near loop footpoints trigger chromosphere evaporation? The scenario of evaporation is usually discussed when chromospheric materials are heated during such energy release processes as solar flares (e.g., Fisher et al., 1985; Milligan et al., 2006; Tian and Chen, 2018). Although the energy dissipation of waves is not so fierce as flares, gentle chromosphere evaporations are still expected (Fisher et al., 1985). The influence of such evaporation flows on the efficiency of wave heating in the corona remains to be evaluated in recent models. Recent progress by Van Damme et al. (2020) investigated the influence of chromosphere evaporation on the phase mixing of Alfven waves in a two-dimensional coronal loop model. The density shell around the loop boundary that is essential for energy dissipation of Alfven waves seems not significantly changed in the modest heating process. However, wave energy dissipation happens over almost the whole deformed cross-section of loops when it comes to transverse oscillations (e.g., Karampelas and Van Doorsselaere, 2018; Guo et al., 2019, 2019). It remains to be seen how chromosphere evaporation influences wave energy dissipation in coronal loops deformed by kink oscillations. In this paper, we aim to investigate the influence of the lower atmosphere on coronal heating effects. The paper is organized as follows. Section 2 describes the model, including the equilibrium and numerical setup. In Section 3, we present the numerical results. Section 4 summarizes our findings, ending with some further discussion. ## 2 Numerical model We use a similar magnetic flux tube employed in Pelouze et al. (2023, P23), with the key difference that we simulate half of a closed loop with both footpoints anchored in the chromosphere. The model is initiated from a 2D hydrostatic equilibrium in cylindrical coordinates \((r,z)\). A loop-oriented gravity \(g(z)=-g_{\odot}\sin\left[\pi(L-z)/(2L)\right]\) is considered, with \(L=100\)Mm being the half length of the loop. The initial magnetic field is \(\mathbf{B}=B_{0}\hat{z}\), with \(B_{0}\) set to 42G. 
The temperature profile, derived from Aschwanden and Schrijver (2002), is given by \[T(r,z)=\begin{cases}T_{\rm ch},&z\leq z_{\rm ch},\\ T_{\rm ch}+\left[T_{\rm co}(r)-T_{\rm ch}\right]\left[1-\left(\frac{L-z}{L-z_{ \rm ch}}\right)^{2}\right]^{0.3},&z>z_{\rm ch},\end{cases} \tag{1}\] where \(T_{\rm ch}=20000\)K is the temperature of the chromosphere, and \(z_{\rm ch}=4\)Mm represents its thickness. The transverse temperature profile is defined as \[T_{\rm co}(r)=T_{\rm e}+\frac{1}{2}\left(T_{\rm i}-T_{\rm e}\right)\left\{1- \tanh\left[\left(\frac{r}{R}-1\right)b\right]\right\}, \tag{2}\] where \(T_{\rm i}=1.2\)MK (\(T_{\rm e}=3.6\)MK) is the temperature inside (outside) the tube. \(R=1\)Mm gives the radius of the loop, and \(b=10\) defines the thickness of the inhomogeneous layer, corresponding to \(l/R=0.6\) at the loop bottom. The transverse distribution of the density in Figure 1a clearly shows the variation of this layer thickness along the height. The initial state is not in magnetohydrostatic balance, thus a relaxation is needed. In this stage, we redefine the velocity rewrite layer, which was proposed in P23 to absorb the upward flows in the loop. With the presence of the velocity rewrite layer, the large flow induced by the initial state, which is not in magnetohydrostatic balance, is weakened. Thus the process of reaching an equilibrium becomes more gradual and gentle, ensuring the stability of the simulation. The velocity is modified as \(v^{\prime}(r,z)=\alpha(t)v_{\mathrm{i}}(r,z)\). The parameter \(\alpha(t)\) is defined as \[\alpha(t)=\begin{cases}0.9995+0.0005\dfrac{t}{t_{\mathrm{c}}},0<t\leq t_{ \mathrm{c}},\\ 1,\hskip 56.905512ptt>t_{\mathrm{c}},\end{cases} \tag{3}\] where \(t_{\mathrm{c}}=37.3\)ks represents the critical time. The vertical velocity in the loop is then suppressed to be less than \(1.3\)kms\({}^{-1}\) and significant quantity changes (e.g., a drop in magnetic field strength in P231) are avoided after the relaxation. The relaxed state is then straightforwardly converted to 3D by rotating the 2D axisymmetric results. To avoid any unphysical oscillations due to boundary changes from the 2D cylindrical coordinate to the 3D Cartesian domain, we allow the 3D system to relax for another 2.8ks. The initial temperature and density distributions of the loop axis of the 2D and 3D simulations are shown in Figure 2. Footnote 1: In the current model, the magnetic field undergoes a slight change from the initial value of 42G to around 40G after relaxation. Note that this simulation is gravitationally stratified, leading to the transverse structuring of the loop changing with height. As shown in Figure 1b, the inhomogeneous layer thickness is smaller at the loop apex (\(z=100\)Mm) than at lower heights. One may expect this change in layer thickness to have an impact on the formation of KHI (e.g., Magyar & Van Doorsselaere, 2016). The chromosphere and transition region are included in the current model. The transition region has been artificially broadened to ensure its resolution, following the scheme developed by Linker et al. (2001); Lionello et al. (2009); Mikic et al. (2013). As described in P23, a critical temperature \(T_{\mathrm{c}}=2.5\times 10^{5}\mathrm{K}\) is defined, below (above) which the parallel thermal conductivity is \(\kappa_{\parallel}=\kappa_{0}T_{\mathrm{c}}^{5/2}\) (\(\kappa_{\parallel}=\kappa_{0}T^{5/2}\)), with \(\kappa_{0}\) being \(5.6\times 10^{-7}\mathrm{erg\ cm^{-1}\ s^{-1}\ K^{-7/2}}\). 
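For concreteness, the initial profiles and the relaxation ramp defined above can be written as a short Python sketch; the parameter values are those quoted in the text, while the function names and the use of NumPy are our own illustration and not part of the actual simulation setup.

```python
import numpy as np

# Parameters quoted above (lengths in Mm, temperatures in K)
L, z_ch, R, b = 100.0, 4.0, 1.0, 10.0
T_ch, T_i, T_e = 2.0e4, 1.2e6, 3.6e6
kappa0, T_crit = 5.6e-7, 2.5e5        # erg cm^-1 s^-1 K^-7/2 and cut-off temperature

def T_co(r):
    """Transverse temperature profile, Eq. (2)."""
    return T_e + 0.5 * (T_i - T_e) * (1.0 - np.tanh((r / R - 1.0) * b))

def T_init(r, z):
    """Initial temperature T(r, z), Eq. (1)."""
    frac = np.clip(1.0 - ((L - z) / (L - z_ch)) ** 2, 0.0, None)
    coronal = T_ch + (T_co(r) - T_ch) * frac ** 0.3
    return np.where(np.asarray(z) <= z_ch, T_ch, coronal)

def alpha(t, t_c=37.3e3):
    """Velocity-rewrite ramp of Eq. (3); t in seconds."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= t_c, 0.9995 + 0.0005 * t / t_c, 1.0)

def kappa_par(T):
    """Broadened parallel conductivity: constant below T_crit, Spitzer-like above."""
    T = np.asarray(T, dtype=float)
    return kappa0 * np.maximum(T, T_crit) ** 2.5

print(T_init(0.0, 100.0))   # 1.2 MK on the axis at the apex
print(T_init(3.0, 100.0))   # ~3.6 MK well outside the tube
```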
This modified thermal conductivity changes the temperature length scale in the vertical direction, thus leading to a significant broadening of the transition region after relaxation, see also Van Damme et al. (2020) and P23. Therefore, a coarser resolution in the vertical direction is allowed in the current simulation. Figure 1: Transverse distribution of the density along the \(r\)-direction in the 2D simulation (left) and \(x\)-direction in the 3D simulation (right). The left panel represents the initial density in the 2D simulation. The right panel shows the initial state of the 3D simulation after relaxation. Different colors represent different heights. Figure 2: Spatial distribution of temperature and density along the \(z\)-direction. The left panel displays the initial quantities before relaxation in the 2D equilibrium. The right panel shows the initial state of the 3D simulation after relaxation. Solid and dashed lines represent the distribution at the loop axis (\(x=0\), \(y=0\)) and the external loop region (\(x=8\)Mm, \(y=0\)), respectively. To solve the 3D ideal MHD equations, we employ the PLUTO code (Mignone et al., 2007). A piecewise linear scheme is used for spatial reconstruction. Numerical fluxes are computed by the Roe Riemann solver, and a second-order characteristic tracing method is used for time marching. The hyperbolic divergence cleaning method is adopted to ensure the divergence-free nature of the magnetic field. Anisotropic thermal conduction is included in our simulations. In the 2D run, the computational domain is \([0,6]\mathrm{Mm}\times[0,100]\mathrm{Mm}\). We consider 128 uniformly spaced cells in the \(r\)-direction and a uniform grid of 1024 cell points in the \(z\)-direction. In the 3D case, the simulation domain is \([-6,6]\mathrm{Mm}\times[-3,3]\mathrm{Mm}\times[0,L/2]\). We consider uniform 1024 cell points in the \(z\)-direction and 256 uniformly spaced cells in the \(x\)-direction and 128 uniform grid cells in the \(y\)-direction. The resolution in the \(x,y\) plane is about 46.9km. Although increasing the transverse resolution may reveal more pronounced heating effects, the main findings of the current work are not expected to be influenced. On one hand, the changing width of the boundary layer allows the current resolution to sufficiently resolve fine structures away from the loop apex 2. On the other hand, the turbulent structures generated in the model will further extend the thickness of the boundary layer, making it more feasible to reveal small scales within it. The vertical resolution seems lower than that in the transverse direction. Nevertheless, it is enough to resolve the broadened transition region that is more than 2Mm wide after relaxation, considering the broadening scheme described above. Footnote 2: Even at the loop apex, we can still observe relatively larger KHI structures, as shown in Figure 3b and related animations. In the 2D simulation, an axisymmetric (outflow) boundary is employed at \(r=0\) (\(r=100\mathrm{Mm}\)). A symmetric boundary condition is used at \(z=100\mathrm{Mm}\), considering the symmetric property of the fundamental kink mode considered in this study. At the bottom of the loop, the density and pressure are extrapolated from the hydrostatic equilibrium. The magnetic field is extrapolated following the zero normal gradient condition, as described by Karampelas et al. (2019). The vertical velocity \(v_{z}\) is set to be zero, while the transverse velocity \(v_{r}\) is to be outflow. 
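As a quick consistency check of the grid described above (our own arithmetic, not part of the setup files), the transverse cell sizes follow directly from the domain extents and cell counts:

```python
dx_km = 12.0e3 / 256   # x: 12 Mm across 256 cells
dy_km = 6.0e3 / 128    # y:  6 Mm across 128 cells
dr_km = 6.0e3 / 128    # 2D run: 6 Mm across 128 cells in r
print(dx_km, dy_km, dr_km)   # all 46.875 km, i.e. the ~46.9 km quoted for the x, y plane
```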
In the 3D case, all the lateral boundaries are set to be outflow. Boundary conditions in the \(z\)-direction are kept the same as in the 2D except for the transverse velocity \(v_{x}\) and \(v_{y}\) at \(z=0\), which are described by a continuous and monoperiodic driver (Pascoe et al., 2010; Karampelas et al., 2017; Guo et al., 2019). In the current study, the driver period is 288s, which matches the eigenfrequency of the loop at the initial state. Note that the initial eigenfrequency of the loop slightly varies over time during the driving phase. Due to the modest changes in the density structure in the current model, the variation in eigenfrequency is not significant. Nonetheless, a broad-band driver (e.g., Afanasyev et al., 2019) needs further study and discussion. The amplitude of the driver is chosen to be 0 (non-driven model) and 4kms\({}^{-1}\) (driven model). These two models are mentioned hereafter by using subscripts "0" and "4", respectively. ## 3 Results We first examine the dynamic process of the loop. Figure 3 shows the isocontours of density and temperature for both models at \(t=8060\)s when the loop has fully evolved. The TWIKH rolls are clearly seen from the density and temperature distribution at the apex of the driven loop in Figure 3b. The core of the loop cross-section is sufficiently heated at the loop apex, even though the Ohmic dissipation is smaller than that near the loop footpoints. To further examine the evolution in the vertical direction, surface averaged temperature and density are examined along the \(z\)-direction, which are given by \[\bar{T}(z,t)=\frac{1}{A}\int_{A}T\mathrm{d}A,\bar{\rho}(z,t)=\frac{1}{A}\int_{ A}\rho\mathrm{d}A. \tag{4}\] where \(A\) represents the surface area of the region \(-6\mathrm{Mm}\leq x\leq 6\mathrm{Mm},-3\mathrm{Mm}\leq y\leq 3\mathrm{Mm}\) at given heights. \(\mathrm{d}A\) is the surface element of the plane. Figure 4a,b show the changes in \(\left(\bar{T}-\bar{T}_{0}\right)/\bar{T}_{0}\) and \(\left(\bar{\rho}-\bar{\rho}_{0}\right)/\bar{\rho}_{0}\), which clearly reveal the variations caused by waves in comparison to the non-driven case. To reveal the evaporation from the lower atmosphere, the enthalpy flux and mass flux are also considered as \[\bar{E}(z,t)=\frac{1}{A}\int_{0}^{t}\int_{A}\left(\frac{\gamma}{\gamma-1}p \right)v_{z}\mathrm{d}A\mathrm{d}t, \tag{5}\] \[\bar{M}(z,t)=\frac{1}{A}\int_{0}^{t}\int_{A}\rho v_{z}\mathrm{d}A\mathrm{d}t. \tag{6}\] Figure 3: Isosurfaces of the logarithm of density and temperature at the loop apex (\(z=100\)Mm) and \(y=0\) for (a) non-driven and (b) driven model at \(t=8060\)s. An animation of the density and temperature isosurfaces for the driven model is available in the online Journal. The animation proceeds from \(t=0\) to \(9340\)s. Figure 4: Surface-averaged temperature, density, enthalpy flux, and mass flux changes relative to the non-driven case are shown versus height, with zoomed-in lower panels for the regions outlined by dashed boxes. Figure 4c shows the changes of the \(z\)-component of the enthalpy flux (\(\Delta\bar{E}=\bar{E}-\bar{E}_{0}\)). From Figure 4a, we observe the expected temperature increases near the loop footpoint. This heating induces upward enthalpy flows, as seen in Figure 4c. The upward enthalpy flows are present throughout the simulation, although periodic downward flows occur after about 50min. Figure 4b confirms the mass changes induced by the upward flow, as evidenced by the density increases. 
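The surface-averaged diagnostics of Eqs. (4)-(6) amount to plane averages accumulated in time; a schematic implementation on a uniform horizontal slice could look as follows (array names and the toy data are illustrative only, not the actual analysis scripts).

```python
import numpy as np

def surface_average(q):
    """Eq. (4): surface average of a 2D slice; on a uniform grid the area elements cancel."""
    return q.mean()

def update_fluxes(E, M, p, rho, vz, dt, gamma=5.0 / 3.0):
    """Add one time-step contribution to the running enthalpy and mass fluxes of
    Eqs. (5)-(6), both expressed per unit area of the slice."""
    E = E + surface_average(gamma / (gamma - 1.0) * p * vz) * dt
    M = M + surface_average(rho * vz) * dt
    return E, M

# Toy usage on a 256 x 128 slice (the transverse grid of the 3D run); values are random.
rng = np.random.default_rng(1)
p, rho, vz = rng.random((3, 256, 128))
E, M = update_fluxes(0.0, 0.0, p, rho, vz, dt=1.0)
print(E, M)
```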
A periodicity of \(\sim 20\)min 3 of the temperature profile is observed. The density and enthalpy flux exhibit a similar periodicity as shown in Figure 4. This periodicity is associated with the ponderomotive force and is attributed to slow waves, which has also been observed in previous numerical studies (e.g., Magyar and Van Doorsselaere, 2016; Karampelas et al., 2017). This has also been confirmed by Van Damme et al. (2020) by considering an Alfven wave driver. The excited slow oscillation can influence the vertical distribution of the temperature profile, density distribution, and enthalpy flux. The variations in the temperature and density profiles relative to the non-driven case seem slight (less than 4%). This is partly because the evaporation induced by waves, as demonstrated in Van Damme et al. (2020), can only cause modest variations in the dramatically changing atmosphere background in the vertical direction. In addition, the surface averaged values here are naturally smaller than the local quantity values in the loop region, which is no longer straightforward to quantify since our current loop is fully deformed and transversely diffused than previous coronal models. Nevertheless, the surface average procedure can still reveal the average variations of quantities of loop cross-section at a given height. The evolutions of quantities in the domain of \(2\mathrm{Mm}\leq z\leq 8\mathrm{Mm}\) are zoomed in and presented in the lower panels of Figure 4. We see upward enthalpy flows in this domain, indicating the occurrence of chromospheric evaporation. The density drops and temperature increases around \(z=4\mathrm{Mm}\) are the consequence of this evaporation. Additionally, the lower region presents a decrease (increase) in temperature (density) variations around \(z=2\mathrm{Mm}\). Seen from Figure 2, the boundary between the chromosphere and the transition region shifts downward to about \(z=1\mathrm{Mm}\), while the lower boundary of the transition region outside the loop is around \(z=2\mathrm{Mm}\). This means that denser and cooler masses from the external loop region are involved when computing the surface averaged quantities around \(z=2\)Mm. Heating effects induced by kink waves can be quantified by energy analysis (e.g., Karampelas et al., 2017; Guo et al., 2019, 2019). The continuous driving at the loop footpoint changes the equilibrium compared with the non-driven case, leading to the transition region moving with time. We thus consider an upper region starting from 9Mm as the coronal counterpart in the following analysis. The volume-averaged Poynting flux and internal energy are given by \[S(t)=-\frac{1}{V}\int_{0}^{t}\int_{A_{1}^{\prime}}\frac{1}{\mu_{0}}\left[( \boldsymbol{v}\times\boldsymbol{B})\times\boldsymbol{B}\right]\cdot\mathrm{d} \boldsymbol{A_{1}^{\prime}}\mathrm{d}t, \tag{7}\] \[I(t)=\frac{1}{V}\int_{V}\frac{p}{\gamma-1}\mathrm{d}V-\frac{1}{V}\int_{0}^{t} \int_{A_{2}^{\prime}}\left(\frac{\rho v^{2}}{2}+\rho\Phi+\frac{\gamma p}{ \gamma-1}\right)\boldsymbol{v}\cdot\mathrm{d}\boldsymbol{A_{2}^{\prime}} \mathrm{d}t, \tag{8}\] where \(V\) represents the volume of the concerned domain (\(-6\mathrm{Mm}\leq x\leq 6\mathrm{Mm},-3\mathrm{Mm}\leq y\leq 3\mathrm{Mm},9 \mathrm{Mm}\leq z\leq 100\mathrm{Mm}\)). \(\mathrm{d}\boldsymbol{A_{1}^{\prime}}\) (\(\mathrm{d}\boldsymbol{A_{2}^{\prime}}\)) is the normal vector of the bottom (lateral) surface. 
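A sketch of the corresponding energy bookkeeping for a single snapshot is given below; the sign convention follows Eq. (7) with the area element of the bottom plane taken along \(+z\), the advected boundary terms of Eq. (8) are omitted, and all names are ours rather than the simulation's.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi   # SI value; the simulation itself works in code units

def poynting_flux_z(vx, vy, vz, bx, by, bz):
    """Vertical component of -[(v x B) x B] / mu0 entering Eq. (7), evaluated on the
    bottom plane of the analysis volume.  Uses (v x B) x B = B (v.B) - v |B|^2."""
    vdotB = vx * bx + vy * by + vz * bz
    B2 = bx ** 2 + by ** 2 + bz ** 2
    return -(bz * vdotB - vz * B2) / MU0

def update_energies(S, p, flux_z, dx, dy, dz, dt, volume, gamma=5.0 / 3.0):
    """One snapshot's contribution to the volume-averaged S(t) of Eq. (7) and the
    thermal part p/(gamma-1) of I(t) in Eq. (8)."""
    S = S + flux_z.sum() * dx * dy * dt / volume
    I_thermal = (p / (gamma - 1.0)).sum() * dx * dy * dz / volume
    return S, I_thermal
```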
We also incorporate the energy fluxes of lateral boundaries into the internal energy variations, following a similar procedure in Karampelas et al. (2019). The input energy is defined as the Poynting flux of the bottom interface. As aforementioned, an enthalpy flux can be seen through the bottom boundary (\(z=9\)Mm) of the concerned domain. However, this enthalpy flux is much smaller than the Poynting flux here, and has thus been neglected when considering the input energy from the bottom. Figure 5 shows the volume averaged energy density changes relative to the initial state, namely \(S(t)-S(0)\) and \(I(t)-I(0)\). In the non-driven model, the internal energy (\(I_{0}\)) drops with time due to the thermal conductive loss in the corona, while the internal energy variation in our driven model (\(I_{4}\)) stays positive before 120min and approaches zero after about 120min. This means that the internal energy first increases and then asymptotically approaches the initial equilibrium state by the end of the simulation. Therefore, the heating induced by waves can overcome or balance the thermal conductive loss in the corona, for the duration of the driving in the current model. Heating efficiency can be evaluated by considering the ratio between the relative internal energy increase and the input energy. Figure 5 also shows the difference between the driven and non-driven models in the same domain. From the Poynting flux and the difference in internal energy changes, we can compute the heating efficiency of the driven model. A rapid increase in Poynting flux before 40min can be observed, leading to an average input energy flux of about \(78.1\rm{Jm^{-2}s^{-1}}\) over the considered coronal counterpart of the loop. The average internal energy increasing rate is about \(32.8\rm{Jm^{-2}s^{-1}}\), leading to a heating rate of 42%. In previous studies (e.g., Guo et al., 2019; Karampelas et al., 2019), we reported the expansion of the loop cross-section when small scales and turbulence developed, inducing a decrease in the local magnetic field. This decrease can also be traced down from the corona to lower heights with decreasing strength. The variation of the magnetic field in the lower interface between the corona and the transition region causes a slower increase of the input energy from \(t\sim 50\rm{min}\), leading to an average input energy flux of about \(30.7\rm{Jm^{-2}s^{-1}}\) during \(50\rm{min}\leq t\leq 100\rm{min}\).
The corresponding internal energy growth rate is \(16.8\rm{Jm^{-2}s^{-1}}\) and the heating rate increases to \(55\%\). In the last 20 minutes of the simulation, when the loop is fully evolved and the expansion is saturated, we see the input energy flux increase to \(45.1\rm{Jm^{-2}s^{-1}}\). The corresponding internal energy increasing rate of this phase is \(34.5\rm{Jm^{-2}s^{-1}}\), leading to a heating rate of \(76.5\%\). This shows a significant heating efficiency of the current model. Compared with our previous models (e.g., Karampelas et al., 2017, 2019; Guo et al., 2019, 2019; Shi et al., 2021), the presence of thermal conduction allows enthalpy flows from the lower atmosphere to the corona. Such flows change the vertical density and temperature distribution in the coronal counterpart of the loop (Figure 4). The energy dissipation at the loop cross-section becomes more efficient, as indicated by the temperature increase in Figure 3. Therefore, the upward evaporation flows lead to more efficient heating to balance the thermal conductive losses in the current model. ## 4 Discussion and Summary We model a three-dimensional magnetic cylinder in a realistic solar atmosphere from the chromosphere to the corona with thermal conduction included.
Frequently observed kink oscillations are excited in this magnetic flux tube. Based on previous studies, heating effects are expected due to the presence of the Kelvin-Helmholtz instability induced by kink oscillations. Temperature and internal energy increases are indeed observed in the coronal counterpart of our driven model, with respective to a non-driven case. With the inclusion of thermal conduction, chromospheric evaporation can be observed, leading to an increase in temperature and a decrease in density. Although the spatially averaged density and temperature changes are gentle (\(\sim 4\%\) relative to the non-driven case), the corresponding enthalpy flow from the lower atmosphere can still influence the energy dissipation in the corona by redistributing the vertical density structuring. The efficient heating in the coronal counterpart of the loop can balance the thermal conductive loss in the current model, and thus maintain the internal energy in the corona. In the current simulation, we adopt an approximate model derived from Aschwanden & Schrijver (2002) to describe the vertical distribution of the solar atmosphere. This temperature distribution seems a coarser approximation than the VAL model (Vernazza et al., 1981) or its upgraded versions (Avrett and Loeser, 2008). Especially, it lacks sufficient details in the lower atmosphere. Nevertheless, it describes the main properties of the parameters (e.g., temperature and density) of the chromosphere and the transition region. In addition, note that the initial analytical profiles of temperature and density change after relaxation and a new equilibrium is achieved, as shown in Figure 2b. The initial expressions of the VAL model of the atmosphere would also be changed after the relaxation. Explicit resistivity and viscosity are not included in the current ideal MHD computation. In the solar corona, the magnetic Reynolds number is of the order of \(10^{12}\), leading to an extremely small resistivity if considering typical spatial scales and Alfven speed in the corona. In practical coronal models, however, even the numerical resistivity is several orders larger than the realistic value. Given the numerical resolution limitation, we should treat such models, especially large 3D models, as an ideal approximation to the real corona. Nevertheless, to examine the influence of explicit resistivity/viscosity on wave energy dissipation is still necessary. Howson et al. (2017) found that enhanced resistivity and viscosity that are larger than numerical values can suppress the onset of TWIKH rolls, thus reducing wave energy dissipation. The influence of explicit dissipation coefficients on wave heating has been quantitatively examined by Karampelas et al. (2019). Comparing with ideal simulations with only numerical dissipation, the heating effects are enhanced but the heating locations are the same in the resistive/viscous models in Karampelas et al. (2019). This confirms that the numerical dissipation can also play an effective role in such modeling associated with waves heating effects. In previous studies (e.g., Ofman et al., 1998), resonant absorption and phase mixing are believed to happen in resonant layers near a loop boundary. Cargill et al. (2016) believed that the resonant layer is not sustained due to the temporal evolution of the density gradient induced by Alfven waves. 
In our current results, however, TWIKH rolls spread over the whole cross section of the loop, associated with chromospheric evaporation not only happening near the boundary layer, but rather in the entire loop. Nevertheless, due to the slight density and temperature changes, the evaporation has no significant effect on the resonance conditions of the current model. In this simulation we are continuously driving waves at the eigenfrequency of the standing mode, and so the resultant evaporations may be viewed as an upper limit to the evaporations which would result from realistic, small amplitude (decayless) kink waves which are not so carefully driven. The limitation of current-day 3D MHD simulations having a magnetic Reynolds number several orders of magnitude off that for the corona further reinforces our results as an upper limit. The realistic thermal conductive loss in the quiet solar corona is about \(200\rm{J~{}m^{-2}~{}s^{-1}}\)(Withbroe and Noyes, 1977). The input energy in our current model does not seem to be able to complement such a high conductive loss in the real corona. However, it should be noted that the magnetic field strength of the current model is still vertically uniform. In reality, if the loop footpoint is anchored in the photosphere, the magnetic field should be much (or even several orders of magnitude) larger than that in the corona. This probably indicates that the real energy input from the photosphere is sufficient. In fact, observations have confirmed that the energy flux is indeed sufficient to balance the energy losses even in the active region (e.g., Fujimura and Tsuneta, 2009). We have simulated a more realistic coronal loop by including a dense and cold chromosphere, and despite the additional dynamics and energetics, it is encouraging to see that the results about wave heating inferred from coronal-only simulations still seems to hold. Examples include the formation of extended TWIKH rolls along the whole loop; and the increased efficiency of wave heating once the kinetic energy saturates at the later stages of the simulation and a turbulent density profile has developed. We thank the referee for helpful comments that improved the manuscript. The authors acknowledge the funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 724326). TVD was also supported by the C1 grant TRACEspace of Internal Funds KU Leuven, and a Senior Research Project (G088021N) of the FWO Vlaanderen. K.K. acknowledges support by an FWO (Fonds voor Wetenschappelijk Onderzoek-Vlaanderen) postdoctoral fellowship (1273221N). Y.G. acknowledges the support from the China Scholarship Council (CSC) under file No. 202206010018.
2310.16924
Physician Detection of Clinical Harm in Machine Translation: Quality Estimation Aids in Reliance and Backtranslation Identifies Critical Errors
A major challenge in the practical use of Machine Translation (MT) is that users lack guidance to make informed decisions about when to rely on outputs. Progress in quality estimation research provides techniques to automatically assess MT quality, but these techniques have primarily been evaluated in vitro by comparison against human judgments outside of a specific context of use. This paper evaluates quality estimation feedback in vivo with a human study simulating decision-making in high-stakes medical settings. Using Emergency Department discharge instructions, we study how interventions based on quality estimation versus backtranslation assist physicians in deciding whether to show MT outputs to a patient. We find that quality estimation improves appropriate reliance on MT, but backtranslation helps physicians detect more clinically harmful errors that QE alone often misses.
Nikita Mehandru, Sweta Agrawal, Yimin Xiao, Elaine C Khoong, Ge Gao, Marine Carpuat, Niloufar Salehi
2023-10-25T18:44:14Z
http://arxiv.org/abs/2310.16924v1
Physician Detection of Clinical Harm in Machine Translation: Quality Estimation Aids in Reliance and Backtranslation Identifies Critical Errors ###### Abstract A major challenge in the practical use of Machine Translation (MT) is that users lack guidance to make informed decisions about when to rely on outputs. Progress in quality estimation research provides techniques to automatically assess MT quality, but these techniques have primarily been evaluated in vitro by comparison against human judgments outside of a specific context of use. This paper evaluates quality estimation feedback _in vivo_ with a human study simulating decision-making in high-stakes medical settings. Using Emergency Department discharge instructions, we study how interventions based on quality estimation versus backtranslation assist physicians in deciding whether to show MT outputs to a patient. We find that quality estimation improves appropriate reliance on MT, but backtranslation helps physicians detect more clinically harmful errors that QE alone often misses. ## 1 Introduction Empowering people to decide when and how to rely on NLP systems appropriately is a critical, albeit challenging, endeavor. Appropriate reliance is a moving target because it is difficult to operationalize, and depends on context and application domain. Research in this space has so far relied on evaluations that are abstracted from real world use cases (Doshi-Velez and Kim, 2017; Narayan et al., 2018; Boyd-Graber et al., 2022). We build on this work, and study appropriate reliance close to the actual decision that users have to make on the ground: whether or not to rely on a model output. We study this question in the context of physicians deciding when to rely on Machine Translation (MT) in an emergency room when communicating discharge instructions to a patient who does not speak their language (Mehandru et al., 2022). MT represents an example of an NLP system used by millions of people in daily life (Pitman, 2021), including in high-stakes contexts such as hospitals and courtrooms (Vieira et al., 2021). MT errors in those settings can be particularly harmful. In our setting, incorrect discharge instructions could lead to a patient misunderstanding their diagnosis or taking medications incorrectly, with potentially life-threatening consequences. Research shows that people tend to over-rely on systems (Bucinca et al., 2021), and that explainability techniques that aim to address this issue can instead increase blind trust in incorrect predictions (Bansal et al., 2021). In the case of MT, most user studies have focused on human translators (Stewart et al., 2020; Castilho et al., 2019; Green et al., 2013), who have the expertise to evaluate MT faithfulness and to correct outputs when needed. Deciding how to rely on MT is much more challenging for people who use it to communicate in a language that they do not know. Zouhar et al. (2021) find that providing quality feedback to people using MT in travel scenarios has mixed effects, and can make them feel more confident in their decisions without improving their actual task performance. This work evaluates the impact of quality estimation (QE) feedback on physicians' reliance on MT, building on decades of MT research on automatically estimating the quality of MT without reference translations (Blatz et al., 2004; Quirk, 2004; Specia et al., 2018; Fonseca et al., 2019; Han et al., 2021). 
However, QE systems are primarily trained to score overall translation quality outside of a specific context of use. It is unclear whether people will know how to interpret QE scores given prior evidence that they struggle to use probability and confidence estimates in explanations (Miller, 2018; Vodrahalli et al., 2022). Additionally, even when interpreted correctly, it is not clear whether seeing a QE score will make users better at deciding _when_ to rely on an MT output and when not to, e.g. due to clinically relevant errors. We compare QE with a method commonly used by lay users, including physicians, to estimate the quality of a translation: backtranslation into the source language using the same system (Mehandru et al., 2022). We conduct a randomized experiment with 65 English-speaking physicians to test how each of these interventions impacts their ability to decide when to rely on imperfect MT outputs (Figure 1). We find: * The QE treatment group had a significantly higher confidence-weighted accuracy in their overall decision to give or not give a translation to a patient. * The BT treatment group more effectively detected critical translation errors, those rated as having higher clinical risk. In sum, both interventions improve physicians' ability to assess clinical risk and their confidence in their decisions, but for complementary reasons. 1 Footnote 1: Code and data to reproduce our findings are released at [https://github.com/n-mehandru/PhysicianQE.git](https://github.com/n-mehandru/PhysicianQE.git). ## 2 Background We situate this work in the MT literature before motivating our medical use case in Section 3. Clinical MTMT shared tasks motivated by medical applications, such as scientific abstracts or clinical case translations, have led to research systems that produce translations that are more appropriate with in domain terminology than generic MT systems for diverse languages (Neves et al., 2022). However, in practical settings, clinicians turn to widely available MT systems such as Google Translate or Bing Translator (Randhawa et al., 2013; Khoong and Rodriguez, 2022). When translating Emergency Department discharge instructions from English into Chinese and Spanish, Google Translate was found to produce a majority of accurate outputs, however, a small number of inaccurate translations presented a risk of clinically significant harm (2% of sentences in Spanish and 8% of sentences in Chinese) (Khoong et al., 2019). Our work evaluates tools that are used in practice by physicians and purposefully over-samples from error-prone sentences to present a useful evaluation framework that can also be used to evaluate dedicated clinical MT systems, complementing standard reference-based evaluation metrics which do not directly account for the potential clinical impact of MT errors (Dew et al., 2018). Quality EstimationQuality Estimation (QE), the task of automatically assessing the quality of MT outputs without access to human references, Figure 1: Physicians who participate in the study are shown a sentence from a real discharge instruction, and its translation to Chinese via Google Translate. They are asked to decide whether they would hand this translation to a patient who only speaks Chinese. Participants are randomly assigned to one of two conditions: a) Quality estimation: A quality estimation model provides an evaluation of the translation, or b) Backtranslation: The participant is shown the backtranslation of the text to English using Google Translate. 
We found that participants who were provided the quality estimation were better at detecting when to rely on a translation. For the most severe errors, backtranslation provided more reliable assistance. The sentence shown was marked as not adequate and life-threatening by bilingual physician annotators. The human reference translation for the Google Translate output states: Your diagnosis is seizure of seizure, suggest we resume Zonisamide and Carbamazepine and one week of Clonazepam-am-bridge. is a long-standing area of research in MT (Specia et al., 2018). State-of-the-art QE systems developed for evaluating MT such as OpenKiwi (Kepler et al., 2019), TransQuest (Ranasinghe et al., 2020), or COMET-src (Rei et al., 2020) are built on top of large scale multilingual models like BERT (Devlin et al., 2019) or XLM-RoBERTa (Conneau et al., 2020). They are trained to predict direct assessment scores collected by crowdsourced workers or post-editing efforts as measured by HTER (Snover et al., 2006). QE systems have been primarily evaluated "in vitro" by measuring the correlation of their scores with generic human judgments of quality collected independently from the context of use. QE has also proved successful at guiding human translation workflows (Stewart et al., 2020). However, it remains unclear how useful QE is for non-professional end-users in practice. In this work, we present assessments derived from the state-of-the-art COMET-src to physicians who do not speak Chinese to help them decide when to rely on English-Chinese MT. BacktranslationWe focus on scenarios where the input text is translated into a language unknown to the text's author. The onus to decide whether the output is acceptable therefore falls on the author, even though they do not have the expertise to assess translation quality. In these settings, people routinely use an intuitive feedback mechanism: backtranslation, which consists of using MT again to translate the original MT output in the input language, so they can compare it with the original in a language they understand. This practice has been decried in the MT literature (Somers, 2005) as backtranslation provides a very noisy signal by lumping together errors from the forward and backward MT pass, and potentially hiding errors that the backward pass recovers from. Nevertheless, people use backtranslation routinely, perhaps encouraged by interfaces that let them switch MT translation direction with a single click. However, little is known about the usefulness of backtranslation in these practical settings. Zouhar et al. (2021) conduct a user study evaluating the impact of backtranslation by lay users for the practical task of booking a hotel room in a language that they do not speak. They hypothesize that backtranslation feedback can help people craft the content of a message so it is correctly interpreted by the recipient. Backtranslation feedback was found to greatly increase user confidence in a translation, without improving the actual translation quality. We study the impact of backtranslation in a clinical decision-making context and compare it to a QE model output. ## 3 MT for Cross-lingual Communication Between Patients and Physicians Cross-lingual communication is imperative in the presence of language barriers between physicians and patients. We focus on a specific high-stakes context: helping physicians communicate discharge instructions to patients in the Emergency Department (ED). 
While medical interpreters often facilitate conversations between clinicians and Limited English Proficiency (LEP) patients, this has been found to be insufficient in helping patients recall what they are supposed to do after getting discharged (Hoek et al., 2020). Further, comprehension of emergency discharge instructions is known to be an important contributor to patient non-adherence (Clarke et al., 2005). Our prior work has found that MT is frequently used in practice for tasks such as automatically translating discharge instructions (Mehandru et al., 2022), and thus provides a written record for patients to aid in comprehension at discharge time as well as recall and adherence. A key challenge is that it is difficult for physicians to ensure that patients comprehend written discharge instructions when they cannot verify the accuracy of a machine-generated translation. As an added complication, limited health literacy and discharge plan complexity can lead patients to overestimate comprehension (Glick et al., 2020). Designing MT for effective physician-patient communication involves many stakeholders. This work focuses on physicians as a starting point, as findings can inform their training and strategies for cross-lingual communication to maximize impact in the short-term. We leave to future work the equally important question of helping diverse LEP patient populations rely on MT output adequately. ## 4 Methods We conducted a randomized controlled experiment to test how quality estimation and backtranslation impact physicians' appropriate reliance on MT. ### Emergency Department Discharge Instructions Data Source TextEnglish source text for our experiment is drawn from de-identified Emergency De partment (ED) discharge instructions that were written between the years 2016 to 2021 at the University of California, San Francisco (UCSF). We select six discharge instructions, and a total of twenty-eight sentences from these notes, to ensure that they present expected key elements in a discharge instruction to a patient, including presentation of the problem (chief complaint), actual diagnosis, medication list, follow-up items, and a 24/7 callback number with the referring provider (DeSai et al., 2021; Halasyamani et al., 2006). Additionally, sentences were selected based on the source text complexity, and the presence of certain words having multiple meanings. MTThe subset of selected English sentences were automatically translated into simplified Chinese by Google Translate.2 We chose English-Chinese, because it is a high-demand language pair that is often needed in clinical settings in the United States, and Google Translate is known to be used by physicians (Khoong et al., 2019; Mehandru et al., 2022). While general translation quality is expected to be reasonably strong, on Flores, the translation quality as measured by BLEU is \(38.50\) for the English-Chinese (Simplified) devtest (Goyal et al., 2022). Medical texts are typically out of the training domain, and as a result, clinically harmful translation errors have been documented for this specific language pair (Khoong et al., 2019). Footnote 2: We use the Google Sheets Translation API. Gold AnnotationThree bilingual physicians independently annotated each MT output along two dimensions: **translation adequacy** and **clinical risk**. Adequacy was defined as whether the Chinese translation accurately conveyed the meaning of the English source text (Turian et al., 2003). 
Physicians rated clinical risk based on the translation presented to them according to five categories: clinically insignificant, mildly clinically significant, moderately clinically significant, highly clinically significant, and life-threatening (Napoles et al., 2015). During the annotations, physicians kept in mind how if a monolingual patient were to read the translation, whether or not the patient would understand the discharge instruction sentence. The three physicians then met with the lead author to discuss disagreements in the clinical risk ratings, and agree on a final label for each sentence. ### Experimental Design We conduct a between-subjects experiment with participants randomly assigned to one of the two treatment conditions: **backtranslation (BT)** and **quality estimation (QE)**. We added a **baseline** condition to assess participant responses to MT in the absence of feedback, which both groups completed first (within-subjects). ParticipantsWe used convenience sampling to recruit sixty-five physicians to participate in our study. Medical residents were in training programs in the United States, and practicing physicians worked in multilingual settings. Their specialties included: internal medicine, cardiology, emergency medicine, neurology, surgery, family medicine, pediatrics, allergy and immunology, intensive care, obstetrics and gynecology, infectious diseases, military medicine, and psychiatry. 45% of physicians reported interacting with LEP patients daily, while another 45% responded interacting with them either two to three times per week or bi-weekly. 17% of physicians reported using Google Translate on a monthly basis when writing discharge instructions, while 14% responded using it bi-weekly or two to three times a week. We randomized physicians into each condition for a total of thirty-five physicians in the backtranslation group and thirty in the QE group. Physicians were screened to ensure they were fluent in English, and had no knowledge of Chinese. This inclusion criteria for the experiment was in the demographic section of the survey, and was included in our recruitment emails. We further manually filtered out any participants who may have missed this statement and reported that they spoke Chinese in the pre-survey. This study was approved by our Institutional Review Board (IRB), and physicians were compensated for taking the time to participate in our experiment. Survey DesignPhysicians were first presented with the baseline condition. They were asked to read a discharge instruction, and then were presented with a sentence from the note and its respective Chinese MT translation. After the baseline condition, physicians were presented with one of the two treatment interventions, and the same set of twenty-eight English sentences from the six discharge instructions, each associated with its Chi nese MT translation except also accompanied by one of two quality feedback types. In all conditions, after seeing a Chinese translation, participants were asked: 1) whether they would give the translation to a patient who only reads Chinese (binary question, yes or no), and 2) their confidence in this decision on a five-point Likert scale (1= Not Confident and 5= Very Confident). 
In the treatment conditions, participants were additionally asked to assess whether a monolingual Chinese patient would understand the discharge instruction sentence after reading the Chinese MT, using a five point Likert scale (1= Patient would understand none of the meaning and 5= Patient would understand all of the meaning). Acknowledging the real-world context in which physicians would have to make these decisions, participants were asked to assume that a medical interpreter had already reviewed the discharge instruction with the Chinese-speaking patient. To ensure that they focused on patient comprehension of the discharge instruction as it pertains to clinical outcomes, physicians were also instructed to respond to questions without concern for lawsuits and regulatory requirements around showing an imperfect translation to a Chinese-speaking patient. ### Treatment Conditions Details Quality Estimation We adopt the state-of-the-art quality estimation (QE) system, Comet-QE (wmt21-comet-qe-mqm)3 to design this treatment condition (Rei et al., 2021). Each source and MT pair is passed through the trained QE predictor to generate a score in the range of \([-1,1]\). A positive score indicates that the translation quality of the sentence is better than average, while a negative score indicates below-average quality. While model-based predictions correlate well with human judgments (Freitag et al., 2022), they are hard to interpret. Hence, we partition the \([-1,1]\) interval to define quality labels, motivated by those used to assess translation quality in human evaluation of MT (Freitag et al., 2021; Kocmi et al., 2022). Footnote 3: [https://github.com/Unbabel/CMET](https://github.com/Unbabel/CMET) To define these labels in a data-driven fashion, we collect a small development dataset by asking a bilingual physician to answer two questions about the quality of 125 sentences sampled from a related patient-physician conversational dataset: 4 a) _is the translation accurate?_ b) _can the translation error pose a clinical harm?_ (Fareez et al., 2022). We identify thresholds for QE scores on this development set based on the ROC curves for adequacy and risk prediction. The translation pairs are then labeled according to Table 2. Footnote 4: This dataset provides simulated patient-physician interactions in English across six medical cases, which are closer to discharge instructions than e.g., clinical notes aimed at other physicians rather than patients. The conversations are automatically transcribed and manually post-edited. Table 1 shows the breakdown of QE labels for clinical risk and adequacy assessment on the gold annotation data. COMET-QE achieves an accuracy of 66% and 73% on detecting adequate and clinically insignificant translations, respectively. However, COMET-QE was not effective at detecting incorrect translations that caused clinical risk. Of the nine translations that are deemed by bilingual experts to cause moderate to life-threatening harm to users (columns "moderate", "high", and "life-threatening" in Table 1), COMET-QE rated 2/9 as having "consistent" translations. The QE labels provided are thus accurate enough to be potentially useful, yet realistically imperfect, as expected of automatically generated reliability measures. Given the QE labels extracted using the above strategy, participants were then presented with a description of the quality estimation system, and the output labels they expect to see.
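For concreteness, a sketch of such a score-to-label mapping and of reference-free scoring with COMET-QE is shown below. The threshold values and the label names other than "consistent" are placeholders rather than the paper's tuned cut-offs (Table 2), and the exact model identifier and return format of the unbabel-comet package depend on the installed version, so this should be read as an illustration rather than the authors' pipeline.

```python
from comet import download_model, load_from_checkpoint

# Hypothetical cut-offs partitioning the [-1, 1] COMET-QE score into quality labels.
T_LO, T_HI = -0.1, 0.1   # placeholder thresholds, not the paper's tuned values

def qe_label(score: float) -> str:
    if score >= T_HI:
        return "consistent"            # better-than-average quality
    if score >= T_LO:
        return "possibly inconsistent"
    return "inconsistent"

# Scoring one source/MT pair (no reference needed); illustrative sentence, not study data.
ckpt = download_model("wmt21-comet-qe-mqm")
model = load_from_checkpoint(ckpt)
sample = [{"src": "Call 911 if your symptoms get worse.",
           "mt": "如果您的症状恶化，请拨打911。"}]
scores = model.predict(sample, batch_size=8, gpus=0)   # segment-level score(s) in [-1, 1]
```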
They were asked to decide, based on the information provided, whether they would provide the translation to a patient and their confidence in the assessment. **Backtranslation.** In the backtranslation treatment condition, physicians were presented with the source English sentence, and were told that the Google Translation system generated a Chinese translation for the given sentence. Participants were then presented with text explaining that Google Translate translated the previous text back into English and were then shown the output translation. ### Measures Our goal in this work is to study whether physicians can more accurately rely on an MT output if they are provided with an evaluation of the quality by a QE model, or by seeing the backtranslation by the same MT system. Our outcome metric is as close as possible to the actual decision that physicians make in practice. We asked physicians to decide whether they would share a translation with a patient and also their confidence in that decision. To measure their overall performance, we use confidence weighting [1], a common metric in cognitive psychology that measures whether the participant made the correct decision weighted by their confidence in that decision. Intuitively, confidence-weighted accuracy provides a way to encapsulate the properties of appropriate reliance in one metric: making accurate decisions and calibrating confidence in the model appropriately. In other words, if the participant makes an error with high confidence, this metric penalizes them more than if they make the same error but with less confidence. **Reliance Metrics.** Given \(P\) physicians and \(S\) instances (sentences), let \(s^{*}\) be the correct answer for sentence \(s\), and let \(\hat{s_{p}}\) be the answer selected by the \(p\)-th physician for sentence \(s\) with confidence \(c_{s}\) on a scale of 1 to 5. Our experiment uses the following measures: The **Physician Accuracy (%)** for each condition (BT, QE) and each physician \(p\) is given by: \[Accuracy=\frac{1}{S}\sum_{s\in S}\mathbbm{1}[s^{*}=\hat{s_{p}}] \tag{1}\] The **Confidence Weighted Accuracy (CWA)** for each condition (BT, QE) and each physician \(p\) is given by: \[CWA=\frac{1}{S}\sum_{s\in S}\operatorname{sign}(s)\,\frac{c_{s}}{5},\qquad\operatorname{sign}(s)=\begin{cases}1,&\text{if }s^{*}=\hat{s_{p}}\\ -1,&\text{otherwise}\end{cases} \tag{2}\] **Correctness.** We define a correct decision by comparing the physician's decision with the adequacy of the translation deemed by our physician annotators. _Adequate Translation._ An adequate translation is one in which the discharge instruction sentence was passed through Google Translate and annotated by bilingual English-Chinese physicians as correctly conveying the meaning of the English source text. An accurate decision in this context would be a physician giving the discharge instruction sentence to a monolingual patient. _Inadequate Translation._ An inadequate translation is one in which the discharge instruction sentence was passed through Google Translate and annotated by bilingual English-Chinese physicians as incorrectly conveying the meaning of the English source text. An accurate decision in this context would be a physician not giving the discharge instruction sentence to a monolingual patient. 
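Before turning to the results, a minimal sketch of the two reliance metrics defined in Equations (1) and (2) may help. The encoding of responses below is our own (the study materials do not prescribe one), and the CWA values reported in the next section appear to be on a percentage-like scale (i.e., multiplied by 100).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Response:
    correct_answer: bool  # s*: should this translation be given to the patient?
    chosen_answer: bool   # the physician's decision for this sentence
    confidence: int       # c_s in {1, ..., 5}


def accuracy(responses: List[Response]) -> float:
    """Equation (1): fraction of decisions that match the gold adequacy label."""
    return sum(r.chosen_answer == r.correct_answer for r in responses) / len(responses)


def confidence_weighted_accuracy(responses: List[Response]) -> float:
    """Equation (2): correct decisions add c_s/5, incorrect ones subtract c_s/5."""
    total = 0.0
    for r in responses:
        sign = 1.0 if r.chosen_answer == r.correct_answer else -1.0
        total += sign * r.confidence / 5.0
    return total / len(responses)  # the study appears to report 100x this value


if __name__ == "__main__":
    demo = [Response(True, True, 5), Response(False, True, 3), Response(False, False, 4)]
    print(accuracy(demo), confidence_weighted_accuracy(demo))
```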
## 5 Results We will show that physicians in the quality estimation (QE) condition had significantly higher confidence-weighted accuracy (CWA) than their baseline, whereas those in the backtranslation (BT) condition did not. We also found that while the QE intervention increased their overall CWA, physicians in the QE treatment group were significantly worse than those in the BT treatment group at detecting errors, especially those labeled with higher clinical risk. We end with a discussion of the potential complementary roles that these two interventions can play for informed reliance on MT in high-stakes settings. **QE Treatment Group has Higher Confidence-Weighted Accuracy.** Physicians in the QE treatment group (\(M=19.6,SD=13.7\)) had a significantly higher CWA than the baseline for that group (\(M=11.2,SD=15.3\), \(t(58)=2.1,p=0.03\), Figure 2). We did not find any difference in CWA between the physicians in the BT treatment group (\(M=8.6\), \(SD=11.3\)) and their baselines (\(M=7.9,SD=11.4,t(68)=0.2,p=0.7\)). This means that physicians significantly improved in their ability to rely appropriately on MT when presented with the QE evaluation, but not when presented with the backtranslation. The difference between the QE and BT treatment groups was significant (\(t(63)=3.4,p<0.001\)). Figure 2: The QE treatment group had a significantly higher confidence-weighted accuracy score than the baseline condition for that group as well as the BT treatment group. However, we did not find any significant difference between the BT treatment group and the baseline condition for that group. ns: not significant; \(*\): significant with p-value \(<0.05\); \(***\): p-value \(<0.001\). For discharge instruction sentences that were labeled as inducing higher clinical risk (moderately clinically significant, highly clinically significant, or life-threatening), the BT treatment group identified clinically harmful errors at a much higher rate. For example, consider the last sentence in Table 3, which gives medication instructions to a patient. The MT was annotated by bilingual physicians as _not adequate_ and inducing _highly clinically significant risk_. Physicians in the BT treatment group correctly decided not to give the MT output to a Chinese patient with 94.3% accuracy, while the QE treatment group did so with only 53.5% accuracy. **Post-Survey Analysis.** In a post-survey, we asked physicians to rate their confidence in using the QE and Google Translation systems in a clinical workflow on a scale of 1 to 5. Physicians in the QE group reported higher levels of confidence in the quality estimation system (\(M=2.6\)) compared to physicians in the BT group (\(M=1.6\)). A Mann-Whitney U test revealed a significant difference in responses between the groups, \(U=229\), \(p<.001\). We saw a similar difference in the response to whether they would like to use each tool for clinical decision-making. Appendix Figure 5 has full post-survey results. ## 6 Discussion We found that **QE and BT may play a complementary role**: QE can assist physicians in their decision to provide translated written discharge instructions to patients, while BT was more effective in detecting critical errors in MT systems. Combining aspects of BT and QE feedback may therefore benefit MT users in high-stakes settings. Our results show that, contrary to common MT wisdom, BT should not be entirely discounted as a quality feedback method for people, aligning with recent 
findings that BT-based metrics can complement off-the-shelf supervised QE systems in automatic QE settings (Agrawal et al., 2022). However, it remains to be seen whether presenting the BT output itself is needed, or whether providing finer-grained QE feedback could also play that role. \begin{table} \begin{tabular}{l l c c} \hline \hline **Discharge Instruction** & **Clinical Risk** & \multicolumn{2}{c}{**Accuracy (\%)**} \\ & & BT & QE \\ \hline We believe this was due to something called vasovagal response & moderately significant & 80 & 23 \\ \hline \hline \end{tabular} \end{table} Table 3: A sample of the sentences used in our study, the clinical risk of their translation as determined by bilingual physicians, and participant accuracy rates across conditions for that sentence. Figure 3: BT enables physicians to detect errors with high clinical risk at higher accuracy. *: p-value of 0.05. Figure 4: The QE treatment group had an overall higher CWA in correctly relying on MT systems when (a) translations were adequate whereas the BT treatment group had a higher CWA in judging (b) inadequate translations. ns: not significant; **: significant with p-value of 0.01; **: p-value of 0.0001. In the post-survey, we asked physicians how they would want translation quality estimates to be provided. Respondents reported that they would most want a binary indicator of whether the translation was correct, an explanation of where the errors occurred, and error categories relevant for medical purposes. Our study showed the potential of QE using one specific scheme, and motivates future work refining the **presentation of QE feedback** to best support physician needs. Our study took place in the midst of many discussions around the use of **large language models (LLMs) in clinical settings**, including their potential to assist physicians in writing discharge instructions (Arora and Arora, 2023), and the development of dedicated clinical language models, such as GatorTron, which was trained on over 90 billion words from electronic health records (Yang et al., 2022). Our survey asked physician respondents how they perceived the use of ChatGPT in their workflows (Homolak, 2023; Dave et al., 2023; Li et al., 2023). We found that 77% of physicians would use ChatGPT in their clinical workflows, barring regulatory requirements and legal implications. More specifically, 78% responded that they would use ChatGPT to summarize patient notes, including symptoms and treatment plans, 60% that they would use it to answer patient questions, 32% for clinical decision support and evidence-based recommendations, and 12% said they would never use it in their actual clinical workflows. This further highlights the urgent need for NLP work that develops appropriate mechanisms for people to make appropriate use of language generation tools beyond MT in clinical settings. 
This study paves the way for future work designing methods that combine the strengths of QE and BT, and contributes a human-centered evaluation design that can be used to further improve MT in clinical settings. More broadly, this work provides support for the usefulness of explanations for helping people rely on AI tools effectively in a real-world task. Our explanations focus on providing actionable feedback rather than explaining the internal workings of the MT model, aligning with recent calls to rethink explainable AI beyond interpretability (Miller, 2023). It remains to be seen how to provide feedback that gives users more agency in appropriately using imperfect MT outputs, and how to design for appropriate reliance on other NLP tools in clinical settings, including large language models. ## Limitations The study is naturally limited to specific experimental conditions. For example, we evaluated translation for a single language pair and a single MT system. While this is motivated by real-world communication needs in hospitals in the United States, it is unclear how these findings would generalize to other language pairs, including translation into, from, or between languages that are underrepresented in the MT training data, which would likely lead to lower translation quality, as well as translation between more closely related languages where users might be able to exploit cognates or other cues to assess translation quality even if they only speak one of the two languages involved. Future research will consider the combined use of multiple interventions and larger sample sizes of sentences. Emergency department discharge instructions represent one of many forms of communication between physicians and patients, and future work needs to explore how MT can be used for other settings. Additionally, our study focused on physicians' reliance on MT, but successful communication naturally requires taking into account the patient's perspective as well, which we will consider in future work. ## Ethics Statement This work was conducted in line with the EMNLP Ethics Policy. Models, datasets, and evaluation methodologies used are detailed in our Methods section. The discharge instruction sentences went through a rigorous de-identification process to ensure no patient information was compromised. The release of these discharge notes was approved by the university hospital from which we obtained them. Our study was approved by our university's Institutional Review Board (IRB). Physicians gave their consent to participate in the study, and were compensated for their time. ## Acknowledgements We would like to thank the physician participants for their time and the anonymous reviewers for their constructive feedback. We would also like to thank Coye Cheshire for early feedback on the study design. This work is supported by NSF Fairness in AI Grant 2147292.
2304.03508
$\boldsymbol{T}$-algebra homomorphisms between rational function semifields of tropical curves
We prove that an injective $\boldsymbol{T}$-algebra homomorphism between the rational function semifields of two tropical curves induces a surjective morphism between those tropical curves, where $\boldsymbol{T}$ is the tropical semifield $(\boldsymbol{R} \cup \{ -\infty \}, \operatorname{max}, +)$.
JuAe Song
2023-04-07T06:56:13Z
http://arxiv.org/abs/2304.03508v1
# \(\boldsymbol{T}\)-algebra homomorphisms between rational function semifields of tropical curves ###### Abstract. We prove that an injective \(\boldsymbol{T}\)-algebra homomorphism between the rational function semifields of two tropical curves induces a surjective morphism between those tropical curves, where \(\boldsymbol{T}\) is the tropical semifield \((\boldsymbol{R}\cup\{-\infty\},\max,+)\). Key words and phrases: morphisms between tropical curves, \(\boldsymbol{T}\)-algebra homomorphisms between rational function semifields of tropical curves, chip firing moves on tropical curves. Here \(\mathrm{Max}^{\prime}_{x}\) is defined as the set \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\{x\},l))(x^{\prime})=0\}\) when \(x\in\varGamma_{1}\backslash\varGamma_{1\infty}\) with \(l\in\boldsymbol{R}_{>0}\cup\{\infty\}\); \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\varGamma_{1}\backslash(y,x],\infty))^{\odot(-1)}(x^{\prime})=\infty\}\) when \(x\in\varGamma_{1\infty}\) with a finite point \(y\) on the unique edge incident to \(x\). Theorem 1.1 has the following corollary: **Corollary 1.2**.: _The following categories \(\mathscr{C},\mathscr{D}\) are isomorphic._ \((1)\) _The class \(\mathrm{Ob}(\mathscr{C})\) of objects of \(\mathscr{C}\) consists of the tropical curves._ _For \(\varGamma_{1},\varGamma_{2}\in\mathrm{Ob}(\mathscr{C})\), the set \(\mathrm{Hom}_{\mathscr{C}}(\varGamma_{1},\varGamma_{2})\) of morphisms from \(\varGamma_{1}\) to \(\varGamma_{2}\) consists of all of the injective \(\boldsymbol{T}\)-algebra homomorphisms \(\mathrm{Rat}(\varGamma_{1})\hookrightarrow\mathrm{Rat}(\varGamma_{2})\)._ \((2)\) _The class \(\mathrm{Ob}(\mathscr{D})\) of objects of \(\mathscr{D}\) consists of the tropical curves._ _For \(\varGamma_{1},\varGamma_{2}\in\mathrm{Ob}(\mathscr{D})\), the set \(\mathrm{Hom}_{\mathscr{D}}(\varGamma_{1},\varGamma_{2})\) of morphisms from \(\varGamma_{1}\) to \(\varGamma_{2}\) consists of all of the surjective morphisms \(\varGamma_{2}\twoheadrightarrow\varGamma_{1}\)._ The rest of this paper is structured as follows. In Section 2, we give the definitions of semirings and algebras, tropical curves, rational functions and chip firing moves on tropical curves, and morphisms between tropical curves. Section 3 contains the proofs of the assertions above. ## Acknowledgements The author thanks their supervisor Masanori Kobayashi and Yasuhito Nakajima for their helpful comments. ## 2. Preliminaries In this section, we recall several definitions which we need later. We refer to [4] (resp. [10]) for an introduction to the theory of semirings (resp. tropical geometry) and employ definitions in [8] (resp. [6]) related to semirings (resp. tropical curves). The definition of morphisms between tropical curves we employ in Subsection 2.4 is given in [2]. Nowadays it is usual to assume that a morphism between tropical curves is (finite) harmonic (cf. [2], [6]). However, since the pull-back of a morphism between tropical curves is a \(\boldsymbol{T}\)-algebra homomorphism between the rational function semifields of these tropical curves (see the beginning of Section 3), in our setting, it is natural to employ Chan's definition of morphisms between tropical curves in [2]. ### Semirings and algebras In this paper, a _semiring_ is a commutative semiring with the absorbing neutral element \(0\) for addition and the identity \(1\) for multiplication. If every nonzero element of a semiring \(S\) is multiplicatively invertible and \(0\neq 1\), then \(S\) is called a _semifield_. 
A map \(\varphi:S_{1}\to S_{2}\) between semirings is a _semiring homomorphism_ if for any \(x,y\in S_{1}\), \[\varphi(x+y)=\varphi(x)+\varphi(y),\ \varphi(x\cdot y)=\varphi(x)\cdot\varphi(y), \ \varphi(0)=0,\ \text{and}\ \varphi(1)=1.\] Given a semiring homomorphism \(\varphi:S_{1}\to S_{2}\), we call the pair \((S_{2},\varphi)\) (for short, \(S_{2}\)) a \(S_{1}\)_-algebra_. For a semiring \(S_{1}\), a map \(\psi:(S_{2},\varphi)\to(S^{\prime}_{2},\varphi^{\prime})\) between \(S_{1}\)-algebras is a \(S_{1}\)_-algebra homomorphism_ if \(\psi\) is a semiring homomorphism and \(\varphi^{\prime}=\psi\circ\varphi\). When there is no confusion, we write \(\psi:S_{2}\to S^{\prime}_{2}\) simply. The set \(\boldsymbol{T}:=\boldsymbol{R}\cup\{-\infty\}\) with two tropical operations: \[a\oplus b:=\max\{a,b\}\quad\text{and}\quad a\odot b:=a+b,\] where \(a,b\in\boldsymbol{T}\), becomes a semifield. Here, for any \(a\in\boldsymbol{T}\), we handle \(-\infty\) as follows: \[a\oplus(-\infty)=(-\infty)\oplus a=a\quad\text{and}\quad a\odot(-\infty)=(- \infty)\odot a=-\infty.\] \(\boldsymbol{T}\) is called the _tropical semifield_. \(\boldsymbol{B}:=(\{0,-\infty\},\max,+)\) is a subsemifield of \(\boldsymbol{T}\) called the _boolean semifield_. ### Tropical curves In this paper, a _graph_ is an unweighted, undirected, finite, connected nonempty multigraph that may have loops. For a graph \(G\), the set of vertices is denoted by \(V(G)\) and the set of edges by \(E(G)\). A vertex \(v\) of \(G\) is a _leaf end_ if \(v\) is incident to only one edge and this edge is not a loop. A _leaf edge_ is an edge of \(G\) incident to a leaf end. A _tropical curve_ is the underlying topological space of the pair \((G,l)\) of a graph \(G\) and a function \(l:E(G)\to\boldsymbol{R}_{>0}\cup\{\infty\}\), where \(l\) can take the value \(\infty\) only on leaf edges, together with an identification of each edge \(e\) of \(G\) with the closed interval \([0,l(e)]\). The interval \([0,\infty]\) is the one-point compactification of the interval \([0,\infty)\). We regard \([0,\infty]\) not just as a topological space but as almost a metric space. The distance between \(\infty\) and any other point is infinite. When \(l(e)=\infty\), the leaf end of \(e\) must be identified with \(\infty\). If \(E(G)=\{e\}\) and \(l(e)=\infty\), then we can identify either leaf end of \(e\) with \(\infty\). When a tropical curve \(\varGamma\) is obtained from \((G,l)\), the pair \((G,l)\) is called a _model_ for \(\varGamma\). There are many possible models for \(\varGamma\). We frequently identify a vertex (resp. an edge) of \(G\) with the corresponding point (resp. the corresponding closed subset) of \(\varGamma\). A model \((G,l)\) is _loopless_ if \(G\) is loopless. For a point \(x\) of a tropical curve \(\varGamma\), if \(x\) is identified with \(\infty\), then \(x\) is called a _point at infinity_; otherwise, \(x\) is called a _finite point_. 
\(\varGamma_{\infty}\) denotes the set of all points at infinity of \(\varGamma\). If \(\varGamma_{\infty}\) is empty, then \(\varGamma\) is called a _metric graph_. If \(x\) is a finite point, then the _valence_ \(\operatorname{val}(x)\) is the number of connected components of \(U\setminus\{x\}\) with any sufficiently small connected neighborhood \(U\) of \(x\); if \(x\) is a point at infinity, then \(\operatorname{val}(x):=1\). We construct a model \((G_{\circ},l_{\circ})\) called the _canonical model_ for \(\varGamma\) as follows. Generally, we define \(V(G_{\circ}):=\{x\in\varGamma\,|\,\operatorname{val}(x)\neq 2\}\) except for the following two cases. When \(\varGamma\) is homeomorphic to a circle \(S^{1}\), we define \(V(G_{\circ})\) as the set consisting of one arbitrary point of \(\varGamma\). When \(\varGamma\) has the pair \((T,l)\) as its model, where \(T\) is a tree consisting of three vertices and two edges and \(l(E(T))=\{\infty\}\), we define \(V(G_{\circ})\) as the set of two points at infinity and any finite point of \(\varGamma\). The union of \(V(G_{\circ})\) and the set of the midpoints of all loops of \(G_{\circ}\) defines the _canonical loopless model_ for \(\Gamma\). For a point \(x\) of \(\Gamma\), a _half-edge_ of \(x\) is a connected component of \(U\setminus\{x\}\) with any connected neighborhood \(U\) of \(x\) which consists of only two-valent points and \(x\). The word "an edge of \(\Gamma\)" means an edge of \(G_{\circ}\). ### Rational functions and chip firing moves Let \(\Gamma\) be a tropical curve. A continuous map \(f:\Gamma\to\boldsymbol{R}\cup\{\pm\infty\}\) is a _rational function_ on \(\Gamma\) if \(f\) is a constant function of \(-\infty\) or a piecewise affine function with integer slopes, with a finite number of pieces, that can take the values \(\pm\infty\) only at points at infinity. For a point \(x\) of \(\Gamma\) and a rational function \(f\in\operatorname{Rat}(\Gamma)\setminus\{-\infty\}\), \(x\) is a _zero_ (resp. _pole_) of \(f\) if the sign of the sum of outgoing slopes of \(f\) at \(x\) is positive (resp. negative). If \(x\) is a point at infinity, then we regard the outgoing slope of \(f\) at \(x\) as the slope of \(f\) from \(y\) to \(x\) times minus one, where \(y\) is a finite point on the leaf edge incident to \(x\) such that \(f\) has a constant slope on the interval \((y,x)\). \(\operatorname{Rat}(\Gamma)\) denotes the set of all rational functions on \(\Gamma\). For rational functions \(f,g\in\operatorname{Rat}(\Gamma)\) and a point \(x\in\Gamma\setminus\Gamma_{\infty}\), we define \[(f\oplus g)(x):=\max\{f(x),g(x)\}\quad\text{and}\quad(f\odot g)(x):=f(x)+g(x).\] We extend \(f\oplus g\) and \(f\odot g\) to points at infinity to be continuous on the whole of \(\Gamma\). Then both are rational functions on \(\Gamma\). Note that for any \(f\in\operatorname{Rat}(\Gamma)\), we have \[f\oplus(-\infty)=(-\infty)\oplus f=f\] and \[f\odot(-\infty)=(-\infty)\odot f=-\infty.\] Then \(\operatorname{Rat}(\Gamma)\) becomes a semifield with these two operations. Also, \(\operatorname{Rat}(\Gamma)\) becomes a \(\boldsymbol{T}\)-algebra with the natural inclusion \(\boldsymbol{T}\hookrightarrow\operatorname{Rat}(\Gamma)\). Note that for \(f,g\in\operatorname{Rat}(\Gamma)\), \(f=g\) means that \(f(x)=g(x)\) for any \(x\in\Gamma\). 
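To make the tropical operations \(\oplus=\max\) and \(\odot=+\) and the induced pointwise operations on \(\operatorname{Rat}(\Gamma)\) concrete, here is a minimal sketch (ours, not part of the paper). Rational functions are represented naively by their values on sample points of the segment \([0,2]\); only the arithmetic is illustrated, not the tropical-curve structure.

```python
import math

NEG_INF = -math.inf  # the tropical zero, -infinity


def t_plus(a: float, b: float) -> float:
    """Tropical addition a (+) b = max{a, b}."""
    return max(a, b)


def t_times(a: float, b: float) -> float:
    """Tropical multiplication a (*) b = a + b, with -infinity absorbing."""
    if a == NEG_INF or b == NEG_INF:
        return NEG_INF
    return a + b


# Two piecewise affine functions with integer slopes on Gamma = [0, 2],
# sampled on a grid.
xs = [i / 10 for i in range(21)]
f = [min(x, 1.0) for x in xs]   # slope 1 up to x = 1, then constant
g = [1.0 - x for x in xs]       # affine with slope -1

f_plus_g = [t_plus(fx, gx) for fx, gx in zip(f, g)]    # pointwise maximum
f_times_g = [t_times(fx, gx) for fx, gx in zip(f, g)]  # pointwise sum

print(max(f_plus_g), max(f_times_g))
```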
Let \(\Gamma_{1}\) be a closed subset of a tropical curve \(\Gamma\) which has a finite number of connected components and no connected components consisting of only a point at infinity, and \(l\) a positive number or infinity. The _chip firing move_ by \(\Gamma_{1}\) and \(l\) is defined as the rational function \(\operatorname{CF}(\Gamma_{1},l)(x):=-\min\{\operatorname{dist}(\Gamma_{1},x),l\}\) with \(x\in\Gamma\), where \(\operatorname{dist}(\Gamma_{1},x)\) denotes the distance between \(\Gamma_{1}\) and \(x\). ### Morphisms between tropical curves Let \(\varphi:\Gamma\to\Gamma^{\prime}\) be a continuous map between tropical curves. \(\varphi\) is a _morphism_ if there exist loopless models \((G,l)\) and \((G^{\prime},l^{\prime})\) for \(\Gamma\) and \(\Gamma^{\prime}\), respectively, such that \(\varphi\) can be regarded as a map \(V(G)\cup E(G)\to V(G^{\prime})\cup E(G^{\prime})\) satisfying \(\varphi(V(G))\subset V(G^{\prime})\) and for \(e\in\varphi(E(G))\), there exists a nonnegative integer \(\operatorname{deg}_{e}(\varphi)\) such that for any points \(x,y\) of \(e\), \(\operatorname{dist}_{\varphi(e)}(\varphi(x),\varphi(y))=\operatorname{deg}_{e} (\varphi)\cdot\operatorname{dist}_{e}(x,y)\), where \(\operatorname{dist}_{\varphi(e)}(\varphi(x),\varphi(y))\) denotes the distance between \(\varphi(x)\) and \(\varphi(y)\) in \(\varphi(e)\). ## 3. Main results In this section, we will prove Theorem 1.1 and Corollary 1.2. We first check that the converse of Theorem 1.1 holds: **Proposition 3.1**.: _Let \(\varGamma_{1},\varGamma_{2}\) be tropical curves. If \(\varphi:\varGamma_{2}\to\varGamma_{1}\) is a surjective morphism, then the pull-back \(\varphi^{*}:\operatorname{Rat}(\varGamma_{1})\to\operatorname{Rat}(\varGamma_ {2});f\mapsto f\circ\varphi\) is an injective \(\boldsymbol{T}\)-algebra homomorphism._ Proof.: Since \(\varphi\) is a morphism, for any \(f\in\operatorname{Rat}(\varGamma_{1})\), \(f\circ\varphi\) is a rational function on \(\varGamma_{2}\). By definition, \(\varphi^{*}\) is a \(\boldsymbol{T}\)-algebra homomorphism. For \(f,g\in\operatorname{Rat}(\varGamma_{1})\), if \(f\neq g\), then there exists \(x\in\varGamma_{1}\) such that \(f(x)\neq g(x)\). Since \(\varphi\) is surjective, there exists \(x^{\prime}\in\varGamma_{2}\) such that \(x=\varphi(x^{\prime})\). Hence we have \[\varphi^{*}(f)(x^{\prime}) =(f\circ\varphi)(x^{\prime})\] \[=f(\varphi(x^{\prime}))\] \[=f(x)\] \[\neq g(x)\] \[=g(\varphi(x^{\prime}))\] \[=(g\circ\varphi)(x^{\prime})\] \[=\varphi^{*}(g)(x^{\prime}).\] Thus \(\varphi^{*}\) is injective. By the following examples, we know that, in general, a semiring homomorphism between semifields may not be injective. **Example 3.2**.: The correspondence \(\boldsymbol{T}\to\boldsymbol{B};-\infty\neq t\mapsto 0;-\infty\mapsto-\infty\) is a noninjective semiring homomorphism. **Example 3.3**.: Let \(\varGamma:=[0,2]\) and \(\varGamma_{1}:=[0,1]\subset\varGamma\). The natural inclusion \(\iota:\varGamma_{1}\hookrightarrow\varGamma\) is a nonsurjective morphism and the pull-back \(\iota^{*}:\operatorname{Rat}(\varGamma)\to\operatorname{Rat}(\varGamma_{1}) ;f\mapsto f\circ\iota\) is the restriction map \(f\mapsto f|_{\varGamma_{1}}\) and is a surjective \(\boldsymbol{T}\)-algebra homomorphism which is not injective. By Example 3.3, we also know that a morphism between tropical curves may not be surjective and to consider the condition "\(\psi\) is injective" is fundamental. For more details on morphisms between tropical curves, see [2]. 
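Before turning to the main construction, which repeatedly manipulates chip firing moves of the form \(\operatorname{CF}(\{x\},l)\), a small numerical illustration may help. The sketch below (ours, not from the paper) simply evaluates the defining formula \(\operatorname{CF}(\Gamma_{1},l)(x)=-\min\{\operatorname{dist}(\Gamma_{1},x),l\}\) for a one-point firing set on a segment.

```python
def cf_point(firing_point: float, l: float, x: float) -> float:
    """Chip firing move CF({firing_point}, l) evaluated at x on a segment,
    i.e. -min{dist(firing_point, x), l}."""
    return -min(abs(x - firing_point), l)


# Gamma = [0, 3], firing set {1}, length l = 1.
samples = [i / 2 for i in range(7)]               # 0, 0.5, ..., 3
values = [cf_point(1.0, 1.0, x) for x in samples]
# The maximum value 0 is attained exactly at x = 1; the function has slope
# +/- 1 near the firing point and is constant -l once the distance exceeds l.
print(list(zip(samples, values)))
```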
Let \(\varGamma_{1},\varGamma_{2}\) be tropical curves and \(\psi:\operatorname{Rat}(\varGamma_{1})\to\operatorname{Rat}(\varGamma_{2})\) an injective \(\boldsymbol{T}\)-algebra homomorphism. The following lemma is easy but fundamental and is proven in the same way as the proof of [7, Lemma 3.7]. We include its proof here for readability. **Lemma 3.4**.: _For any \(f\in\operatorname{Rat}(\varGamma_{1})\), the following hold:_ (1)_\(\max\{f(x)\,|\,x\in\varGamma_{1}\}=\max\{\psi(f)(x^{\prime})\,|\,x^{\prime}\in \varGamma_{2}\}\), and_ (2)_\(\min\{f(x)\,|\,x\in\varGamma_{1}\}=\min\{\psi(f)(x^{\prime})\,|\,x^{\prime}\in \varGamma_{2}\}\)._ Proof.: If \(f\in\mathbf{T}\), the assertions are clear. Assume that \(f\) is not a constant function. Let \(a\) be the maximum value of \(f\). In this case, \(a\) is in \(\mathbf{R}\cup\{\infty\}\). Assume \(a\in\mathbf{R}\). For \(b\in\mathbf{R}\), we have \[f\oplus b\begin{cases}=b&\text{if }b\geq a,\\ \neq b&\text{if }b<a.\end{cases}\] Therefore we have \[\psi(f)\oplus b=\psi(f)\oplus\psi(b)=\psi(f\oplus b)\begin{cases}=\psi(b)=b&\text{if }b\geq a,\\ \neq\psi(b)=b&\text{if }b<a.\end{cases}\] Thus the maximum value of \(\psi(f)\) is \(a\). Assume \(a=\infty\). Then for any \(t\in\mathbf{T}\), we have \(f\oplus t\neq t\). Thus \[\psi(f)\oplus t=\psi(f)\oplus\psi(t)=\psi(f\oplus t)\neq\psi(t)=t\] holds. This means that the maximum value of \(\psi(f)\) is \(\infty\). For the minimum values of \(f\) and \(\psi(f)\), we can obtain the conclusion by applying the maximum value case for \(f^{\odot(-1)}=-f\) and \(\psi(f^{\odot(-1)})=-\psi(f)\) since \[\min\{f(x)\,|\,x\in\varGamma_{1}\}=-\max\{-f(x)\,|\,x\in\varGamma_{1}\}\] and \[\min\{\psi(f)(x^{\prime})\,|\,x^{\prime}\in\varGamma_{2}\}=-\max\{-\psi(f)(x^ {\prime})\,|\,x^{\prime}\in\varGamma_{2}\}.\qed\] Now we start to prove Theorem 1.1. The proof is broken into several steps. The main idea to construct the map \(\varphi:\varGamma_{2}\to\varGamma_{1}\) is that we extract the information of the "fibre" of \(x\in\varGamma_{1}\) from a rational function on \(\varGamma_{2}\) of the form \(\psi(\operatorname{CF}(\{x\},l))\) when \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\) with \(l\in\mathbf{R}_{>0}\cup\{\infty\}\) or \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}\) when \(x\in\varGamma_{1\infty}\) with a finite point \(y\) on the unique edge incident to \(x\). Note that a chip firing move of the form \(\operatorname{CF}(\{x\},l)\) or \(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty)^{\odot(-1)}\) takes its maximum value at and only at \(x\). **Claim 3.5**.: _For any \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\) and \(l_{1},l_{2}\in\mathbf{R}_{>0}\cup\{\infty\}\),_ \[\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\{x\},l_{1}))(x^{ \prime})=0\}=\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\{x\},l_ {2}))(x^{\prime})=0\}.\] Proof.: For \(l\) such that \(0<l<\infty\), since \[\operatorname{CF}(\{x\},l)=\operatorname{CF}(\{x\},\infty)\oplus(-l),\] we have \[\psi(\operatorname{CF}(\{x\},l))=\psi(\operatorname{CF}(\{x\},\infty))\oplus (-l). \tag{1}\] Hence we have the conclusion by Lemma 3.4. By Claim 3.5, for \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\), the set \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\{x\},l))(x^{\prime})=0\}\) is independent of the choice of \(l\in\boldsymbol{R}_{>0}\cup\{\infty\}\). Let \(\operatorname{Max}^{\prime}_{x}\) denote this set. 
Similarly, we can prove the following claim: **Claim 3.6**.: _For any \(x\in\varGamma_{1\infty}\) and finite points \(y_{1},y_{2}\) on the unique edge \(e\) incident to \(x\),_ \[\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma_{1} \setminus(y_{1},x],\infty))^{\odot(-1)}(x^{\prime})=\infty\}\] \[= \{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma_{1} \setminus(y_{2},x],\infty))^{\odot(-1)}(x^{\prime})=\infty\}.\] Proof.: We can choose a finite point \(y_{3}\) on \(e\) such that each \(y_{1}\) and \(y_{2}\) is not farther from \(x\) than \(y_{3}\). By Lemma 3.4, the maximum value of \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y_{1},x],\infty))^{\odot(-1)}\) is \(\infty\). For \(i=1,2\) and the value \(a_{i}:=\operatorname{CF}(\varGamma_{1}\setminus(y_{3},x],\infty)(y_{i})<0\), \[a_{i}\odot\operatorname{CF}(\varGamma_{1}\setminus(y_{3},x],\infty)^{\odot(- 1)}\oplus 0=\operatorname{CF}(\varGamma_{1}\setminus(y_{i},x],\infty)^{\odot(- 1)}.\] Thus we have \[a_{i}\odot\psi(\operatorname{CF}(\varGamma_{1}\setminus(y_{3},x],\infty))^{ \odot(-1)}\oplus 0=\psi(\operatorname{CF}(\varGamma_{1}\setminus(y_{i},x],\infty))^{ \odot(-1)}. \tag{2}\] Therefore we have the conclusion. By Claim 3.6, for \(x\in\varGamma_{1\infty}\), the set \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma_{1} \setminus(y,x],\infty))^{\odot(-1)}(x^{\prime})=\infty\}\) is independent of the choice of a finite point \(y\) on \(e\). Let \(\operatorname{Max}^{\prime}_{x}\) denote this set. **Claim 3.7**.: _For any \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\), there exists \(\varepsilon>0\) such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) has a constant slope on each connected component of \(U^{\prime}\setminus\operatorname{Max}^{\prime}_{x}\) and is constant \(-\varepsilon\) on \(\varGamma_{2}\setminus U^{\prime}\), where \(U^{\prime}\) is the \(\varepsilon\)-neighborhood of \(\operatorname{Max}^{\prime}_{x}\)._ Proof.: It is clear by the equality (1) in the proof of Claim 3.5. Similarly, we have the following claim: **Claim 3.8**.: _For any \(x\in\varGamma_{1\infty}\), there exists a finite point \(y\) on the unique edge incident to \(x\) such that \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}\) has each boundary point of \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma_{1} \setminus(y,x],\infty))^{\odot(-1)}(x^{\prime})=0\}\) as its zero; has from each such point a constant slope; has poles at and only at each point of \(\operatorname{Max}^{\prime}_{x}\); and has no other zeros and poles._ Proof.: It is clear by the equality (2) in the proof of Claim 3.6. **Claim 3.9**.: _For \(x,y\in\varGamma_{1}\setminus\varGamma_{1\infty}\), if \(x\neq y\), then \(\operatorname{Max}^{\prime}_{x}\cap\operatorname{Max}^{\prime}_{y}=\varnothing\)._ Proof.: Assume that \(\operatorname{Max}^{\prime}_{x}\cap\operatorname{Max}^{\prime}_{y}\neq\varnothing\). 
Since \(x\neq y\in\varGamma_{1}\setminus\varGamma_{1\infty}\), there exists \(\varepsilon>0\) such that \[\{z\in\varGamma_{1}\,|\,\operatorname{CF}(\{x\},\varepsilon)(z)=-\varepsilon \}\cup\{z\in\varGamma_{1}\,|\,\operatorname{CF}(\{y\},\varepsilon)(z)=- \varepsilon\}=\varGamma_{1}.\] We have \[\operatorname{CF}(\{x\},\varepsilon)^{\odot(-1)}\oplus\operatorname{CF}(\{y\}, \varepsilon)^{\odot(-1)}=\varepsilon.\] Thus \[\psi(\operatorname{CF}(\{x\},\varepsilon))^{\odot(-1)}\oplus\psi(\operatorname{ CF}(\{y\},\varepsilon))^{\odot(-1)}=\varepsilon.\] On the other hand, for any \(z^{\prime}\in\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}\), \[\psi(\operatorname{CF}(\{x\},\varepsilon))^{\odot(-1)}(z^{\prime})\oplus\psi( \operatorname{CF}(\{y\},\varepsilon))^{\odot(-1)}(z^{\prime})=0\neq\varepsilon.\] It is a contradiction. Thus we have \(\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}=\varnothing\). **Claim 3.10**.: _For \(x,y\in\varGamma_{1\infty}\), if \(x\neq y\), then \(\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}=\varnothing\)._ Proof.: Assume that \(\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}\neq\varnothing\). Since \(x\neq y\), there exists a finite point \(x_{1}\) (resp. \(y_{1}\)) on the unique edge incident to \(x\) (resp. \(y\)) such that \(\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x],\infty)\oplus\operatorname{ CF}(\varGamma_{1}\setminus(y_{1},y],\infty)=0\). Thus we have \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x],\infty))\oplus\psi( \operatorname{CF}(\varGamma_{1}\setminus(y_{1},y],\infty))=0\). On the other hand, for any \(z^{\prime}\in\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}\), we have \[\psi(\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x],\infty))(z^ {\prime})\oplus\psi(\operatorname{CF}(\varGamma_{1}\setminus(y_{1},y], \infty))(z^{\prime})\] \[= -\infty\oplus(-\infty)\] \[= -\infty\] \[\neq 0,\] which is a contradiction. **Claim 3.11**.: _If \(x\in\varGamma_{1\infty}\) and \(y\in\varGamma_{1}\setminus\varGamma_{1\infty}\), then \(\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}=\varnothing\)._ Proof.: Assume that there exists an element \(z^{\prime}\in\operatorname{Max}_{x}^{\prime}\cap\operatorname{Max}_{y}^{\prime}\). There exist \(x_{1}\in\varGamma_{1}\) and \(\varepsilon>0\) satisfying \[\{z\in\varGamma_{1}\mid\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x], \infty)(z)=0\}\cup\{z\in\varGamma_{1}\mid\operatorname{CF}(\{y\},\varepsilon )(z)=-\varepsilon\}=\varGamma_{1}.\] Then we have \[\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x],\infty)\oplus(-\varepsilon) \odot\operatorname{CF}(\{y\},\varepsilon)^{\odot(-1)}=0,\] and hence \[\psi(\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x],\infty))\oplus(- \varepsilon)\odot\psi(\operatorname{CF}(\{y\},\varepsilon))^{\odot(-1)}=0.\] On the other hand, by assumption, we have \[\psi(\operatorname{CF}(\varGamma_{1}\setminus(x_{1},x],\infty))(z ^{\prime})\oplus(-\varepsilon)\odot\psi(\operatorname{CF}(\{y\},\varepsilon) )^{\odot(-1)}(z^{\prime})\] \[= -\infty\oplus(-\varepsilon)\odot 0\] \[= -\varepsilon\] \[\neq 0,\] which is a contradiction. By Claims 3.9, 3.10, 3.11, the correspondence \(\operatorname{Max}_{x}^{\prime}\ni x^{\prime}\mapsto x\) becomes a map from \(\bigcup_{x\in\varGamma_{1}}\operatorname{Max}_{x}^{\prime}\) to \(\varGamma_{1}\). We call this map \(\varphi\). 
**Claim 3.12**.: _For \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\), \(\varepsilon>0\) such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7 and any \(d\) such that \(0<d<\varepsilon\), \(\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(x,y)=d}\operatorname{Max}_{y}^{ \prime}\subset\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\{x\}, \varepsilon))(x^{\prime})=-d\}\)._ Proof.: For \(y\) such that \(\operatorname{dist}(x,y)=d\), let \(\delta>0\) such that \(\psi(\operatorname{CF}(\{y\},\delta))\) satisfies all of the conditions in Claim 3.7. For any positive number \(\varepsilon_{1}<\varepsilon\), by the equality (1) in the proof of Claim 3.5, \(\psi(\operatorname{CF}(\{x\},\varepsilon_{1}))\) satisfies all of the conditions in Claim 3.7. Hence, if we need, by replacing \(\varepsilon\) with a smaller positive number, we can assume that \(d<\varepsilon<d+\delta\). Since \[\operatorname{CF}(\{x\},\varepsilon)\oplus(-d)\odot\operatorname{CF}(\{y\}, \delta)=\operatorname{CF}(\{x\},\varepsilon),\] we have \[\psi(\operatorname{CF}(\{x\},\varepsilon))\oplus(-d)\odot\psi( \operatorname{CF}(\{y\},\delta))=\psi(\operatorname{CF}(\{x\},\varepsilon)).\] Hence, for any \(y^{\prime}\in\operatorname{Max}_{y}^{\prime}\), \(\psi(\operatorname{CF}(\{x\},\varepsilon))(y^{\prime})\geq-d\). Assume that there exists \(y^{\prime}\in\operatorname{Max}_{y}^{\prime}\) such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))(y^{\prime})>-d\). Since \[\operatorname{CF}(\{y\},d)\oplus(-d)\odot\operatorname{CF}(\{x\}, \varepsilon)^{\odot(-1)}\] \[= -d\odot\operatorname{CF}(\{x\},\varepsilon)^{\odot(-1)},\] we have \[\psi(\operatorname{CF}(\{y\},d))\oplus(-d)\odot\psi( \operatorname{CF}(\{x\},\varepsilon))^{\odot(-1)}\] \[= -d\odot\psi(\operatorname{CF}(\{x\},\varepsilon))^{\odot(-1)}.\] On the other hand, we have \[\psi(\operatorname{CF}(\{y\},d))(y^{\prime})\oplus(-d)\odot\psi( \operatorname{CF}(\{x\},\varepsilon))^{\odot(-1)}(y^{\prime})\] \[= 0\oplus(-d)\odot\psi(\operatorname{CF}(\{x\},\varepsilon))^{ \odot(-1)}(y^{\prime})\] \[= 0\] \[> -d\odot\psi(\operatorname{CF}(\{x\},\varepsilon))^{\odot(-1)}(y ^{\prime}),\] which is a contradiction. **Claim 3.13**.: _For \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\), there exists \(\varepsilon>0\) such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7 and for any \(d\) such that \(0<d<\varepsilon\), \(\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(x,y)=d}\operatorname{Max}_{y}^ {\prime}\supset\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\{x\}, \varepsilon))(x^{\prime})=-d\}\)._ Proof.: Let \(\varepsilon>0\) be such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7 and \(\{y\in\varGamma_{1}\,|\,0<\operatorname{dist}(x,y)<\varepsilon\}\) consists of only two-valent points. For any \(d\) such that \(0<d<\varepsilon\), let \(y_{1}\ldots,y_{\operatorname{val}(x)}\) be all of the distinct points of \(\varGamma_{1}\) such that \(\operatorname{dist}(x,y_{i})=d\). Let \(\delta>0\) be such that each \(\psi(\operatorname{CF}(\{y_{i}\},\delta))\) satisfies all the conditions in Claim 3.7. For any positive number \(\varepsilon_{1}<\varepsilon\) (resp. \(\delta_{1}<\delta\)), by the equality (1) in the proof of Claim 3.5, \(\psi(\operatorname{CF}(\{x\},\varepsilon_{1}))\) (resp. \(\psi(\operatorname{CF}(\{y_{i}\},\delta_{1}))\)) satisfies all of the conditions in Claim 3.7. 
Hence, if we need, by replacing \(\varepsilon\) or \(\delta\) with a smaller positive number, we can assume that \(d<\varepsilon=d+\delta\). We have \[\big{(}\mathrm{CF}(\{x\},\varepsilon)^{\odot(-1)}\oplus d\big{)}^{ \odot(-1)}\] \[= \big{(}\mathrm{CF}(\{x\},\varepsilon)^{\odot(-2)}\oplus 2d\big{)}^{ \odot(-1)}\odot d\] \[\oplus(-d)\odot\mathrm{CF}(\{y_{1}\},\delta)\oplus\cdots\oplus(-d )\odot\mathrm{CF}(\{y_{\mathrm{val}(x)}\},\delta),\] and hence \[\big{(}\psi(\mathrm{CF}(\{x\},\varepsilon))^{\odot(-1)}\oplus d \big{)}^{\odot(-1)}\] \[= \big{(}\psi(\mathrm{CF}(\{x\},\varepsilon))^{\odot(-2)}\oplus 2d \big{)}^{\odot(-1)}\odot d\] \[\oplus(-d)\odot\psi(\mathrm{CF}(\{y_{1}\},\delta))\oplus\cdots \oplus(-d)\odot\psi(\mathrm{CF}(\{y_{\mathrm{val}(x)}\},\delta)).\] Assume that \(\bigcup_{i=1}^{\mathrm{val}(x)}\mathrm{Max}_{y_{i}}^{\prime}\not\supset\{x^ {\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\{x\},\varepsilon))(x^{\prime}) =-d\}\). There exists \(z^{\prime}\in\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\{x\}, \varepsilon))(x^{\prime})=-d\}\setminus\bigcup_{i=1}^{\mathrm{val}(x)} \mathrm{Max}_{y_{i}}^{\prime}\). When \(z^{\prime}\) is a boundary point of \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\mathrm{CF}(\{x\},\varepsilon))(x^{ \prime})\geq-d\}\), there exists \(w^{\prime}\) near \(z^{\prime}\) such that \(-d-\frac{\delta}{2}<\psi(\mathrm{CF}(\{x\},\varepsilon))(w^{\prime})<-d\). Since we can assume that \(\delta\) is sufficiently small so that \(\psi(\mathrm{CF}(\{y_{i}\},\delta))(w^{\prime})=-\delta\), we have \[\big{(}\psi(\mathrm{CF}(\{x\},\varepsilon))^{\odot(-1)}(w^{\prime })\oplus d)\big{)}^{\odot(-1)}\] \[= \psi(\mathrm{CF}(\{x\},\varepsilon))(w^{\prime})\] \[\neq \psi(\mathrm{CF}(\{x\},\varepsilon))^{\odot 2}(w^{\prime})\odot d\] \[= \big{(}\psi(\mathrm{CF}(\{x\},\varepsilon))^{\odot(-2)}(w^{ \prime})\oplus 2d\big{)}^{\odot(-1)}\odot d\] \[\oplus(-d)\odot\psi(\mathrm{CF}(\{y_{1}\},\delta))(w^{\prime}) \oplus\cdots\oplus(-d)\odot\psi(\mathrm{CF}(\{y_{\mathrm{val}(x)}\},\delta))( w^{\prime}).\] It is a contradiction. Hence the values such \(d\) are discrete even if there exist. Thus, if we need, by replacing \(\varepsilon\) with a smaller positive number, we have the conclusion. **Claim 3.14**.: _For \(\psi(\mathrm{CF}(\varGamma\setminus(y,x],\infty))^{\odot(-1)}\) in Claim 3.8 and any \(z\in(y,x)\), \(\mathrm{Max}_{z}^{\prime}\subset\{x^{\prime}\in\varGamma_{2}\,|\,\psi( \mathrm{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(x^{\prime})= \mathrm{dist}(y,z)\}\)._ Proof.: For any \(z\in(y,x)\), since \[\mathrm{CF}(\varGamma_{1}\setminus(y,x],\infty)^{\odot(-1)}\oplus \mathrm{dist}(y,z)\odot\mathrm{CF}(\{z\},\mathrm{dist}(y,z))\] \[= \mathrm{CF}(\varGamma_{1}\setminus(y,x],\infty)^{\odot(-1)},\] we have \[\psi(\mathrm{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1 )}\oplus\mathrm{dist}(y,z)\odot\psi(\mathrm{CF}(\{z\},\mathrm{dist}(y,z)))\] \[= \psi(\mathrm{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}.\] Thus, for any \(z^{\prime}\in\mathrm{Max}_{z}^{\prime}\), we have \[\psi(\mathrm{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(z^{\prime}) \geq\mathrm{dist}(y,z).\] Assume that there exists \(z^{\prime}\in\operatorname{Max}_{z}^{\prime}\) such that \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(z^{ \prime})>\) dist\((y,z)\). 
Since \[\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty)\oplus(-2 \operatorname{dist}(y,z))\oplus(-\operatorname{dist}(y,z))\odot\operatorname{ CF}(\{z\},\operatorname{dist}(y,z))\] \[= \operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty)\oplus(-2 \operatorname{dist}(y,z)),\] we have \[\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))\oplus(-2 \operatorname{dist}(y,z))\oplus(-\operatorname{dist}(y,z))\odot\psi( \operatorname{CF}(\{z\},\operatorname{dist}(y,z)))\] \[= \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))\oplus( -2\operatorname{dist}(y,z)).\] On the other hand, \[\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))(z^{ \prime})\oplus(-2\operatorname{dist}(y,z))\oplus(-\operatorname{dist}(y,z)) \odot\psi(\operatorname{CF}(\{z\},\operatorname{dist}(y,z)))(z^{\prime})\] \[= -\operatorname{dist}(y,z)\] \[> \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))(z^{ \prime})\oplus(-2\operatorname{dist}(y,z)),\] which is a contradiction. In conclusion, for any \(z^{\prime}\in\operatorname{Max}_{z}^{\prime}\), \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(z^{ \prime})=\operatorname{dist}(y,z)\). **Claim 3.15**.: _For \(\psi(\operatorname{CF}(\varGamma\setminus(y,x],\infty))^{\odot(-1)}\) in Claim 3.8 and any \(z\in(y,x)\), \(\operatorname{Max}_{z}^{\prime}\supset\{x^{\prime}\in\varGamma_{2}\,|\,\psi( \operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(x^{\prime})= \operatorname{dist}(y,z)\}\)._ Proof.: Assume that there exists \(z\in(y,x)\) such that \(\operatorname{Max}_{z}^{\prime}\not\supset\{x^{\prime}\in\varGamma_{2}\,|\, \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(x^{ \prime})=\operatorname{dist}(y,z)\}\). For any \(y_{1}\in(y,z)\), since \[\operatorname{CF}(\varGamma_{1}\setminus(y_{1},x],\infty)^{ \odot(-1)}\odot\operatorname{dist}(y,y_{1})\] \[= \operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty)^{\odot(-1) }\oplus\operatorname{dist}(y,y_{1}),\] we have \[\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma _{1}\setminus(y,x],\infty))^{\odot(-1)}(x^{\prime})=\operatorname{dist}(y,z)\}\] \[= \{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma _{1}\setminus(y_{1},x],\infty))^{\odot(-1)}(x^{\prime})=\operatorname{dist}(y_ {1},z)\}.\] As \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}\) satisfies all of the conditions in Claim 3.8, so does \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(y_{1},x],\infty))^{\odot(-1)}\) by the equality (2) in the proof of Claim 3.6. Thus, by replacing \(y\) with \(y_{1}\) if we need, we can assume that dist\((y,z)\) plays the role of \(\varepsilon\) in Claim 3.13 for \(\psi(\operatorname{CF}(\{z\},\operatorname{dist}(y,z)))\) and that there exists a point \(w^{\prime}\) of some half-edge of a point of \(\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\varGamma_{1}\setminus (y,x],\infty))^{\odot(-1)}(x^{\prime})=\operatorname{dist}(y,z)\}\setminus \operatorname{Max}_{z}^{\prime}\neq\varnothing\) such that dist\((y,z)<\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{\odot(-1)}(w^{\prime})<\frac{3}{2} \operatorname{dist}(y,z)\) and \(\psi(\operatorname{CF}(\{z\},\operatorname{dist}(y,z)))(w^{\prime})=\) \(-\operatorname{dist}(y,z)\). 
Hence we have \[\big{(}\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{ \odot(-1)}(w^{\prime})\oplus\operatorname{dist}(y,z)\big{)}^{\odot(-1)}\] \[\oplus(-\operatorname{dist}(y,z))\odot\psi(\operatorname{CF}(\{z \},\operatorname{dist}(y,z)))(w^{\prime})\] \[= \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))(w^{ \prime})\oplus(-\operatorname{dist}(y,z))\odot(-\operatorname{dist}(y,z))\] \[= \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))(w^{ \prime})\] \[\neq \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{ \odot 2}(w^{\prime})\odot\operatorname{dist}(y,z)\] \[= \psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty))^{ \odot 2}(w^{\prime})\odot\operatorname{dist}(y,z)\oplus(-\operatorname{dist}(y,z)) \odot(-\operatorname{dist}(y,z))\] \[= \big{(}\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x], \infty))^{\odot(-2)}(w^{\prime})\oplus 2\operatorname{dist}(y,z)\big{)}^{\odot(-1)} \odot\operatorname{dist}(y,z)\] \[\oplus(-\operatorname{dist}(y,z))\odot\psi(\operatorname{CF}(\{z \},\operatorname{dist}(y,z)))(w^{\prime}).\] On the other hand, since \[(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty)^{ \odot(-1)}\oplus\operatorname{dist}(y,z))^{\odot(-1)}\oplus(-\operatorname {dist}(y,z))\odot\operatorname{CF}(\{z\},\operatorname{dist}(y,z))\] \[= (\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty)^{ \odot(-2)}\oplus 2\operatorname{dist}(y,z))^{\odot(-1)}\odot\operatorname{ dist}(y,z)\] \[\oplus(-\operatorname{dist}(y,z))\odot(\operatorname{CF}(\{z\}, \operatorname{dist}(y,z)),\] we have \[\big{(}\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x],\infty ))^{\odot(-1)}\oplus\operatorname{dist}(y,z)\big{)}^{\odot(-1)}\] \[\oplus(-\operatorname{dist}(y,z))\odot\psi(\operatorname{CF}(\{z \},\operatorname{dist}(y,z)))\] \[= \big{(}\psi(\operatorname{CF}(\varGamma_{1}\setminus(y,x], \infty))^{\odot(-2)}\oplus 2\operatorname{dist}(y,z)\big{)}^{\odot(-1)} \odot\operatorname{dist}(y,z)\] \[\oplus(-\operatorname{dist}(y,z))\odot\psi(\operatorname{CF}(\{z \},\operatorname{dist}(y,z))),\] which is a contradiction. By Claims 3.12, 3.13, 3.14, 3.15, \(\varphi\) is continuous. **Claim 3.16**.: _For any \(x\in\varGamma_{1\infty}\), \(\operatorname{Max}^{\prime}_{x}\subset\varGamma_{2\infty}\)._ Proof.: By Lemma 3.4, it is clear. **Claim 3.17**.: \(\bigcup_{x\in\varGamma_{1}\setminus\varGamma_{1\infty}}\operatorname{Max}^{ \prime}_{x}\supset\varGamma_{2}\setminus\varGamma_{2\infty}\)_._ Proof.: Assume that \(\bigcup_{x\in\varGamma_{1}\setminus\varGamma_{1\infty}}\operatorname{Max}^{ \prime}_{x}\not\supset\varGamma_{2}\setminus\varGamma_{2\infty}\). Since the boundary set of \(\bigcup_{x\in\varGamma_{1}\setminus\varGamma_{1\infty}}\operatorname{Max}^{ \prime}_{x}\) in \(\varGamma_{2}\setminus\varGamma_{2\infty}\) is not empty, for any element \(z^{\prime}\) in it, there exists \(z\in\varGamma_{1}\setminus\varGamma_{1\infty}\) such that the boundary set of \(\operatorname{Max}^{\prime}_{z}\) in \(\varGamma_{2}\setminus\varGamma_{2\infty}\) contains \(z^{\prime}\). Hence \(\psi(\operatorname{CF}(\{z\},\varepsilon))\) takes values less than zero on any half-edge of \(z^{\prime}\) not in \(\operatorname{Max}^{\prime}_{z}\) with a positive number \(\varepsilon\) by Lemma 3.4. On the other hand, by Claim 3.13, when \(\varepsilon\) is sufficiently small, all such half-edges are contained in \(\operatorname{Max}^{\prime}_{z_{1}}\) for any point \(z_{1}\) near \(z\), which is a contradiction. 
**Claim 3.18**.: \(\bigcup_{x\in\varGamma_{1}}\operatorname{Max}^{\prime}_{x}=\varGamma_{2}\)_._ Proof.: By Claim 3.17, \(\left(\varGamma_{2}\setminus\bigcup_{x\in\varGamma_{1}\setminus\varGamma_{1\infty }}\mathrm{Max}^{\prime}_{x}\right)\subset\varGamma_{2\infty}\). Let \(x^{\prime}_{1},\ldots,x^{\prime}_{n}\) be all of the distinct points of \(\varGamma_{2}\setminus\bigcup_{x\in\varGamma_{1}\setminus\varGamma_{1\infty }}\mathrm{Max}^{\prime}_{x}\). Let \(e^{\prime}_{i}\) be the unique edge incident to \(x^{\prime}_{i}\). Let \(\{z^{\prime}_{j}\}\subset\varGamma_{2}\setminus\varGamma_{2\infty}\) be a convergent sequence such that \(z^{\prime}_{j}\to x^{\prime}_{i}\) as \(j\to\infty\). Since \(\varphi\) is continuous on \(\bigcup_{x\in\varGamma_{1}}\mathrm{Max}^{\prime}_{x}\) and \(z^{\prime}_{j}\) is on \(e^{\prime}_{i}\) for each sufficiently large number \(j\), the sequence \(\{\varphi(z^{\prime}_{j})\}\) is convergent. The limit \(x\) of \(\{\varphi(z^{\prime}_{j})\}\) must be in \(\varGamma_{1\infty}\). In fact, if \(x\) is not in \(\varGamma_{1\infty}\), then there exists a sufficiently small positive number \(\varepsilon>0\) as in Claim 3.13 for \(\psi(\mathrm{CF}(\{x\},\varepsilon))\). By Claims 3.12, 3.13, there exists a sufficiently large number \(M\) such that for any \(j>M\), \(-\varepsilon<\psi(\mathrm{CF}(\{x\},\varepsilon))(z^{\prime}_{j})<0\). This means that the sequence \(\{z^{\prime}_{j}\}\) is not convergent to a point at infinity, which is a contradiction. Thus \(x\in\varGamma_{1\infty}\). We can choose \(z^{\prime}_{j}\) whose image by \(\varphi\) is on the unique edge incident to \(x\) and such that \(\psi(\mathrm{CF}(\varGamma_{1}\setminus(\varphi(z^{\prime}_{j}),x],\infty))^{ \odot(-1)}\) satisfies all of the conditions in Claim 3.8. For any \(w\in(\varphi(z^{\prime}_{j}),x)\), since \[\mathrm{CF}(\varGamma_{1}\setminus(\varphi(z^{\prime}_{j}),x], \infty)^{\odot(-1)}\oplus\mathrm{dist}(\varphi(z^{\prime}_{j}),w)\odot \mathrm{CF}(\{w\},\mathrm{dist}(\varphi(z^{\prime}_{j}),w))\] \[= \mathrm{CF}(\varGamma_{1}\setminus(\varphi(z^{\prime}_{j}),x], \infty)^{\odot(-1)},\] we have \[\psi(\mathrm{CF}(\varGamma_{1}\setminus(\varphi(z^{\prime}_{j}),x], \infty))^{\odot(-1)}\oplus\mathrm{dist}(\varphi(z^{\prime}_{j}),w)\odot\psi( \mathrm{CF}(\{w\},\mathrm{dist}(\varphi(z^{\prime}_{j}),w)))\] \[= \psi(\mathrm{CF}(\varGamma_{1}\setminus(\varphi(z^{\prime}_{j}),x ],\infty))^{\odot(-1)}.\] This means that \(x^{\prime}_{i}\in\mathrm{Max}^{\prime}_{x}\). In conclusion, \(\bigcup_{x\in\varGamma_{1}}\mathrm{Max}^{\prime}_{x}=\varGamma_{2}\). **Claim 3.19**.: \(\varphi\) _is a surjective morphism from \(\varGamma_{2}\) to \(\varGamma_{1}\)._ Proof.: By the discussion so far and Claim 3.18, \(\varphi\) is a surjective continuous map from \(\varGamma_{2}\) to \(\varGamma_{1}\). For \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\), by the definition of \(\mathrm{Max}^{\prime}_{x}\), if \(\mathrm{Max}^{\prime}_{x}\cap\varGamma_{2\infty}\neq\varnothing\), then for each \(x^{\prime}\in\mathrm{Max}^{\prime}_{x}\cap\varGamma_{2\infty}\), \(\mathrm{Max}^{\prime}_{x}\) contains the segment \((y^{\prime},x^{\prime}]\) with a finite point \(y^{\prime}\) on the unique edge incident to \(x^{\prime}\). Since \(\varGamma_{2\infty}\) is finite, there are only a finite number of such points \(x\) by Claims 3.9, 3.10, 3.11. 
Also, by Claims 3.14, 3.15, for each point \(x^{\prime}\) of \(\bigcup_{x\in\varGamma_{1\infty}}\mathrm{Max}^{\prime}_{x}\), which is in \(\varGamma_{2\infty}\) by Claim 3.16, there exists a finite point \(y^{\prime}\) on the unique edge incident to \(x^{\prime}\) such that any \(z^{\prime}\in(y^{\prime},x^{\prime})\) is an isolated point of \(\mathrm{Max}^{\prime}_{\varphi(z^{\prime})}\). For each of these two kinds of \(x^{\prime}\), we fix one \(y^{\prime}\). The closed subset \(\widetilde{\varGamma_{2}}\) of \(\varGamma_{2}\) obtained from \(\varGamma_{2}\) by removing all intervals \((y^{\prime},x^{\prime}]\) is a metric graph by Claim 3.18. Since \(\widetilde{\varGamma_{2}}\) is a compact metric space, it is sequentially compact. Let \(V^{\prime}_{1}\) be the set of all points each whose neighborhood contains half-edges where \(\varphi\) has distinct expansion factors. Assume that \(V^{\prime}_{1}\) is an infinite set. Then \(\widetilde{\varGamma_{2}}\) has infinitely many points of \(V^{\prime}_{1}\). Every sequence in \(\widetilde{\varGamma_{2}}\cap V^{\prime}_{1}\) has a convergent subsequence \(\{x^{\prime}_{i}\}\). If \(x^{\prime}_{i}\to x^{\prime}\) as \(i\to\infty\), then \(\varphi(x^{\prime}_{i})\to\varphi(x^{\prime})\) as \(i\to\infty\) by the continuity of \(\varphi\). For \(x^{\prime}\), let \(\varepsilon>0\) be as in Claim 3.13. There exist infinitely many such that \(x^{\prime}_{i}\) is in \(\{y^{\prime}\in\varGamma_{2}\,|\,-\varepsilon<\psi(\operatorname{CF}(\{\varphi(x^{ \prime})\},\varepsilon))(y^{\prime})<0\}\) by Claim 3.12. Since there are only finitely many non-two-valent points on a tropical curve, we can assume that this \(x^{\prime}_{i}\) is two-valent. By Claims 3.12, 3.13, there exists a neighborhood \(U\) of \(x^{\prime}_{i}\) where \(\varphi\) has a constant expansion factor, which is a contradiction. Hence \(V^{\prime}_{1}\) must be a finite set. Let \(V^{\prime}_{2}\) be the union of \(V^{\prime}_{1}\) and the set of vertices of the underlying graph of the canonical loopless model for \(\varGamma_{2}\). Let \(V\) be the union of \(\varphi(V^{\prime}_{2})\) and the set of vertices of the underlying graph of the canonical loopless model for \(\varGamma_{1}\). Let \(V^{\prime}\) be the union of \(V^{\prime}_{2}\) and the boundary set of \(\varphi^{-1}(V)\) in \(\varGamma_{2}\). Then \(V\) and \(V^{\prime}\) determine loopless models for \(\varGamma_{1}\) and \(\varGamma_{2}\) respectively. By the constructions of \(V\) and \(V^{\prime}\), we can check that \(\varphi\) is a surjective morphism \(\varGamma_{2}\to\varGamma_{1}\) with these loopless models. The uniqueness of such \(\varphi\) is clear by the definition of \(\varphi\). In conclusion, we have Theorem 1.1. **Lemma 3.20**.: _For \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\), \(\varepsilon>0\) such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7 and any \(d\) such that \(0<d<\varepsilon\), \(\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(x,y)=d}\operatorname{Max}^{ \prime}_{y}\supset\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}( \{x\},\varepsilon))(x^{\prime})=-d\}\)._ Proof.: By the proof of Claim 3.13, numbers \(d\) such that \(0<d<\varepsilon\) and \(\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(x,y)=d}\operatorname{Max}^{ \prime}_{y}\not\supset\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF} (\{x\},\varepsilon))(x^{\prime})=-d\}\) are discrete even if there exist. Let \(d_{1}\) be the minimum number of such \(d\). 
Then for any \(l\) such that \(0<l<d_{1}\), we have \(\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(x,y)=l}\operatorname{Max}^{\prime}_{y}=\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\{x\},\varepsilon))(x^{\prime})=-l\}\) by the definition of \(d_{1}\) and Claim 3.12. Let \(z^{\prime}\in\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\{x\},\varepsilon))(x^{\prime})=-d_{1}\}\setminus\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(x,y)=d_{1}}\operatorname{Max}^{\prime}_{y}\). Since \(\psi(\operatorname{CF}(\{x\},\varepsilon))(z^{\prime})=-d_{1}>-\varepsilon\) and \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7, \(z^{\prime}\in\varGamma_{2}\setminus\varGamma_{2\infty}\). By Claim 3.17, there exists \(z\in\varGamma_{1}\setminus\varGamma_{1\infty}\) such that \(\operatorname{Max}^{\prime}_{z}\ni z^{\prime}\). Then \(\operatorname{dist}(x,z)>d_{1}\). By Claims 3.12, 3.13, there exists \(\delta>0\) such that \(\psi(\operatorname{CF}(\{z\},\delta))\) satisfies all of the conditions in Claim 3.7 and for any \(l\) such that \(0<l<\delta\), \(\bigcup_{y\in\varGamma_{1}:\operatorname{dist}(z,y)=l}\operatorname{Max}^{\prime}_{y}=\{x^{\prime}\in\varGamma_{2}\,|\,\psi(\operatorname{CF}(\{z\},\delta))(x^{\prime})=-l\}\). Since rational functions on tropical curves are continuous, there exists \(y^{\prime}\in\varGamma_{2}\setminus\varGamma_{2\infty}\) near \(z^{\prime}\) such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))(y^{\prime})>-d_{1}\) and \(\psi(\operatorname{CF}(\{z\},\delta))(y^{\prime})>-\delta\). Then we have \(d_{1}<\operatorname{dist}(x,z)\leq\operatorname{dist}(x,\varphi(y^{\prime}))+\operatorname{dist}(\varphi(y^{\prime}),z)=-\psi(\operatorname{CF}(\{x\},\varepsilon))(y^{\prime})-\psi(\operatorname{CF}(\{z\},\delta))(y^{\prime})<d_{1}+\delta\). Since \(\delta\) can be chosen arbitrarily small, this is a contradiction.

**Lemma 3.21**.: _Let \(V\) be as in the proof of Claim 3.19 and \((G,l)\) the model for \(\varGamma_{1}\) such that \(V(G)=V\). Let \(e\in E(G)\). For any point \(x\) of \(e\) other than the endpoints \(v,w\) (possibly \(v=w\)) of \(e\), let \(l_{x}\) be the minimum of \(\operatorname{dist}(x,v)\) and \(\operatorname{dist}(x,w)\). Then \(\psi(\operatorname{CF}(\{x\},l_{x}))\) satisfies all of the conditions in Claim 3.7._

Proof.: Assume that the assertion does not hold. Then there exist \(e\in E(G)\) and a point \(x\) of \(e\) other than its endpoint(s) such that \(\psi(\operatorname{CF}(\{x\},l_{x}))\) does not satisfy some condition in Claim 3.7. By Claim 3.7 and the equality (1) in the proof of Claim 3.5, there exists \(\varepsilon\) such that \(0<\varepsilon<l_{x}\), \(\psi(\mathrm{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7, and for any \(l>0\), \(\psi(\mathrm{CF}(\{x\},\varepsilon+l))\) does not satisfy some condition in Claim 3.7. Let \(y_{1},y_{2}\) be the two distinct points of \(e\) such that \(\mathrm{dist}(x,y_{i})=\varepsilon\). Let \(\delta>0\) be such that \(\psi(\mathrm{CF}(\{y_{i}\},\delta))\) satisfies all of the conditions in Claim 3.7 for \(i=1,2\), and such that \(\varepsilon+\delta\leq l_{x}\) and \(\mathrm{dist}(y_{1},y_{2})>2\delta\).
Since \[\mathrm{CF}(\{x\},\varepsilon+\delta)\oplus(-\varepsilon)\odot\mathrm{CF}(\{y_{i}\},\delta)=\mathrm{CF}(\{x\},\varepsilon+\delta),\] we have \[\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))\oplus(-\varepsilon)\odot\psi(\mathrm{CF}(\{y_{i}\},\delta))=\psi(\mathrm{CF}(\{x\},\varepsilon+\delta)).\] (A direct pointwise check of the first of these identities, under explicit assumptions on \(\operatorname{CF}\), is sketched at the end of this section.) By Claim 3.12 and Lemma 3.20, the slope of \(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))\) (resp. \(\psi(\mathrm{CF}(\{y_{i}\},\delta))\)) on each connected component of \(\{x^{\prime}\in\varGamma_{2}\,|\,-\varepsilon<\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(x^{\prime})<0\}\) (resp. \(\{x^{\prime}\in\varGamma_{2}\,|\,-\delta<\psi(\mathrm{CF}(\{y_{i}\},\delta))(x^{\prime})<0\}\)) coincides with the expansion factor of \(\varphi\) on the connected component. Hence there exists \(z^{\prime}\in\varGamma_{2}\) such that \(\psi(\mathrm{CF}(\{y_{i}\},\delta))(z^{\prime})=-\,\mathrm{dist}(y_{i},\varphi(z^{\prime}))>-\delta\) and \(-\varepsilon+\psi(\mathrm{CF}(\{y_{i}\},\delta))(z^{\prime})<\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(z^{\prime})\leq-\varepsilon\) for \(i=1\) or \(2\). Without loss of generality, we may assume that \(i=1\). Note that in this case \(\psi(\mathrm{CF}(\{y_{2}\},\delta))(z^{\prime})=-\delta\). For any positive integer \(n\), since \[\left(\mathrm{CF}(\{x\},\varepsilon+\delta)^{\odot(-1)}\oplus\varepsilon\right)^{\odot(-1)}\] \[=\left(\mathrm{CF}(\{x\},\varepsilon+\delta)^{\odot(-n)}\oplus n\varepsilon\right)^{\odot(-1)}\odot(n-1)\varepsilon\] \[\oplus(-\varepsilon)\odot\mathrm{CF}(\{y_{1}\},\delta)\oplus(-\varepsilon)\odot\mathrm{CF}(\{y_{2}\},\delta),\] we have \[\left(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))^{\odot(-1)}\oplus\varepsilon\right)^{\odot(-1)}\] \[=\left(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))^{\odot(-n)}\oplus n\varepsilon\right)^{\odot(-1)}\odot(n-1)\varepsilon\] \[\oplus(-\varepsilon)\odot\psi(\mathrm{CF}(\{y_{1}\},\delta))\oplus(-\varepsilon)\odot\psi(\mathrm{CF}(\{y_{2}\},\delta)).\] If \(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(z^{\prime})<-\varepsilon\), then the above equality cannot hold, since the left-hand side takes the value \(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(z^{\prime})\) at \(z^{\prime}\) and the right-hand side is less than it at \(z^{\prime}\). Thus \(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(z^{\prime})=-\varepsilon\). By the above argument, for \(w^{\prime}\in\varGamma_{2}\) and \(k\in\{1,2\}\), if \(\psi(\mathrm{CF}(\{y_{k}\},\delta))(w^{\prime})>-\delta\) and \(-\varepsilon+\psi(\mathrm{CF}(\{y_{k}\},\delta))(w^{\prime})<\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(w^{\prime})\leq-\varepsilon\), then \(\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(w^{\prime})=-\varepsilon\). Since \(\delta\) is small and rational functions on tropical curves are continuous, there exists \(w^{\prime}\in\varGamma_{2}\) such that \(\psi(\mathrm{CF}(\{y_{1}\},\delta))(w^{\prime})=\psi(\mathrm{CF}(\{y_{2}\},\delta))(w^{\prime})=-\delta\) and \(-\varepsilon-\frac{\delta}{n}<\psi(\mathrm{CF}(\{x\},\varepsilon+\delta))(w^{\prime})<-\varepsilon\). However, by the above equality, such \(w^{\prime}\) cannot exist. Therefore we have the conclusion.

**Corollary 3.22**.: _Let \(\psi\) and \(\varphi\) be as in Theorem 1.1.
Then_ \[\psi(f)=f\circ\varphi\] _holds for any \(f\in\mathrm{Rat}(\varGamma_{1})\)._

In the proof of Corollary 3.22, we will use the proof of the main theorem of [6], which states that for a tropical curve \(\varGamma\), \(\operatorname{Rat}(\varGamma)\) is finitely generated as a semifield over \(\boldsymbol{T}\); in that proof, the author gave a finite generating set of \(\operatorname{Rat}(\varGamma)\) as follows. Let \(\varGamma^{\prime}\) be the metric graph obtained from \(\varGamma\) by contracting all edges of length \(\infty\). For the canonical model \((G^{\prime}_{\circ},l^{\prime}_{\circ})\) for \(\varGamma^{\prime}\), fix a direction on the edges of \(G^{\prime}_{\circ}\). We identify each edge \(e^{\prime}\in E(G^{\prime}_{\circ})\) with the interval \([0,l^{\prime}_{\circ}(e^{\prime})]\) with this direction. For each edge \(e^{\prime}\in E(G^{\prime}_{\circ})\), let \(x_{e^{\prime}}=\frac{l^{\prime}_{\circ}(e^{\prime})}{4}\), \(y_{e^{\prime}}=\frac{l^{\prime}_{\circ}(e^{\prime})}{2}\), and \(z_{e^{\prime}}=\frac{3l^{\prime}_{\circ}(e^{\prime})}{4}\). Let \[f_{e^{\prime}}:=\operatorname{CF}\left(\{y_{e^{\prime}}\},\frac{l^{\prime}_{\circ}(e^{\prime})}{2}\right),\quad g_{e^{\prime}}:=\operatorname{CF}\left(\{x_{e^{\prime}}\},\frac{l^{\prime}_{\circ}(e^{\prime})}{4}\right),\quad h_{e^{\prime}}:=\operatorname{CF}\left(\{z_{e^{\prime}}\},\frac{l^{\prime}_{\circ}(e^{\prime})}{4}\right).\] The natural inclusion \(\iota:\varGamma^{\prime}\hookrightarrow\varGamma\) induces the natural inclusion \(\kappa:\operatorname{Rat}(\varGamma^{\prime})\hookrightarrow\operatorname{Rat}(\varGamma)\) such that, for any \(f^{\prime}\in\operatorname{Rat}(\varGamma^{\prime})\) and \(x^{\prime}\in\varGamma^{\prime}\), \(\kappa(f^{\prime})(\iota(x^{\prime}))=f^{\prime}(x^{\prime})\), and \(\kappa(f^{\prime})\) is extended so as to be constant on each connected component of \(\varGamma\setminus\iota(\varGamma^{\prime})\). Let \(L_{1},\ldots,L_{m}\) be all the connected components of \(\varGamma\setminus\iota(\varGamma^{\prime})\). Then \(\{\kappa(f_{e^{\prime}}),\kappa(g_{e^{\prime}}),\kappa(h_{e^{\prime}}),\kappa(\operatorname{CF}(\{v^{\prime}\},\infty)),\operatorname{CF}(\varGamma\setminus L_{1},\infty),\ldots,\operatorname{CF}(\varGamma\setminus L_{m},\infty)\,|\,e^{\prime}\in E(G^{\prime}_{\circ}),v^{\prime}\in V(G^{\prime}_{\circ})\}\) is the desired generating set (see [6, Section 1 and the proof of Lemma 1.4]). Note that we chose the canonical model for \(\varGamma^{\prime}\) so that the generating set is as small as possible, but the same proof holds for any model for \(\varGamma^{\prime}\).

Proof of Corollary 3.22.: Let \(x\in\varGamma_{1}\setminus\varGamma_{1\infty}\) and \(\varepsilon>0\) be such that \(\psi(\operatorname{CF}(\{x\},\varepsilon))\) satisfies all of the conditions in Claim 3.7. Let \(y\in\varGamma_{1\infty}\) and \(z\) be a finite point on the unique edge incident to \(y\) such that \(\psi(\operatorname{CF}(\varGamma_{1}\setminus(z,y],\infty))^{\odot(-1)}\) satisfies all of the conditions in Claim 3.8. Let \(f:=\operatorname{CF}(\{x\},\varepsilon)\) and \(g:=\operatorname{CF}(\varGamma_{1}\setminus(z,y],\infty)^{\odot(-1)}\). By Lemma 3.20 and Claims 3.12, 3.14, 3.15, and the definition of \(\varphi\), we have \[\psi(f)=f\circ\varphi\quad\text{and}\quad\psi(g)=g\circ\varphi.\] Since all such \(f\) and \(g\) generate \(\operatorname{Rat}(\varGamma_{1})\) as a tropical semifield over \(\boldsymbol{T}\) (as we verify below), for any \(h\in\operatorname{Rat}(\varGamma_{1})\), we have \(\psi(h)=h\circ\varphi\).
In fact, if \(\psi(\widetilde{f})=\widetilde{f}\circ\varphi\) and \(\psi(\widetilde{g})=\widetilde{g}\circ\varphi\), then we have \[\psi(\widetilde{f}\oplus\widetilde{g})=\psi(\widetilde{f})\oplus\psi(\widetilde{g})\] \[=\widetilde{f}\circ\varphi\oplus\widetilde{g}\circ\varphi\] \[=(\widetilde{f}\oplus\widetilde{g})\circ\varphi\] and \[\psi(\widetilde{f}\odot\widetilde{g})=\psi(\widetilde{f})\odot\psi(\widetilde{g})\] \[=\widetilde{f}\circ\varphi\odot\widetilde{g}\circ\varphi\] \[=(\widetilde{f}\odot\widetilde{g})\circ\varphi.\] For each \(y\in\varGamma_{1\infty}\), fix \(z\) as above. Let \(\varGamma_{11}\) be the metric graph obtained from \(\varGamma_{1}\) by contracting all edges of length \(\infty\) to each \(z\), and let \(\kappa:\operatorname{Rat}(\varGamma_{11})\hookrightarrow\operatorname{Rat}(\varGamma_{1})\) be as above. By Lemma 3.21, for a model \((G_{1},l_{1})\) for \(\varGamma_{11}\) such that \(V(G_{1})\) contains \(V\setminus\varGamma_{1\infty}\), where \(V\) is as in the proof of Claim 3.19, and any edge \(e\in E(G_{1})\), we can choose \(\kappa(f_{e})\), \(\kappa(g_{e})\), \(\kappa(h_{e})\) as functions \(f\) above. By [7, Remark 3.1] (cf. [7, Lemma 3.3]), \(\operatorname{CF}(\{v\},\infty)\) is contained in the semifield generated by \(\{f_{e},g_{e},h_{e},\operatorname{CF}(\{w\},\varepsilon_{w})\,|\,e\in E(G_{1}),w\in V(G_{1})\}\) over \(\boldsymbol{T}\) on \(\varGamma_{11}\), where \(\varepsilon_{w}\) is a positive number such that \(\psi(\operatorname{CF}(\{w\},\varepsilon_{w}))\) satisfies all of the conditions in Claim 3.13. Hence, we have the conclusion.

Now we can prove Corollary 1.2:

Proof of Corollary 1.2.: Clearly, both \(\mathscr{C},\mathscr{D}\) are categories. Let \[F:\mathscr{C}\to\mathscr{D}\] be \[\operatorname{Ob}(\mathscr{C})\to\operatorname{Ob}(\mathscr{D});\qquad\varGamma\mapsto\varGamma\] and for \(\varGamma_{1},\varGamma_{2}\in\operatorname{Ob}(\mathscr{C})\), \[\operatorname{Hom}_{\mathscr{C}}(\varGamma_{1},\varGamma_{2})\to\operatorname{Hom}_{\mathscr{D}}(\varGamma_{2},\varGamma_{1});\qquad\psi\mapsto\varphi,\] where \(\varphi\) is the surjective morphism \(\varGamma_{2}\twoheadrightarrow\varGamma_{1}\) defined as in Theorem 1.1. Let \[G:\mathscr{D}\to\mathscr{C}\] be \[\operatorname{Ob}(\mathscr{D})\to\operatorname{Ob}(\mathscr{C});\qquad\varGamma\mapsto\varGamma\] and for \(\varGamma_{1},\varGamma_{2}\in\operatorname{Ob}(\mathscr{D})\), \[\operatorname{Hom}_{\mathscr{D}}(\varGamma_{1},\varGamma_{2})\to\operatorname{Hom}_{\mathscr{C}}(\varGamma_{2},\varGamma_{1});\qquad\varphi\mapsto\varphi^{*}.\] By Theorem 1.1 and Proposition 3.1, \(F\) and \(G\) are well defined, and they are (contravariant) functors. By Corollary 3.22, we have \(G\circ F=\operatorname{id}_{\mathscr{C}}\). Clearly, \(F\circ G=\operatorname{id}_{\mathscr{D}}\) holds.
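For the reader's convenience, we close with the pointwise check announced in the proof of Lemma 3.21. It is only a sketch under two assumptions that are not restated here: that \(\oplus=\max\) and \(\odot=+\) on \(\boldsymbol{T}\), and that \(\operatorname{CF}(A,l)\) is given pointwise by \(\operatorname{CF}(A,l)(p)=\max(-\operatorname{dist}(p,A),-l)\). These assumptions match the way the functions \(\operatorname{CF}\) are manipulated above, but the authoritative definitions are the ones given earlier in the text. With \(x\), \(\varepsilon\), \(\delta\), \(y_{1}\), \(y_{2}\) as in the proof of Lemma 3.21, for every \(p\in\varGamma_{1}\) the triangle inequality \(\operatorname{dist}(p,x)\leq\operatorname{dist}(p,y_{i})+\operatorname{dist}(y_{i},x)=\operatorname{dist}(p,y_{i})+\varepsilon\) gives \[(-\varepsilon)\odot\operatorname{CF}(\{y_{i}\},\delta)(p)=\max(-\varepsilon-\operatorname{dist}(p,y_{i}),-\varepsilon-\delta)\leq\max(-\operatorname{dist}(p,x),-\varepsilon-\delta)=\operatorname{CF}(\{x\},\varepsilon+\delta)(p),\] and hence \[\operatorname{CF}(\{x\},\varepsilon+\delta)\oplus(-\varepsilon)\odot\operatorname{CF}(\{y_{i}\},\delta)=\operatorname{CF}(\{x\},\varepsilon+\delta),\] which is the first identity used in that proof.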